| id | source | version | text | added | created | metadata |
|---|---|---|---|---|---|---|
238820975 | pes2o/s2orc | v3-fos-license | Comparison of Ultrasound-guided vs Blind Transversus Abdominis Plane Block in Gynecological Abdominal Surgeries for Postoperative Analgesia in Tertiary Care Center: A Randomized Prospective Single-blind Study
Abstract Background and aims: The transversus abdominis plane (TAP) block is a recently described approach that blocks the nerves of the anterior abdominal wall. We compared the duration of analgesia and efficacy of ultrasound-guided vs conventional block for immediate postoperative pain in patients undergoing gynecological abdominal surgeries. Materials and methods: Eighty-two patients undergoing gynecological abdominal surgeries under spinal anesthesia were randomized to undergo ultrasound-guided (n = 41) vs anatomical landmark-guided TAP block (n = 41). Pain severity, measured with the visual analog scale (VAS) at rest and on movement, was noted at various time intervals up to 24 hours. We compared the total duration of analgesia (TDA) and the total consumption of analgesics (TCA) in both groups. SPSS version 21 was used. Demographic data were analyzed using the Student's t-test and other parameters using the paired t-test. Results: Mean VAS scores both at rest and on movement were significantly higher in the anatomical landmark-guided TAP block group in the first 8 hours postoperatively. The TDA was significantly prolonged (18.88 ± 6.18 hours) and the TCA was lower (0.95 ± 0.67 g) in the ultrasound group as compared to the other group, with a TDA of 8.38 ± 2.58 hours and TCA of 2.54 ± 0.71 g. Conclusion: Ultrasound-guided TAP block provided a significantly longer duration of analgesia than the anatomical landmark-guided TAP block, with a significant decrease in consumption of rescue analgesics.
Introduction
The concept of fast-track surgery has developed with the aim of enhanced recovery with minimal complications and reduced hospitalization. 1 Opioid-sparing multimodal analgesia (neuraxial block, regional block, nonsteroidal anti-inflammatory drugs (NSAIDs), paracetamol) is an essential part of enhanced recovery after surgery. Epidural analgesia has been the standard method for postoperative pain management in abdominal surgeries, but recent literature does not support this. 2 Regional nerve blocks are especially useful in patients with coagulopathy, poor cardiopulmonary reserve, and hemodynamic instability, in whom the epidural technique would be contraindicated. NSAIDs and paracetamol alone are not sufficient and serve as supplements to other modes of analgesia.
Transversus abdominis plane (TAP) block has been described as an effective technique, as part of multimodal analgesia, to reduce postoperative pain and opioid consumption after lower abdominal surgeries. 3 Though Carney et al. 4 observed the analgesic benefits of transversus abdominis block in total abdominal hysterectomy (TAH) using the anatomical landmark method, and Atim et al. 5 observed the same with ultrasound, there is no study comparing the efficacy and duration of analgesia of ultrasound-guided vs conventional transversus abdominis block in patients undergoing gynecological abdominal surgeries.
We thus conducted a randomized prospective single-blind study to compare the duration of analgesia and efficacy of ultrasound-guided vs conventional transversus abdominis block for postoperative pain relief up to 24 hours, hypothesizing that the conventional TAP block would demonstrate similar efficacy, which would be of benefit in centers where USG is not available.
Materials and Methods
After Institutional Ethical Committee approval, CTRI registration (CTRI/2018/05/013811), and written informed consent, 82 adult female patients (41 in each group) of ASA physical status I to II undergoing elective abdominal gynecological surgeries (TAH and exploration for ovarian cystectomy, postpartum tubal ligation) under spinal anesthesia were recruited over a period of 6 months. The study was designed as a randomized prospective single-blind study. Exclusion criteria were: refusal by the patient, morbid obesity, surgical scar or distorted anatomy at the site of injection, redo surgery, and known allergy to local anesthetics. A sample size of 41 patients in each group was calculated assuming an alpha error of 1% (99% confidence interval) and a power of 80%. Patients were allocated randomly by sealed envelopes, according to a computer-generated sequence of random numbers, to undergo ultrasound-guided (Group U) or anatomical landmark-guided TAP block (Group A).
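For readers who want to reproduce a calculation of this kind, the short sketch below uses Python's statsmodels power utilities. The paper does not report the assumed effect size, so the standardized difference of 0.75 used here is an illustrative assumption; with a two-sided alpha of 0.01 (99% CI) and 80% power it yields roughly 41-42 patients per group.

```python
# Hedged sketch of a two-group sample-size calculation; the effect size of 0.75
# is an assumption for illustration, not a value reported in the study.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.75, alpha=0.01, power=0.80,
                                    alternative="two-sided")
print(round(n_per_group))  # roughly 42 patients per group
```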
The primary objective of the study was to compare the duration of postoperative analgesia in the two groups. Secondary objectives were to compare efficacy with respect to the VAS score at various time intervals, 24-hour consumption of rescue analgesics, hemodynamic stability, and complications such as hematoma, local anesthetic systemic toxicity, and visceral injury in both groups. Written informed consent was signed by the patients who were willing to participate in the study.
On the day of operation, after confirming fasting status, the patient was taken to the operation theater and intravenous (i.v.) fluid was started. Standard ASA monitoring was used for all patients. Heart rate (three-lead ECG), noninvasive arterial pressure, and oxygen saturation were continuously monitored perioperatively. All patients received a routine subarachnoid block with 3.5 cm³ of injection bupivacaine heavy and 0.5 cm³ of injection fentanyl as an additive, through a 25 G spinal needle under all aseptic precautions. Bilateral transversus abdominis block was given postoperatively as part of multimodal analgesia. Receding sensory and motor block was assessed by two-segment regression and ankle movement, respectively. All blocks were performed postoperatively, after confirmation of motor block regression (by observing ankle movement), by a senior anesthesiologist with >5 years of experience in regional anesthesia.
Ultrasound-guided TAP Block (Group U)
The drug was given through a 20 G Angiocath stylet, using the in-plane technique. The ultrasound probe was prepared in a sterile manner.
The external oblique, internal oblique, and transversus abdominis muscles were visualized between the subcostal margin and the iliac crest (Fig. 1A).
Once the tip of the needle was placed in the plane between the internal oblique and transversus abdominis muscles, a test dose of 1-2 mL of 0.9% saline was given to confirm the needle tip location. The probe was adjusted continuously to visualize a bright hyperechoic shaft and tip (Fig. 1B).
When the needle tip was in the correct plane, 20 cm³ of 0.25% injection bupivacaine was administered on each side under direct USG guidance, after negative aspiration for blood.
The drug spread was visualized as an ellipsoid-shaped dark shadow forming between the aponeurosis of the internal oblique and the transversus abdominis muscles (Fig. 1B).
The total dose of bupivacaine was 2 mg/kg and the total volume was not >40 mL.
Anatomical Landmark-guided (Blind) TAP Block (Group A)
The blind TAP block was given using the anatomical landmark method. A 20 G Angiocath stylet was inserted into the lumbar triangle of Petit (bounded by the latissimus dorsi posteriorly, the external oblique anteriorly, the iliac crest inferiorly, and the internal oblique muscle at the floor), just above the iliac crest. This field block involves deposition of local anesthetic between the internal oblique and transversus abdominis muscles. After confirmation by the double loss of resistance technique and backflow with 1-2 mL of normal saline, 20 cm³ of 0.25% injection bupivacaine was administered with intermittent aspiration on each side.
The anesthesiologist who performed the block was not involved in postoperative data collection. Parameters assessed were pain severity using the visual analog scale (VAS) score at rest and on knee movement at 2, 4, 8, 12, 18, and 24 hours postoperatively, hemodynamics at 2, 4, and 8 hours postoperatively, and complications for 24 hours. Patients were instructed preoperatively on how to use a 10-point VAS graded from 0 (no pain) to 10 (most severe pain).
Injection paracetamol 15 mg/kg was given intravenously as the first rescue analgesic in both groups when the VAS score was >4. Injection diclofenac was planned as a second rescue analgesic if pain relief was not achieved with paracetamol, and the consumption of injection paracetamol over 24 hours was noted (Flowchart 1).
Statistical Analysis
Data were described in terms of mean (±SD), frequencies (number of cases), and percentages where appropriate. Comparison of quantitative variables between the study groups was done using the unpaired t-test. For comparing categorical data, the Chi-square test was performed; an exact test was used instead when an expected frequency was <5. The confidence interval considered was 99%. A probability value (p value) <0.05 was considered statistically significant. All statistical calculations were done using Microsoft Excel 2013 (Microsoft Corporation, NY, USA) and SPSS (Statistical Package for the Social Sciences, SPSS Inc., Chicago, IL, USA) version 21.
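The authors report using Excel and SPSS; as a language-neutral illustration, the sketch below shows equivalent comparisons in Python with SciPy, using made-up numbers rather than the study's data, and taking Fisher's exact test as the exact test mentioned above.

```python
# Hedged sketch of the group comparisons described above (hypothetical data).
import numpy as np
from scipy import stats

# Continuous outcome, e.g., total duration of analgesia (hours) per group
tda_group_u = np.array([18.2, 20.5, 17.9, 22.1, 16.4])
tda_group_a = np.array([8.1, 9.0, 7.5, 8.8, 9.4])
t_stat, p_val = stats.ttest_ind(tda_group_u, tda_group_a)  # unpaired t-test
print(f"unpaired t-test: t = {t_stat:.2f}, p = {p_val:.4f}")

# Categorical outcome, e.g., counts of a complication in each group (yes / no)
table = np.array([[3, 38],
                  [9, 32]])
chi2, p_chi, dof, expected = stats.chi2_contingency(table)
if (expected < 5).any():
    # Fall back to an exact test when any expected frequency is < 5
    odds_ratio, p_exact = stats.fisher_exact(table)
    print(f"Fisher's exact test: p = {p_exact:.4f}")
else:
    print(f"Chi-square test: chi2 = {chi2:.2f}, p = {p_chi:.4f}")
```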
Results
Patients in both the groups were comparable in terms of demography, ASA status, total duration of anesthesia, and surgery; a summary of which has been shown in Table 1.
Analysis of hemodynamic parameters (pulse rate, systolic BP, and diastolic BP) at 2, 4, and 8 hours postoperatively showed that the two groups were comparable, with no statistically significant differences.
Injection paracetamol 1 g was the first-line analgesic and injection diclofenac was the second-line analgesic. The total duration of analgesia (TDA) was recorded in both groups as the time from the block until the first rescue analgesic was received.
The TDA and total consumption of analgesics (TCA) differed significantly (p < 0.05) between the groups. The TDA was prolonged significantly in the USG-guided TAP block group (Group U) (18.88 ± 6.18 hours) as compared to the anatomical landmark-guided TAP block group (Group A) (8.38 ± 2.58 hours) (Fig. 2). We observed that the total consumption of analgesics was lower in patients who underwent exploration (U1 and A1) than in patients who underwent TAH (U2 and A2). We therefore performed a subgroup analysis and found a statistically significant difference in TDA and TCA between subgroups (Figs 3 and 4). In the anatomical landmark-guided TAP block group (Group A), the requirement of injection paracetamol (total consumption of analgesics) was higher (2.54 ± 0.72 g) in comparison with 0.95 ± 0.67 g in the USG-guided TAP block group (Group U), which was statistically significant. No patient in either group required a second rescue analgesic. Patients undergoing postpartum tubal ligation, who were included among the exploration cases, did not require paracetamol for 24 hours.
Mean VAS scores both at rest and on movement (knee flexion) were higher in the anatomical landmark-guided TAP block group (Group A) than in the USG-guided TAP block group (Group U) at 2, 4, and 8 hours postoperatively, and the difference was statistically significant. After 8 hours, when patients in Group A had received injection paracetamol, the mean VAS was comparable in both groups and the difference was not statistically significant (Fig. 5). No patient in either group had any complication attributable to the TAP block.
Discussion
Epidural analgesia was the common technique for postoperative pain relief in the past for many surgical procedures. However, with the increasing use of anticoagulants as prophylaxis and the advent of the concept of fast-track recovery, the risk-benefit ratio of epidural analgesia is now questioned. Less invasive techniques, such as nerve blocks with minimal complications, are therefore being considered.
Transversus abdominis block is a fascial plane block. It was introduced into anesthesia practice by Rafi in 2001 using the traditional landmark of the lumbar triangle of Petit. 6 This block requires a larger volume of local anesthetic, deposited between the aponeuroses of the internal oblique and transversus abdominis. The transversus abdominis plane block blocks the nerves of the anterolateral abdominal wall, including those supplying the parietal peritoneum. The analgesic effect of the TAP block may last longer because the plane is less vascular, and absorption of local anesthetics into the circulation depends primarily on the vascularity of the site of deposition. 7 The mean TDA was 18.67 hours in our study with ultrasound, while Mankikar et al. 8 found a mean duration of 9.53 hours after TAP block in cesarean patients. This duration can be further prolonged by additives such as clonidine. 9 Many studies have shown that the TAP block provides effective analgesia during the first 24 hours after lower abdominal or pelvic surgical procedures, 3,4 but these studies included a limited number of patients for each surgical procedure and comparisons were performed against a control group receiving systemic analgesia.
Transversus abdominis plane block can be given successfully using the anatomical landmarks of the lumbar triangle of Petit, with the double loss of resistance technique and confirmation of backflow. McDonnell et al. 3 and Carney et al. 4 found a decrease in postoperative VAS scores after the block was given by the anatomical landmark method in abdominal surgeries and TAH, respectively.
However, when the TAP block is performed blindly, the drug can be deposited incorrectly in the subcutaneous layer or within the muscle planes, which explains the less effective analgesia. 10,11 Weintraud and colleagues 12 reported that the local anesthetic solution spread in the correct plane in only 14% of blocks performed blindly. In this study also, we found a mean duration of analgesia of only 7 hours (vs 18 hours with USG) after blind TAP block. Ultrasound increases the duration of analgesia and decreases the consumption of rescue analgesics over 24 hours because real-time guidance allows precise location of the plane and administration of the drug under vision, with fewer complications.
In the current study, we found that the TDA was prolonged significantly in the USG group (18.88 ± 6.18 hours) as compared to the blind group (8.38 ± 2.58 hours), and the total consumption of analgesics (TCA) was lower in Group U (0.95 ± 0.67 g) than in Group A (2.54 ± 0.71 g). This result is very similar to that of Mankikar et al., 8 who found that with USG-guided TAP block the TDA was prolonged from 4.1 to 9.53 hours and analgesic consumption was also reduced in cesarean patients. In our study, we showed statistically that the total consumption of analgesics was lower in exploration cases even after a blind block. Hence, even a blind block, when a USG machine is unavailable, is useful for providing postoperative analgesia in patients undergoing procedures with less dissection, such as exploration cases.
Recently, Aveline et al. 13 compared USG-guided vs blind TAP block in hernia patients. They found that patients who received the USG-guided TAP block had significantly lower VAS pain scores at rest at 4, 12, and 24 hours, and their postoperative morphine requirement was also lower in the first 24 hours. Sunita et al. 14 noted that the time to rescue analgesia was longer in the group that received a USG-guided hernia block (7.22 hours) as compared to the group that received a blind block (6.80 hours).
In the current study, pain intensity at rest and on movement was lower with the ultrasound-guided TAP block. Pain scores were further reduced in exploration cases of ovarian cystectomy and postpartum tubal ligation compared with TAH, which could be attributed to the greater amount of tissue dissection in TAH. Analgesic demand was decreased in patients who received a USG-guided TAP block, as reflected by analgesic consumption in the two groups. Similar results were found by Petersen et al. 15 in patients undergoing laparoscopic cholecystectomy.
The most important benefit of giving the TAP block, especially USG-guided, as part of multimodal analgesia for postoperative pain relief is the total avoidance of opioids and the decreased consumption of other analgesics such as NSAIDs, tramadol, and even paracetamol. All supplementary analgesics have side effects, nausea and vomiting being the most common, which decrease postoperative patient satisfaction and increase the postoperative stay.
It is a very useful technique in patients with coagulopathy, poor cardiopulmonary reserve, and hemodynamic instability. Local infiltration can also be given for postoperative pain relief, but its action does not last long. Patients are more compliant with single-shot nerve blocks than with multiple top-ups or a patient-controlled analgesia (PCA) pump with epidurals.
Transversus abdominis plane block has been associated with complications such as local site infection, local anesthetic toxicity, peritoneal perforation, and bowel injury. 16 No such complications were observed in our study, except in one case where a rectus sheath hematoma was found, which was later attributed to surgical complications. 16 Blind TAP block has been documented with one case of liver puncture, 17 and colon injury has been observed after a blind inguinal block. 18 Our study has several limitations. A larger number of patients would need to receive USG-guided TAP block to obtain more definitive results. Availability of an ultrasound machine could itself be a problem in institutes other than tertiary care centers. The transversus abdominis plane block provides only somatic anesthesia of the abdominal wall; hence, newer techniques (such as quadratus lumborum block variants) have been proposed to accomplish somatic as well as visceral analgesia. We did not account for the type of incision, such as transverse lower abdominal or vertical, in our study; notably, a vertical incision involves a greater number of dermatomes. We used the VAS for pain scoring, which is a subjective parameter.
Conclusion
This randomized single-blind study demonstrated that the USG-guided TAP block provides a longer duration of postoperative pain relief and reduced consumption of rescue analgesics up to 24 hours as compared with the conventional anatomical landmark-guided blind block after gynecological abdominal surgeries. Although the TAP block can be given safely by the conventional landmark-guided method, it provides a duration of analgesia comparable to the USG-guided method only in minor cases that involve fewer dermatomes. | 2021-08-27T16:44:31.689Z | 2021-07-30T00:00:00.000 | {
"year": 2021,
"sha1": "eb92df6899b59c459a238c845d4cdfc95b299ec6",
"oa_license": null,
"oa_url": "https://doi.org/10.5005/jp-journals-10049-0092",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "5d4008ae4ea3533982de81fcbed95a6632892124",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
18499740 | pes2o/s2orc | v3-fos-license | Peri-Hemorrhagic Edema and Secondary Hematoma Expansion after Intracerebral Hemorrhage: From Benchwork to Practical Aspects
Spontaneous intracerebral hemorrhage (sICH) is the most lethal type of stroke. Half of these deaths occur within the acute phase. Deterioration frequently observed during the acute phase is often due to rebleeding or peri-hematomal expansion. The exact pathogenesis that leads to rebleeding or peri-hemorrhagic edema remains controversial. Numerous trials have investigated potential predictors of peri-hemorrhagic edema formation or rebleeding but have yet to produce consistent results. Unfortunately, almost all of the "classical" approaches have failed to show a significant impact on clinical outcome in randomized clinical trials. Current treatment strategies may remain "double-edged swords," for reasons inherent to the pathophysiology of sICH. Therefore, the right balance, and possibly the combination, of currently accepted strategies, as well as the evaluation of future approaches, seems urgent. This article reviews the role of disturbed autoregulation following sICH, surgical and non-surgical approaches to the management of sICH, peri-hematoma edema, peri-hematoma expansion, and future therapeutic trends.
Keywords: ICH, intracerebral hemorrhage, brain injury, cerebral edema, intracranial pressure
Introduction
Spontaneous intracerebral hemorrhage (sICH) accounts for approximately 13-17% of all strokes; however, sICH carries substantial mortality and morbidity, with mortality approaching approximately 50% within 3 months and severe disability in the majority of survivors. Half of these deaths occur within the acute phase (1). Neurological deterioration during the acute phase may be due to hematoma expansion or peri-hemorrhagic edema growth (2). Since hematoma growth tends to occur within the first 24 h and edema formation within the first 72 h from symptom onset, intervention during this time period may modify long-term outcome (2). Thus, the dynamic nature of early sICH represents a management challenge and an opportunity for intervention. In this review, we discuss the pathogenesis and the role of different proposed pathways that have been explored as contributors to sICH progression.
Pathogenesis
Biology
The pathophysiology leading to hematoma expansion and edema progression remains poorly understood. sICH is believed to result from rupture of lipohyalinoic arteries followed by secondary arterial rupture at the periphery of the enlarging hematoma, in an "avalanche" fashion (2). This model was first proposed by C. Miller Fisher in the early 1970s (2,3). Hematoma expansion may reflect additional leakage, extended spatial distribution of the initial hemorrhage, or both. Based on this model, mechanical disruption may be considered the most important neuropathological correlate for the expanding hematoma (2). Hematoma expansion leads to secondary injury mechanisms, which accentuates tissue destruction. Yet, exact pathophysiological mechanisms are unclear. Prediction of risk factors for hematoma expansion and subsequent secondary injury might provide a first step toward development of effective therapies. Hematoma expansion and edema generation do not appear related to a single mechanistic pathway or risk factor, but rather several pathways/factors thought to act in synergy. Early preclinical models proposed the concept of "peri-hemorrhagic ischemia" surrounding the primary hematoma (2, 4-7). However, subsequent metabolism and flow studies demonstrated that such peri-hematoma changes were far from universal (7-10). Perihematomal changes lead to cytotoxic edema and neuroinflammatory mediators (11,12).
Role of Disturbed Inflammation
Numerous human and preclinical studies suggest a link between inflammation, peri-hematoma edema formation, and hematoma expansion. These studies particularly shed light on a direct role of neutrophil activation, free-radical formation, and the expression of interleukin-6 (IL-6) and tumor necrosis factor alpha (TNF-α) (13)(14)(15). Several rat model studies have also shown that formation of the peri-hemorrhagic penumbra can be mitigated by various neuroprotective elements such as N-methyl-d-aspartate receptor antagonism, which blunts excitatory amino acid-mediated neuronal death and diminishes microglia-mediated neuronal injury (11,12,16). Studies have also linked elevated plasma concentrations of cellular fibronectin (c-Fn) and the inflammatory mediators IL-6 and TNF-α to the early phase of hematoma enlargement (13)(14)(15). However, the clinical utility of matrix metalloproteinase (MMP), c-Fn, TNF-α, or IL-6 blood concentrations in early ICH remains unclear. Another distinct pathway that supports the role of neuroinflammation in hematoma expansion is thrombin-induced activation of the inflammatory cascade, an important regulator of cellular activation through binding to the protease-activated receptors (PARs) expressed on platelets, leukocytes, and endothelial cells (ECs) (17)(18)(19)(20), along with overexpression of MMP (17)(18)(19). The latter promotes extracellular matrix proteolysis, attacks the basal lamina, and results in degradation of c-Fn (17)(18)(19)(21). The expression of such inflammatory processes seems to coincide chronologically with the peak of peri-hemorrhagic edema formation and secondary hematoma expansion, whose maximal extent is often reached 3-5 days after the initial ictus of hematoma formation (2,10,22,23).
Role of Disturbed Autoregulation
Disturbed autoregulation and uncontrolled perfusion pressure in hypertension may act as a driving force for hematoma expansion and peri-hemorrhagic edema formation. Numerous studies have suggested that blood pressure elevation may worsen ICH by providing a continued force for hematoma expansion and potentially worsening outcomes (24,25). However, aggressive blood pressure lowering after sICH may be counterintuitive, as elevation in mean arterial pressure may be a natural response to preserve cerebral perfusion. Qureshi et al. (26) describe three distinct phases of metabolic change with respect to autoregulation: hibernation, seen during the first 48 h, with reduction of CBF and metabolism in both cerebral hemispheres; reperfusion, which may last up to 14 days, with heterogeneous areas of cerebral hypo- and hyperperfusion; and finally normalization, with resolution and development of a normal cerebral flow pattern except in non-viable brain tissue (3,(26)(27)(28)(29)(30)(31)(32). Numerous models have demonstrated that acute blood pressure reduction is associated with decreased diffusion on brain imaging (21,33). However, studies have found no clear clinical implication of these findings (34,35). Major randomized clinical trials (ATACH, INTERACT, and INTERACT-2) have explored the relationship between blood pressure reduction and clinical outcomes in ICH. While no sustained long-term outcome benefit has been found for aggressive blood pressure management, the intervention does appear to be safe (36)(37)(38). More recently, the ATACH-2 trial further affirmed that intensive BP control (target 110-139 mmHg) did not result in an incremental benefit or a lower rate of death or disability compared with standard reduction to a target of 140-179 mmHg (21,(33)(34)(35)39).
Role of Hemostasis
While hemostatic therapies seem promising through prevention of hematoma enlargement, clinical trials examining the use of blood products (in particular recombinant factor VIIa) remain inconclusive. While initial preliminary data suggested that factor VIIa may be safe (40,41), results from a phase-3 randomized controlled trial showed that although recombinant factor VIIa use after ICH resulted in a significant reduction in hematoma volume, there was no reduction in severe disability or death compared to placebo at 3 months (42). In fact, recombinant factor VIIa use after ICH was associated with a higher risk of arterial thromboembolic adverse events (43). The current AHA/ASA guidelines have since concluded that recombinant factor VIIa remains investigational and should not be used in sICH (44). While there is no disagreement regarding coagulopathy reversal for patients who develop acute intracerebral hemorrhage while on anticoagulant therapy, the role of platelet transfusion remains controversial. A recent multicenter randomized controlled trial (PATCH) suggested that platelet transfusion is inferior to standard of care for patients who develop intracerebral hemorrhage while on antiplatelet therapy, and thus cannot be recommended (45).
Surgical Hematoma Evacuation
Whether surgical evacuation of the hematoma is beneficial remains under investigation. Under select circumstances, various surgical approaches may be undertaken, including conventional craniotomy, stereotactic guidance with aspiration and thrombolysis, image-guided stereotactic endoscopic aspiration, and decompressive craniectomy. The overall aim of surgical intervention is to remove the source of hemorrhage, eliminate the localized or global mass effect of the hematoma, and eliminate the toxic effects of blood degradation products. To date, two major randomized controlled trials (STICH I and STICH-II) have explored surgical vs non-surgical management of ICH (46,47), but these trials failed to show an outcome benefit over conservative treatment. However, one of the largest meta-analyses, which also included the STICH-II data, suggested an overall benefit of surgery for select subgroups of patients, including those with poorer prognosis at presentation, those with secondary deterioration attributed to hematoma expansion, and those with superficial ICH without intraventricular extension (48).
Recently, minimally invasive and stereotactic surgeries have emerged as an alternative to craniotomy for hematoma evacuation. The recently published ICES (intraoperative stereotactic computed tomography-guided endoscopic surgery) study suggested that early computed tomography image-guided endoscopic surgery is a safe and effective method, in select cases, to remove acute intracerebral hematomas, with the potential to enhance neurological recovery (49). Similarly, the MISTIE (minimally invasive surgery plus alteplase in intracerebral hemorrhage evacuation) trial appeared overall safe and promising in ICH (50). However, many questions remain regarding surgical optimization of the endoscopic technique, patient selection, and the timing of surgery. The role of minimally invasive and endoscopic surgery will continue to evolve as more centers gain experience with this promising approach.
Conclusion
Current treatment strategies may remain "double-edged swords." For example, surgical intervention may reduce hematoma volume but may also lead to decompression of the surrounding "peri-hemorrhagic penumbra tissue" with subsequent re-accumulation of bleeding. Likewise, hemostasis might stop cerebral bleeding yet compromise normal circulation. Blood pressure reduction decreases hematoma expansion but may also decrease cerebral perfusion and blood flow to other vital organs. Therefore, balancing currently accepted strategies and evaluating future approaches seems critical. This topic will continue to evolve as our understanding of the pathogenesis of sICH and secondary hematoma expansion grows.
Author Contributions
M-AB and MJ contributed to the preparation and drafting of this manuscript. All authors have read and approved this manuscript in its final form. | 2017-05-04T13:18:44.421Z | 2017-01-19T00:00:00.000 | {
"year": 2017,
"sha1": "dd6edb506644ef6670ec3d9c324261ace0b4eda2",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fneur.2017.00004/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dd6edb506644ef6670ec3d9c324261ace0b4eda2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
31131106 | pes2o/s2orc | v3-fos-license | How Criterion Scores Predict the Overall Impact Score and Funding Outcomes for National Institutes of Health Peer-Reviewed Applications.
Understanding the factors associated with successful funding outcomes of research project grant (R01) applications is critical for the biomedical research community. R01 applications are evaluated through the National Institutes of Health (NIH) peer review system, where peer reviewers are asked to evaluate and assign scores to five research criteria when assessing an application's scientific and technical merit. This study examined the relationship of the five research criterion scores to the Overall Impact score and the likelihood of being funded for over 123,700 competing R01 applications for fiscal years 2010 through 2013. The relationships of other application and applicant characteristics, including demographics, to scoring and funding outcomes were studied as well. The analyses showed that the Approach and, to a lesser extent, the Significance criterion scores were the main predictors of an R01 application's Overall Impact score and its likelihood of being funded. Applicants might consider these findings when submitting future R01 applications to NIH.
Introduction
The National Institutes of Health (NIH) is the world's leading biomedical and behavioral research organization and spends about three-quarters of its nearly $30.1 billion budget on extramural grant research funding to support research in universities, medical schools and research institutions [1]. Peer review is the cornerstone of the NIH's extramural research program. Applications for research funding from NIH's extramural research program are vetted through the peer review process [2]. Over the years, the NIH has made periodic efforts to improve its peer review system to ensure fairness and efficiency in evaluating grant applications. The most recent effort began in June of 2007 [3]. The enhancements to the NIH peer review system were implemented, in phases, beginning in 2009 [4]. The key modifications included the introduction of individual scores for the five research criteria, beginning in fiscal year (FY) 2010.
Data for this study were drawn from the NIH's IMPAC II (Information for Management, Planning, Analysis and Coordination II) system, the database of record for information collected from NIH extramural grant applications, awards and applicants during the receipt, review and award management process. For each application, data were obtained on whether the application was funded, its final Overall Impact score, and its five research criterion scores, which were delinked from the reviewers providing the scores. The research criterion scores were calculated for each criterion by averaging all individual criterion scores available for a particular application. Notes to Table 1: a Overall Impact score averages only include discussed applications.
b Other application and applicant characteristics evaluated, but not shown here due to space limitations, are: Council round of review, human or animal subject concerns, solicitation type (unsolicited, program announcement or request for application), locus of review (Center for Scientific Review v. other NIH Institutes and Centers), review group type (standing study section v. special emphasis panel), direct costs requested, # of years of support requested, the NIH administering Institute or Center (IC), the geographical region of the institution and the previous NIH funding history of the applicant. c A new application is a type 1 application. A type 2 application is a renewal, also known as competing continuation. A type 3 application can be a competing revision for additional support to expand the scope of study or can be a non-competing administrative supplement application for additional support to cover increased costs. A type 9 application is a renewal for which the awarding institute or center changes. d An application submitted for the first time is an A0 application or an initial submission. A previously submitted unfunded A0 application resubmitted for new funding consideration is an A1 application or a first resubmission. A previously unfunded A1 application resubmitted for new funding consideration is an A2 application or a second resubmission. The policy on resubmission in place for applications submitted during the study period, FY 2010-FY 2013, can be found at http://grants.nih.gov/grants/guide/notice-files/NOT-OD-09-003.html. e A new investigator is defined as a principal investigator who has not previously competed successfully as a principal investigator for a substantial independent research award. A new investigator who is within 10 years of completing his/her terminal research degree or is within 10 years of completing medical residency (or equivalent) is considered an early stage investigator. A principal investigator who is not a new investigator is an experienced investigator. A list of NIH grant activities that do not disqualify a principal investigator from being considered as a new investigator can be found at http:// grants.nih.gov/grants/new_investigators/index.htm. f An application including only one principal investigator (PI) is a single PI application. An application including more than one principal investigator is a multiple PI (MPI) application. g An application involving (1) only human subjects for research is a humans only application, (2) only animal subjects for research is an animals only application, (3) both human and animal subjects for research is a humans and animals application, and (4) neither human nor animal subjects for research is a no humans or animals application. h An application's rank is based on the rank order of the application's submitting organization or institution with respect to the total amount of NIH research grant funding received by that organization compared to all other organizations over the five year period prior to the fiscal year of the application. The lower the rank, the higher is the previous level of funding from NIH. i The type of the institution or organization submitting the application.
j Race of a principal investigator is the racial category that was self-reported by the principal investigator. Applications whose principal investigator reports more than one race category or applications with multiple principal investigators who report different race categories are included in the 'Other' category. k Ethnicity of a principal investigator is the ethnicity selection that was self-reported by the principal investigator. Applications with multiple principal investigators who report different ethnicities are included in the 'MPI Multiple Ethnicities' category. l Gender of a principal investigator is the gender selection that was self-reported by the principal investigator. Applications with multiple principal investigators who report different genders are included in the 'MPI Multiple Gender' category. m Degree represents the highest degree attained by a principal investigator. Applications with multiple principal investigators reporting more than one degree type are included in the 'MPI Multiple Degree Types' category. The "Other" degree category includes degree types such as veterinary, dental and unknown degrees. n Age of a principal investigator is calculated by subtracting the principal investigator's birth year from the application's fiscal year. Applications with multiple principal investigators who report different age group categories are included in the 'MPI Multiple Age Groups' category. Those with an erroneous computed age (less than 24 or greater than 90) or a missing birth date are included in the 'Unknown' age category.
In addition, data were extracted on other characteristics related to the application (such as whether it was a new or renewal application), the applicant (such as applicant demographics and personal NIH funding history) and the applicant's institution (such as the institutional funding history with NIH). All demographic data were self-reported, on a voluntary basis, by the applicants. Data on the SRG where the application was reviewed were also obtained. See Table 1 for a full list of variables evaluated for each application. Descriptive summary statistics, as well as correlations between the five criterion scores and the Overall Impact score, were produced.
Models
Two general models were developed: 1) the Impact model, a linear regression model with the Overall Impact score serving as the dependent variable; and 2) the Funding model, a logistic regression model with the likelihood of being funded serving as the dependent variable. The five research criteria were used as the main predictors in both models, controlling for other application and applicant characteristics delineated in Table 1. Both models controlled for the FY of the application to account for changes in the distribution of Overall Impact scores or funding patterns over time. Hierarchical random effects models, with applications clustered by SRG, were employed to account for possible differences in scoring behavior and funding outcomes between peer review groups. In addition to controlling for the potential clustering of scores by SRG, the use of random effects, by way of intraclass correlations, allowed for the decomposition of the total variation in the models into two categories: within-SRG variation and between-SRG variation [16][17][18]. Three sub-models were developed in a step-wise fashion to assess the marginal contribution of each set of characteristics in both general models. Sub-model A focused on the five research criterion scores, including any significant interactions between them. Sub-model B added the other control variables to sub-model A. Sub-model C was identical to sub-model B, but removed the criterion scores. Sub-model C served to illustrate how the various application and applicant characteristics appeared to be associated with the Impact score and relative odds of funding when the quality of the application, as measured by the criterion scores, was not taken into account.
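Purely to illustrate the structure of these models (the study itself used Stata, as noted below), the following Python sketch fits a random-intercept linear mixed model on simulated data and derives the intraclass correlation; all variable names and simulated values are assumptions, not the study's data.

```python
# Hedged sketch of a hierarchical (random-intercept) Impact model on simulated
# data; 'approach', 'significance' and 'srg' are illustrative names only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "approach": rng.uniform(1, 9, n),        # criterion scores, 1 (best) to 9 (worst)
    "significance": rng.uniform(1, 9, n),
    "srg": rng.integers(0, 40, n),           # scientific review group identifier
})
srg_effect = rng.normal(0, 3, 40)            # simulated SRG-level shifts
df["impact"] = (10 + 7.6 * df["approach"] + 3.4 * df["significance"]
                + srg_effect[df["srg"]] + rng.normal(0, 8, n))

# Linear mixed model: Overall Impact ~ criterion scores, random intercept by SRG
impact_model = smf.mixedlm("impact ~ approach + significance", df,
                           groups=df["srg"]).fit()
print(impact_model.summary())

# Intraclass correlation: share of unexplained variation attributable to SRGs
var_srg = impact_model.cov_re.iloc[0, 0]
icc = var_srg / (var_srg + impact_model.scale)
print(f"between-SRG share of variance (ICC): {icc:.3f}")
# The Funding model would analogously be a mixed-effects logistic regression
# (funded ~ criterion scores + controls, random intercept by SRG).
```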
Because the ND applications are not assigned Overall Impact scores, only the 71,651 applications that were discussed in SRG meetings and assigned Overall Impact scores from FY 2010 to FY 2013, were used to fit the Impact model. ND applications were not removed from the Funding model because their funding outcomes were known, and data on the five research criterion scores were still available. However, applications precluded from being considered for funding were removed, i.e., those with unresolved human subject or animal concerns and resubmitted applications that had a previous version funded. Removing these applications left 111,533 R01-equivalent applications for the Funding model.
Data analyses were performed using Stata 13 (StataCorp). Model estimates and their 95% confidence intervals (CIs) were computed. The Funding model results were expressed as odds ratios. For ease of interpretation, the coefficients of the criterion score estimates were inverted in the Funding model, so that odds ratios greater than unity should be interpreted as the magnitude of the increase in odds of funding due to a one unit decrease (improvement) of the given criterion. Results were considered statistically significant if they had a P-value of less than 0.05, using 2-sided testing.
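The inversion of the criterion-score coefficients can be illustrated in a couple of lines; the coefficient value below is hypothetical and is included only to show how an odds ratio per one-point improvement is obtained from a logistic coefficient expressed per one-point worsening.

```python
# Hedged sketch of the odds-ratio inversion; beta is a hypothetical coefficient.
import numpy as np

beta_per_point_worse = -1.82                          # log-odds change per 1-point score increase
or_per_point_worse = np.exp(beta_per_point_worse)     # < 1: odds fall as the score worsens
or_per_point_better = np.exp(-beta_per_point_worse)   # > 1: reported odds ratio per point improved
print(or_per_point_worse, or_per_point_better)        # ~0.16 and ~6.2
```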
The NIH Office of Human Subjects Research Protections was consulted and determined this work to be classified as a program evaluation that did not require human subjects research review by an Institutional Review Board.
Results
Fig 1 shows the distribution of the Overall Impact score and criterion scores in the form of boxplots. The criterion scores for Approach had the greatest variability and highest (or worst) scores, with an interquartile range (IQR) of 2.0 and median of 4.3. The criterion scores for Significance and Innovation both had IQRs of 1.2 and medians of 3.0. Investigator(s) and Environment criterion scores were clustered in the low score ranges with median scores of 2.0 and IQRs of 1.0, indicating that most applications received excellent marks for Investigator and Environment. Table 2 provides the correlations between the criterion scores for each of the five research criteria and the Overall Impact score. All criteria had moderate to high correlations with one another, ranging from 0.55 between Significance and Environment to 0.75 between Investigator(s) and Environment. Environment had the lowest correlation with the Overall Impact score, whereas Approach had the highest correlation with the Overall Impact score (0.44 and 0.84, respectively). Table 1 shows that the average Overall Impact scores and funding rates varied widely according to different application characteristics. For example, new (type 1) applications had an average Overall Impact score of 37.1 and funding rate of 14.2% while renewal (type 2) applications fared better, with an average Overall Impact score of 30.9 and funding rate of 30.1%. Initial submissions (A0s) had an average Overall Impact score of 38.1 and funding rate of 11.2%, whereas resubmissions (A1s) had a more favorable average Overall Impact score and funding rate (31.7 and 30.6%, respectively). Applications from Early Stage Investigators (ESIs) had an average Overall Impact score of 38.4 and a 17.6% funding rate, whereas applications from experienced investigators had a better average Overall Impact score and funding rate (33.9 and 18.8%, respectively). Applications submitted by white principal investigators (PIs) had an average Overall Impact score of 34.8 and a funding rate of 19.0%; in contrast, applications submitted by black PIs had poorer outcomes (average Impact score: 38.1; funding rate: 11.8%). Male PIs had Overall Impact scores and funding rates of 35.3 and 17.9%, respectively, whereas female PIs had correspondingly worse scores and funding rates of 36.2 and 16.4%, respectively. Fig 2 shows boxplot distributions of the Overall Impact score by IC, with IC names masked. Median scores ranged considerably by IC, from 33 to 50.5. IQRs ranged from 15 to 22 across ICs. Fig 3 shows the percentage of reviewed applications that were funded by each IC. This rate ranged widely from 7.1% to 28.9%. The rank orders of the Overall Impact scores and funding rates by IC, shown in Figs 2 and 3, respectively, do not match as might be expected: ICs that had better (lower) ranges of Overall Impact scores did not necessarily have higher funding levels. This is due, in part, to differences in the number of applications received and available grant funding dollars between the different ICs, and demonstrates the importance of controlling for IC, particularly in the Funding model. S1 and S2 Tables are similar to Table 1, except that they show summary statistics for discussed and ND applications, respectively. In comparing the two tables, ND applications had worse (higher) mean criterion scores for all five research criteria, compared to discussed applications. Furthermore, the Approach criterion had the worst mean scores for both discussed and ND applications.
Among discussed applications, the Approach criterion was more variable, with a higher standard deviation than the other criterion scores, underscoring the former criterion's importance in predicting the Overall Impact score amongst discussed applications. In contrast to discussed applications, which had an overall 29.8% funding rate over the study period, ND applications had almost no chance of being funded (only one ND application was funded in FY 2010-2013).
The Impact model and Funding model results are shown in Tables 3 and 4, separated by sub-model. In sub-model A, with independent variables limited to the criterion scores, all were highly significant in the Impact model, with the coefficients in rank order for Approach, Significance, Innovation, Investigator(s) and Environment estimated at 7.6 (95% CI, 7.5-7.7), 3.4 (3.3-3.5), 1.4 (1.3-1.5), 1.0 (0.9-1.0) and -0.2 (-0.3--0.1), respectively. That is, a one point improvement in the Approach score was associated with a 7.6 point improvement in the Overall Impact score, controlling for the other criterion scores. The Funding model results for sub-model A had coefficients in the same rank order, with odds ratio estimates of 6.2 (5.9-6.5), 2.1 (2.0-2.2), 1.5 (1.4-1.6), 1.0 (1.0-1.1) and 0.9 (0.8-0.9), respectively, e.g., for every one point improvement in the Approach score, the odds of funding increased by a factor of 6.2. There was a highly significant interaction between Approach and Significance in both the Impact and Funding models. Sub-model A of the Funding model correctly predicted 94.7% of unfunded applications, for an overall correct prediction rate of 89.3%. The intraclass correlation coefficient, which measures the amount of variation accounted for by SRGs, was 4.2% in the Impact model and 17.8% in the Funding model; i.e., an application's criterion scores were much better indicators of its review and funding outcomes than the SRG in which it was reviewed. In sub-model B, which adds the full set of application and applicant controls to sub-model A, the coefficients of the criterion scores were largely unchanged. For the Funding model, the only major departure from sub-model A was that the Investigator(s) odds ratio coefficient increased to 1.4 (1.3-1.5), showing that applications with better Investigator(s) criterion scores were associated with better odds of funding once the other application and applicant characteristics were taken into account. Many of the application control factors had statistically significant relationships to the Overall Impact score and odds of funding. Of note, renewal applications were predicted to have Overall Impact scores 0.7 (-0.8--0.6) points lower (better) than otherwise identical new applications and their odds of funding were predicted to be 1.4 (1.3-1.5) times better. First resubmission applications (A1s) were predicted to have Overall Impact scores 1.3 (-1.5--1.2) points lower and odds of funding 2.2 (2.1-2.3) times greater than otherwise identical initial submissions (A0s). Applications submitted by ESIs were predicted to have Overall Impact scores 1.2 (-1.5--0.8) points lower and odds of funding 2.6 (2.2-3.1) times greater than otherwise identical applications from experienced investigators. Applications submitted by black PIs had Overall Impact scores 0.6 (0.1-1.1) points higher or worse than applications submitted by white PIs with the same measured characteristics, though there was no statistically significant difference in odds of funding. Applications submitted by female PIs had slightly better Overall Impact scores (0.2 [-0.3--0.1] points lower) than those submitted by male PIs, but the odds of funding were not statistically different, all else equal. See Tables 3 and 4 for the full set of control variables. Sub-model B improved the model fit and predictive accuracy of sub-model A by a very small amount, approximately one percentage point in each case.
Differences amongst subgroups in the application and applicant control variables increased substantially in sub-model C, which omits the criterion scores from the full model, sub-model B. Renewal applications were predicted to have Overall Impact scores 3.5 (-3.7--3.3) points lower and odds of funding 2.2 (2.1-2.3) times greater than new ones. First resubmission applications were predicted to have Overall Impact scores 5.6 (-5.8--5.4) points lower and odds of funding 3.7 (3.6-3.8) times greater than initial submissions. In contrast to sub-model B, applications submitted by ESIs were predicted to have Overall Impact scores 1.3 (0.7-1.9) points higher or worse than applications from experienced investigators, and their funding advantage was reduced to an odds ratio of 1.5 (1.4-1.7). Therefore, the ESI advantage in Overall Impact scores and funding odds was observed only after controlling for the criterion scores. Applications submitted by black PIs and female PIs appeared less likely to be funded, with the odds ratios for black PIs and female PIs falling to 0.7 (0.6-0.8) and 0.9 (0.9-0.9), respectively, and becoming statistically significant in the absence of the criterion scores. The amount of variation explained by sub-model C was low (R² = 16.9%) and the overall correct prediction rate was lower, 80.7% (only 9.6% for funded applications and 97.7% for unfunded applications).
Discussion
The Impact and Funding model results demonstrate that the criterion scores are the best predictors of an application's Overall Impact score and its likelihood of receiving funding. The model fit statistics support this observation. The R², or variation explained, and the correct prediction rate only improved by one percentage point when going from models which included only the criterion scores to those which included all the other application and applicant control factors. Furthermore, when the criterion scores were removed from the full model, the variation explained and correct prediction rate fell off markedly, and the control variables increased in magnitude and many became statistically significant. Among the criterion scores, there was a clear hierarchy in terms of each criterion's relationship with the Overall Impact score and funding odds. In both the Impact model (which contained only discussed applications) and the Funding model (which contained both discussed and non-discussed applications), the Approach score had the strongest association, with more than double the effect of the next largest predictor, the Significance score. The predictive effect of the Environment score was very small and went in a counterintuitive direction, with better Environment scores having worse Overall Impact scores and funding odds, all else equal. This finding suggests that some applications with poor Overall Impact scores can be associated with strong Environment scores, even after controlling for the other criterion scores. Furthermore, in another set of models (not shown here), in which whether an application was discussed served as the dependent variable, the criterion score coefficients followed the same rank order, with Approach being by far the largest predictor of whether or not an application was discussed.
The criterion scores were moderately to strongly correlated with one another. This is because highly meritorious applications tended to score well on all five criteria, and vice versa for less meritorious applications. As in Lindner et al. [15], these relatively high correlations raised concerns of multicollinearity (MC). MC does not cause bias when estimating coefficients in a correctly specified model, but it can increase the variability of the estimates [19]. This problem was mitigated by the large number of applications in the model [20], which decreased the variance inflation factor (VIF) of each research criterion. VIF measures how much the variance of an estimated regression coefficient is increased because of collinearity with the other independent variables. The literature on MC typically points to VIF scores of more than 4 as potential signs of multicollinearity problems, though this is only a rule of thumb [21]. No VIF score for the criterion scores was above 2.2 in any of the models.
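As an aside for readers who want to run the same diagnostic, the sketch below computes VIFs with statsmodels on simulated criterion scores; the correlation structure is assumed purely for illustration.

```python
# Hedged sketch of a VIF check on simulated, intercorrelated criterion scores.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(1)
n = 5000
base = rng.normal(0, 1, n)                    # shared component inducing correlation
scores = pd.DataFrame({
    "approach":     base + rng.normal(0, 0.8, n),
    "significance": base + rng.normal(0, 0.9, n),
    "innovation":   base + rng.normal(0, 1.0, n),
    "investigator": base + rng.normal(0, 1.1, n),
    "environment":  base + rng.normal(0, 1.2, n),
})

X = sm.add_constant(scores)                   # include the intercept when computing VIFs
vifs = {col: variance_inflation_factor(X.values, i)
        for i, col in enumerate(X.columns) if col != "const"}
print(vifs)                                   # values above ~4 are the usual warning sign
```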
The summary statistics revealed relatively large differences in Overall Impact scores and funding outcomes between applications with different characteristics, such as the difference between funding rates for new and renewal applications. Sub-model C, which controlled for different application characteristics simultaneously, still exhibited these large differences. However, the multivariate models which took into account the application's criterion scores explained many of the apparent differences in outcomes among different sorts of applications. One notable exception is the fact that ESI applications (and to a lesser extent other applications submitted by New Investigators) had a small advantage in the Impact model and a large advantage in the Funding model. This finding is reflective of NIH policy which strives to support new investigators on new R01-equivalent awards at success rates comparable to that of established investigators submitting new applications.
Consistent with the findings of Ginther et al. [11], the present study found large differences in NIH R01 funding rates by race in the absence of the measured influence of criterion scores. Criterion scores were introduced in FY 2010, and thus were not available for the applications evaluated by Ginther. Differences in outcomes by gender were also discovered in the summary data of the present study. These demographic differences diminished or disappeared once the criterion scores were included in the full models. However, bias cannot be ruled out, particularly in the first stage of peer review, where small but statistically significant differences remain in the Impact model. To ensure fairness, NIH is undertaking an extensive review of potential bias in the peer review system (see http://acd.od.nih.gov/prsub.htm). In contrast to the Impact model, the Funding model showed almost no differences in funding outcomes by demographics once all the measured characteristics of the application were taken into account.
Conclusion
The research criterion scores, specifically the Approach and, to a lesser extent, the Significance score, are the most important predictors of an R01 application's Overall Impact score and its likelihood of being funded. Other factors, such as the New Investigator status of the application, are associated, particularly with funding outcomes. But the model results show that the quality of the application, as measured by the criterion scores, is the best predictor of an application's eventual success. Applicants might consider these findings when submitting future R01 applications to NIH. | 2017-01-15T08:35:26.413Z | 2016-06-01T00:00:00.000 | {
"year": 2016,
"sha1": "4a90de461087ca54ed83b4afbfcc995df45e9707",
"oa_license": "CC0",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0155060&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4a90de461087ca54ed83b4afbfcc995df45e9707",
"s2fieldsofstudy": [
"Economics",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
17646347 | pes2o/s2orc | v3-fos-license | Three measures of physical rehabilitation effectiveness in elderly patients: a prospective, longitudinal, comparative analysis
Background Rehabilitation success is measured by instruments that assess performance of activities of daily living. Guidelines on the use and choice of these instruments are lacking. The present study aimed to analyse prognostic indicators of physical rehabilitation effectiveness in elderly patients according to three rehabilitation impact indices. Methods Prospective, longitudinal study in a post-acute care unit. The study included rehabilitation-eligible deconditioned elderly in-patients prospectively admitted to post-acute care (n = 685, aged 83.2 ± 8.3 years, mean length of stay 15 ± 9.2 days). Data Collection: Premorbid health status variables (PHSV): age, sex, comorbidity (Charlson index), medical history (heart failure, pulmonary disease, cerebrovascular disease, dementia), previous living situation and pre-admission functional status (premorbid Lawton and Barthel indices). Admission health status variables (AHSV): main diagnoses, referral source, physical (Barthel-adm) and cognitive function (Pfeiffer test), undernutrition and dysphagia. Outcome Measures: Absolute functional gain (AFG, admission-to-discharge Barthel change), relative functional gain (RFG, achieved percentage of potential gain) and rehabilitation efficiency index (REI, AFG over length of stay). Univariate analysis considered these parameters, along with PHSV and AHSV. Multivariate logistic regression analysis was performed for AFG ≥20, RFG ≥35 % and REI ≥ 0.50. Results Greater AFG was associated with 14 variables, 8 PHSV (57.1 %) and 6 AHSV (42.8 %); greater RFG with 9 variables, 3 PHSV (33.3 %) and 6 AHSV (66.6 %); and REI with 9 variables, 4 PHSV (44.4 %) and 5 AHSV (55.5 %). Mean AFG value was 34.5 ± 15.8 in patients who achieved complete recovery (RFG 100 %, n = 189, 27.5 %) and 35.3 ± 15.0 (p = 0.593) in the remaining patients (n = 311, 45.4 %). In multivariate analysis, only Barthel-adm was related to all three rehabilitation impact indices. Conclusions Both premorbid and acute-process variables have a greater impact on AFG and REI, compared to RFG. Although AFG gives information about the degree of reduction in dependence, it does not provide clinical information about post-rehabilitation functional status (mean AFG values did not differ between patients with and without complete recovery). A future implication for evaluating rehabilitation effectiveness in elderly patients is to recommend RFG corrected by premorbid Barthel score, which is less affected by previous health conditions, as the optimum method to assess the degree to which maximum potential improvement was achieved. Electronic supplementary material The online version of this article (doi:10.1186/s12877-015-0138-5) contains supplementary material, which is available to authorized users.
Introduction
Elderly patients admitted to hospitals, whether for acute or decompensated chronic disease, often present with loss of functional capacity that can lead to dependency and disability [1][2][3]. This complicates recovery from an acute episode, impeding return to their previous living situation and requiring physical rehabilitation and additional health care resources post-discharge [4][5][6][7][8]. Many predictors of functional recovery have been described in elderly patients [9,10], but these vary according to patient complexity and population characteristics, which may also affect functional improvement and the time required to achieve it. Rehabilitation success is measured by the scores achieved on instruments that assess performance of activities of daily living. Rehabilitation Impact Indices (RIIs) are derived from the scores on these instruments and incorporate functional status before admission, upon admission to rehabilitation, and at discharge, as well as the length of the rehabilitation programme.
One of the best-known RIIs is "absolute functional gain" (AFG) [9,11,12], which assesses the difference between admission and discharge functional scores. Other authors use "relative functional gain" (RFG) [9,10], also called the Heinemann index or Montebello Rehabilitation Factor score [9,11,13], which expresses functional recovery as a percentage of the maximum potential improvement. Recently, some authors have labelled this parameter more simply as "rehabilitation effectiveness" [14]. Maximum potential improvement is the return to an individual's premorbid functional score. In previously healthy subjects, this should coincide with the maximum score on the evaluation instrument; in elderly patients, premorbid status may not reach this maximum score. In these cases, a modified parameter has been used, and called "the RFG corrected for premorbid functional status" or the "corrected Heinemann index" [15]. Finally, the "rehabilitation efficiency index" (REI) expresses the average increase per day in a functional assessment score; this outcome considers speed of recovery [9,14,16,17].
In the elderly, previous functional status may reflect comorbidities or other prior conditions that affect health status. In these cases, the addition of one or more new disabilities associated with a new clinical condition further reduces the patient's functional status at the start of the rehabilitation. Therefore, RIIs that are calculated on the basis of the absolute functional values at the start of the rehabilitation programme, such as the AFG, may be more influenced by variables that affect previous health status. On the other hand, RIIs that can be calculated using recent loss of functional capacity, such as the RFG, may evaluate only the loss related to the acute process and therefore would be less affected by the patient's previous health status.
The specific aims of this study were to assess the application of these RIIs in hospitalized elderly patients with impaired functional capacity, to identify differences between the results obtained by the three RIIs, and to discuss ways to ensure their accurate and appropriate use.
Study population and setting
Patients aged 75 years and older, with recent loss of functional capacity due to acute disease, decompensated chronic disease, or surgical procedure were included. The interdisciplinary care team conducted a comprehensive geriatric assessment [18] for each patient within 72 h of admission and developed an individualized care plan.
Patients were eligible for referral to the rehabilitation programme if medically stable (absence of acute infection, absence of symptomatic worsening of chronic disease and absence of acute confusion) and judged able to participate in physical therapy, as indicated by initial comprehensive geriatric assessment performed by an interdisciplinary care team. A further inclusion criterion was the potential to achieve rehabilitation goals, manage medical conditions and develop a discharge plan in approximately two weeks, based on pre-admission geriatric assessment of the patient's clinical characteristics [4,6,8]. The exclusion criterion was absence of functional decline, based on the comprehensive assessment; these patients were admitted for transitional care and monitoring, completion of medical treatment, or medication management but were not referred for any specific rehabilitation therapy.
As part of the interdisciplinary care plan developed upon admission to the programme, a rehabilitation specialist prepared a plan to improve functional capacity, mobility, and independence in basic activities of daily living. Initially, rehabilitation involved in-bed therapy and active-assisted movement of peripheral and respiratory muscles, including isometric exercises, depending on patient tolerance and cooperation. As patients progressed, the programme included training on transfers and re-education on assisted gait. Finally, patients with highly favourable outcomes received training on stairs and a gait circuit. Patients were discharged on the basis of clinical course, and home-based physical therapy was considered when indicated.
Data collection
During the 18-month study period, 753 patients were admitted to the unit. All patients consented to study participation and received a comprehensive geriatric assessment. The standard geriatric assessment of each individual includes the recording of premorbid health status variables (PHSV), an initial group of variables related to health and/or functional status before the health incident that led to hospital admission, as well as admission health status variables (AHSV), which reflect the deterioration in health and/or functional status at the time of admission to the rehabilitation programme and therefore indirectly show the severity of the acute process that led to admission. The PHSV data include age, sex, comorbidity (Charlson index [19]), medical history (heart failure, pulmonary disease, cerebrovascular disease and dementia), previous living situation (alone, with family, assisted living centre) and functional status before hospital admission [instrumental and basic activities of daily living measured by the Lawton [20] and Barthel [21] indices (BI-premorbid)]. To add clinical meaning to the analysis of the BI-premorbid parameter, the present study divided the scores into four categories (0-20, 21-40, 41-60, and 61-100, from most to least dependent), as previously determined by Granger et al. [21].
BI-adm was assessed within 72 h of admission in all cases as part of the systematic comprehensive geriatric assessment. BI-premorbid was also routinely completed using anamnesis and patient interviews, confirmed by family and previous medical reports as needed. The index was again administered 24 h prior to discharge (BI-disch). All these measurements were recorded and discussed by the interdisciplinary team, as part of the daily clinical activity of our rehabilitation ward. Finally, discharge destination and social worker intervention (assessment, resource information for caregivers and discharge planning) were recorded.
Functional status and RIIs
The AFG score was calculated to reflect the BI change post-rehabilitation [9,11,12] (AFG = BI-disch - BI-adm). An AFG ≥20 was considered a clinically important difference [12]. The RFG corrected for premorbid functional status was calculated as follows: RFG = (BI-disch - BI-adm) / (BI-premorbid - BI-adm) × 100. RFG shows the percentage of the premorbid functional capacity recovered at discharge, in relation to what had been lost at admission to rehabilitation [9,11,15]. An RFG ≥35 % means that the patient has recovered at least one third of the functional loss observed; therefore, some authors have described this value as a point beyond which rehabilitation can be considered clinically effective [10,17,23].
The present study considered four patient groups based on the published RFG value and clinical criteria. Group I (RFG = 0) included patients who died, lost functional capacity while in the unit or were urgently transferred to hospital for an acute event or worsening clinical status; negative and undetermined RFG values were assigned to this group for purposes of analysis. Rehabilitation effectiveness was considered low to moderate in Group II (RFG 1-34 %), high in Group III (RFG 35-99 %), and complete recovery in Group IV (RFG 100 %) [17,23]. Finally, REI was calculated as follows: REI = (BI-disch - BI-adm) / days of stay. This index shows functional gain (BI points gained) per day [9,17]. A negative value indicates worsened functional status during the rehabilitation unit stay; 0 to 0.49 shows low rehabilitation efficiency; 0.50 to 1 reflects moderate rehabilitation efficiency; and >1 indicates high efficiency [17,[23][24][25]. An REI ≥0.50 was considered a clinically important difference [17].
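The formulas above translate directly into a small computation. The sketch below is our own illustration (not part of the study protocol); it applies the definitions, cut-offs, and group labels given in this section to a single patient record, and the function and variable names are our own.

# Illustrative sketch of the three rehabilitation impact indices defined above.
def rehabilitation_indices(bi_premorbid, bi_adm, bi_disch, length_of_stay_days):
    afg = bi_disch - bi_adm                                   # absolute functional gain
    potential = bi_premorbid - bi_adm                         # maximum recoverable loss
    rfg = (afg / potential) * 100 if potential > 0 else 0.0   # corrected relative functional gain
    rfg = max(rfg, 0.0)                                       # negative/undetermined values set to 0 (Group I)
    rei = afg / length_of_stay_days                           # rehabilitation efficiency index

    if rfg == 0:
        group = "I (no effectiveness)"
    elif rfg < 35:
        group = "II (low to moderate)"
    elif rfg < 100:
        group = "III (high)"
    else:
        group = "IV (complete recovery)"

    return {"AFG": afg, "RFG_percent": rfg, "REI": rei, "RFG_group": group,
            "AFG_clinically_important": afg >= 20,
            "REI_clinically_important": rei >= 0.50}

# Example: BI-premorbid 80, BI-adm 30, BI-disch 60, 15-day stay
# gives AFG = 30, RFG = 60 %, REI = 2.0, Group III.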
Statistical analysis
Univariate analysis was used to find associations between patient characteristics and results on the three different RIIs. Qualitative variables were compared by the Chi-square or Fisher exact test, as appropriate, and quantitative variables by the Student's t-test or single-factor analysis of variance (ANOVA) with Tukey multiple comparison correction. Finally, variables significantly associated with each RII were included in a binary multivariate logistic regression to obtain an optimal model for predicting recovery, based on the cut-off value established for each dependent variable. Given that quantitative variables behave differently according to the dependent variable analysed, different cut-off values were compared and tested with the dependent variables (AFG ≥20, RFG ≥35 % and REI ≥0.50, respectively) in order to assess linearity or to find the best cut-off points before including them in the model. For each model, odds ratios (OR) with 95 % confidence intervals (CI), discrimination power and calibration were calculated. Analyses were done using SPSS 18.0 (IBM Corporation, SPSS, INC., Chicago, IL, USA).
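As an illustration of the modelling step described above (the original analyses were run in SPSS), the following Python sketch fits one of the binary logistic models and reports odds ratios with 95 % confidence intervals; the predictor column names are hypothetical placeholders, not the study's variable names.

# Illustrative sketch only: logistic regression for one dependent variable (AFG >= 20).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# df is assumed to contain the candidate predictors retained from the univariate step,
# plus a binary outcome, e.g. df["afg_ge_20"] = (df["bi_disch"] - df["bi_adm"] >= 20).astype(int)
formula = "afg_ge_20 ~ charlson_le_3 + no_dysphagia + lawton + bi_adm_11_50"

def odds_ratio_table(df, formula):
    fit = smf.logit(formula, data=df).fit(disp=0)
    table = pd.DataFrame({"OR": np.exp(fit.params),
                          "CI_low": np.exp(fit.conf_int()[0]),
                          "CI_high": np.exp(fit.conf_int()[1]),
                          "p": fit.pvalues})
    return table.drop(index="Intercept")

# print(odds_ratio_table(df, formula))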
Ethics
National and international research ethics guidelines were followed, including the Deontological Code of Ethics, Declaration of Helsinki (Fortaleza 2013) and Spain's confidentiality law concerning personal data (Ley Orgánica 15/1999, 13 December). Detailed, understandable information was provided to patients and family members, and oral informed consent to participate was obtained from all potential participants during their hospital stay, before beginning the post-acute unit's comprehensive geriatric assessment. In patients with dementia, oral informed consent was obtained from the main caregiver. The institution's Clinical Ethics Committee approved the study and the informed consent process used (Comité Etico de Investigación Clínica Parc de Salut Mar: reference number 6370/I).
STROBE Guidelines for reporting observational research cohorts were followed [26] (Additional file 1).
Results
After completion of the comprehensive assessment, 68 patients were excluded from analysis because there was no evidence of functional decline. The characteristics of the 685 included patients are shown in Table 1.
At discharge (Table 1), patients had achieved major improvements in functional capacity (BI-adm = 28.00 ± 19.90; BI-disch = 57.80 ± 27.31), with a mean AFG increase of 25 BI points and a mean value of 61.7 % on the RFG and 2 points on the REI. At discharge, 21.2 % of patients were classified as having severe dependency; it should be noted that patients who died, were rehospitalized due to complications or were discharged to long-term care facilities were included in this group. Table 2 shows the patient groups by RFG and the respective AFG and REI values. Mean AFG and REI values differed significantly between the four patient groups, classified according to the level of change achieved in the RFG; the exception was the AFG comparison in Groups III and IV, which showed similar mean values.
As summarized in Table 3 by RII and in Table 4 by primary diagnosis, when effectiveness of the rehabilitation process was evaluated using the AFG, 14 variables were associated with greater AFG, eight of them (57.1 %) from the PHSV group (age < 85 y, absence of previous history of heart failure, absence of previous history of pulmonary disease, absence of previous history of dementia, absence of comorbidity, higher premorbid Lawton and BI scores, living alone) and six (42.8 %) from the AHSV group (referral source other than the acute care geriatric unit, lower score in BI-adm, better cognitive function, absence of dysphagia, absence of respiratory diagnosis, main diagnosis of endocrinopathy at admission). When increased RFG values were used to evaluate effectiveness, nine variables were significantly related, three of them (33.3 %) from the PHSV group (age under 85 y, absence of previous history of dementia, higher scores in previous Lawton index) and six (66.6 %) from the AHSV group (higher BI-adm scores, better cognitive function, absence of dysphagia, absence of undernutrition, absence of respiratory diagnosis, main diagnosis of endocrinopathy at admission). Finally, the highest REI values were also associated with nine variables, four (44.4 %) from the PHSV group (absence of previous history of dementia, absence of comorbidity, higher BI-premorbid scores, living alone) and five (55.5 %) from the AHSV group (BI-adm score 41-60, absence of dysphagia, absence of undernutrition, absence of respiratory diagnosis, main diagnosis of endocrinopathy at admission). As shown in Tables 3 and 4, only five of the total 24 variables studied (20.8 %) were related to all three RIIs (history of dementia, respiratory disease, dysphagia, main diagnosis of endocrinopathy, functional capacity at admission).
Differences between groups for each of the three RIIs are shown in Table 3. On the AFG, the BI-adm 61-100 group had a significantly lower mean score than the other three groups; on the RFG, the BI-adm 0-20 group differed significantly from the other three, and on the REI the only significant difference was observed between the BI-adm 0-20 and 41-60 groups.
In multivariate analysis, four variables were related to achieving a minimal clinically important difference in AFG (≥20): Charlson index ≤3, absence of dysphagia, instrumental activities of daily living (Lawton index) and a BI-adm score between 11 and 50. In patients with a BI-adm score >50, a ceiling effect was observed. These variables span both the PHSV and AHSV groups. In multivariate results, absence of a respiratory diagnosis and a BI-adm score >11 were associated with a higher probability of high, or even complete, functional recovery (RFG ≥35 %), and the same two variables, together with the absence of dysphagia, were associated with a clinically important difference in REI (≥0.50). Therefore, BI-adm was the only variable related to all three RIIs (Table 5).
Discussion and conclusions
Although prognostic factors affecting rehabilitation outcomes and RII scores are known and have been studied by others, clear guidelines on the use and choice of the available RIIs are lacking. The novelty of the present study is its focus on the indications for using these indices in geriatric patients. We would highlight the different prognostic factors identified in the three different RIIs studied, and specifically, their advantages and limitations in elderly patients.
All three RIIs were useful to evaluate changes in functional capacity; however, each uses different formulas that can be affected by certain variables but not by others (Tables 3 and 4). The AFG was associated with a high number of variables in both the PHSV and AHSV groups; most of these associations have been reported by other authors [9,24,25]. In addition, functional capacity at admission was inversely associated with AFG (Table 3). In other words, less functional capacity was related to higher AFG, and vice versa. This must be taken into account when analysing rehabilitation outcomes in patients with slight or moderate disability because, in these cases, the AFG is limited by the maximum possible score on the scale that is used (100 points in the case of the BI). For example, the maximum AFG that a patient with a BI-adm value of 80 points can achieve is 20 (the ceiling effect of a 100-point scale). Therefore, the use of AFG as the only parameter of rehabilitation effectiveness in patients with light-moderate dependency could underestimate the results [9,10,27]. Given that the BI is not a continuous interval scale, AFG calculation using this index can be affected because a change in score does not have a consistent meaning in all patients. A 20-point AFG score is excellent for a patient who had lost 20 points prior to hospital admission; it represents 100 % recovery. However, it could be a disastrous outcome for a person who had lost 80 points from their BI-premorbid score. On the other hand, AFG only reflects an increase in "points on the scale" but does not provide any information about the maximum possible score on that scale; therefore, it is impossible to know whether the patient has achieved the maximum possible improvement. Therefore, some authors have suggested that AFG only indicates rehabilitation effectiveness in "reducing the level of dependency" [25].
Finally, it is difficult to establish the level of change in BI scores (AFG) that can be considered clinically relevant. Some authors have shown that a 20-point increase in the AFG has a favourable prognostic effect on long-term functional capacity and survival [12]; this limit has been used in other studies (including our own) as the threshold for clinically significant change [12,[23][24][25]. For the REI, according to other authors [17], ≥0.50 points can be considered clinically significant.
The RFG was significantly related to more factors from the AHSV group than from the PHSV group, perhaps because the RFG minimizes the effect of prior conditions on rehabilitation outcome due to its reliance on previous functional capacity as the maximum potential to be reached. This reduces "the goal" to the level of the patient's approximate situation before the hospital admission, independent of medical history, comorbidities or previous level of independence. At the same time, RFG reflects only the functional capacity that has been lost recently, due to the acute process that led to hospitalization, without taking into account whether disability existed before hospital admission. In other words, it shows the portion of disability that is potentially reversible. On the other hand, the RFG has no "ceiling effect"; AFG values decrease as BI-adm increases, while RFG increases with BI-adm (Table 3). Finally, patients in Group IV, who achieved complete recovery (RFG = 100 %), had the same mean AFG values as others (Group III) who did not reach the same level of recovery but did recover more than one third of their lost functional capacity (RFG = 35-99 %) (Table 2). This suggests a lack of information from the AFG about the extent to which the patient fully recovers premorbid functional status. Taking all of these considerations into account, we conclude that RFG contributes more valuable, and more qualitative, information for evaluating the rehabilitation process in elderly patients, who often present with chronic diseases and prior disability.
Our study has several limitations that should be taken into account. An RFG score ≥35 % has been described as a threshold of clinical rehabilitation effectiveness [10,17,23], and no stratification above this threshold has been published to date. In our sample, however, most of the patients (72.9 %) achieved a higher RFG, and a substantial percentage (27.5 %) scored 100 %; therefore, the authors followed clinical criteria to establish the RFG score groups in the present sample. As this stratification had not been previously described and could lead to non-homogeneous groups, this could be considered a limitation of our study. In addition, RFG values can be difficult to compare between studies because they vary according to whether the calculation used the theoretical maximum score (100 points) or the actual (individualized) BI-premorbid score. Obviously, this has the potential to generate confusion in attempts to generalize the use of this parameter and may be a confounding factor for comparative analysis.
Mathematically, it is debatable whether a variable (BI-adm) already included in the formula of the same outcome variable (RFG) can be considered as a predictor. Nonetheless, we must take into account the RFG's inclusion of the BI-disch score in addition to BI-adm and BI-premorbid; inclusion of BI-adm as an independent variable was based on a reasonable suspicion (supported by the modelling) that a patient's capacity for recovery could be influenced by the functional status at admission. It is logical to think that patients in a worse initial functional state would have less capacity for recovery. For this reason, we included the BI-adm in the model as an independent variable. A new assessment after functional capacity has improved could be of interest to test the power of these RII scores in comparison with other performance scales, such as the Tinetti score, Short Physical Performance Battery or Gait Speed. REI was related to nine variables, four from the PHSV group and five from the AHSV group. In the present study, mean REI was 2.0, which is higher than that reported in other similar settings [16,17,24,28]. A possible explanation is the low prevalence of neurological diagnoses in our study population (Table 1). It has been demonstrated that patients with neurological disorders have lower REI values than patients with orthogeriatric or other diagnoses [9,28]. In addition, REI is clearly influenced by length of stay, which was shorter in the present study than in others [24,25]; this could be due to the inclusion criterion requiring that participants meet conditions amenable to discharge within two weeks; these individuals may have had a better prognosis than other study populations. Another aspect that could have contributed to the high REI observed is that the functional status of patients admitted to an acute care unit may be underestimated because of barriers such as catheters, intravenous lines, and bed-rails that are prevalent in conventional hospitalization. In the first days after patients are transferred to a geriatric rehabilitation unit, rapid functional improvement tends to occur as hospital care can be combined with integrated care designed to promote autonomy and minimize dependency (common dining, social spaces, etc.). In the hospital setting, the REI value is highly dependent on the length of the hospital stay. This must be taken into account because length of stay can be influenced by comorbid conditions that may interrupt rehabilitation [16,29] and by individual variables (social, personal, or family issues) that are not directly related to rehabilitation therapy [25,28]. To avoid this potentially confounding factor, some authors have suggested that REI should be calculated on the basis of the entire period during which the patient receives rehabilitation therapy. Using this approach, the denominator is the number of days of rehabilitation therapy, from start to finish, regardless of where it is provided [9]. In our sample, BI-adm had a mean value of 28 points, similar to that obtained by other authors in similar settings [24,25,29].
As shown in Table 5, PHSV (comorbidity and Lawton index) were related only to AFG and not to the rest of the RIIs. Similarly, two of the AHSV variables (absence of dysphagia and absence of respiratory diagnosis) were related to both AFG and REI in multivariate analysis. Evidence of the AFG ceiling effect was obtained in the BI-adm >50 group. Only two variables were predictors of functional recovery (RFG ≥35 %): functional capacity at admission and the absence of a diagnosis of respiratory disease (both from the AHSV group). Only BI-adm was related to all three RIIs. The BI cut-off values that had predictive value reflect severe dependency. This may be due to the study population, which was severely incapacitated upon admission but had a better prognosis than populations in other studies, which would reduce the cut-off values used to discriminate worse functional prognosis (BI ≤10). A large proportion of the patients came from an acute care geriatric unit, where respiratory diagnoses are very frequent; a poor functional prognosis for these patients has been described [30], which supports our observation of a better functional recovery in the absence of respiratory disease. Finally, we would note that AFG was the parameter that was correlated with the greatest number of prognostic variables, but has the disadvantage of a ceiling effect and also does not provide evidence of the final functional outcome of rehabilitation therapy. REI is highly conditioned by the mean length of stay, which could be influenced by multiple factors that do not depend on the rehabilitation process. RFG seems to be more associated with variables that reflect health status at admission (severity of the recent acute process) and less affected by previous health status. Also, RFG provides more precise information about the degree to which a patient returns to his or her premorbid status. A future implication of the present study is that these considerations should be taken into account when selecting parameters to determine rehabilitation effectiveness in elderly patients. | 2017-06-18T00:48:49.558Z | 2015-10-29T00:00:00.000 | {
"year": 2015,
"sha1": "91889beb736764f64a13401c0ef5ed3494a9b7ea",
"oa_license": "CCBY",
"oa_url": "https://bmcgeriatr.biomedcentral.com/track/pdf/10.1186/s12877-015-0138-5",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0c4bd6f1ca37f5e5f99ccb7eb400f8c74de26ad8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
267543595 | pes2o/s2orc | v3-fos-license | Management of Hepatocellular Carcinoma in 2024: The Multidisciplinary Paradigm in an Evolving Treatment Landscape
Simple Summary Hepatocellular carcinoma (HCC) is a highly aggressive malignancy with global impact, especially in the context of the rising epidemic of metabolic-dysfunction-associated steatotic liver disease and alcohol-related liver disease. The treatment landscape for HCC is evolving and has changed significantly in the last few years with new treatment options for patients, while the multidisciplinary team model of care remains critical. This review aims to summarize the various treatment options for patients at all stages of HCC, highlighting the growing array of systemic therapies with multi-modal options, including the potential of combining locoregional therapies with immunotherapy across different stages of the disease. With the increasing recognition of the importance of patient-centered care, the future paradigm of HCC care includes the incorporation of non-hospice palliative care in the multidisciplinary team care model to improve patient quality of care, quality of life, and overall outcomes. Abstract Liver cancer is the third most common cause of cancer-related deaths worldwide, and hepatocellular carcinoma (HCC) makes up the majority of liver cancer cases. Despite the stabilization of incidence rates in recent years due to effective viral hepatitis treatments, as well as improved outcomes from early detection and treatment advances, the burden of HCC is anticipated to rise again due to increasing rates of metabolic dysfunction-associated steatotic liver disease and alcohol-related liver disease. The treatment landscape is evolving and requires a multidisciplinary approach, often involving multi-modal treatments that include surgical resection, transplantation, local regional therapies, and systemic treatments. The optimal approach to the care of the HCC patient requires a multidisciplinary team involving hepatology, medical oncology, diagnostic and interventional radiology, radiation oncology, and surgery. In order to determine which approach is best, an individualized treatment plan should consider the patient’s liver function, functional status, comorbidities, cancer stage, and preferences. In this review, we provide an overview of the current treatment options and key trials that have revolutionized the management of HCC. We also discuss evolving treatment paradigms for the future.
Introduction
Liver cancer is the sixth most common cancer, but its aggressive nature and poor prognosis raise it to the third highest cause of cancer-related deaths [1]. Hepatocellular carcinoma (HCC) makes up approximately 75% of primary liver cancer cases, with a smaller proportion due to cholangiocarcinoma. The global burden of HCC is highest in Asia and sub-Saharan Africa due to the high prevalence of chronic hepatitis B (HBV) in those regions. Men are more commonly affected and have a higher mortality than women, with liver cancer being the leading cause of cancer death among men in over 20 countries [1].
Most patients develop HCC in the setting of cirrhosis, and the risk factors for HCC include HBV, hepatitis C (HCV), alcohol-related liver disease (ALD), and excess body weight and type 2 diabetes, which is often associated with metabolic-dysfunction-associated steatotic liver disease (MASLD), as well as aflatoxin exposure. The prevalence of various risk factors for HCC is region-specific, with viral causes being more common in the East and non-viral etiologies more common in the West. In Japan and Egypt, HCV is the primary driver of HCC [2,3]. In Asia and Africa, chronic HBV, a primary cause of HCC, occurs either through vertical transmission perinatally or horizontal transmission (exposure to infected blood from child to child, unsafe medical and injection practices, or unscreened blood transfusion). Due to HBV vaccination programs and hepatitis treatment options, a decline in the incidence of HCC has been noted in the East in recent years [4]. Despite this, the burden of HCC is anticipated to rise again due to the increasing rates of MASLD and ALD [5].
The treatment landscape for HCC is evolving and requires a multidisciplinary approach, with input from hepatologists, hepatobiliary/transplant surgeons, medical oncologists, diagnostic and interventional radiology, and radiation oncology. In addition, the early incorporation of a palliative care team in the treatment of HCC patients is increasingly recognized as an important component of patient-centered care and improving patient quality of care (Figure 1). This multidisciplinary approach is important as HCC management is not a "cookie cutter" process or a "one size fits all" solution. This will involve a management approach that is individualized and customized, taking into consideration multiple patient factors, including the state of the liver (cirrhosis vs. no cirrhosis) and liver function (compensated or decompensated), the size, location, and extent of the cancer, any co-morbidities, the functional status of the patient, and patient preferences. The Barcelona Clinic Liver Cancer (BCLC) staging system takes many of these factors into consideration (Figure 2). Liver cancer tumor biology is also heterogeneous, with some tumors exhibiting more indolent behavior and others being more aggressive. Diagnosis and management require a collaborative team approach, with treatment customized for each patient. This personalized strategy ensures the best treatment options within the setting of their chronic liver disease to optimize outcomes. Given the complexity of care for these patients, crossing multiple disciplines, it is not surprising that outcomes are improved with discussions at a multidisciplinary liver tumor board [6,7]. Patients treated by a multidisciplinary tumor board, ideally with all specialists in a co-located clinic, are diagnosed at earlier stages, have decreased times to treatment, higher rates of therapy receipt, increased access to curative treatments, and improved overall survival. Moreover, patient satisfaction is improved [8][9][10].
Treatment can be divided between curative and non-curative approaches. For patients with localized HCC without cirrhosis or with cirrhosis but without clinically significant portal hypertension, resection is the recommended approach; however, recurrence rates are high in this setting, with a 50-70% recurrence after 5 years [11][12][13]. Liver transplantation (LT) is considered for non-resectable patients due to liver dysfunction, portal hypertension, or multi-tumor involvement. Locoregional therapies (LRTs) are part of the treatment armamentarium for patients with intermediate-stage disease but can also be used as a bridge to transplantation in early-stage disease. Patients with advanced or extrahepatic disease should be considered for the various systemic therapy options available.
Best supportive care, including consideration of the incorporation of palliative care, is the preferred option for patients with poor performance status or decline in liver function regardless of whether the treatment goals are curative or non-curative/palliative. The integration of palliative care (including non-hospice care) is particularly important given the complexity and challenges of the dual diagnosis of HCC and cirrhosis in this patient population. This patient group can bear considerable physical (ascites, variceal bleed, hepatic encephalopathy, sarcopenia, and frailty), psychosocial, and financial burdens as well as potential stigmatization. Therefore, particular emphasis is placed on both multidisciplinary and interdisciplinary care in this patient population.
The purpose of this review is to provide an overview of the current treatments of HCC with a focus on patient selection and outcomes for each treatment option. The upcoming changes in the paradigm of HCC care will also be reviewed, including the advancement of systemic therapy and its use in a multi-therapy/multi-modal approach.
HCC Prevention and Surveillance
Treatment with antiviral therapy for both HBV and HCV significantly decreases the risk of HCC in patients with or without cirrhosis [14]. HBV vaccination for the prevention of chronic HBV infection has also been shown to reduce the risk of HCC. For MASLD, other than controlling modifiable risk factors such as obesity and type 2 diabetes/insulin resistance, there are no clearly effective HCC prevention interventions available at this time [15]. However, a recent meta-analysis of nine studies assessed the incidence and risk of HCC following bariatric surgery [16]. The pooled rate per 1000 person-years was 0.05 (95% CI: 0.02-0.07) in bariatric surgery patients and 0.34 (95% CI: 0.20-0.49) in the control group, while the incidence rate ratio was 0.28 (95% CI: 0.18-0.42). In addition to providing durable weight loss, bariatric surgery may also be associated with a decreased risk of HCC.
HCC surveillance is targeted toward populations that are considered at-risk. The American Association for the Study of Liver Diseases (AASLD) Practice Guidance recommends that all patients with cirrhosis of any etiology and non-cirrhotic chronic HBV undergo surveillance every 6 months [17]. For HCV with stage 3 fibrosis or non-cirrhotic MASLD, the annual incidence of HCC is < 0.2%; thus, there is insufficient risk to warrant regular surveillance at this time for this patient group. The standard approach to HCC surveillance as recommended by the AASLD is abdominal ultrasound with AFP at 6-month intervals. If imaging with abdominal ultrasound is suboptimal, contrast-enhanced imaging with MRI is recommended.
The implementation of surveillance programs involving standardized screening protocols, recall procedures, and quality control measures is essential to decrease HCC-related deaths. HCC surveillance is associated with early diagnosis and improved survival; however, only approximately 20% of cirrhotic patients undergo semi-annual surveillance. The underuse of surveillance in clinical practice remains an ongoing challenge, particularly in patients with ALD and MASLD-related cirrhosis and patients not seen regularly by gastroenterologists [18,19].
Studies have evaluated both patient-reported barriers and primary care provider practice patterns and barriers regarding HCC surveillance. Lack of knowledge, financial limitations, scheduling difficulties, and transportation issues are some of the patient-reported barriers that are significantly associated with less frequent receipt of HCC surveillance, while primary care providers have reported misconceptions about their knowledge of surveillance [20]. Both patient-centered interventions and provider education are needed to improve HCC surveillance in clinical practice. Further discussion of HCC prevention and surveillance is beyond the scope of this review.
Resection
Patients are candidates for resection if they do not have cirrhosis or have BCLC 0/Child-Pugh (CP) A cirrhosis without portal hypertension (Table 1). Resection with partial hepatectomy is potentially curative with a solitary tumor of any size with no evidence of gross vascular invasion. To be considered for resection, a patient needs to have an appropriate tumor location and adequate liver reserve and liver remnant. Given the relatively restrictive criteria, only about one-third of patients evaluated for resection will be able to undergo curative-intent surgery [21]. Minimally invasive techniques, involving both laparoscopic and robotic approaches, have become the standard surgical approach as they allow for more robust liver remnants, less surgical risk and complications, and faster post-surgical recovery. Portal vein embolization is an additional technique to allow more patients access to surgery. Portal vein embolization impedes blood flow to the part of the liver to be resected and redirects portal blood flow to the non-tumor-bearing liver, thus inducing hypertrophy of the future liver remnant. Survival rates for liver resection in well-selected patients are very encouraging, with approximately 70% at 5 years [11,13]. Even in patients with large tumors, resection still has the potential for cure, albeit with reduced survival rates compared to smaller tumors [12]. However, recurrence rates approach 50-70% at five years [11][12][13]. Most recurrences occur in the first two years, but there is a bimodal distribution with a second peak between years 4 and 5 [27]. The presence of vascular invasion and/or multifocal disease places patients at an even higher risk of recurrence; therefore, resection is not recommended in these cases.
In contrast to guidelines from the AASLD and the European Association for the Study of the Liver, the Eastern and Italian multisociety guidelines recommend consideration of resection in well-selected cirrhotic patients with good liver function who have oligonodular HCC (2-3 nodules). They advise that such cases receive extensive review by the multidisciplinary board and may be considered on a case-by-case basis [28]. This is based on one randomized controlled trial and multiple observational studies. This randomized controlled trial showed a longer survival rate after liver resection in patients outside of the Milan criteria compared to TACE at 1 year (76% vs. 52%) and at 5 years (51% vs. 18%) [29].
In addition, for patients with macrovascular invasion (MVI), while this is generally considered a contraindication to resection by the AASLD and EASL guidelines, the Eastern and Italian multisociety guidelines would again consider liver resection in selected patients [28]. This is based on studies showing postoperative mortality rates of 3-6% and 3- and 5-year survival rates of 17-49% and 10-30%, respectively [30][31][32][33]. The Italian society guidelines note that the site of the portal MVI matters, with more peripheral branch involvement associated with better prognosis, and that a survival advantage after surgery compared to nonsurgical approaches has been reported only in patients with MVI that does not extend to the portal trunk [32,34,35].
Transplant
For patients who are not resection candidates, LT should be considered if the tumor is within the Milan criteria or United Network for Organ Sharing (UNOS) stage T2. LT has both a high survival rate and a low recurrence rate, making it an ideal option for patients who are candidates. In addition to potentially curing the HCC, it also addresses the underlying liver disease, which is often the driver of mortality. The recurrence rates after LT are around 10%, which is much lower than with resection or ablation [25]. The Milan criteria are the most widely accepted criteria to determine if a patient should be considered for transplant for HCC and are defined by one lesion measuring between 2 cm and 5 cm, or up to 3 lesions, none greater than 3 cm, with no vascular involvement or extrahepatic spread. When patients within these parameters receive a transplant, their outcomes are similar to those of patients who are transplanted for non-malignant reasons [26,36]. In addition, consideration for LT requires the biomarker AFP to be <1000 ng/mL. Per UNOS policy, patients with AFP ≥ 1000 ng/mL are not eligible for MELD exception points and will require LRT with a decrease in AFP to <500 ng/mL.
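For illustration only (not a clinical decision tool), the rule set summarized above can be written as a simple check. The thresholds below follow the Milan criteria and the UNOS AFP policy as stated in this section; the function and class names are our own.

# Minimal sketch of the Milan criteria and AFP rule as summarized above (illustrative only).
from dataclasses import dataclass
from typing import List

@dataclass
class HCCCase:
    lesion_sizes_cm: List[float]
    vascular_invasion: bool
    extrahepatic_spread: bool
    afp_ng_ml: float

def within_milan(case: HCCCase) -> bool:
    if case.vascular_invasion or case.extrahepatic_spread:
        return False
    n = len(case.lesion_sizes_cm)
    if n == 1:
        return 2.0 <= case.lesion_sizes_cm[0] <= 5.0   # single lesion between 2 and 5 cm
    if 2 <= n <= 3:
        return all(size <= 3.0 for size in case.lesion_sizes_cm)  # up to 3 lesions, none >3 cm
    return False

def meld_exception_eligible(case: HCCCase) -> bool:
    # Per the text: AFP must be <1000 ng/mL for MELD exception points.
    return within_milan(case) and case.afp_ng_ml < 1000

# Example: a single 3.2 cm lesion, no vascular invasion or spread, AFP 40 ng/mL
# is within the Milan criteria and eligible under this simplified check.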
If the patient is not within the Milan criteria, various LRTs can be used to downstage and bridge a patient to transplantation. The UNOS downstaging protocol (UNOS-DS), which includes patients whose total sum diameter is up to 8 cm, allows patients to gain MELD (Model for End-Stage Liver Disease) exception points if the patient can be downstaged with liver-directed therapies [37]. Most patients can be downstaged from the UNOS-DS criteria, and their survival and recurrence rates are excellent [38,39]. In a large cohort study of transplant recipients, 10-year recurrence rates were slightly higher for patients who were downstaged compared to patients who were initially within the Milan criteria (20.6% vs. 13%), but lower compared to patients who were not downstaged at all (41%) [40]. For patients who can be downstaged, the 5-year overall survival (OS) is greater with LT compared to locoregional or systemic therapies (77.5% vs. 31.2%); therefore, LT is preferred if it is an option [41].
MELD exception points for transplant are only awarded if an HCC lesion is at least 2 cm in size. For patients with cirrhosis but with a lesion < 2 cm in size, the recommendation is close observation initially. Once the tumor reaches 2 cm, and is therefore eligible for MELD exception points, the patient can undergo LRT while undergoing transplant evaluation or during the waiting period on the transplant list.
Ablation
Ablation alone with radiofrequency ablation (RFA) or microwave ablation (MWA) may be considered a potentially curative treatment for HCC lesions up to 3 cm when LT and resection are contraindicated. With low complication rates and cost-effectiveness, ablation is associated with survival outcomes similar to resection for small tumors, making it an attractive treatment option [22,42,43]. Lesions less than 3 cm in size are ideal since the effectiveness and survival after ablation are inversely proportional to size, with a significant difference in survival when using a cutoff of 3 cm [44]. The NCCN guidelines reserve ablation for patients who are not surgical candidates. The AASLD recommends ablation as an alternative option to surgery if patients have very early-stage HCC (BCLC-0)/UNOS stage T1 and transplant is not being considered [17].
Evaluation of tumor location is key for thermal ablation. Treatment with ablation can be approached percutaneously or laparoscopically. With the percutaneous approach, the lesion needs to be easily accessible for image-guided placement of the ablation probes. Proximity to major blood vessels and major bile ducts should be avoided given the heat sink effect, in which an incomplete treatment occurs due to the cooling effect of major vessels. Dome lesions or those close to a main bile duct are also not ideal for ablation as the heat can cause thermal injury to the diaphragm or bile ducts. Ablation is not as effective for larger tumors due to the need for adequate margins, and, therefore, is only performed if a tumor is less than 3 cm in size with three or fewer separate tumors.
Locoregional Therapy for Downstaging and Bridge to Transplant
For patients undergoing LT, LRT with ablation, Yttrium-90 radioembolization (Y90), transarterial chemoembolization (TACE), or stereotactic body radiation therapy (SBRT) should be performed to treat the HCC lesion as a bridge to transplantation. The choice of LRT depends on the location, size, and number of HCC lesions. Ablative therapies, as previously mentioned, are commonly used for smaller lesions less than 3 cm in size that are not located close to other organs, major vessels, or bile ducts. TACE provides a two-fold therapeutic approach: the first is arterial blockade, which reduces or eliminates blood flow to the tumor, thus causing tumor ischemia and tumor necrosis. The second is the administration of a highly concentrated dose of chemotherapy to the lesion. Y90 involves the passage of a catheter through the hepatic artery, localized to the area of the tumor, where Y90 microspheres are released. These microspheres then slowly emit radiation into the tumor. All arterially directed therapies are relatively contraindicated in patients with bilirubin ≥ 3 mg/dL due to the risk of hepatotoxicity and liver decompensation. LRT is relatively contraindicated in patients with higher CP B scores and CP C disease, with some case-by-case exceptions. SBRT has the advantage of treating small lesions, especially in "difficult-to-reach" locations, and can be used for "difficult-to-treat" lesions when TACE or Y90 have not been effective. Moreover, SBRT is able to accurately deliver a focused high-dose treatment to the targeted tumor, thus minimizing toxicity to normal surrounding organ structures. Although TACE has historically been the primary treatment of choice for stage BCLC B/intermediate HCC, Y90 has become an accepted alternative therapeutic option. The decision on which intra-arterial-directed therapy to use will depend on center expertise and access. The goals of treatment are palliative, focusing on tumor control while maintaining quality of life and minimizing treatment-related toxicity.
TACE
TACE is recommended as first-line therapy for BCLC B/intermediate HCC in patients without vascular involvement. Improvements in OS have been clearly demonstrated in a meta-analysis of randomized controlled trials comparing TACE and best supportive care [45,46]. Liver tumors receive the majority of their blood supply from the hepatic artery. Both TACE and Y90 take advantage of the neovascularization of tumors and deliver chemotherapy or radiation treatment directly to the cancer by isolating the supplying hepatic artery using interventional radiology techniques.
TACE involves a two-step approach of intra-arterial injection of cytotoxic drugs to the cancer with subsequent embolization using an embolic agent that cuts off blood supply to the cancer, resulting in tumor necrosis. This conventional approach typically involves the use of doxorubicin or cisplatin emulsified in lipiodol (an oil-based radio-opaque contrast agent used as both a chemotherapeutic carrier and an embolic agent). By delivering chemotherapeutics directly to the tumor, this method delivers higher concentrations to the tumor without systemic toxicities. Newer techniques involving the administration of drug-eluting beads (DEB) (embolic microspheres containing cytotoxic drugs) allow the beads to be directed into the hepatic artery, providing more sustained high drug concentrations at the tumor bed. Multiple randomized controlled trials comparing the OS, efficacy, and safety of conventional TACE with DEB-TACE have not shown any significant differences between the two techniques [47][48][49].
If the portal vein (PV) is compromised by a thrombus, then the liver becomes more dependent on the hepatic artery, and there can be a risk of hepatic infarction and liver failure. Therefore, Y90 is generally preferred when PV thrombus is present. Even with a patent PV, TACE can cause some hepatic injury and induce liver decompensation. For this reason, the NCCN has advised that bilirubin above 3 mg/dL is a relative contraindication to TACE [50]. Due to immediate side effects related to the procedure, an overnight stay in the hospital is often necessary to monitor for post-embolization syndrome, which includes fever, pain, and nausea.
Treatment with TACE can be performed more than once. However, treatment is deemed a TACE failure, or refractory, when (1) the tumor lacks an objective response post-treatment, with >50% viable disease after two TACE sessions; (2) new HCC has developed within the treatment zone after two TACE sessions; (3) AFP has not shown improvement despite two TACE sessions; or (4) there is progression of HCC with advancement of HCC staging, such as with vascular invasion or extrahepatic metastases [17]. Once patients are deemed as having TACE treatment failure, other alternative treatments should be considered, including systemic therapy.
Y90
Transarterial radioembolization is also performed by an interventional radiologist, but instead of chemotherapy, a radioactive isotope, Yttrium-90, is delivered intra-arterially. This procedure is typically performed in one session, but an initial mapping session is required to quantify the amount of hepatopulmonary shunting and gastroduodenal reflux. If excessive hepatopulmonary shunting or gastroduodenal reflux is present, there is a risk of radiation pneumonitis or gastric ulceration, and the procedure is contraindicated [51]. With the presence of PV thrombus, in contrast to TACE, Y90 has minimal embolic effect with a low risk of hepatic ischemia and therefore can be safely delivered [52]. Although the presence of PV thrombosis is not a contraindication to Y90, it is a negative prognostic marker and outcomes are worse for these patients [53]. Adequate liver function is required for this procedure, and pretreatment bilirubin values above 2 mg/dL are a predictor of the risk of radiation-induced liver disease post-procedure [54]. Tolerability of Y90 is superior to that of TACE, especially with respect to abdominal pain, transaminitis, and time in the hospital [55].
The landmark LEGACY study demonstrated that treatment with Y90 is safe and effective for early HCC.This multi-center retrospective trial included 162 patients with CP A and a solitary HCC lesion less than 8 cm in size with a median lesion size of 2.7 cm.The study showed an ORR of 88.3% during a follow-up period of 29.9 months, with a 3-year OS of 86.6% [56].Based on these results, the Food and Drug Administration approved the use of Y90 for HCC in 2021.In addition, in the prospective single-center RASER study, 29 patients with early HCC, who were not candidates for RFA, were treated with Y90 radiation segmentectomy with curative intent.ORR was 100% while CR was 90%.OS at 1-year and 2-years were 96% [57].This study demonstrated that radiation segmentectomy was safe and effective for unresectable early-stage HCC with potential curative intent.(Table 3).Limited studies have evaluated the safety and efficacy of Y90 and TACE [58,59].The TRACE study is the largest prospective study to date comparing Y90 and DEB-TACE in a single-center randomized trial involving 72 patients with BCLC A or B, not eligible for surgery or ablation [58].In patients receiving Y90, the median TTP was 17.1 months while patients receiving DEB-TACE had 9.5 m.For Y90, the median OS was 30.2 m and for DEB-TACE it was 15.6 m.The safety profile was similar between the two treatment arms.This study demonstrated that Y90 is associated with superior tumor control and better OS when compared with DEB-TACE (Table 3).
Lastly, treatment with Y90 using personalized dosimetry vs. standard dosimetry provides better radiologic response and improved survival with fewer adverse events.The concept of personalized dosimetry requires a delicate balance between adequate radiation dose to the tumor and preserving liver function.The DOSISPHERE study, a randomized multi-center phase II trial, evaluated 60 patients with BCLC B and C, achieving ORR in 71% and 36% for personalized and standardized dosimetry groups, respectively [60].The median OS rates in the intention-to-treat analysis were 26.6 and 10.7 m for the personalized and standard dosimetry, respectively.Patients in the standard dosimetry group received 120 +/− 20 Gy to the perfused lobe.At least 205 Gy was targeted to the index lesion in the personalized dosimetry group, with less than 120 Gy to the non-tumor tissue (Table 3).
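For illustration only (the dose thresholds are taken from the DOSISPHERE description above; the function and variable names are ours), a personalized-dosimetry plan could be screened against the reported targets as follows.

def meets_dosisphere_targets(tumor_dose_gy, nontumor_dose_gy):
    """Screen a Y90 plan against the personalized-dosimetry targets quoted above:
    at least 205 Gy to the index lesion and less than 120 Gy to non-tumor tissue."""
    return tumor_dose_gy >= 205 and nontumor_dose_gy < 120

print(meets_dosisphere_targets(230, 95))   # True: both targets satisfied
print(meets_dosisphere_targets(180, 95))   # False: index lesion under-dosed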
The use of personalized dosimetry was further validated by the global TARGET study, a multi-center retrospective study of 207 patients with BCLC B/C treated with increased tumor-absorbed doses [61].Patients receiving an increased tumor-absorbed dose were associated with improved ORR and OS.
External Beam Radiation
While Y90 delivers radiation internally to the tumor, external delivery of radiation is another treatment option for patients using SBRT.The overall survival (OS) and local tumor control rates are excellent, though most studies are either retrospective or observational.For small lesions, the local control and OS rates compare favorably to the rates seen with ablation [62].As an advantage over ablation, SBRT can easily treat lesions regardless of proximity to the hepatic dome or blood vessels.However, the caudate lobe should be treated with caution, as edema or off-target effects can damage the neighboring bowel [63].Hepatic toxicity is low, with rates reportedly less than 10%, and PV thrombosis is not a contraindication [64,65].With increasing size of a lesion, the efficacy of SBRT decreases, and it is most effective in tumors less than 6 cm in diameter [65].Emerging studies are also showing the potential benefit of SBRT in combination with TACE in the treatment of unresectable HCC with PV thrombosis.SBRT could achieve thrombus reduction or resolution, allowing PV flow restoration that will then allow TACE treatment [66,67].
Systemic Therapies
Patients with BCLC B or C HCC who have adequate performance status and liver function, but are no longer candidates for LRT, either due to disease burden or extrahepatic spread, should be considered for systemic therapy.Untreated BCLC B and BCLC C HCC portends a poor prognosis of approximately 9 months (m) and 3 m, respectively, and thus effective therapies are essential to improve outcomes [68].HCC is resistant to conventional cytotoxic chemotherapy due to several complex molecular mechanisms, including autophagy activation, apoptosis evasion, expression of drug efflux pumps, enhancement of intracellular drug metabolism, and development of DNA repair mechanisms, among others [69].The treatment options in 2023 for first-line treatment for HCC include targeted therapies such as multikinase inhibitors (MKI), anti-VEGF therapies, immune checkpoint inhibitors (ICIs), or combinations of these (Table 4).
Multikinase Inhibitors
Significant progress has been made in the advancement of systemic therapies in HCC treatment since the FDA approval of sorafenib in 2007.No effective systemic treatments were available until sorafenib, a tyrosine kinase inhibitor (TKI), was shown to be superior to placebo.HCC is a highly vascular tumor, and the signaling pathways promoting angiogenesis, such as VEGF, are critical in HCC tumor growth and metastatic potential [70].The mechanism of action of the small molecule MKI against HCC is suppression of tumor growth, cell proliferation, differentiation, and angiogenesis through multiple complex pathways.Sorafenib inhibits VEGF, PDGFR, Raf, Ras, MEK, ERK, c-KIT, and RET, whereas lenvatinib inhibits VEGF, FGFRs, PDGFR, SCFR, KIT, and RET [71].TKIs were the only option for over a decade due to many trials that failed to show superiority to sorafenib.Despite an improvement in OS with TKIs, the response rates were disappointing, and durability was lacking.
Sorafenib
In 2007, for the first time, systemic therapy was shown to improve outcomes over placebo in the first-line setting for advanced HCC. In the SHARP trial, investigators randomized 602 patients with advanced HCC to first-line therapy with either sorafenib 400 mg by mouth twice daily or placebo [72]. The co-primary outcomes were OS and time to symptomatic progression, while the secondary outcomes were time to radiographic progression and safety measures. Patients had to be ineligible for local therapies and had to have an ECOG performance status of 0-2 and CP A cirrhosis.
The OS primary outcome demonstrated a median survival of 10.7 m in the sorafenib group compared to 7.9 m with placebo.At one year, the survival rates were 44% and 33%, respectively, representing a 31% relative reduction in the risk of death (HR 0.69).The time to symptomatic progression primary outcome showed no difference in sorafenib and placebo, but a secondary endpoint of radiographic progression-free survival (PFS) was met with a longer radiographic PFS with sorafenib (5.5 vs. 2.8 m).Although the disease control rate was improved with sorafenib (43% vs. 32%), the objective response rates (ORR) were disappointing.Only 2% of patients achieved a partial response (PR) by RECIST.There were no complete responses (CR), and most patients had stable disease (SD) as their best response with sorafenib.Most patients enrolled in the SHARP trial were from Europe and Australia (88%) and about 10% were from North America.A second phase III study of sorafenib vs. placebo confirmed the efficacy of sorafenib in patients in the Asia-Pacific region, with a similar HR for death of 0.68 and similar poor ORR (PR in 3.3% vs. 1.3%) [73].This trial also showed a slightly longer PFS (2.8 vs. 1.4 m) but no difference in time to symptomatic progression in the two groups.
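As a quick arithmetic check (ours, not from the trial report), the relative reduction in the risk of death quoted above for the SHARP trial follows directly from the hazard ratio:

relative risk reduction ≈ 1 − HR = 1 − 0.69 = 0.31, i.e. 31%.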
Sorafenib was shown to have toxicities in 80% of patients compared to only 52% of placebo, but most side effects were mild with < 30% of patients having a grade 3-4 adverse event (AE).The most common AEs in the sorafenib group included diarrhea, fatigue, hand-foot syndrome (HFS), alopecia, and anorexia.Diarrhea and HFS were the most severe with 8% of patients having a grade 3 event for each.Sorafenib was considered well tolerated overall, with a permanent drug discontinuation rate due to AE of only 11% (compared to 5% in placebo).
Lenvatinib
In the ten years following the approval of sorafenib, many phase III trials with various drugs and combinations failed to show non-inferiority or superiority to sorafenib.Sorafenib remained the only option for first-line treatment until 2018 with the addition of lenvatinib, a potent MKI, to the treatment arsenal.The REFLECT study was an open-label, phase 3, noninferiority trial that compared lenvatinib to sorafenib in first-line unresectable HCC [74].Nine hundred fifty-four patients with CP A cirrhosis and ECOG 0-1 were included.Dosing was based on body weight, with patients at least 60 kg receiving 12 mg daily and patients less than 60 kg receiving 8 mg daily.Patients were excluded from the study if they had main PV invasion, 50% or more liver involvement, or uncontrolled hypertension.The enrollment took place in 20 countries; approximately two-thirds of patients were from the Asia-Pacific region and one-third were from the Western region.The primary endpoint was OS, and this was first tested for non-inferiority and then for superiority.Secondary endpoints included PFS, time to progression (TTP), ORR, and quality of life (QOL) measurements.
The primary endpoint for OS was met for non-inferiority, but not for superiority.The median OS was 13.6 and 12.3 m in the lenvatinib and sorafenib groups, respectively, and this difference was not statistically significant.Lenvatinib demonstrated superiority in all secondary endpoints, including PFS, TTP, and ORR.Nearly a quarter (24.1%) of patients in the lenvatinib arm showed an objective response (mRECIST, investigator review) with 23% PR and 1% CR.In the sorafenib arm, the response rate was lower, with only 9.2% having an objective response (mRECIST, investigator review), 9% with a PR, and less than 1% with a CR.The disease control rate was higher in the lenvatinib arm, and more patients had progressive disease as the best response in the sorafenib arm.
The rate of AEs was similar in the two groups, though the side effect profile was different.The most common AE for sorafenib was palmar-plantar erythrodysaesthesia (PPE) (any grade 52%, 11% grade 3-4).The sorafenib arm also saw more alopecia (25% vs. 3%) and diarrhea (46% vs. 39%).The patients on lenvatinib had more significant hypertension, proteinuria, and hypothyroidism.Fatigue, anorexia, and diarrhea were common in both treatment arms.Less than 10% of patients had to completely stop therapy due to an AE, but patients in the lenvatinib arm did have a slightly longer time on treatment (5.7 vs. 3.7 m).In this trial, just slightly more than a third of patients went on to receive second-line therapy, underscoring the importance of choosing the optimal first-line treatment for patients.
Additional TKIs have been approved by the FDA and are indicated as second-line treatments.Regorafenib, cabozantinib, and ramucirumab have been tested in the secondline setting and showed superiority over the placebo.Ramucirumab is given to biomarkerselected populations; it is only approved in patients whose AFP is at least 400 ng/mL.If an ICI is used in the first-line setting, sorafenib or lenvatinib can be considered as options for second-line treatment.
Immunotherapy
ICIs have demonstrated durability in numerous solid tumors, and the immunobiology of HCC lends itself to therapeutic intervention targeting the immune cells.The presence of tumor-infiltrating lymphocytes in HCC tumors correlates with outcome, suggesting that the immune responses could be important in treating HCC [75].Immune checkpoint proteins are involved in the control of a person's immune response, keeping the immune system in check.There are a number of these proteins on T cells, including PD1, CTLA4, TIGIT, and LAG3, each of which can be inhibited by ICIs.When ICIs inhibit these checkpoints, it allows the patient's immune response to activate and destroy cancer cells.The challenges with ICIs in HCC include the fact that cancer cells typically begin in an environment of chronic inflammation, and many immune cells in the liver are involved in maintaining and promoting tolerance to neo-antigens.The tumor immune microenvironment of HCC can dampen the host immune response and promote tolerance [76].
Although single-agent checkpoint inhibition with PD1/PDL1 was not superior to sorafenib, the response rate, durability, and safety signals were encouraging [77,78].In the Checkmate 459 study, nivolumab was compared to sorafenib in the first-line setting, and despite no difference in survival, the response rate by RECIST was higher (15% vs. 7%) and the rates of grade 3-4 AE were lower (22% vs. 49%) [77].Since that time, the use of ICIs in combination with other agents to harness the immune response has proven to be more successful.These combinations have redefined the treatment options and have largely supplanted TKIs as the preferred first-line option for most patients.
Atezolizumab and Bevacizumab
Bevacizumab combined with atezolizumab was FDA-approved in May 2020, and it was the first systemic treatment option found to improve survival over sorafenib in over a decade.Bevacizumab is a VEGF monoclonal antibody, and when used in combination with ICIs, it can change the microenvironment to an immune stimulatory environment by improving priming and activation of T cells, tumor infiltration of T cells, and inhibiting cells that lead to immune suppression [79].The mechanisms of anti-VEGF antibodies combined with PD1/PDL1 antibodies lead to a synergistic effect to achieve better outcomes than with ICIs alone [80].
The IMbrave 150 trial was an open-label, phase 3 trial that randomized 501 patients with unresectable HCC to atezolizumab plus bevacizumab or sorafenib [81].The co-primary endpoints were OS and PFS.Secondary endpoints included ORR, duration of response, and time to deterioration of QOL.This was a global study with 40% of patients enrolling from Asia and Japan.Patients with CP A cirrhosis and ECOG 0-1 were included.Due to the potential bleeding risk with bevacizumab, an updated EGD within 6 m of treatment was required, with treatment of varices as per standard of care.Patients with untreated or incompletely treated varices with a high risk of bleeding were excluded.A quarter of patients had varices, and some had untreated varices at baseline (11-14%).Patients were also excluded if they had uncontrolled hypertension, recent hemoptysis, or were on full-dose anticoagulation.In contrast to prior studies, high-risk patients with main PV invasion or involvement of at least 50% of the liver were included.
The primary outcomes were met with an improvement in OS at 12 m (67.2% vs. 54.6%) and an improvement in PFS by 2.5 m (6.8 vs. 4.3 m). With 12 m of additional follow-up, the median OS for atezolizumab and bevacizumab was the longest median OS for any systemic therapy at the time of the publication (19.2 vs. 13.4 m) [82]. The response rates by mRECIST were significantly improved with atezolizumab and bevacizumab (33.2% vs. 13.3%, p < 0.001), and there was an impressive 10.2% complete response rate in the combination arm. The disease control rate was improved (72% vs. 55%), and patients were able to stay on treatment longer. As is typical with ICIs in other cancers, the durability of response was improved in the combination arm (duration of response not reached vs. 6.3 m) [83].
Atezolizumab and bevacizumab were well tolerated, and the mean duration of treatment was more than double the time on treatment with sorafenib (7.4 vs. 2.8 m).Most patients did experience an AE, but few patients had to discontinue treatment due to side effects in either arm.Diarrhea and PPE were significantly more common in the sorafenib arm.Proteinuria and hepatitis were more common in the combination arm [82].This regimen has supplanted TKIs as the standard of care for eligible patients given the improved survival, response rates, durability, and safety profile.
Tremelimumab and Durvalumab
The cytotoxic T-lymphocyte-associated antigen 4 (CTLA-4) immune checkpoint is distinct from the PD1/PDL1 checkpoints, and blocking both leads to complementary effects on antitumor immune responses [84].Data from a phase 1b study showed maximum expansion of T cells after a single dose of CTLA-4 antibody treatment that did not increase further after additional doses [85].In addition, toxicity from CTLA-4 antibodies often comes after repeated exposure.Therefore, a single dose of tremelimumab was tested in combination with durvalumab (PD-L1 inhibitor).This dual ICI therapy invoked responses that were not seen with single-agent ICIs [86].
The HIMALAYA trial randomized 1171 patients with unresectable HCC requiring first-line systemic therapy to the combination of tremelimumab and durvalumab, durvalumab monotherapy, or sorafenib [87].The STRIDE (single tremelimumab, regular interval durvalumab) regimen consisted of a single 300 mg dose of tremelimumab in addition to durvalumab 1500 mg every 4 weeks.The primary objective was to evaluate OS for STRIDE vs. sorafenib, and the secondary endpoint was to evaluate the noninferiority of durvalumab vs. sorafenib.A second combination regimen, T75 + D, consisted of tremelimumab 75 mg every 4 weeks for four doses plus 1500 mg of durvalumab every 4 weeks, but enrollment was closed to this arm when a phase 2 study demonstrated no difference in efficacy compared to durvalumab monotherapy [19].Patients in this study had BCLC B or C HCC, were CP A, and were ineligible for LRT.Patients were excluded if they had a thrombosis in the main PV, prior LT, or a history of an autoimmune disease.
The primary endpoint of OS in the HIMALAYA trial was met.Patients receiving the STRIDE regimen had a median OS of 16.4 m, compared to 13.8 m with sorafenib.With longer follow-up, a quarter of patients were still alive at 4 years (4-year OS 25.2% vs. 15.1%)[88].The secondary endpoint for noninferiority of durvalumab compared to sorafenib was also met, but the superiority of durvalumab was not significant.The median TTP in each group was similar (5.4 vs. 5.6 m).The discrepancy in PFS and OS results may be due to disease stabilization after initial progression in patients getting ICIs.The investigators allowed patients to be rechallenged with tremelimumab beyond radiographic progression if they met certain criteria, including investigator-assessed benefit to treatment, no threat to vital organs, and progression that did not occur after a PR or CR.The disease control rate was similar across the three arms, but the ORR by RECIST was higher in the immunotherapy arms (20.1%, 17.0% vs. 5.1%).Twelve patients (3.1%) in the STRIDE arm had a CR, compared to none in the sorafenib arm.
Treatment with the STRIDE regimen was well tolerated.Treatment-related grade 3 or 4 AEs were seen most often in the sorafenib group (36.9%), leading to a dose delay or discontinuation in nearly half of patients.In contrast, grade 3 or 4 treatment-related AEs were seen less frequently in the STRIDE regimen (25.8%), with less than a third of patients requiring dose delay or discontinuation.As expected, grade 3 or 4 immune-mediated AEs were more common in the STRIDE regimen compared to durvalumab (12.6% vs. 6.2%), and the requirement of high-dose steroids was also more common in the STRIDE regimen compared to durvalumab alone (20.1% vs. 9.5%).
Combination of ICIs with Multikinase Inhibitors
Using VEGF TKIs to modulate the immunosuppressive microenvironment and increase the efficacy of ICIs is another strategy that has been tested as a first-line treatment of HCC; however, the studies evaluating these combinations have shown mixed results.In LEAP 002, pembrolizumab and lenvatinib were compared to lenvatinib, but the primary endpoints of OS and PFS were not met.Although the median OS for the combination was an impressive 21.2 m, the control arm did remarkably well (19 m), so the difference was not statistically significant [89].Similarly, in the COSMIC-312 study, cabozantinib plus atezolizumab improved PFS (HR 0.63) compared to sorafenib, but it did not show any difference in OS (15.4 vs. 15.5 m) [90].Again, the control arm did remarkably well with an almost 50% increase in survival for sorafenib compared to what was seen in the original SHARP trial.More patients in the sorafenib arm received subsequent therapy (37% vs. 20%), with a higher percentage of patients receiving immune therapy in the second-line setting, which could explain the longer-than-expected OS in the sorafenib arm.
In 2023, the CARES-310 study showed a significant improvement with the combination of camrelizumab (a PD1 inhibitor) and rivoceranib (an oral VEGFR TKI) compared to sorafenib [91].This phase 3 trial randomized 543 patients with primary outcomes of OS and PFS.Both primary outcomes were met with a PFS of 5.6 vs. 3.7 m (HR 0.52) and a median OS of 22.1 vs. 15.2 m (HR 0.62).Although the PFS was only improved by less than 2 m, the 7 m difference in OS was notable with the longest median OS published to date.
This trial was an international trial, but most patients enrolled were from Asia, with only 17% of patients from non-Asian countries.This contrasts with HIMALAYA and IMbrave150 studies in which most patients were from non-Asian countries.Given this global distribution, a large proportion of patients in CARES-310 had viral hepatitis as the cause of their cirrhosis.Only 15% of patients had a non-viral cause of cirrhosis in the CARES-310 study, whereas IMbrave 150 and HIMALAYA had 30-40% of patients with nonviral causes.The combination of camrelizumab and rivoceranib was difficult to tolerate, and 81% of patients had a grade 3-4 AE (compared to 52% with sorafenib).The OS was remarkable, but given the high incidence of AE, and the relatively low numbers of Western patients with non-viral causes of cirrhosis, it remains to be seen if these results will be generalizable or change practice in the West.At the time of this writing, this combination is not FDA-approved, but the FDA has accepted a new drug application for camrelizumab and rivoceranib for first-line treatment in patients with metastatic HCC.
Future Directions in the Era of Immunotherapy
As HCC therapeutics have entered into the new era of immunotherapy, there is great interest in the safety and efficacy of immunotherapy in early-and intermediate-stage HCC as well as the role of ICIs in combination with LRT.
Given the high rates of recurrence with resection or ablation, adjuvant therapy has been an area of investigation.Sorafenib did not improve recurrence-free survival (RFS) rates when administered post-operatively [92].A recent trial, which has not yet been published as of this writing, was shown to improve outcomes in the adjuvant setting and has the potential to change practice.The IMbrave050 trial was an open-label phase III randomized clinical trial that compared atezolizumab and bevacizumab to active surveillance in patients with HCC at high risk of recurrence after ablation or resection [93].Patients were treated with atezolizumab and bevacizumab every 3 weeks for 17 cycles, or 1 year.High-risk features included size over 5 cm, more than three tumors, microvascular or minor macrovascular invasion, or grade 3/4 pathology.The primary endpoint of RFS was met at the interim analysis with an HR of 0.72 (p = 0.012) and 12 m RFS of 78% and 65%, respectively.Longer follow-up is needed to determine if the RFS benefit will be maintained in subsequent analyses or if progression was merely delayed with one year of adjuvant therapy.Various phase 3 trials are ongoing to evaluate ICIs in early-stage HCC after resection or ablation.
For intermediate-stage HCC, ongoing trials are also evaluating combination therapy with TACE or SIRT with ICIs in combination with synchronous or on-demand intra-arterial therapies.In addition, future considerations in LT include the potential use of ICIs for the purpose of downstaging or as a bridge to transplantation, thus allowing eligibility for HCC MELD exception points.The timing of discontinuing ICIs prior to transplantation remains unclear.Neoadjuvant studies are also ongoing.While early findings are promising for the role of ICIs prior to transplantation, larger trials are needed to ensure safety and efficacy prior to implementing this high-risk strategy in routine clinical practice.
Drugs with novel mechanisms are also of interest.The recent success of tiragolumab, an anti-TIGIT antibody, in addition to atezolizumab and bevacizumab in the phase Ib/II MORPHEUS-liver study, has paved the way for the ongoing phase 3 trial IMbrave152 looking at this triplet combination.New combinations of previously evaluated drugs, such as adding ipilimumab to atezolizumab and bevacizumab, are also being tested.Trials looking at completely novel therapeutics (vaccines and CAR-T) are also ongoing.
The future role of combination therapies with ICIs in the treatment of CP B cirrhosis patients remains unclear, given the concerns over safety in this patient population.Studies with nivolumab thus far have shown it to be safe and effective.Recent real-world data provide preliminary evidence for the safety and efficacy of atezolizumab plus bevacizumab in patients with CP B cirrhosis [94].See Table 5 for a selected list of ongoing clinical trials.As clinical trials continue to focus on ICIs and other novel agents, challenges remain in the understanding of the molecular heterogeneity of HCC and the liver tumor microenvironment.Biomarkers are needed to stratify patients and predict how they will respond to certain therapies.Additional therapeutic approaches may be needed to increase tumor susceptibility to ICIs in patients who are less likely to respond, whether it be due to the etiology of cirrhosis or to other reasons for a dampened immune response.The treatment paradigm for HCC is evolving, and the future model is envisioned to be one of precision medicine and personalized care.This would involve the ability to identify biomarkers for early detection, treatment response, and disease surveillance with the incorporation of clinical, radiologic, and biochemical data in the era of machine learning and artificial intelligence.
Best Supportive Care: Incorporation of Non-Hospice Palliative Care in HCC
Patients with HCC commonly have preexisting cirrhosis, and this dual diagnosis increases the complexity of their care.This patient group bears considerable physical, psychosocial, and financial burdens, with caregiver burnout as well as possible stigmatization.Symptoms from liver decompensation that can occur with HCC include ascites, variceal hemorrhage, hepatic encephalopathy, sarcopenia, and frailty.These patients are also faced with symptoms related to their tumor, extra-hepatic spread, or effects of treatment.The most common symptoms faced by patients with HCC are abdominal pain, fatigue, anorexia, nausea/vomiting, and ascites [95].Liver cancer also ranks in the top three cancers for high prevalence of depression and anxiety [96].As the disease progresses and the symptom burden increases, the role of PC becomes more evident (Figure 3).Creating a new clinical model of practice between the hepatology/transplant team and PC will require a paradigm shift in clinical practice that includes incorporating PC providers in the multidisciplinary team model.This new care model is designed to provide patient-centered supportive care, whether treatment goals are curative or non-curative.The incorporation of PC in HCC management can provide benefits to overall care, including improving patient quality of care and QOL as well as supporting caregiver and care teams.In addition, the PC team can initiate early discussions of advance care planning but would approach it differently in patients who are pursuing curative-intent therapy, including LT, in contrast to patients with non-curative palliative goals of care or those who require hospice care [99].Models of care involving the integration of PC are part of routine practice in several end-stage diseases, including advanced cancer, chronic kidney failure, and congestive heart failure.This model of care has been shown to increase survival, decrease hospitalizations, and improve patient QOL [100].There are currently limited evidence-based data to provide recommendations regarding PC involvement specifically in HCC care [101].Further research is needed to better understand the timing of PC referral, intervention, and outcomes of HCC patients receiving PC.
Conclusions
HCC is a particularly lethal malignancy with a prevalence that varies according to the global region and risk factors.The highest global burden of HCC is in the East, with HBV as the most common cause in Asia as well as sub-Saharan Africa.In the West, nonviral causes such as MASLD and ALD are more common, and metabolic causes are increasing worldwide in parallel with the obesity epidemic.
The complexity of HCC care includes the management of not only the cancer but also the underlying liver disease, and, therefore, it should be managed by a multidisciplinary team, ideally in a co-located clinic at a liver transplant center. For patients with early-stage disease, curative approaches are possible through surgical resection, ablation, or LT. If portal hypertension or tumor distribution precludes surgery, then LT should be considered.
There is a growing recognition in the hepatology community of the importance of early involvement of palliative care (PC) for HCC patients, whether the goals of therapy are curative or non-curative. Early intervention with PC, even in earlier-stage disease, can assist with symptom management, advance care planning, and psychosocial support. PC has been historically underutilized in patients with HCC, and over a quarter of patients with advanced-stage HCC never enter hospice care before the time of their death [97]. Barriers to referral to PC include prognostic uncertainty, the unpredictable clinical trajectory of cirrhosis, lack of time for these discussions, stigma and biases from the patient or caregiver, and the misconception that PC is associated with "giving up" [98]. A new framework for HCC care involving partnerships among PC, hepatology, and the transplant team is greatly needed, ensuring that referral to PC is not synonymous with stopping active or disease-directed therapies.
Figure 1. Multidisciplinary team model in HCC care.
4. Non-Curative Approaches (Palliative/Tumor Control). 4.1. Locoregional Therapies: patients with intermediate-stage BCLC B HCC who have multifocal disease and preserved liver function can receive LRT with arterially directed therapies (TACE/Y90) and/or SBRT.
Figure 3. Incorporation of palliative care in HCC care involves consideration of the cirrhosis stage, HCC stage, treatment, and goals of therapy. The level of involvement of palliative care will vary based on these factors.
mTTP: median time to progression; mPFS: median progression free survival; ORR: overall response rates; OS: overall survival; ITT: intention to treat analysis.
Table 4. Selected landmark studies for systemic therapy.
Table 5. Selected ongoing clinical trials.
"year": 2024,
"sha1": "12d0f0150b5c11a552c9e2727a74b3a78f0b1740",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6694/16/3/666/pdf?version=1707037768",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2c395aa5115bc2a1fa1edcfcbcc954871e4e307c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The theory of hole superconductivity predicts that in superconductors the charged superfluid is about a million times more rigid than the normal electron fluid. We point out that this physics should give rise to large changes in the bulk and surface plasmon dispersion relations of metals entering the superconducting state, which have not yet been experimentally detected and would be in stark contradiction with the expected behavior within conventional BCS-London theory. We also propose that this explains the puzzling experimental observations of Avramenko et al [1] on electron sound propagation in superconductors and the puzzling experiments of W. de Heer et al [2] detecting large electric dipole moments in small metal clusters, as well as the Tao effect [3] on aggregation of superconducting microparticles in an electric field. Associated with the enhanced charge rigidity is a large increase in the electric screening length of superconductors at low temperatures that has not yet been experimentally detected. The physical origin of the enhanced charge rigidity and its relation to other aspects of the theory of hole superconductivity is discussed.
I. INTRODUCTION
A normal metal screens electrostatic fields over distances of order of the interelectronic distance, or k_F^{-1}, with k_F the Fermi wavevector, quantitatively of order Å^{-1} for normal metallic densities. Conventional BCS-London theory predicts that the response of superconductors to a static electric field is essentially the same as that of normal metals [4][5][6][7][8][9]. Instead, the theory of hole superconductivity [10] predicts that superconductors can only screen electrostatic fields over much larger distances, in their ground state over distances of order λ_L, the London penetration depth, quantitatively of order hundreds of Å [11].
Within the theory of hole superconductivity the inability of superconductors to screen over smaller distances originates in the fact that superconducting electrons reside in highly overlapping orbits of radius 2λ L [12], and this precludes the possibility of charge fluctuations of shorter wavelengths that would destroy the ability of the superconducting electrons to maintain phase coherence as they traverse these large orbits. That superfluid electrons reside in orbits of radius 2λ L follows from the fact that according to this theory this is the only consistent way to explain dynamically the Meissner effect exhibited by all superconductors [13]. The inability of superfluid electrons to screen leads, via the compressibility sum rule, to the prediction that superfluid electrons are highly incompressible unlike normal metal electrons.
This enhanced rigidity of the superconducting fluid implies that the longitudinal plasmon dispersion relation will be much steeper in the superconducting than in the normal state [14]. In contrast, BCS theory predicts no change in the plasmon dispersion relation [5][6][7][8]. The bulk plasmon dispersion relation can be measured by EELS (electron energy loss spectroscopy) [15,16] and by inelastic X-ray scattering [17] as well as optically in transmission experiments through thin films [18,19]. Also the surface plasmon dispersion relation, which can be measured in EELS [20] or optical experiments [21], should change in the superconducting state. To our knowledge these experiments have not been yet done on superconductors.
We discuss here what we expect the observations will show, in stark contrast with what would be expected within BCS-London theory.
We furthermore discuss three experiments that have been performed in recent years that provide strong evidence in favor of the enhanced rigidity of superfluid electrons predicted by our theory: (i) sound propagation by electrons (Avramenko effect) [1], (ii) electric dipole moments of small metal clusters (de Heer effect) [2], and (iii) aggregation of superconducting microparticles in large electric fields (Tao effect) [3].
The larger electric screening length of superconductors should be directly detectable experimentally. So far the only experimental indication of this appears to be a report by Jenks and Testardi [22] that measured an increased penetration of electric field in YBCO films below T_c. We discuss the expected behavior of the electric screening length below T_c within our theory.
II. ELECTRODYNAMIC EQUATIONS FOR SUPERCONDUCTOR
Within the theory of hole superconductivity the first London equation for the time derivative of the supercurrent is modified to read [14]

∂J_s/∂t = (n_s e²/m_e)(E + ∇φ)    (1)

with φ the electric potential (the ∇φ term is absent in the conventional London equations). The magnetic vector potential A in the second London equation,

J_s = −(c/4πλ_L²) A,    (2)

obeys the Lorenz gauge ∇·A = −(1/c)(∂φ/∂t) rather than the London gauge ∇·A = 0 [14]. Note that Eq. (1) follows from Eq. (2) on using Faraday's law. The charge density in the superconductor ρ(r,t) satisfies the equation [14]

(∇² − (1/c²)∂²/∂t²)(ρ − ρ_0) = (ρ − ρ_0)/λ_L²    (3)

and the electric potential φ(r,t) satisfies the same equation,

(∇² − (1/c²)∂²/∂t²)(φ − φ_0) = (φ − φ_0)/λ_L²,    (4)

where ρ_0 is a uniform positive charge density and φ_0(r) is the resulting electrostatic potential (∇²φ_0 = −4πρ_0). The London penetration depth λ_L is given by the usual form [23]

1/λ_L² = 4πn_s e²/(m_e c²) = ω_p²/c²    (5)

with ω_p the plasma frequency. The parameter ρ_0 is determined by the condition that the internal electric field that develops in the interior of the superconductor due to expulsion of negative charge to the surface [11] should reach its maximum value [24] within a London penetration depth from the surface, pointing outward perpendicular to the surface.
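As a simple illustration (ours, not part of the original presentation), the static limit of Eq. (4) for a superconductor occupying the half space x > 0 gives

∇²(φ − φ_0) = (φ − φ_0)/λ_L²  ⟹  φ(x) − φ_0(x) ∝ e^{−x/λ_L},

so an applied electrostatic perturbation decays over the London penetration depth, i.e. over hundreds of Å, rather than over an interatomic Thomas-Fermi length.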
III. DIELECTRIC FUNCTION AND COMPRESSIBILITY
It follows from the electrodynamic equations discussed in the previous section that the longitudinal dielectric function for the superfluid within our theory is given by [14]

ε_s(q,ω) = 1 + ω_p²/(c²q² − ω²).    (7)

Eq. (7) is of the generic form of a hydrodynamic longitudinal dielectric function for the electron fluid [25,26],

ε(q,ω) = 1 + ω_p²/(β²q² − ω²),    (8)

that yields for the static dielectric constant

ε(q,0) = 1 + ω_p²/(β²q²) = 1 + 4πe²n²κ/q²    (9)

with κ the electronic compressibility. The second equality follows from the compressibility sum rule [27], so that β² = 1/(m_e n κ), with κ = (1/n²)(∂n/∂μ) and μ the chemical potential.
For the free electron gas, the zero temperature compressibility is κ_n = 3/(2nε_F), with ε_F = m_e v_F²/2 the Fermi energy and v_F the Fermi velocity, yielding

β² = 1/(m_e n κ_n) = v_F²/3.    (13)

Instead, for the superconductor we have from Eqs. (7) and (8) β² = c², i.e.

κ_s = 1/(n_s m_e c²)    (15)

so that the superconducting electron fluid is enormously more rigid than the normal metal electron fluid, since c >> v_F. We expect this enhanced rigidity to show up in experiments where electron density oscillations are induced that are not accompanied by motion of the ions, so that an electric potential builds up in the interior of the superconductor. Eq. (9) yields the static longitudinal dielectric functions for the superfluid electrons and the normal metal electrons respectively,

ε_s(q,0) = 1 + 1/(λ_L²q²)    (16)

ε_n(q,0) = 1 + 1/(λ_TF²q²)    (17)

with λ_L the London penetration depth given by Eq. (5) and λ_TF the Thomas-Fermi screening length given by

1/λ_TF² = 4k_F/(πa_0)    (18)

with a_0 the Bohr radius.
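To put a number on the enhanced rigidity (an order-of-magnitude estimate of ours, using Eqs. (13) and (15) with a representative v_F ≈ 10⁶ m/s ≈ 0.003c and n_s ≈ n):

κ_n/κ_s = 3 (n_s/n) (c/v_F)² ≈ 3 × (1/0.003)² ≈ 3 × 10⁵,

i.e. the superfluid is five to six orders of magnitude stiffer against electronic density changes than the normal electron fluid, consistent with the factor of order a million quoted in the abstract.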
IV. BULK PLASMONS
The bulk plasmon dispersion relation follows from setting the longitudinal dielectric function to zero. Eq. (8) yields

ω_k² = ω_p² + β²k²,    (19)

hence we predict for superconductors at zero temperature

ω_k² = ω_p² + c²k².    (20)

For the normal metal, the plasmon dispersion relation obtained from the longitudinal dielectric function calculated in the random phase approximation (Lindhard dielectric function) yields Eq. (19) with [28]

β² = (3/5)v_F²,    (21)

which is slightly different from Eq. (13), valid in the low frequency limit [29]. Thus, to reproduce the Lindhard dielectric function the hydrodynamic form Eq. (8) has to include a variation of β² from low to high frequencies.
Within a two-fluid description of a metal below the superconducting transition temperature the electronic pressure results from the sum of the superfluid and the normal fluid pressures, P = P_s + P_n, hence the parameter β² is

β² = (n_s/n)c² + (n_n/n)(3/5)v_F²    (23)

where n_n and n_s are the densities of normal and superfluid electrons and we have used the high frequency value of β, Eq. (21), for the normal electron contribution. Thus, Eqs. (19) and (23) show that as the temperature is lowered below T_c a sharp increase in the slope of the bulk plasmon dispersion relation ω_k² versus k² should be seen. In a two-fluid model one has n_s = n(1 − t⁴), n_n = nt⁴, with t = T/T_c, which is also approximately the behavior predicted by BCS theory [23], so we expect for the bulk plasmon dispersion relation

ω_k² = ω_p² + [(1 − t⁴)c² + t⁴(3/5)v_F²]k².    (24)

V. SURFACE PLASMONS

Surface plasmons (also called surface plasmon polaritons) are longitudinal charge oscillations coupled to an electromagnetic wave with both longitudinal and transverse field components propagating along the surface of a metal, excited by either fast electrons or electromagnetic radiation. Crowell and Ritchie [30] and Fuchs and Kliewer [26] derived the corresponding surface plasmon dispersion relation, Eq. (25); for β = 0 it gives ω_k → ω_p/√2 for large k. For any β ≠ 0, the surface plasmon dispersion relation for large k instead approaches the bulk plasmon dispersion relation Eq. (19). For small wavevectors the surface plasmon dispersion relation increasingly deviates from the transverse dispersion relation ω_k = ck for smaller β and larger k. Figure 1 shows examples of the surface and bulk plasmon dispersion relations for various β. As a function of temperature, β increases very rapidly as T is lowered below T_c according to Eq. (23). For any typical value of v_F (of order 1% of the speed of light) the v_F term in Eq. (23) can be ignored, so that β/c ≈ (n_s/n)^{1/2} = (1 − t⁴)^{1/2}. The values of β of 0, 0.1, 0.2 and 0.5 shown in Fig. 1 (in units of c) correspond to values of T/T_c of 1, 0.998, 0.990 and 0.931 respectively. Consequently we expect rapid changes in the observed surface and bulk plasmon frequencies as the temperature is lowered below T_c, in contrast to conventional BCS-London theory, which predicts no change [8].
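A short numerical sketch of the two-fluid expressions above (our own illustration; the values of v_F, ω_p and the wavevector are representative choices, not taken from any particular material) shows how steeply the dispersion slope should rise just below T_c:

import numpy as np

c = 3.0e10          # speed of light (cm/s)
v_F = 1.0e8         # representative Fermi velocity (cm/s), ~0.3% of c
omega_p = 1.5e16    # representative plasma frequency (rad/s)

def beta_sq(t):
    """Two-fluid beta^2 of Eq. (23), with n_s/n = 1 - t^4 and n_n/n = t^4."""
    ns, nn = 1.0 - t**4, t**4
    return ns * c**2 + nn * 0.6 * v_F**2

def omega_bulk(k, t):
    """Bulk plasmon dispersion of Eq. (24) at reduced temperature t = T/T_c."""
    return np.sqrt(omega_p**2 + beta_sq(t) * k**2)

k = 1.0e6  # wavevector (1/cm), well inside a typical EELS range
for t in (1.0, 0.998, 0.99, 0.93):
    print(f"T/Tc = {t:5.3f}: beta/c = {np.sqrt(beta_sq(t))/c:.3f}, "
          f"omega_k/omega_p = {omega_bulk(k, t)/omega_p:.4f}")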
However, for surface plasmons the interpretation of experiments could be more complicated because it appears that in experiments performed in the normal state the induced charge fluctuations can spill out of the surface, drastically modifying the dispersion relation [31][32][33], an effect which is not taken into account by Eq. (25). We expect this spill-out effect to be even more pronounced in the superconducting state because of the enhanced rigidity and because the superconductor has an enhanced tendency to spill out electrons within our theory [34].
VI. PLASMON EXPERIMENTS
Measurement of the angular dependence of scattered electrons in electron energy loss experiments (EELS) provides information on the bulk plasmon dispersion relation. A large number of such studies has been performed on many different normal metals since the 1950's [15]. Generally these studies are done at room temperature, although there have also been EELS studies of the effect of temperature on the plasma frequency down to liquid helium temperatures for Al [35] and P b [36]. However to our knowledge there has not been a single EELS study of plasmons in a superconducting metal that would look at possible changes in the plasmon dispersion relation below the critical temperature (except for ref. [37] for a high T c cuprate that did not detect any change presumably due to experimental accuracy limitations). This is very surprising and we hope such experiments will be done in the near future. As discussed in Sect. IV we expect a very rapid increase in the plasmon energy for fixed wavevector as the system is cooled below T c .
Longitudinal bulk plasmons can also be excited optically with obliquely incident p-polarized (parallel to the plane of incidence) electromagnetic radiation [18,19] and the plasmon dispersion relation can be measured. For example, Lindau and Nilsson [19] obtained the bulk plasmon ω k for Ag from transmission experiments through thin films of different thicknesses of the order 100Å. The experiment is done at fixed angle of incidence and each film thickness gives a small number (2 in this case) of points in the dispersion relation. Anderegg et al [38] measured the plasmon dispersion relation for K from oscillatory structure in the absorption of thin films of varying thickness from 27Å to 100Å, and were able to extract up to 10 data points per film. Again, it would seem straightforward to do such experiments with superconducting films but not a single study has been performed so far to our knowledge. We hope such studies will be done in the near future.
Inelastic X-ray scattering (IXS) experiments can also provide information on the bulk plasmon dispersion relation [39][40][41][42]. IXS experiments at low temperatures have been performed in recent years for example to study the physics of liquid and solid He [43][44][45]. However no attempt has been made to date to study the plasmon dispersion relation of metals like e.g. Al [41] in the temperature range where they become superconducting using this technique.
Surface plasmon experiments on superconductors have never been performed to our knowledge, neither EELS nor IXS nor optical. With conventional optical methods it is complicated to excite surface plasmons because of required matching conditions and rough surfaces are needed which introduces additional complications [46]. However, recently developed scanning near-field optical microscopy techniques [47] provide the possibility to locally excite and detect surface plasmons [48] and may allow for detailed studies of the effect of the onset of superconductivity on the surface plasmon dispersion relation.
Finally, surface plasmons excited in metal nanoparticles (Mie resonances) [49] are sensitive to the longitudinal dielectric function [50] and thus are likely to show interesting changes due to the change in the dielectric response that we predict upon onset of superconductivity. Such experiments have never been done with superconducting nanoparticles to our knowledge.
VII. ELECTRON SOUND ANOMALY
Avramenko and coworkers [1] apply a longitudinal elastic wave to the surface of a metal and detect an electric potential oscillation at the opposite end of the sample. They find two types of signals, one propagating at the ordinary sound velocity and a much faster one propagating at a speed of order the Fermi velocity, which they call "electron sound". When the temperature of the sample is lowered below the superconducting transition temperature the amplitude of the transmitted signals drops precipitously. Avramenko et al point out that this behavior has no explanation within the conventional theory of superconductivity.
According to Avramenko et al the displacement amplitude u_ES at the receiving interface for the electron sound signal is proportional to u_0, the amplitude of the elastic vibrations at the interface where the signal is generated, and depends on the ratio of s, the sound velocity, to v_eff, the velocity of electron sound propagation, which Avramenko et al assume is the Fermi velocity v_F; u_ES decreases as v_eff increases. u_ES determines the electric potentials measured at the receiving interface, ϕ_S and ϕ_ES, for sound and electron sound. Both potentials decrease precipitously as the temperature is lowered below the superconducting T_c. If the superfluid is very rigid compared to the normal fluid, as predicted by our theory (Eq. (15)), it is natural to expect that the amplitude of longitudinal charge oscillations will rapidly decrease as the temperature is lowered below T_c and the superfluid concentration increases. Following the behavior of the bulk modulus, Eq. (23), we argue that the electron sound velocity v_eff below T_c can be estimated within a two-fluid description by replacing v_F² with the two-fluid value (n_n/n)v_F² + (n_s/n)c². Figure 2 shows the obtained behavior of the amplitude of the potentials with this assumption, compared to the experimental data of Avramenko et al for Ga [1] (2004) for three different values of v_F/c. It can be seen that our curves qualitatively and semiquantitatively fit the observations for reasonable values of v_F/c. An even better fit may result from using values of the superfluid concentration derived from measurement of the temperature-dependent London penetration depth rather than the two-fluid model temperature dependence assumed here. We argue that the comparison shown in Fig. 2 provides strong evidence in favor of the greatly enhanced charge rigidity of superconductors predicted by our theory.
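The following sketch (our own illustration) evaluates the two-fluid estimate for v_eff described above and, under the additional assumption, made here only for illustration, that the transmitted amplitude scales as 1/v_eff, shows the precipitous drop of the signal just below T_c:

import numpy as np

def v_eff_over_vF(t, vF_over_c=0.003):
    """Two-fluid estimate described above: v_eff^2 = (n_n/n) v_F^2 + (n_s/n) c^2,
    with n_s/n = 1 - t^4 and n_n/n = t^4, expressed in units of v_F."""
    ns, nn = 1.0 - t**4, t**4
    return np.sqrt(nn + ns / vF_over_c**2)

for t in (1.0, 0.99, 0.95, 0.9, 0.8):
    r = v_eff_over_vF(t)
    # Illustrative assumption (not from Ref. [1]): transmitted amplitude ~ 1/v_eff,
    # so the signal relative to its value at T_c scales as v_F / v_eff.
    print(f"T/Tc = {t:4.2f}: v_eff/v_F = {r:8.1f}, relative amplitude ~ {1.0/r:.3f}")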
VIII. DE HEER CLUSTER DIPOLE MOMENTS
In a series of papers, W. de Heer and coworkers [2] established that small metallic clusters of Nb, V and Ta exhibit a large electric dipole moment at low temperatures (of order several Debye for clusters of up to 100 atoms). Through a variety of measurements they found very strong evidence that the development of the electric dipole moment is associated with the onset of superconductivity. In contrast, similar clusters of a nonsuperconducting metal, Na, showed essentially no electric dipole moments [51].
Within our theory a superconducting body expels negative charge from the interior to the surface and the resulting charge distribution is rigid. The distribution of electronic charge is determined by the geometry of the body and can be obtained by numerical solution of the electrostatic equations [52]. Initially we had hoped [11] that the inhomogeneous electronic charge distribution predicted by our theory would account for the electric dipole moments observed by de Heer et al. However our calculations show that the electronic charge distribution does not exhibit an electric dipole moment even for sample shapes without inversion symmetry [53].
However, the distribution of ionic charge in a small cluster is discrete rather than continuous, and this fact is not taken into account in our calculation. Small metallic clusters have irregular shapes [54,55], in general with no inversion symmetry. Generically an electric dipole moment will be generated by the ionic charges determined by the overall shape of the cluster as well as by the discrete location of the ions. In the normal state, as well as within conventional BCS theory, metallic or superfluid electrons are extremely efficient at screening electric fields over a length scale λ T F , of order 1Å (eq. (18)) and thus will screen any ionic dipole moment. Instead, within our theory the superfluid electrons can only screen electrostatic fields over distances of order the London penetration depth (Eqs. (16), (5)), typically of order several hundredÅ, which is much larger than the linear dimensions of the de Heer clusters (which have up to ∼ 100 atoms and linear dimensions smaller than 10Å). Therefore, we argue that the observation of large unscreened electric dipole moments in metallic clusters of dimensions much smaller than the London penetration depth is strong evidence in favor of the large rigidity of the superfluid electron charge distribution predicted by our dielectric function Eq. (7).
IX. TAO EFFECT
In a series of papers [3], R. Tao and coworkers found that superconducting microparticles in a strong electrostatic field assemble into spherical shapes of macroscopic dimensions. We have proposed a detailed explanation of this "Tao effect" [56], based on the charge expulsion and resulting electric fields in the neighborhood of superconducting particles of non-spherical shape predicted by our theory.
However, even without considering the details of our theory, in a more general context it is clear, as pointed out in the experimental papers [3], that this observation is impossible to explain unless electrostatic fields penetrate the superconducting particles a distance considerably larger than the Thomas-Fermi length. This then requires that the superconducting charge distribution is more rigid than in the normal state, where the electric field is screened within an Å or so of the surface. Thus, we argue that the observation of the Tao effect is also a strong indicator that the charge distribution in superconductors is more rigid than in the normal state.
X. SCREENING OF ELECTROSTATIC FIELDS
The electrodynamic equations of our theory predict [14], according to Eq. (16), that the superfluid electrons screen applied static electric fields over a distance λ L rather than over a Thomas Fermi screening length as predicted by BCS theory [8,57]. In fact, the London brothers themselves considered electrodynamic equations for superconductors predicting such behavior in an early version of their theory [58]. However, shortly thereafter H. London performed an experiment [59] attempting to detect this effect and didn't find it, after which the London brothers discarded that version of the theory and adopted the conventional London equations to describe superconductors which do not allow electric fields in the interior.
We expect the electric screening length to increase continuously from λ_TF ∼ 1 Å to λ_L ∼ hundreds of Å as the temperature is lowered from T_c to 0. In a two-fluid model description the static dielectric constant at finite temperatures is given by

ε(q,0) = 1 + (1/q²)[(n_s/n)(1/λ_L²) + (n_n/n)(1/λ_TF²)]    (32)

giving the effective electric screening length λ_E as

λ_E = [(n_s/n)(1/λ_L²) + (n_n/n)(1/λ_TF²)]^{−1/2}    (33)

with n_s(t) = n(1 − t⁴), n_n(t) = nt⁴, and t = T/T_c. Eq. (33) predicts the temperature dependence of the electric screening length shown in Fig. 3. Note that only at temperatures well below T_c does the screening length increase substantially. In H. London's 1936 experiment [59] he attempted to measure changes in the capacitance of a capacitor with superconducting electrodes of the metal Hg that would result from an increased electric penetration depth. His experiment showed no change, from which he concluded that the electric screening length doesn't change in superconductors. However, the lowest temperature reached in H. London's experiment was T = 1.8 K, which corresponds to T/T_c = 0.43 for Hg (T_c = 4.153 K). With the sensitivity of his experiment, London could have detected a change in the capacitance corresponding to the screening length increasing above λ_E ∼ 20 Å. As can be seen in Fig. 3, for T/T_c ∼ 0.4 the screening length would only have increased to about 5 Å, hence substantially less than what could have been detected with the sensitivity of that experiment.
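A quick numerical check of Eq. (33) (our own sketch; λ_TF = 1 Å and λ_L = 400 Å are representative values, not fitted to any particular material) reproduces the behavior quoted above for H. London's experiment:

import numpy as np

lam_TF = 1.0     # Thomas-Fermi screening length (angstrom), representative
lam_L = 400.0    # London penetration depth (angstrom), representative

def lam_E(t):
    """Effective electric screening length of Eq. (33) at reduced temperature t = T/Tc."""
    ns, nn = 1.0 - t**4, t**4
    return 1.0 / np.sqrt(ns / lam_L**2 + nn / lam_TF**2)

for t in (1.0, 0.8, 0.6, 0.43, 0.2, 0.0):
    print(f"T/Tc = {t:4.2f}: lambda_E = {lam_E(t):7.1f} angstrom")
# At t = 0.43 this gives lambda_E of roughly 5 angstrom, well below the ~20 angstrom
# sensitivity attributed above to H. London's capacitance measurement.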
In 1993 Jenks and Testardi attempted to measure the penetration of a static electric field into epitaxial thin films of YBa_2Cu_3O_{7−x} [22]. They reported a large change close to T_c, in apparent disagreement with both BCS theory and with our expected behavior shown in Fig. 3. However, it is not clear that this experiment was free of experimental artifacts, since the results also showed a large change in penetration depth with temperature above T_c, and variations between different films. The experiment has not been repeated, nor are there any other published reports of attempts to measure changes in the electric screening length below the superconducting critical temperature in either high T_c or conventional materials to our knowledge.
XI. DISCUSSION
The need for reformulation of London electrodynamics arose in our theory from the prediction that negative charge expulsion occurs in the transition to superconductivity [60], which is a consequence of the microscopic physics of electron-hole asymmetry [61] and incompatible with the conventional London equations that assume that no electrostatic fields can exist in superconductors. Our reformulation renders the theory relativistically covariant [14], unlike conventional London electrodynamics, and allows for a natural and consistent extension of the electrodynamics equations to the spin sector [24] so as to describe both charge and spin currents, which is necessitated by the predicted existence of an outward pointing electric field in the interior of superconductors.
The enhanced charge rigidity and inability to screen can be seen also as a natural consequence of several other aspects of the theory. For example, superconductivity in this theory is driven by kinetic energy rather than potential energy lowering [62,63]. Thus, in contrast to the normal metal, the superconductor is willing to pay a price in Coulomb potential energy in order to optimize kinetic energy, which naturally results in its inability to effectively screen electric fields over short distances, a process which is potential-energy driven in the normal metal. Kinetic energy lowering is associated with the fact that in the transition to superconductivity electrons 'undress' from the electron-ion interaction, expand their wavelength and no longer 'see' the discrete ionic potential [64], hence are unable to screen perturbations on interatomic distance scales as normal electrons do.
Associated with the much larger screening length is the fact that the compressibility of the superfluid is enormously reduced compared to the normal metal. This is related to the enhanced quantum pressure of the superfluid compared to the normal fluid, which is manifest in superconductors in the negative charge expulsion and in superfluid 4He in the fountain effect [65]. It does not mean, however, that the pressure is increased by the same factor as the rigidity (bulk modulus). For the superconductor we have the relation Eq. (15), and integrating it we obtain an expression containing an integration constant n_0, in contrast to the corresponding relation for the normal metal. It is natural to conclude that n − n_0 is of order ρ_−/e (ρ_− is the excess negative charge density near the surface [24]), the expelled number density, which is smaller than the superfluid density n by about the same factor (∼ 10^6) as the energy m_e c^2 is larger than the Fermi energy ε_F [24]. Thus the superfluid pressure in the superconductor is of the same order of magnitude as the electronic pressure in the normal metal, but its rigidity is enormously enhanced.
How is this compatible with the experimental observation that the compressibility of a solid in the superconducting state is essentially the same as in the normal state? Clearly in a quasistatic compressibility measurement the ions and the electrons move together, no charge imbalance is generated and the enhanced rigidity does not manifest itself. It is only in experiments where the electronic density is locally changing relative to the ionic density that the much larger rigidity will show up.
Formally one can write electrodynamics equations for the superconductor where the screening length for electrostatic fields is λ_L but where no charge expulsion occurs, as done by the London brothers themselves in the early version of their theory [58], as well as by others thereafter [66,67]. Mathematically the formalism is very appealing but there is no physics behind it, and perhaps for that reason the London brothers were quick to discard it soon thereafter when H. London's experiment seemed to disprove it [59]. For us instead, the enhanced charge rigidity and the predicted charge expulsion are inextricably linked: no enhanced rigidity can take place without charge expulsion and no charge expulsion can occur without accompanying enhanced rigidity. This is because both phenomena are a direct consequence of the fact that electronic orbits expand, driven by kinetic energy lowering, from microscopic non-overlapping orbits of radius k_F^{-1} to orbits of radius 2λ_L in the transition to superconductivity [12], as shown schematically in Fig. 4. Orbit expansion implies outward motion of negative charge, and the resulting mesoscopic orbits are highly overlapping, which makes it impossible to create a charge fluctuation over a small distance since the extra electrons would not have the ability to insert their orbits in the mesh of highly correlated interpenetrating orbits that already exists.
[Fig. 4 caption: Electronic orbits expand from radius k_F^{-1} in the normal state (left) to radius 2λ_L in the superconducting state (right). This is the origin of the charge expulsion and the charge rigidity over distances of order λ_L predicted by the theory. The black dots denote the instantaneous position of the electron, i.e. the "phase", which is random in the normal state where the orbits are non-overlapping, and highly correlated between different overlapping orbits (i.e. phase coherent) in the superconducting state. The orbiting speed is v_σ^0 = ℏ/(4 m_e λ_L) in the superconducting state [24].]
It is often said in the context of the conventional theory of superconductivity that the wavefunction of a superconductor is "rigid", a concept first introduced by F. London. In the conventional theory, "rigidity" refers only to the response to magnetic perturbations. Instead, our theory extends the property of rigidity of superconductors also to the response to electric perturbations. Rigidity to both magnetic and electric perturbations originates in the overlapping phase-coherent 2λ_L orbits depicted in Fig. 4, which also explains the macroscopic phase coherence (phase rigidity) of the superconductor: an electron orbiting out of phase would collide with other electrons in overlapping orbits and pay a high price in Coulomb energy. And this also explains why the length λ_L enters symmetrically in our theory for both magnetic and electric phenomena [24]: the 2λ_L orbits are necessary for the Meissner effect to take place [13], as already suspected long ago by Smith [68] and by Slater [69], and the same 2λ_L orbits determine the electric screening length. The wavefunction for the superconducting state has to describe superfluid electrons in 2λ_L orbits, which BCS theory does not do, if it is to describe the ubiquitous Meissner effect and the experimental consequences of enhanced charge rigidity discussed in this paper.
The fact that the superfluid wavefunction is rigid with respect to both magnetic and electric perturbations and the resulting new electrodynamics follow naturally in a relativistic context [58]. Within Klein-Gordon theory [70] describing a relativistic scalar wavefunction Ψ(r, t), the current four-vector J = (J(r, t), icρ(r, t)) for the current and charge densities is given in terms of Ψ, the magnetic vector potential A and the electric potential φ (Eqs. (37a) and (37b)). In the conventional theory it is said that the Meissner effect results from the fact that the wavefunction is unaffected by changes in the magnetic vector potential A. Hence, since J = 0 in the absence of magnetic fields, Eq. (37a) implies that for any value of A, J(r, t) = −(n_s e^2/m_e c) A(r, t), with n_s = Ψ*Ψ, giving rise to the Meissner effect. Extending the argument, it is natural to assume that the wavefunction Ψ(r, t) is also unaffected by applied electric fields and by proximity to the boundaries of the sample. If deep in the interior of the superconductor the charge density is assumed to be a constant ρ_0, with associated electric potential φ_0(r) (∇²φ_0 = −4πρ_0), it follows from applying Eq. (37b) to a position deep in the interior and another arbitrary position r and subtracting, that at any position r, with or without applied electric fields, ρ(r, t) − ρ_0 = −(n_s e^2/m_e c^2)(φ(r, t) − φ_0(r)) (39), which is the basic equation of our modified electrodynamic formalism [14] determining the charge distribution and electric potential in superconductors of arbitrary shape.
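To make the screening content of Eq. (39) concrete, the following minimal sketch (our own illustration, in arbitrary units) assumes a one-dimensional slab and combines Eq. (39) with Poisson's equation, which gives d²δφ/dx² = δφ/λ_L² for the deviation δφ = φ − φ_0; the numerical boundary-value solution recovers the exponential decay of an applied static potential over λ_L.

```python
import numpy as np

# Minimal 1D sketch (assumption-laden illustration, not the paper's calculation):
# combining rho - rho_0 = -(n_s e^2 / m_e c^2)(phi - phi_0) with Poisson's equation
# yields d^2(dphi)/dx^2 = dphi / lambda_L^2, i.e. static potentials are screened
# over lambda_L inside the superconductor.

lam = 1.0                     # screening length (arbitrary units)
L, N = 10.0, 1000             # slab depth and number of grid points
x = np.linspace(0.0, L, N)
h = x[1] - x[0]

# Finite-difference boundary value problem: dphi(0) = 1 (applied), dphi(L) = 0 (deep inside)
A = np.zeros((N, N))
b = np.zeros(N)
A[0, 0] = 1.0; b[0] = 1.0
A[-1, -1] = 1.0; b[-1] = 0.0
for i in range(1, N - 1):
    A[i, i - 1] = 1.0 / h**2
    A[i, i]     = -2.0 / h**2 - 1.0 / lam**2
    A[i, i + 1] = 1.0 / h**2

dphi = np.linalg.solve(A, b)
print("numeric dphi at x = lambda_L :", dphi[np.searchsorted(x, lam)])
print("analytic exp(-1)             :", np.exp(-1.0))
```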
In summary, we have discussed in this paper six different experimental probes of the enhanced charge rigidity of superconductors predicted by our theory. Three of them (electron sound, de Heer effect, Tao effect) have already shown clear evidence for enhanced charge rigidity. Another two (bulk and surface plasmons) have not yet been experimentally tested, and there are a variety of different experimental techniques (EELS, IXS, optical transmission, optical near-field, nanoparticles) that can be used for that purpose. Finally, measurements of the predicted increase in the electric screening length below T_c have yielded ambiguous results so far [22,59]. For none of the three observations that we interpret as arising from the enhanced charge rigidity predicted by our theory have alternative plausible explanations been proposed, and they all seem to be incompatible with conventional London-BCS theory. For the changes that we predict in the bulk and surface plasmon dispersion relations no other such predictions have been made in other theoretical frameworks, and they are also incompatible with conventional BCS-London theory, as is the predicted increase in electric screening length at low temperatures. It will be interesting to confront the predictions of our theory and of BCS-London theory with future experimental results. | 2012-02-24T20:29:51.000Z | 2012-01-17T00:00:00.000 | {
"year": 2012,
"sha1": "958da7c112469888dc19229b4111d63d82538a4a",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1201.3637",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "958da7c112469888dc19229b4111d63d82538a4a",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
204738635 | pes2o/s2orc | v3-fos-license | Whole Genome Sequencing of Familial Non-Medullary Thyroid Cancer Identifies Germline Alterations in MAPK/ERK and PI3K/AKT Signaling Pathways
Evidence of familial inheritance in non-medullary thyroid cancer (NMTC) has accumulated over the last few decades. However, known variants account for a very small percentage of the genetic burden. Here, we focused on the identification of common pathways and networks enriched in NMTC families to better understand its pathogenesis with the final aim of identifying one novel high/moderate-penetrance germline predisposition variant segregating with the disease in each studied family. We performed whole genome sequencing on 23 affected and 3 unaffected family members from five NMTC-prone families and prioritized the identified variants using our Familial Cancer Variant Prioritization Pipeline (FCVPPv2). In total, 31 coding variants and 39 variants located in upstream, downstream, 5′ or 3′ untranslated regions passed FCVPPv2 filtering. Altogether, 210 genes affected by variants that passed the first three steps of the FCVPPv2 were analyzed using Ingenuity Pathway Analysis software. These genes were enriched in tumorigenic signaling pathways mediated by receptor tyrosine kinases and G-protein coupled receptors, implicating a central role of PI3K/AKT and MAPK/ERK signaling in familial NMTC. Our approach can facilitate the identification and functional validation of causal variants in each family as well as the screening and genetic counseling of other individuals at risk of developing NMTC.
Introduction
Thyroid cancer is the most common endocrine malignancy with an age adjusted incidence of 0.5-20/100,000 persons per year [1]. Significant regional differences exist with Italy being among the countries with the highest incidence rates in the world [1]. An increasing incidence has been observed worldwide during the past decades, which can to a certain extent be related to changes in the availability of medical services and in standard clinical practice. On the other hand, regional diagnosed with PTC or micro-PTC (II-2, II-3, II-6, III-1) and one child with benign nodules (II-1). Her unaffected son was deemed a reliable control (II-4). WGS (*) was performed on five family members. In family 2, there were six cases (III-1, III-3, III-4, IV-3, IV-4, IV-5), one probable case (IV-1) and one control (IV-2) out of which six underwent WGS. Family 3 consisted of two related cases (IV-4, IV-5) and one unrelated case (III-1) of which all three underwent WGS. Family 4 is characterized by bilateral PTCs concurrent with other subtypes of NMTCs (Hürthle cell cancer, follicular cancer). Four family members were diagnosed with thyroid cancer of which all underwent WGS (II-2, III-1, III-2, III-3). WGS was performed on eight family members of family 5. Five members were affected by PTC, Hürthle cell cancer, micro-PTC or a combination of two of the subtypes (II-2, II-3, II-5, II-8, II-9). Four members were possible carriers either affected by benign nodules or deceased (I-1, II-4, II-6) and two were unaffected (II-1, II-7).
Whole Genome Sequencing and Variant Evaluation
WGS for 23 cases and 3 controls was performed using Illumina-based small read sequencing after DNA was isolated from peripheral blood using the QIAamp ® DNA Mini Kit (Qiagen, Cat No. 51104) according to the manufacturer's instructions.
Variant Calling Annotation and Filtering
Sequencing data was mapped to a reference human genome (assembly version Hs37d5) using BWA mem (version 0.7.8) and duplicates were removed using biobambam (version 0.0.148). Single nucleotide variants (SNVs) and indels were called from all the samples in a family together using Platypus (version 0.8.1). ANNOVAR, 1000 Genomes, dbSNP and ExAC (Exome Aggregation Consortium) were used in the annotation of variants as explained in detail in our previous paper [14]. Variants to be evaluated further were selected using the following criteria: i) A quality score greater than 20, and a coverage greater than 5x; ii) All Platypus filters were met. Variants with a minor allele frequency (MAF) less than 0.1 % in 1000 genome and ExAC-nonTCGA data were selected for further analysis. A pairwise comparison of shared rare variants among the cohort was performed to check for sample swaps and family relatedness.
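A minimal sketch of the kind of hard filters described above is shown below; it assumes variant records are available as simple dictionaries, and the field names (qual, depth, platypus_filters, maf_1000g, maf_exac_nontcga) are hypothetical placeholders rather than the study's actual column names.

```python
def passes_hard_filters(variant):
    """Apply the quality, coverage and rarity cut-offs described in the text.

    `variant` is assumed to be a dict with hypothetical keys:
      qual              - variant quality score
      depth             - read coverage at the site
      platypus_filters  - set of Platypus filter flags ('PASS' expected)
      maf_1000g         - minor allele frequency in 1000 Genomes
      maf_exac_nontcga  - minor allele frequency in ExAC non-TCGA
    """
    return (
        variant["qual"] > 20
        and variant["depth"] > 5
        and variant["platypus_filters"] == {"PASS"}
        and variant["maf_1000g"] < 0.001         # < 0.1 %
        and variant["maf_exac_nontcga"] < 0.001  # < 0.1 %
    )

variants = [
    {"qual": 35, "depth": 12, "platypus_filters": {"PASS"},
     "maf_1000g": 0.0002, "maf_exac_nontcga": 0.0},
    {"qual": 18, "depth": 30, "platypus_filters": {"PASS"},
     "maf_1000g": 0.0, "maf_exac_nontcga": 0.0},
]
rare_high_quality = [v for v in variants if passes_hard_filters(v)]
print(len(rare_high_quality), "variant(s) retained")
```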
Variant Filtering following the FCVPPv2
Variant evaluation was performed using the criteria of our in-house developed Familial Cancer Variant Prioritization Pipeline v2 (FCVPPv2) [14]. This process is summarized in Figure 2 and explained in the following text.
Segregation in Pedigrees
The variants were filtered based on pedigree data considering family members diagnosed with NMTC or micro-PTC as cases, benign nodules or goiter as potential variant carriers and unaffected members as controls. The probability of an individual being a Mendelian case or true control was considered. The general rule was that variants had to be present in all cases and absent from all controls.
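The segregation rule stated above can be expressed as a small set-based check; the sketch below is our own illustration, with hypothetical family-member identifiers loosely modelled on the pedigree labels used in the text.

```python
def segregates(variant_carriers, cases, controls):
    """Pedigree-based filter as described above: a variant is kept only if it is
    carried by every case and by no control. Members with benign nodules or
    goiter (potential carriers) are deliberately not used to exclude a variant.

    All arguments are sets of (hypothetical) family-member identifiers.
    """
    carriers = set(variant_carriers)
    return set(cases) <= carriers and not (set(controls) & carriers)

# Hypothetical example loosely modelled on family 1 (identifiers are illustrative):
cases = {"II-2", "II-3", "II-6", "III-1"}
controls = {"II-4"}

print(segregates({"II-2", "II-3", "II-6", "III-1", "II-1"}, cases, controls))  # True
print(segregates({"II-2", "II-3", "II-6", "III-1", "II-4"}, cases, controls))  # False: carried by a control
```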
Variant Ranking Using In Silico Tools
After filtering variants based on pedigree segregation, the CADD tool v1.3 [15] was applied. Variants with a scaled PHRED-like CADD score greater than 10, which accounts for the top 10% of probable deleterious variants in the human genome, were prioritized. Variants were then selected according to their conservation scores. High evolutionary conservation suggests functional importance of a position. Genomic Evolutionary Rate Profiling (GERP), PhastCons and PhyloP were used to assess conservation of the variant position, whereby GERP scores >2.0, PhastCons scores >0.3 and PhyloP scores >3.0 indicate a high level of conservation and are therefore used as thresholds in the selection of potentially causative variants. After that, all missense variants were assessed for deleteriousness using the following tools: SIFT, PolyPhen V2-HDIV, PolyPhen V2-HVAR, LRT, MutationTaster, Mutation Assessor, FATHMM, MetaSVM, MetaLR, PROVEAN, VEST3 and RI using dbNSFP [16]. Variants predicted to be deleterious by at least 60% of these tools were shortlisted for further analysis. Lastly, intolerance scores were considered. These were merely used to rank the variants and not as cutoffs for selection. The ranking of variants according to the intolerance scores of the corresponding genes relies on the assumption that a variant in a gene intolerant to functional genetic variation is more likely to be deleterious than one that is tolerant to functional variation. We used three intolerance scores based on NHLBI-ESP6500, ExAC datasets and a local dataset, all of which were developed with allele frequency data. The ExAC consortium has developed two additional scoring systems using large-scale exome sequencing data including intolerance scores (pLI) for loss-of-function variants and Z-scores for missense and synonymous variants. These were used for nonsense and missense variants respectively. In our final list, we also included missense variants in known tumor suppressor genes and oncogenes independent of their deleteriousness and intolerance scores. However, all variants had to meet previous cut-offs, i.e., MAF < 0.1%, pedigree segregation, CADD-PHRED >10 and positive conservation scores.
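The conservation and deleteriousness cut-offs listed above translate directly into a simple filter; the sketch below is illustrative only, with hypothetical dictionary keys standing in for the annotated score columns.

```python
def passes_in_silico_filters(v):
    """Conservation and deleteriousness cut-offs as described above.

    `v` is assumed to be a dict with hypothetical keys holding the scores
    (cadd_phred, gerp, phastcons, phylop) and a list of True/False calls
    from the missense prediction tools (SIFT, PolyPhen-2, LRT, MutationTaster, ...).
    """
    if v["cadd_phred"] <= 10:
        return False
    if not (v["gerp"] > 2.0 and v["phastcons"] > 0.3 and v["phylop"] > 3.0):
        return False
    calls = v.get("deleterious_calls", [])
    if calls:  # only missense variants carry predictor calls
        fraction_deleterious = sum(calls) / len(calls)
        if fraction_deleterious < 0.60:
            return False
    return True

example = {
    "cadd_phred": 24.3, "gerp": 4.1, "phastcons": 0.98, "phylop": 5.2,
    "deleterious_calls": [True, True, True, False, True, True,
                          True, True, False, True, True, True],
}
print(passes_in_silico_filters(example))  # True: 10/12 tools (~83%) call it deleterious
```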
Analysis of Non-Coding Variants
Non-coding regions make up over 98% of the genome and possess millions of potential regulatory elements and noncoding RNA genes. Hence it is crucial to analyze the potential pathogenic impact of such variants in a Mendelian disease. Putative miRNA targets at variant positions within 3′ untranslated regions (UTRs) and 1 kb downstream of transcription end sites were detected by scanning the entire dataset of the human miRNA target atlas from TargetScan 7.0 [17] with the help of the intersect function of bedtools. We scanned the 5′ UTRs and 1 kb regions upstream of transcription start sites for transcription factor binding sites using SNPnexus (version 3; Dec 2017) [18]. For regulatory variants, we merged enhancer [19] and promoter [20,21] data from the FANTOM5 consortium and super-enhancer data from the super-enhancer archive (SEA) [22] and dbSUPER [23] using the intersect function of bedtools to identify putative enhancers, promoters and super-enhancers in our dataset. We accessed epigenomic data and marks from 127 cell lines from the NIH Roadmap Epigenomics Mapping Consortium via CADD v.1.3 [15], which gave us information on chromatin states from ChromHmm [24] and Segway [25]. The CADD analysis of 3′ UTRs also gave us mirSVR scores for putative miRNA targets; a score lower than −0.1 is indicative of a "good" miRNA target [26]. Furthermore, we used SNPnexus to obtain non-coding scores for each variant and to identify regulatory variants located in CpG islands. Top 3′ UTR and downstream variants that had CADD scores >10 and miRNA target site matches with mirSVR scores <−0.1 were short-listed. Similarly, upstream and 5′ UTR variants in enhancers, promoters, super-enhancers or transcription factor binding sites with CADD scores >10 were selected.
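The interval-overlap step that bedtools performs, together with the shortlisting rules for non-coding variants, can be sketched in plain Python as below; the region names, score keys and genomic coordinates are hypothetical placeholders, not entries from the study's actual annotation tables.

```python
def overlaps(variant_pos, intervals):
    """Return True if a variant position (chrom, pos) falls inside any
    (chrom, start, end) interval, mimicking a `bedtools intersect` lookup."""
    chrom, pos = variant_pos
    return any(c == chrom and start <= pos <= end for c, start, end in intervals)

def shortlist_noncoding(v, mirna_targets, regulatory_regions):
    """Apply the shortlisting rules described above (all keys and intervals are
    hypothetical placeholders):
      - 3' UTR / downstream: CADD > 10, overlaps a miRNA target site, mirSVR < -0.1
      - 5' UTR / upstream:   CADD > 10 and located in an enhancer, promoter,
                             super-enhancer or transcription factor binding site
    """
    if v["cadd_phred"] <= 10:
        return False
    if v["region"] in ("3UTR", "downstream"):
        return overlaps((v["chrom"], v["pos"]), mirna_targets) and v.get("mirsvr", 0) < -0.1
    if v["region"] in ("5UTR", "upstream"):
        return overlaps((v["chrom"], v["pos"]), regulatory_regions)
    return False

mirna_targets = [("chr12", 52345000, 52346000)]      # hypothetical miRNA target site
regulatory_regions = [("chr8", 17780000, 17790000)]  # hypothetical promoter/enhancer
v = {"chrom": "chr12", "pos": 52345500, "region": "3UTR", "cadd_phred": 14.2, "mirsvr": -0.4}
print(shortlist_noncoding(v, mirna_targets, regulatory_regions))  # True
```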
Variant Validation
In order to increase the confidence in variant calls and reduce the risk of false positives, we visually inspected the sequencing data of all short-listed variants for correctness using the Integrative Genomics Viewer (IGV; version 2.4.10) [27].
Ingenuity Pathway Analysis (IPA)
IPA (Qiagen; http://www.qiagen.com/ingenuity; analysis date 08/04/2019) was used to perform a core analysis to identify relationships, mechanisms, functions, networks, and pathways relevant to the genes affected by variants that passed the mean allele frequency cut-off, fulfilled family-based segregation criteria, had CADD scores >10 and were not intergenic or intronic variants. Data were analyzed for all five families together. Top canonical pathways were identified from the IPA pathway library and ranked according to their significance to our input data. This significance was determined by p-values calculated using the right tailed Fisher's exact test. These values indicated the probability of association of genes from the input dataset with the canonical pathway by random chance alone. Ratios were also calculated for each pathway by dividing the number of genes from the input dataset that map to the pathway by the total number of genes in that pathway. The ratios did not influence the ranking of the canonical pathways.
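The p-value and ratio described above can in principle be reproduced with a right-tailed Fisher's exact test on a 2x2 contingency table; the sketch below is our own illustration, and all gene counts (including the background size) are made-up numbers rather than values taken from the IPA run.

```python
from scipy.stats import fisher_exact

def pathway_enrichment(hits_in_pathway, pathway_size, input_genes, background_genes):
    """Right-tailed Fisher's exact test and ratio, analogous to the values IPA reports.

    All counts are illustrative; `background_genes` stands for the number of genes
    in the reference set used by the enrichment tool.
    """
    table = [
        [hits_in_pathway, input_genes - hits_in_pathway],
        [pathway_size - hits_in_pathway,
         background_genes - pathway_size - (input_genes - hits_in_pathway)],
    ]
    _, p_value = fisher_exact(table, alternative="greater")
    ratio = hits_in_pathway / pathway_size
    return p_value, ratio

# Hypothetical numbers: 6 of the 210 mapped input genes fall in a 250-gene pathway,
# against a background of roughly 20,000 genes.
p, r = pathway_enrichment(hits_in_pathway=6, pathway_size=250,
                          input_genes=210, background_genes=20000)
print(f"p = {p:.2e}, ratio = {r:.3f}")
```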
IPA was also used to generate gene networks in which upstream regulators were connected to the input dataset genes while taking advantage of paths that involved more than one link (i.e., through intermediate regulators). These connections represent experimentally observed cause-effect relationships that relate to expression, transcription, activation, molecular modification and transport as well as binding events.
Whole Genome Sequencing
In this study, five families with reported recurrence of NMTC were analyzed. WGS identified a total of 112254, 207873, 120323, 91427 and 101081 variants which were reduced by pedigree-based filtering to 6368, 9373, 3123, 7060 and 2708 in families 1-5, respectively. Non-synonymous SNVs were the most common exonic variants ( Figure S1).
Final Prioritization of Candidates according to the FCVPPv2
After applying the FCVPPv2, the number of potential pathogenic protein coding variants was reduced to 31. These variants are listed in Table 1. A number of genes are of high significance to our study as they are either related to cancer or play a role in thyroid metabolism. CHEK2 is a known tumor suppressor gene involved in DNA damage response [28]. EWSR1 generates a powerful oncogenic protein causing Ewing sarcoma [29], RET is a proto-oncogene well-known in hereditary medullary thyroid carcinoma, NRP1 is known to be positively associated with the progression of breast cancer [30], POT1 is a known predisposing gene in malignant melanoma [31] and TG encodes the precursor of iodinated thyroid hormones and is associated with susceptibility to autoimmune thyroid diseases (AITD) [32].
FCVPPv2 also identified 14 upstream and 5′ UTR variants, which are shown in Table 2. Among them, three variants are of particular interest in thyroid cancer. The PCM1 variant is a 5′ UTR variant that our data showed to affect three transcription factor binding sites (Egr-3, AP-2alphaA and AP-2 gamma). Chromosomal aberrations involving this gene have been associated with PTC and a variety of hematological malignancies [33]. The other 5′ UTR variant is located in the P4HB gene which is known to be involved in the structural modification of the thyroglobulin precursor in hormone biogenesis [34]. Both variants are present in CpG islands and have been predicted to be localized at an active transcription start site by ChromHmm and Segway. The third variant is an upstream variant in the DAPL1 gene, shown to affect the binding sites of MAZR and Sp1, a potential tumor suppressor in thyroid cancer, by SNPnexus and Segway.
Furthermore, 25 variants located downstream and in 3′ UTRs were shortlisted (Table 3). Among them, two genes of importance can be highlighted, namely ACVR1B and NLK. Mutations in the ACVR1B gene are associated with pancreatic cancer [35]. The variant in the 3′ UTR of ACVR1B is localized at a target site for miR-6871-5p with a context ++ percentile score of 53, indicating a relatively good context for repression of the mRNA due to this miRNA. Altered expression of NLK is associated with cancer development and has been shown to be an independent prognostic factor in colorectal cancer [36]. The corresponding variant to this gene has two predicted miRNA target sites for miR-6818-5p and miR-6867-5p with high context ++ percentile scores (88 and 79, respectively).
Variants prioritized by the FCVPPv2 were also present in pathways, networks, and disease categories shown to be significantly enriched in FNMTC by IPA.
Table 1. Top exonic variants prioritized following the FCVPPv2. Chromosomal positions, classifications, PHRED-like CADD scores and the percentage of positive intolerance (Int) and deleteriousness (Del) scores are included for each variant. Additional information regarding protein-protein interactions (STRING), localization in protein domains (InterPro [37]) and the biological function of the respective protein (GeneCards [38]) is included.
Table 2. Top upstream and 5′ UTR variants prioritized according to the FCVPPv2. Variant annotation, chromosomal position, and regulatory consequences according to FANTOM5, SEA, CADD and SNPnexus are listed. The FANTOM5 database gives information on known promoters. CADD gives an overall deleteriousness score together with chromatin state information based on ChromHmm and Segway scores and information on transcription factor binding sites (TFBSs). Location of the variants within a specific TFBS and CpG island were obtained from SNPnexus. A cumulative non-coding score is shown as a percentage of positive scores from all scores listed in the footnote. Cut-offs for these scores are also indicated in the footnote.
Ingenuity Pathway Analysis (IPA) Shows Enrichment of GPCR and RTK Mediated Pathways
In order to identify key biological functions and signaling pathways affected in FNMTC, we filtered the variants according to pedigree segregation, CADD scores and location, excluding intronic and intergenic variants. The variants were in 339 genes, with 92, 122, 14, 72 and 39 genes coming from families 1-5 respectively. Of these genes, 210 gene IDs could be mapped by IPA and were part of the subsequent analysis (Table S1). The remaining 129 genes were uncharacterized genes with RP11 IDs, and thus could not be mapped.
Of the top 150 diseases and bio functions, 123 were cancer-related with thyroid cancer at position 99 (p = 3.17 × 10 −5 ), NMTC at position 120 (p = 6.39 × 10 −5 ), differentiated thyroid cancer (DTC) at position 125 (p = 7.88 × 10 −5 ) and PTC at position 148 (p = 2.16 × 10 −5 ) (Table S1B). There was a high overlap of molecules among the four thyroid cancer related categories. This overlap of eight genes included two genes prioritized using our pipeline (RET and TG), that are of particular interest in thyroid cancer.
With the aim of evaluating the canonical pathway results to determine the most significant pathways in our dataset, we created a network of the top 18 overlapping canonical pathways (Table S1C, Figure 3). The threshold of common genes between the pathways was set at 2. G-protein coupled receptor (GPCR) and receptor tyrosine kinase (RTK) mediated pathways, as major mediators of thyroid cancer development, were represented by 12 pathways (Figure 3). The genes involved in the top 18 pathways along with their corresponding variants are listed in Table S2.
Figure 3. Top 18 overlapping canonical pathways visualized as a network, which shows each pathway as a single "node" colored proportionally to the Fisher's Exact Test p-value, where brighter red indicates higher significance. Nodes marked with an asterisk (*) belong to the group of GPCR and RTK mediated pathways.
Network Analysis Reinforces the Central Role of PI3K/AKT and MAPK/ERK Signaling in FNMTC
We conducted a network analysis using the IPA software to predict interacting molecular networks significant to our input-data and to evaluate genes with a central role in FNMTC (Figure 4, Table S1D). Since the IPA network analysis includes paths with intermediate regulators that involve more than one link, a comprehensive picture of the possible gene interactions was generated. The networks were ranked according to scores that were generated by considering the number of focus genes (input data) and the size of the network to approximate the relevance of the network to the original list of focus genes. We focused on the three highest scoring networks, which had scores ranging from 33 to 51 (Table S1D).
In coherence with the pathway analysis, the network analysis reinforces the importance of central perpetrators of GPCR and RTK mediated signaling (AKT, ERK1/2: Networks 1 & 3) and their downstream effectors (NFκB, CREB: Network 2). Furthermore, Network 3 encompasses a number of genes related to thyroid metabolism including TG from our prioritized shortlist.
Overlapping Pathways in Familial Non-Medullary Thyroid Cancer
Since GPCR and RTK mediated signaling were highlighted in both pathway and network analyses, we propose a pathway to facilitate a general understanding of FNMTC at a molecular level (Figure 5). Activation of GPCR receptors can activate MAPK/ERK signaling as well as PI3K/AKT signaling via one of the four subclasses of G-proteins (Gαs, Gαi/o, Gαq/11, and Gα12/13). Dimerization of receptor tyrosine kinase (RTK) receptors can be induced by growth factors such as EGFR and GDNF, which results in the phosphorylation and subsequent activation of the receptor monomers. Receptor activation is linked to downstream signal transduction pathways like the MAPK signaling cascade and the PI3K/AKT system via adaptor proteins. Genes from our dataset that were present in these pathways as activators or regulators are highlighted in Figure 5.
Discussion
The high heritability of thyroid cancer can be attributed to both rare, high-penetrance mutations and common, low-penetrance variants [4,13]. The former is best identified by studying families with a Mendelian pattern of inheritance of the disease in question. We used this principle in our study and identified 31 exonic and 39 non-coding rare potentially pathogenic variants segregating with the disease in five PTC-prone families.
Scientific and technological advancements in genomics have allowed WGS to become the state-of-the-art tool not only for the identification of driver mutations in tumors but also for the identification of novel cancer predisposing genes in Mendelian diseases. The former has led to improvements in personalized medicine, wherein therapeutic approaches are based on targeting dysregulated pathways specific to the affected individual. There are also some reports of WGS being successfully used to implicate rare, high-penetrance germline variants in cancer, for example POT1 mutations in familial melanoma [39] and POLE and POLD1 mutations in colorectal adenomas and carcinomas [40]. Identification of cancer-predisposing mutations is a critical step in cancer risk assessment and can help in cancer screening and prevention strategies. Furthermore, the implication of predisposition genes and their respective pathways may facilitate development of targeted therapy. However, one has to be critical in reporting novel variants before appropriate functional validation and evaluation of their penetrance in a large cohort of families. The importance of this step is exemplified by controversial findings regarding the implication of HABP2 G534E in familial NMTC [41].
Some of the genes shortlisted based on FCVPPv2 have already been identified in other cancers. These include CHEK2 mutations in breast cancers and also in a variety of other cancers including thyroid cancer [28], EWSR1 in Ewing sarcoma [29], RET in hereditary medullary thyroid carcinoma, NRP1 in breast cancer [30] and germline POT1 variants in malignant melanoma [31]. Moreover, it is interesting to note that the expression of NRP2, an important paralog of the NRP1 gene, has been correlated to lymph node metastasis of human PTC and is required in the VEGF-C/NRP2 mediated invasion and migration of thyroid cancer cells [42]. The upstream variant in the DAPL1 gene is shown to affect the binding sites of MAZR and Sp1 by SNPnexus and Segway. MAZR1, also known as PATZ1, has been shown to be downregulated and delocalized in thyroid cancer cell lines derived from papillary, follicular and anaplastic thyroid carcinomas [43]. Another study has demonstrated the role of PATZ1 as a tumor suppressor in thyroid follicular epithelial cells and its involvement in the dedifferentiation of thyroid cancer [44].
Other genes of interest shortlisted based on the pipeline (PNPLA8, PTGIR, RET, GNB2 and POT1) were involved in the enrichment of MAPK/ERK and PI3K/AKT pathways. The MAPK pathway is the most frequently mutated signaling pathway in human cancer and is thus considered one of the most promising targets for cancer therapy. This pathway plays a central role in the induction of biological responses such as cell proliferation, differentiation, growth, migration and apoptosis [45]. Initiated by an extracellular mitogenic stimulus that leads to the activation of RTK or GPCR, the MAPK/ERK pathway leads to the phosphorylation and subsequent translocation of ERK into the nucleus. ERK activation plays a central role in the induction of cell cycle entry and the suppression of negative regulators of the cell cycle [46]. Although MEK1 and MEK2 can be activated by multiple MAP kinase kinase kinases (MAP3Ks) as well as by RAF, they serve as sole activators of ERK1/2 and thus as gatekeepers of the MAPK cascade [47]. Overexpression or aberrant activation of RTKs or their immediate downstream targets (PI3K, RAS and SRC) can result in the upregulation of the MAPK/ERK signaling pathway [48]. A common somatic mutation in this pathway is BRAFV600E, which has been implicated in melanoma [49], thyroid and colorectal cancer [50] and hairy cell leukemia [51].
The importance of the PI3K/AKT pathway in thyroid cancer was first recognized when patients suffering from Cowden's syndrome caused by a germline mutation in the PTEN gene were found to have FTC [52]. PI3K activation phosphorylates and activates AKT which can have numerous downstream effects via activation or inhibition of multiple proteins that are involved in cell growth, proliferation, motility, adhesion, angiogenesis, metabolism and apoptosis.
Furthermore, our findings are in line with recent studies on PTC tissues and PTC cell lines that have implicated activation of MAPK/ERK and PI3K/AKT pathways in thyroid carcinogenesis [53][54][55]. Interestingly, somatic alterations that lead to the activation of the MAPK pathway as well as of the PI3K/AKT pathway are common in aggressive thyroid cancers, such as metastatic or recurrent PTC/FTC and ATC [56]. The targeting of downstream RAS effectors has already been shown to be a promising approach; however, patients treated with RAF or MEK inhibitors frequently develop drug resistance [47]. Targeting the downstream ERK kinase, which is also known as the gatekeeper of the MAPK cascade, can overcome the acquired drug resistance induced by upstream kinase inhibitors [57]. In this context, it is also important to note the similarity between our proposed model for the molecular mechanisms in FNMTC and the reported molecular mechanisms in non-familial NMTC. It is known that patients with familial NMTC may have a more aggressive form of the disease, with larger tumors in younger patients and increased rates of extra-thyroid extension and lymph node metastasis. This suggests that FNMTC should be explored further to gain a better understanding of the cause of increased aggressiveness. However, none of the variants were identified in more than one family. As the phenotypes of our families differed (as described in Figure 1), it is likely that the mutations causing the disease in the families are also different. We analyzed only 5 families and no other WGS data on FNMTC are available, thus restricting the possibility of confirming the variants in larger data sets. Functional analysis of promising candidates highlighted in this study may shed some light on the mechanisms underlying this phenomenon.
Interpreting WGS data and selecting one out of millions of genetic variants as the cause of hereditary cancer is a daunting task and highlights the importance of the use of a standardized protocol like the FCVPPv2. We were able to prioritize 31 exonic and 39 non-coding potential cancer-predisposing variants using our family-based pipeline from which we hope to pinpoint one candidate gene for each family. The final selection and implication of one candidate gene predisposing to cancer in each family is beyond the scope of this paper as it will involve further steps including population screening and functional studies. In the present study, we decided to focus on the analysis of pathways that are enriched in familial NMTC to see how the variants prioritized using our pipeline fit into the general pathway analysis results. The IPA analysis of all genes already presented us with valuable data and there was a high involvement of genes prioritized using our pipeline in the top diseases and bio functions, canonical pathways and networks generated by IPA. Although IPA could give us a general idea of molecular pathways affected in the studied families, it is important to keep in mind that the analysis was conducted at a gene level and not at a variant level. The evaluation at a variant level is largely dependent on the pipeline and its subsequent steps as mentioned above. We have already successfully implemented this pipeline to identify DICER1 as a candidate predisposing gene in familial Hodgkin lymphoma [58] and are confident that our pipeline can be applied to the NMTC families in a similar manner.
Conclusions
In conclusion, WGS data analysis of five NMTC-prone families allowed us to prioritize 31 exonic and 39 non-coding variants from which we subsequently hope to identify one candidate gene per family. Furthermore, we were able to identify pathways and networks significant to our dataset, including important tumorigenic pathways such as MAPK/ERK and PI3K/AKT signaling pathways. The implication of previously reported tumorigenic signaling pathways and the presence of known tumor suppressor or oncogenes in these affected pathways show that the pathogenesis of FNMTC is in concordance with characteristic molecular mechanisms of cancer. The next steps will include selecting one candidate gene per family and validating it with the help of population screening and functional studies. We hope that our results can facilitate personalized therapy in the studied families and contribute to the screening of other individuals at risk of developing NMTC. | 2019-10-14T03:51:37.358Z | 2019-10-01T00:00:00.000 | {
"year": 2019,
"sha1": "6ed58b10a64bca60b09640856d26a002d690cd00",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2218-273X/9/10/605/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2c8f2da6564b09134612bb9e8586b79e12509341",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
245720598 | pes2o/s2orc | v3-fos-license | A Review on the Cooking Attributes of African Yam Bean ( Sphenostylis stenocarpa )
African yam bean, an underutilized legume usually cultivated for its edible tubers and seeds, is known for its nutrition-rich qualities; however, the crop’s level of consumption is low. The underutilization of the crop could be attributed to several constraints, including long cooking hours of up to 24 hours. Cooking time is an important food trait; it affects consumers’ choices, nutrients content, and anti-nutrient conditions. Additionally, foods requiring long cooking hours are non-economical in terms of energy usage and preparation time. The prolonged cooking time associated with AYB places enormous limitations on the invaluable food security potentials of the crop. Therefore, the availability of AYB grains with a short cooking time could lift the crop from its present underused status. To efficiently develop AYB grains with reduced cooking time, information on the crop’s cooking variables is a prerequisite. This review presents available information on variations in cooking time, cooking methods, and processing steps used in improving cooking time and nutrient qualities in AYB. Likewise, the review brings to knowledge standard procedures that could be explored in evaluating AYB’s cooking time. This document also emphasizes the molecular perspectives that could pilot the development of AYB cultivars with reduced cooking time.
Introduction
Food and nutrition security, which is part of livelihood, is attracting the attention of stakeholders spanning nations, research organizations, the general public, academic institutions, and policymakers. At present, the world population is estimated at 7 billion; by 2050, it is expected to reach 9.3 billion. As of 2017, the number of food-insecure people worldwide was estimated at 690 million [1], and by 2050 a 70-85% increase in food production will be needed to feed the projected 9.3 billion people [2,3]. Nevertheless, upscaling the adoption and utilization of sustainable crops offers considerable potential for boosting food production amidst the prevailing challenges.
Grain legumes are sustainable crops, capable of surviving under harsh climatic conditions. They require minimal fertilizer inputs because of their ability to fix atmospheric nitrogen through symbiosis with soil Rhizobia. Also, intercropping legumes with other crops has improved soil fertility and crop productivity [4][5][6]. Importantly, legumes are a good source of food and feed for humans and animals, respectively; crops within the legume category are nutritionally rich and, most significantly, they provide affordable sources of protein [7,8]. The contribution of legumes as food and feed differs across types; while some legumes are known worldwide and considerably utilized (soybean (Glycine max L.), common bean (Phaseolus vulgaris L.), cowpea (Vigna unguiculata L.)), others are less known and underutilized (African yam bean (Sphenostylis stenocarpa Harms), lablab bean (Lablab purpureus L.), winged bean (Psophocarpus tetragonolobus L.)). Adopting and accepting underutilized legumes such as African yam bean as a food crop is vital for their survival; nevertheless, AYB's adoption and utilization is intertwined with several factors, including cooking time, nutrient potentials, palatability, and value-added products.
African yam bean
African yam bean, commonly referred to as AYB, is one of the underutilized grain legumes of tropical Africa. The crop is grown for its edible seeds and tuberous roots. Figure 1 presents AYB seeds harvested from a field evaluation in 2020. AYB seeds are enclosed in pods measuring about 3-15 cm long, such that a single pod can accommodate up to 30 seeds. The crop is a climber usually grown in mixed cropping with major crops [10][11][12]. AYB is locally adopted and has wide adaptability across diverse environmental conditions [13,14]. Even though the crop is usually cultivated as an annual crop [15][16][17], some schools of thought consider it perennial [18][19][20]. The cultivation of AYB is concentrated among smallholder farmers across sub-Saharan Africa, with Nigeria prominent among the producing countries [21]. The consumption of AYB is known to contribute to daily nutrition, food availability, and diet diversification in communities utilizing it; this dates back to the Nigerian civil war of 1967-1970, when the crop's food and nutritional potentials were efficiently utilized in fighting malnutrition and hunger [15,[22][23][24].
The seeds of AYB provide an affordable source of protein when compared with other plant sources and animal extracts. Aside from its rich protein content, its high carbohydrate content [25,26] is comparable to the amount reported in grain cereals. AYB's essential amino acid (histidine, isoleucine, lysine, methionine) profile exceeds that observed in soybean [27][28][29]. Likewise, several authors have reported the presence of essential nutrients in AYB's seeds [25,26,[30][31][32][33][34][35][36]. AYB tubers (Figure 2) contain considerable amounts of magnesium (167 mg/100 g), potassium (1010 mg/100 g), protein (15-16%), and carbohydrate (67-68%) [34]. In addition to the crop's nutritional qualities, it is flexible for use in various diets; it can be utilized as a condiment, as a whole meal, or as a snack. The contribution of AYB to feed enrichment is an added advantage of the crop's food and nutrition attributes [37,38].
Considering the enormous potential of AYB and its role in some African traditions [39-41], the efficient utilization of AYB can reduce hunger and nutritional challenges in sub-Saharan Africa. Nevertheless, the food potential of the crop remains widely untapped, which can be attributed to several constraints such as long cooking times of up to 24 hours [41-44], a long maturity cycle of 9-10 months [16,17,45], and the abundance of anti-nutritional factors [35,[46][47][48][49]. However, the genetic variability reported in the crop [9,[50][51][52][53] provides a foundation for breeders to develop improved cultivars. In particular, the availability of AYB cultivars with reduced cooking time could boost the cultivation and consumption of the crop. Up-to-date information on cooking-related attributes is a prerequisite for improving the cooking time trait. Keeping the above in view, the present review brings to knowledge cooking variables reported in AYB. Also, the review proposes the application of standard procedures and molecular technology for advanced studies. Furthermore, the present document is intended to stimulate more research interest towards improving cooking time in the crop.
Structure of African yam bean seeds
Past research investigations have explained the relationship between seed properties, variety type, seed storage conditions, and cooking time [54,55]. Table 1 presents the physical properties reported in AYB seeds. AYB seeds are, dicot in nature and they can measure up to 10 mm in length and 7 mm in width and thickness [9,[50][51][52][53]56]. The seeds of AYB differ in texture across germplasm; they could be rough, wrinkled, or smooth. The electron microstructure study of seeds revealed the presence of smooth starch granules exhibiting different sizes and shapes [57]. The cells were bounded by cell walls same as observed in other legumes [58,59]. Likewise, the round undulating surface observed in the cotyledon is similar in structure to that of cowpea [59,60]. For seeds subjected to milling, the cotyledon and cell components showed structural change. Equally, cell wall materials and protein matrix were reduced to flakes and particles; however, the structure of starch granules remained unchanged. The micrographs of cotyledon, flour, and starch showed the size of starch granules within the range of 4-40 μm for lengths and 4-25 μm for diameter [57].
Cooking quality in African yam bean
Preparing and cooking food is an integral part of daily living [61,62]. For example, most grain legumes are subjected to cooking before being consumed; the cooking process converts raw food into a ready-to-eat product. Also, cooking facilitates the destruction of foodborne pathogens, thereby eliminating microbial hazards and achieving quality [63]. Moreover, the physical and chemical changes that occur during cooking increase the digestibility and availability of nutrients for use and storage in the body [64], through processes including inactivation of antinutrients, starch gelatinization, protein denaturation, leaching of polyphenols and solubilization of polysaccharides, among other factors [59,65,66]. Despite the importance of cooking in food and nutrition, the cooking culture is dwindling, especially in industrialized societies where individuals are exposed to a busy lifestyle with little time at their disposal. To cope with busy schedules, consumers are choosing convenience foods that require less cooking time. Also, reports have shown that consumers are ready to pay more to avoid long cooking hours [67,68].
Cooking time, an attribute of cooking quality, is defined as the time from the beginning of cooking up to when the food becomes tender and suitable to eat [66,69]. AYB, like most legumes, is characterized by seed hardness, requiring long cooking times of up to 24 hours (Table 2) in some scenarios [80]. Seed hardness has been identified as a heritable trait that is also affected by seed composition and by the production and storage environment [54,81,82]. The mechanism by which seeds become hard-to-cook is categorized as a very complex phenomenon; it includes processes such as changes in the intracellular cell wall, middle lamella, polysaccharides, and other components. The hard-to-cook mechanism in seeds has been extensively reviewed by authors [83][84][85]. According to a particular study, an increase in calcium ion concentration led to a subsequent increase in seed hardness and a decrease in phytate concentration. It was also reported that a higher rate of leaching of phytate and pectic acid occurred in cooked and soaked hard-to-cook seeds than in fast-to-cook seeds [85]. Generally, grains with short cooking time are more preferred by consumers because less time is invested in their preparation and, importantly, less energy is spent when compared to the energy requirements for grains with long cooking time. In addition, several studies have shown that nutrients such as minerals and proteins are conserved when grains are cooked over a short period. In contrast, grains requiring long cooking hours usually lose a significant amount of nutrients [55,86]. Cooking methods reported in AYB include boiling, steaming, roasting, and frying. However, standardized procedures for assessing cooking quality have been investigated in major legumes, including sensory analysis involving a sensory panel [87,88], the tactile method of compressing seeds with the thumb [89], and texture analysis, which measures the resistance of seeds to compression using a texture analyzer [87,90].
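As a purely illustrative sketch of how texture-analyzer (or tactile) readings taken at intervals could be turned into a cooking-time estimate, the example below assumes a force threshold below which a seed is judged cooked and a fraction of soft seeds that defines "done"; the threshold, fraction and force values are hypothetical, not measured AYB data.

```python
def estimate_cooking_time(readings, force_threshold, cooked_fraction=0.8):
    """Estimate cooking time as the first sampling time at which a given
    fraction of seeds compress below a force threshold.

    `readings` maps cooking time (minutes) to a list of peak compression
    forces (N) from a texture analyzer; all values here are illustrative.
    """
    for minutes in sorted(readings):
        forces = readings[minutes]
        soft = sum(f < force_threshold for f in forces) / len(forces)
        if soft >= cooked_fraction:
            return minutes
    return None  # not cooked within the sampled interval

readings = {
    60:  [95, 88, 102, 91, 99],
    120: [70, 65, 74, 81, 69],
    180: [38, 35, 31, 45, 36],
    240: [22, 19, 25, 27, 21],
}
print(estimate_cooking_time(readings, force_threshold=40))  # -> 180 (minutes)
```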
Boiling
The boiling method is a moist cooking approach whereby the target food is submerged in a liquid. Cooking is achieved through the transfer of heat from the cooking equipment to the liquid in contact with the food. The food surface absorbs the heat and, through conduction, the heat passes through to cook the food. The boiling method was tested on selected AYB grains. The steps included boiling the grains in water for 480 minutes (Table 2) and thereafter oven drying for 24 hours before milling into flour [70]. In another report, AYB grains were boiled for 228 minutes. The analysis of the boiled seeds showed a reduction in phytate content and an increase in moisture content [47]. In addition, the boiling cooking method was reportedly used in preparing porridge. The procedure included presoaking seeds overnight and boiling them for 60 minutes. The porridge analysis showed an increase in carbohydrate, gross energy, fiber, lipid, water absorption capacity, oil absorption, bulk density, and gelation capacity; however, a decrease in protein and moisture content was observed [71].
Roasting
The roasting method is commonly used in preparing "roasted AYB grain", a popular snack consumed in combination with other foods in South-East Nigeria [19,40,43]. Roasting was effective in increasing the level of phosphorus and the in-vitro protein digestibility of grains. An increase in phytic acid was also reported; however, the tannin level was shown to be at the barest minimum [43]. In the preparation of breakfast cereal from AYB grains in combination with maize and coconut fiber, the blends were roasted for 5 minutes at 280 °C. The formulated blends had a protein content of 18.26%, moisture content of 4.20%, ash content of 7.36%, and energy content of 339.47% [77]. The roasting approach was likewise used in preparing AYB flour. The grains were roasted for 45 minutes (Table 2) using firewood as the energy source, and the roasted grains were then dehulled and milled. Analysis of the roasted flour showed a decrease of about 0.27 mg/100 g in tannin content [78]. In a separate study, AYB grains were roasted in an oven at 120 °C for 300 minutes before being dehulled and milled. Analysis of the dehulled flour showed a reduction in emulsifying capacity, foam capacity, and foam stability, while the samples presented high water and oil absorption capacities [79]. In a further experiment, researchers investigated the effect of roasting on the proximate, mineral, and anti-nutrient content of AYB grains; the grains were roasted over firewood for 1 hour at 300 °C. Increases were reported in the levels of calcium, potassium, copper, iron, manganese, magnesium, phosphorus, and sodium, together with a drastic reduction in the percentage levels of phytate, oxalate, tannins, hydrogen cyanide, and trypsin inhibitor. In contrast, no significant increase in nutrient content was reported [47].
Steaming
The steaming approach uses steam as the cooking medium; the steam is mostly generated from vigorously boiling water. Unlike boiling, steaming does not require submerging the food directly in water; the target food is cooked by the steam or vapor generated from the boiling water. Steam is considered a good heat conductor; nevertheless, the temperature released from steam does not exceed that of boiling water except in a pressurized system [91]. Steaming has been reported to have minimal effects on chlorophyll, soluble protein, sugar, vitamin C, and glucosinolates [92]. The steaming process helped preserve antioxidant properties and maintained the lowest biogenic amine content in bean varieties [93]. In AYB, the steaming approach was reportedly used in preparing a traditional snack called "Moi-Moi". The procedure involved dehulling and wet milling of the grains, followed by spicing, and the Moi-Moi was steamed for about 60 minutes to cook [71,76]. Analysis of the AYB Moi-Moi showed a lower gelation capacity, higher water absorption capacity, and lower oil absorption capacity compared with Moi-Moi made from cowpea. Sensory analysis of the AYB Moi-Moi showed no significant difference in color and flavor from Moi-Moi made from cowpea (the most common grain for preparing Moi-Moi), and its acceptance level was similar to that of the cowpea product [71]. Some researchers used the steaming method to make Moi-Moi from AYB and cowpea blends, reporting a total steaming time of about 50 minutes [75].
Frying
Frying is one of the oldest and best-known cooking methods; the procedure is known for its ease, speed, and the unique flavor and taste it imparts [94], and it also gives the food an attractive color and texture. The frying process uses fat or oil as the medium of direct heat transfer to the food [63,95]. The transfer of heat, oil, and air during frying brings about changes such as loss of moisture, oil uptake, starch gelatinization, aromatization, protein denaturation, and changes in the color of the food. The changes in the food and the oil depend largely on the food properties, the quality of the oil, the heating process, the length of immersion, the rate at which air mixes with the oil, the temperature, and the quality of the frying medium [96]. Frying can lead to the release of toxic products through oxidation, which usually occurs when oil is used continuously at high temperature in atmospheric air [97]. The frying method was reportedly used in the preparation of a traditional snack commonly known as "akara" or "beans ball", which is widely eaten in Nigeria. The grains were soaked overnight and dehulled before wet milling into a paste and spicing. The frying medium (groundnut oil) was heated to 185-190 °C, and the total frying time was about 5 minutes (Table 2). The end product (akara) showed an increase in carbohydrate, gross energy, water absorption capacity, oil absorption capacity, bulk density, and gelation capacity, and no significant difference was reported in the acceptance of the AYB akara compared with the usual cowpea akara [71]. Likewise, the frying method was used in preparing Kokoro, a popular snack in South-West Nigeria. The Kokoro process involved deep-frying a paste made from an AYB-maize blend for about 10 minutes. Proximate analysis of the Kokoro showed an increase in protein, sugar, ash, moisture, potassium, and calcium as the proportion of AYB flour increased, whereas a decrease in fat and starch was observed with increasing AYB flour [72]. Furthermore, the frying process was used to produce AYB cheese, with palm oil as the frying medium; sensory evaluation indicated a general acceptance of the AYB cheese [73].
Baking
Baking is a method whereby raw dough is transformed into a crumb and crust texture under the influence of heat. The changes that occur during baking include crust formation, yeast inactivation, protein coagulation, volume expansion, starch gelatinization, and moisture loss [98][99][100]. The baking approach was used in producing cookies from AYB-wheat composite flour. The cookies were baked for 20 minutes in an oven at 180 °C. Nutritional analysis of the cookies showed an increase in protein content from 8.59 to 9.35%, fat from 3.84 to 4.63%, ash from 4.84 to 5.21%, and crude fiber from 3.84 to 4.22%. An increase in mineral content corresponding to the percentage increase in the level of AYB flour was also observed [74].
Technological gap in the evaluation of AYB cooking time
In AYB, the majority of cooking time investigations have been conducted using basic approaches such as firewood, gas, and kerosene stoves. No information is documented on the use of standard equipment such as the texture analyzer and the Mattson bean cooker; however, the use of the Mattson bean cooker and texture analyzers has been reported in several legumes.
Mattson bean cooker
One standard method of measuring cooking time in pulses is to evaluate it using a Mattson bean cooker [101]. The equipment is easy to use, cost-effective, and generates unbiased data compared to other methods [90]. The use of the Mattson cooker is recommended in grain genetic improvement for evaluating new varieties [66]. Mattson first developed the cooker with 100 plungers [102]; it was later redesigned to have 25 plungers [103]. To use the equipment, individual presoaked seeds are placed on each of the saddles on the rack so that the tip of each plunger comes in contact with the surface of a seed. The weight of each plunger can be optimized to suit the size of the target grain by adjusting the number of lead buckshot inside each plunger. To initiate the cooking test, the lower part of the cooking rack is immersed in a boiling water bath up to half of its height. When a seed reaches tenderness, the plunger penetrates that seed and drops a short distance through the hole in the saddle. The top of a plunger that has dropped (penetrated a seed) sits lower than the tops of the plungers that are yet to drop, which makes it easy to see which plungers have penetrated their seeds [66,90]. The cooking time for a set of 25 seeds has been defined differently by researchers: in one study it was the time required for 100% of the seeds to be penetrated [104], while in another it was recorded as the time at which 92% of the seeds were penetrated [105]. Operating the Mattson cooker requires the uninterrupted attention of the user, who must manually record the time at which each plunger penetrates a seed; the situation becomes more critical when multiple plungers penetrate at the same time. To overcome this bottleneck of manual recording, several researchers have reported the use of an automated Mattson cooker in which the cooking time is recorded automatically [66,90,106].
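As a simple illustration of how the two cooking-time definitions above translate into a calculation, the following minimal Python sketch derives a cooking time from a list of plunger-drop times. The function name, the example drop times, and the choice of a 92% or 100% threshold are illustrative assumptions, not data from the cited studies.

```python
import math

def mattson_cooking_time(drop_times_min, fraction=0.92):
    """Time (minutes) at which `fraction` of the seeds have been penetrated.

    drop_times_min: one entry per seed giving the minute its plunger dropped;
    seeds that never soften can be recorded as float('inf').
    """
    n_required = math.ceil(fraction * len(drop_times_min))  # e.g. 23 of 25 seeds for 92%
    ordered = sorted(drop_times_min)                         # earliest drops first
    return ordered[n_required - 1]                           # time of the n-th drop

# Hypothetical 25-plunger run: two seeds never reached tenderness.
drops = [38, 41, 41, 43, 44, 45, 45, 46, 47, 48, 48, 49, 50, 50, 51,
         52, 53, 53, 54, 55, 56, 58, 60, float("inf"), float("inf")]
print(mattson_cooking_time(drops, fraction=0.92))  # 92% criterion -> 60 minutes
print(mattson_cooking_time(drops, fraction=1.00))  # 100% criterion -> inf (never fully cooked)
```

In an automated cooker, the drop times would come from sensors rather than manual recording, but the same threshold rule applies.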
Texture analysis
Texture is an important trait of food, characterized by mechanical, geometrical, surface, and body attributes detected by the senses of vision, hearing, touch, and kinesthetics [107,108]. The mechanical attributes relate to the behavior of the food under stress, such as hardness, cohesiveness, elasticity, and adhesiveness, whereas the geometrical attributes relate to the size, shape, and structural arrangement of the product. The surface attributes concern the sensations produced in or around the mouth surface by moisture, fat, or both; similarly, the body attributes relate to the sensations produced in the mouth and how the moisture and fat are released [109]. Recently, instrumental texture analysis has proven efficient in evaluating the mechanical and physical qualities of raw and finished products, and the use of a texture analyzer is a well-established protocol. A texture analyzer is used for evaluating the hardness, fragility, adhesiveness, springiness, cohesiveness, gumminess, chewiness, and resilience of food [54,88]. The instrument is easy to operate and eliminates the subjective judgment that may be found in sensory evaluations [109]. The selection of a probe depends on the type of test, which could be a compression test, a penetration (puncture) test, or a traction (tension) test; the different texture analysis test types have been reviewed previously [109]. The texture analyzer has been applied in texture studies of legumes, fruits, vegetables, meat, and milk, among others [109,110].
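To illustrate how two of the attributes listed above can be computed from instrument output, the sketch below derives hardness and cohesiveness from the force traces of a two-cycle compression (texture profile analysis) test. The definitions used here (hardness as the peak force of the first compression; cohesiveness as the ratio of the work done in the second compression to that of the first) are common TPA conventions, but the function name and the toy force values are illustrative assumptions, not measurements from the cited studies.

```python
import numpy as np

def tpa_hardness_cohesiveness(force_cycle1, force_cycle2):
    """Hardness: peak force of the first compression cycle.
    Cohesiveness: area under the second compression curve divided by
    the area under the first (trapezoidal rule, equal sampling intervals)."""
    f1 = np.asarray(force_cycle1, dtype=float)
    f2 = np.asarray(force_cycle2, dtype=float)
    hardness = float(f1.max())
    area1 = float(np.sum((f1[1:] + f1[:-1]) / 2))
    area2 = float(np.sum((f2[1:] + f2[:-1]) / 2))
    return hardness, area2 / area1

# Toy force readings (N) sampled at equal time steps during two compressions.
cycle1 = [0, 5, 12, 20, 28, 33, 25, 10, 0]
cycle2 = [0, 4, 9, 15, 21, 24, 18, 7, 0]
print(tpa_hardness_cohesiveness(cycle1, cycle2))  # -> (33.0, ~0.74)
```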
Methods of reducing cooking time in AYB
Several studies have reported a significant decrease in cooking time after seeds were subjected to processing methods like presoaking, dehulling, frying, steaming, and blanching [43,71,111,112].
Presoaking of seeds
Presoaking is a long-standing traditional practice used in homes to reduce cooking time, especially for grain legumes. The approach is flexible, simple, and common at both the domestic and industrial levels. The process involves the imbibition of water through the outer cuticle and the seed coat and then into the cotyledons [69,113]. The first step in imbibition is the penetration of water into the seed, which can occur through the seed coat, since the seed coat has a high fiber content and thus a high water-holding capacity; water imbibition can also occur through the micropyle or hilum. When the water reaches the cotyledons, the seed absorbs water and swells until it attains its maximum water uptake capacity. Presoaking seeds before cooking enables easy identification of unhydrated seeds, which can be discarded to achieve a uniform cooking time. The procedure reduces cooking time because the hydrated seeds acquire a soft texture, which speeds up the cooking process [114]. Soaking also aids the easy identification of hydratable seeds and improves the nutrient quality of foods, since the soaking water is usually discarded. Soaking grains before cooking is also a traditional practice for increasing food safety, especially when consumers do not know which storage preservatives were used on the grain.
The effect of presoaking in shortening the cooking time of AYB seeds has been reported by several authors. Presoaking AYB seeds in distilled water for 6, 12, 18, or 24 hours reduced cooking time by 50%. The process also reduced the levels of tannin and phytate and improved in-vitro protein digestibility. Soaking for 12 hours was the most effective in reducing cooking time, tannin, and phytate and in improving in-vitro protein digestibility; however, soaking for 24 hours before dehulling was observed to significantly increase the crude protein level by 16% [43]. In a similar study, AYB seeds were presoaked in 0.20%, 0.40%, 0.60%, 0.80%, and 1.00% solutions of akanwu (sodium sesquicarbonate) and of common salt (sodium chloride), and in water, for 6, 12, 18, 24, 30, or 36 hours. Seeds soaked for 6 hours in 0.60% akanwu or 1.00% common salt showed a 50% decrease in cooking time, while seeds soaked in tap water achieved a 50% reduction in cooking time only after 24 hours of presoaking; seeds presoaked in tap water took about 180 minutes to become tender [112]. According to another study, a 50% reduction in cooking time was achieved when seeds were presoaked for 12 hours in either 1% potash or 4% common salt. Seeds soaked for 12 hours in 4% common salt reached tenderness after 45 minutes of cooking, whereas seeds that were not soaked remained hard even after 60 minutes of cooking [111]. In a similar experiment, presoaking seeds in different media (water, alkali, brine, alkaline-brine) reduced the cooking time considerably; the most effective medium was alkaline-brine, with a maximum cooking time of 100 minutes compared with 210 minutes for dry raw seeds [115]. In a separate study, AYB grains soaked overnight reached tenderness after 60 minutes of cooking [71]. Notably, aside from reducing cooking time, presoaking also affects the nutrient and anti-nutrient content of the grains [15, 43, 116-119].
Dehulling
Dehulling is a procedure through which seed coats or testa are removed, either manually or by machine. In most traditional settings, the process is carried out using either a mortar and pestle or a grinding stone, depending on the available option. Dehulled seeds have a good appearance and texture, good cooking quality and palatability, and are easier to digest. The approach reduces cooking time in grain legumes because the impermeable seed coats, which usually prevent water uptake, are removed during dehulling [120]. Dehulled AYB grains showed the shortest cooking time of 35 minutes, against 80 and 150 minutes reported for whole seeds and soaked seeds, respectively [121]. Dehulling was observed to have a significant effect on the functional properties of AYB flour: a higher bulk density (0.93 g/cm3) was reported than in cowpea (0.59 g/cm3) and pigeon pea (0.70 g/cm3). Similarly, the swelling index of dehulled AYB flour (5.9 g/cm3) was higher than the values observed in cowpea (3.7 g/cm3) and pigeon pea (4.1 g/cm3), and the water absorption capacity of dehulled AYB flour (2.8 ml H2O/g) was also higher than that observed in cowpea (1.2 ml H2O/g) and pigeon pea (2.4 ml H2O/g) [122]. In a further experiment, a higher water absorption capacity of 71 ml/g was observed for dehulled AYB than the 60 ml/g reported for raw samples [71].
About 80-90% of the potential anti-nutrient factors (polyphenols) in grain legumes are found in the seed coats, and dehulling has thus proven effective in reducing anti-nutrient contents, especially those located in the seed coats [123,124]. Authors have reported a drastic reduction in the oxalate, phytate, saponin, trypsin inhibitor, and tannin content of dehulled AYB flour [122]. Similarly, an increase in protein but a decrease in calcium and iron was reported for dehulled AYB flour [43]. In a separate study, proximate analysis of dehulled AYB flour showed a high protein content, a high carbohydrate concentration, and a sufficient level of amino acids [125].
Fermentation
Fermentation increases the bioaccessibility and bioavailability of nutrients and improves sensory quality and shelf life [126,127]. The process involves the biochemical modification of food by microorganisms and their enzymes [128] and is capable of disrupting the activities of pathogens [126,129]. The fermentation process was explored for the preparation of "tempeh" from AYB grains; tempeh is a traditional food usually made from soybean broken down by fermentation with microorganisms. The procedure for making AYB tempeh included cooking presoaked grains for 45 minutes at 100 °C and inoculating the cooked grains with a spore suspension to initiate fermentation. The inoculated grains were allowed to ferment for 42 hours. The final product showed significant changes in crude protein and carbohydrate: an increase in protein and amino nitrogen content was reported, whereas a decrease in carbohydrates was observed. The quality of the AYB tempeh was acceptable to a large number of sensory panelists [130]. Meanwhile, some authors reported a minimal effect of fermentation on calcium, iron, magnesium, and zinc contents; however, they reported about a 34% reduction in phytate level, and only traces of tannin were detected [43]. Further research investigated solid-state (3 days) and liquid-state (62 days) fermentation approaches for making sauce from AYB grains. The prepared sauce showed increases of 11.94%, 4.85%, and 16.75% in ash, protein, and carbohydrate contents, respectively. Sensory evaluation showed that the acceptability of the AYB sauce was not significantly different from that of commercial soy sauce in terms of color, aroma, and flavor [131].
Other studies used the fermentation process to formulate a yogurt-like product from dehulled and whole AYB grains. The process involved extracting milk from the grains, followed by inoculation with a starter culture; the inoculated milk was then left undisturbed for 12 hours to ferment. Analysis of the formulated AYB yogurt showed high total viable and Lactobacillus counts, and the microbial load of the yogurt decreased as storage time increased [132]. In a similar experiment, raw AYB grains fermented for 48 hours showed an increase in protein and oil content [70]. "Dawa-Dawa", a traditional condiment, has also reportedly been prepared through fermentation: the grains were boiled in water laced with potash, then dehulled and allowed to ferment at room temperature for 72 hours. Proximate analysis of the Dawa-Dawa showed an increase in crude protein from 22.00 to 32.80%, crude fiber from 5.70 to 7.77%, ash content from 3.20 to 4.60%, and lipid from 1.20 to 1.38%; nevertheless, a decrease in carbohydrates from 74.20 to 57.21% was observed in the product [133].
Germination
Germination is a complex process in which a mature seed switches from the maturation program to a germination-driven stage in preparation for seedling growth [134]. The stages of germination include the uptake of water by the seed (imbibition), followed by a second phase in which metabolic processes are reinitiated, and finally the emergence of the radicle through the seed envelopes. The germination process was used to prepare flour from AYB grains. The grains were soaked in water at room temperature for 48 hours, allowed to sprout for 96 hours, and then oven dried. The dried grains were dehulled and milled into flour. The germinated AYB-wheat composite flour showed an increase in protein with every increase in the percentage of AYB flour [74].
Seed hardness attribute
Seed hardness is an important quality of grain legumes; the trait acts as a barrier against seed coat pathogens and seed damage. It also affects germination, seed processing, and cooking time [82,135]. Seed hardness is heritable but can also be influenced by environmental conditions during production and storage [81,82]. The genetic factors responsible for seed hardness are not well understood, although the roles of a few genes have been documented [82]. The influence of the environment on seed hardness is reflected in the hard-to-cook phenomenon, which is also not independent of genetic influence [82,84]. Understanding the genetic basis of cooking time in AYB is a necessity for improving the trait. It is noteworthy that the genetic architecture of cooking time has yet to be reported in AYB; thus, no molecular approach has been documented for studying AYB's cooking time. Molecular techniques such as GWAS and QTL mapping could locate loci that control cooking time and thereby facilitate the identification of fast-cooking lines. Likewise, new breeding techniques, including ZFNs, TALENs, and CRISPR/Cas9, have provided researchers the flexibility to introduce desired traits precisely and quickly.
DNA technology
Previously, it would require about 7-10 years to transfer a target trait from a species into an adapted cultivar. The conventional process requires handling a large number of progenies and several cycles of field evaluation. With molecular biology, however, a gene can be transferred in a single experiment, and within 5-6 years the new cultivar can exhibit stable gene expression [136]. Presently, advances in plant molecular biology have provided processes and platforms through which the genetic architecture of traits can be well understood, manipulated, and transferred between different backgrounds [136,137]. In addition, through DNA technology, gene sequences and functions can be accessed, specific regions on the chromosome can be identified, molecular markers can be developed, and genetic maps can be constructed, among many other possibilities. Genetic manipulation using physical, chemical, and biological mutagenesis presents added advantages, with an enormous contribution to crop improvement. Among the widely used DNA technologies reported in crop improvement programs are genome-wide association studies (GWAS), quantitative trait locus (QTL) mapping, and genome editing.
GWAS
Over the years, GWAS has been implemented across a wide variety of crops such as soybean, maize, common bean, sorghum, and rice [55, [138][139][140][141]. GWAS identifies genetic variants across the genome and associates those variants with the target phenotype. The commonly used GWAS approach involves identifying single nucleotide polymorphism (SNP) markers and testing each marker for evidence of an association between the marker and the trait of interest. The marker-trait association approach relies on linkage disequilibrium (LD) between markers and causal polymorphisms [142,143]. To minimize false genotype-phenotype associations that may arise from population structure, a linear mixed model analysis is usually implemented. The application of GWAS has contributed significantly to identifying candidate genes; identified markers can be mapped to reference genomes, and candidate genes can then be identified [143]. Once the genomic regions underlying a target trait and the corresponding alleles at each locus are identified, the alleles can be incorporated into another variety through crosses, and the resulting progenies with the desired allele combination can be subjected to marker-assisted selection. GWAS in combination with marker-assisted breeding offers great gains for improving quantitative traits with low heritability [136].
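To make the marker-trait testing step concrete, the sketch below runs a naive single-SNP regression scan on simulated data. It is a simplified illustration only: real GWAS pipelines use mixed models to correct for kinship and population structure, apply genome-wide significance thresholds, and work with quality-filtered genotype calls. The sample sizes, the simulated causal SNP, and all variable names are assumptions made for the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_lines, n_snps = 200, 1000
genotypes = rng.integers(0, 3, size=(n_lines, n_snps)).astype(float)  # 0/1/2 minor-allele counts
causal_snp = 42                                                        # hypothetical causal locus
cooking_time = 120 - 10 * genotypes[:, causal_snp] + rng.normal(0, 8, n_lines)

# Test each SNP separately with simple linear regression (no structure correction).
p_values = np.array([stats.linregress(genotypes[:, j], cooking_time).pvalue
                     for j in range(n_snps)])
top = int(p_values.argmin())
print(f"strongest association: SNP {top}, p = {p_values[top]:.2e}")
```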
QTL mapping
QTLs are phenotypically defined regions on the chromosome that contribute to allelic variation for a biological trait [144]. QTL mapping has become a popular approach [144,145] for studying complex traits [146,147], and its application in crop improvement has been reported by several authors [82,148]. Regions of the chromosomes that significantly affect variation in quantitative traits can be identified through QTL mapping. The ability to locate such chromosomal regions is important for identifying target genes and for understanding the genetic mechanism of trait variation. QTL mapping reveals which QTLs have a significant effect on trait variation and indicates to what extent the variation is due to additive, dominance, and epistatic effects of the QTLs. It also shows the genetic correlation between different traits and whether the QTLs interact with the environment [149]. The ability of QTL mapping to unravel and, at the same time, provide answers to these genetic questions makes it a powerful technique in crop improvement.
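The strength of evidence for a QTL at a marker is commonly summarized as a LOD score. The sketch below computes a single-marker LOD score by comparing a mean-only model with a model that includes the marker, using the standard relation LOD = (n/2) log10(RSS0/RSS1) under normally distributed errors. The toy biparental population and all names are illustrative assumptions; in practice, dedicated interval-mapping software would be used across a genetic map rather than a single marker.

```python
import numpy as np

def lod_score(marker, phenotype):
    """Single-marker LOD: (n/2) * log10(RSS_null / RSS_marker)."""
    y = np.asarray(phenotype, dtype=float)
    x = np.asarray(marker, dtype=float)
    n = len(y)
    rss_null = np.sum((y - y.mean()) ** 2)        # mean-only (no QTL) model
    X = np.column_stack([np.ones(n), x])          # intercept + marker genotype
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss_marker = np.sum((y - X @ beta) ** 2)
    return (n / 2) * np.log10(rss_null / rss_marker)

# Toy biparental population scored 0/1 at one marker; the marker shortens cooking time.
rng = np.random.default_rng(1)
marker = rng.integers(0, 2, 150)
pheno = 100 - 12 * marker + rng.normal(0, 6, 150)
print(round(lod_score(marker, pheno), 2))         # a large LOD suggests a putative QTL
```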
Genome editing
The discovery of genome editing technologies has revolutionized plant and animal research. Through genome editing, researchers can introduce sequence-specific modifications into the genome of different cell types and organisms. Site-specific nucleases (SSNs) have been used successfully for precise gene editing: SSNs create double-stranded breaks (DSBs) in the target DNA, which are repaired through non-homologous end joining (NHEJ) or homology-directed repair (HDR) pathways, resulting in insertion/deletion (indel) and substitution mutations in the target region(s), respectively [150,151]. The technology produces defined mutants, and the edited crops typically carry the desired trait [152]. Gene editing has been reported in plants including Arabidopsis [153], rice [154], and other crops. The genome editing techniques include meganucleases, zinc finger nucleases (ZFNs), transcription activator-like effector nucleases (TALENs), and clustered regularly interspaced short palindromic repeats (CRISPR)/Cas9; these techniques have been extensively reviewed [151,155].
Conclusion
Despite the unique attribute of AYB as a seed- and tuber-producing crop, the crop is underutilized due to identified limitations, including long cooking hours and an abundance of anti-nutrients. Different cooking times have previously been reported for AYB grains; the longest cooking duration was 24 hours. The cooking times were observed to depend on the cooking method used, the energy source, and the germplasm considered. The boiling method presented the most prolonged cooking time (24 hours), while roasting gave rise to the shortest cooking time of 5 minutes. The diverse cooking methods tested on AYB effectively reduced the level of anti-nutrient content in the grains. Nevertheless, processing methods such as presoaking and dehulling were observed to be the most effective in improving both cooking time and nutritional content. Fermentation and germination likewise showed positive effects in enhancing the nutrient quality of AYB food products.
Furthermore, the application of recommended equipment such as the Mattson bean cooker and the texture analyzer could efficiently evaluate cooking time and seed hardness across AYB germplasm. Adequate phenotyping of cooking traits using basic and standard equipment will provide definite baseline information that breeders can use to select parental materials for hybridization and genetic improvement of cooking traits. Additionally, DNA technology, which has proven effective in providing solutions to complex problems, could be exploited through GWAS, QTL mapping, and genome editing for the improvement of AYB's cooking attributes. In conclusion, the present review aims to stimulate researchers' interest in developing AYB cultivars with reduced cooking time.
Conflict of interest
The authors declare no conflict of interest.
© 2021 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
"year": 2021,
"sha1": "fefd3a64b435d675ea66d2f8c832a80ce6a9fe61",
"oa_license": "CCBY",
"oa_url": "https://www.intechopen.com/citation-pdf-url/79467",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "1a75726e443498d8b1ceea5b0d7d58e9b37e3d92",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
} |
FOOTWEAR SECTOR IN INDIA: A ROLE OF ADVANCED TECHNOLOGIES
Footwear is a very significant segment of the leather and non-leather products industry in India. The Indian domestic footwear market is estimated at 1919 million pairs, with per capita consumption of leather and non-leather footwear estimated at approximately 1.61 pairs. The major components of the footwear sector are design, product development, clicking, closing, components, lasting, and finishing. Advanced technologies in areas such as shoe design systems and automation bring cost savings and productivity improvements, as well as enabling new developments in the footwear sector in India. Although today footwear is produced using many methods similar to those employed all those years ago, technological innovations in machinery, raw materials, production, and testing techniques have changed what was to all intents and purposes a cottage industry into a multi-billion dollar sector. At the same time, recent years have seen a distinct shift in factory location away from the traditional industrial heartlands of Europe and North America to the new lands of opportunity, primarily in Asia. The purpose of this paper is to review the areas where advanced technologies can significantly affect the way the footwear sector operates. Strategies for implementation of the necessary changes in practice are also discussed.
Introduction
The design and production of comfortable, long-lasting and well-made footwear have been the goal of shoemakers around the world for thousands of years. As with so many other industries that have played a vital role in civilization, little changed in the way shoes and boots were made until the coming of the industrial revolution towards the end of the 19th century.
Although today footwear is produced using many similar methods to those employed all those years ago, obvious technological innovations in machinery, raw materials, production and testing techniques have changed what was to all intents and purposes a cottage industry into a multibillion dollar sector. At the same time, recent years have seen a distinct shift in factory location away from the traditional industrial heartlands of Europe and North America to the new lands of opportunity, primarily in Asia.
Over the past 90 years, there have been major changes in shoemaking, some small and others significantly affecting the industry. The most significant developments have been in the areas of lasts, alternatives to leather, machinery, footwear testing, and the location of manufacturing plants.
1.1. Footwear Making Process
Designing: Designing is the primary and most important process in footwear manufacturing. It starts with sketching, which showcases the creativity of the designer. The sketch is then converted into a three-dimensional shoe, considering all dimensions of the foot. The designers also specify the materials required for making the shoes.
Clicking or cutting: Clicking is the modern name for cutting. In this department, materials are cut into various designs, either manually or by machine. Material saving, quality, and productivity are the main concerns of the department. The operation needs a high level of skill, as expensive materials, including leathers, are cut here. Leathers may also have various surface defects which need to be worked around when cutting the shoe components. About 70% of the cost of the shoe is due to the cost of these cut materials.
Closing: Here the cut component pieces are assembled and stitched together, as per the samples, to produce the completed three-dimensional upper. A large share of the workforce is required in the process of upper making.
Lasting: In this process, the upper is further shaped into the form of the shoe. Various construction processes are used in lasting, such as stuck-on, Strobel, and string lasting.
Finishing: Finishing is the process of enhancing the appearance of the shoe. Special waxes, creams, crayons, solvents, etc. are used.
Packing: A shoe lift is inserted in the shoes to maintain the shape of the finished shoes.
After this operation, the finished shoes are packed into boxes.
Literature Review
According to Padmini Swaminath (1996), in her paper "Development Experiences: Gender Perspective on Industrial Growth, Employment, and Education", industrial development in India lacks coordination between government, industry, and labour. The paper attempts to assess the quality of state interventions and their impact on industry and labour, and the author emphasizes the need to orient state interventions towards strategic gender needs. Mr. Refeeq Ahmed (1986), in his paper "Development Perspectives of Indian Footwear Industry: The Case of Indian Footwear", highlights the export potential of the Indian leather footwear industry. He brings out the need for popularizing brand names, strengthening training facilities, particularly for women, and building close linkages between industry, training, and educational institutions. The paper also emphasized the need to use experts from developed countries to train local artisans in particular lines.
Parmeshware S. (1990) made a study of the impact of development agencies on the cobblers of Athani town from a socio-political point of view.
Objective
To evaluate the role of advanced technology.
To understand the advantages of advanced technology.
Research Methodology
Data were mostly collected from secondary sources, including censuses, government departments, organizational records, desk research of online resources, research papers, conference documents, and other publications. Data from SATRA and the Council for Leather Exports have been used, along with annual reports on MSMEs, annual reports of the Ministry of Commerce and Industry, and various annual reports of State Financial Corporations and other financial institutions. The data have been compiled from two types of sources: published documents and reports, and the World Wide Web.
4.1. Changing Styles Through the Decades
The 1910s: The First World War of 1914-1918 saw millions of men going to fight around the world. With women filling the jobs left vacant by the men's absence, a desire for more practical women's shoes for use in the factories was born. However, as shortages started to bite, the idea of being wasteful was severely criticized. With a lack of fabrics, dresses became shorter, and the same design of lace-up boot that had been worn at the turn of the century was now viewed as practical rather than 'old-fashioned'. A few footwear designers did try to create more interesting styles with, for example, leathers mixed with coloured canvas or gabardine to create two-tone 'spectators'. Suede became popular, and ballet-style pumps were decorated with a variety of removable buckles made from steel and decorated with silver filigree, diamanté or marcasite. Once peace was declared, fashions quickly changed in an effort to throw off the depression of wartime austerity.
The 1920s: The 1920s was a time of incredible change, during which more liberal views on acceptable dress codes were forged. Dance crazes like the Charleston, which demanded a securely-fastened shoe with a low heel and a closed toe, influenced standard shoe design tremendously. The discovery of ancient Egyptian Pharaoh Tutankhamen's tomb in 1922 served to encourage a love of all things exotic, and this was reflected in shoe designs of the age. Brilliantly-dyed leather, metallic finishes and bright fabrics were used to create never-before-seen designs, and rich brocades, satin, silk and velvet were often embellished with metallic overstitching, embroidery and fake gemstones. Heels were often decorated with crystals, often in Art Deco designs.
The 1930s: This was a decade that saw the world plunged into a financial depression after the US stock market crash of 1929. As in the First World War years, footwear needed to last longer, and somber colours such as black, brown, maroon and navy blue became standard. In an attempt to introduce a new fashion, platform shoes first appeared in the 1930s and, when the world went to war again in 1939, a shortage of leather and a ban on the use of rubber for non-essential requirements forced shoemakers to use wood, cork and other materials for these platforms.
The 1940s: With the Second World War dominating everyone's life for much of the decade, footwear continued to be austere, and it was viewed as unpatriotic to be very fashionable during such a time of shortage. In much of the world, leather was reserved for military use, so shoemakers had to show initiative in their choice of raw materials. Reptile skins and mesh became popular alternatives. Rationing in the USA meant that shoe manufacturers could only use heels measuring one inch high or less, with a limited choice of colours.
The 1950s: After the war, optimism was high, and one of the great icons of fashion footwear, the stiletto heel, gained a massive following during the early part of the decade. Flat pumps based on the ballet shoe regained their popularity and were quickly available in an incredibly diverse colour range.
The 1960s: Young people suddenly found themselves with more money to spend. This led to a decade of tremendous change, with highly experimental styles of fashion, music, art and literature. Hot pants and miniskirts took the Western youth market by storm, with flat-heeled high boots proving particularly popular.
The hippie culture also became a major fashion and, as the race to be the first on the moon accelerated, new metallic 'space-age' materials (including coated plastic) were increasingly used by the world's shoemakers.
The 1970s: Celebrities dressed to shock in the 1970s, with punk and glam rock encouraging dramatic styles that quickly found their way onto the high street. Footwear designers working for such well-known figures as David Bowie and Elton John let their imaginations run riot, producing styles that included eight-inch platform heels decorated with sequins. The birth of disco demanded comfortable dancing shoes, and strappy sandals became the choice of millions. Almost as a deliberate contrast to these outlandish fashions was a return to Edwardian-style pumps and squared-off toes reminiscent of the 1940s, as well as neat court shoes for well-dressed businesswomen. For the first time, running became one of the world's most popular pastimes, and sports shoes started to sell by the million.
The 1980s: A new group of ambitious consumers with money to spend, well-paid young professionals nicknamed 'Yuppies', looked to designer labels to emphasize their wealthy status in life, and retailers were only too pleased to supply just what they wanted. Many 'new' styles were actually updated versions of popular shoes from the 40s and 50s, with menswear influencing women's fashions in the form of lace-up brogues. Moulded jellies were first made during the 1980s and were marketed in a spectrum of colours.
The 1990s: While some glittering styles continued to hit the high street, the excesses of previous decades were replaced by more sombre designs before the end of the millennium. A number of shoe fashion revivals took place, with 1970s-style chunky platform shoes regaining their popularity and pastel-coloured ballet pumps once again proving to be the best buy. A perceptible change was seen in purchasing trends, with buyers of fashion footwear starting to look for more than simply attractive styling. Perhaps for the first time since the shortages of wartime, shoppers began to demand comfort as well as looks.
The 2000s: Heels began to rise once more at the beginning of the 21st century, and the popularity of designer labels showed no signs of flagging. Embellishment of shoes with crystals, beads, embroidery and exotic leathers arrived yet again, and has since proved to be a regular part of the footwear designer's palette.
The next 90 years?
The footwear industry, which for centuries had used traditional methods of manufacture, has clearly taken technology to heart in recent decades, and this has greatly benefited both shoemakers and shoe wearers. Many changes have been evident in all aspects of design, materials, and manufacture, but perhaps the greatest difference is where most of the world's footwear is now made. Many European and North American companies in shoemaking and ancillary trades have either closed down or moved their plants to the Far East. Despite current tough economic conditions, feedback from delegates at recent footwear trade shows has been quite buoyant. As long as manufacturers continue to design well and test their new concepts and existing styles carefully, they will provide what customers want, and that bodes well for the future of footwear. CAD can also be applied to the uppers and soles to help reinforce branding on all areas of the model. It automates routine procedures, increasing speed and consistency whilst reducing the possibility of mistakes. CAD data can now be used effectively for a wide variety of activities across a footwear manufacturing business, and CAD/CAM generates data at the design stage which can be used right through the planning and manufacturing stages.
The latest improvements in CAD/CAM technology are: graphics capabilities and interconnectivity have improved enormously; software developments have progressively made systems more intuitive and easier to use; with 2D sketch and paint modules, a serviceable sketch can be produced and then colour and texture can be added; and 3D systems enable the last and design to be viewed from any perspective, even from several angles simultaneously.
With CAD/CAM software, footwear manufacturers can cut their time to market dramatically and so increase market share and profitability. In addition, the power and flexibility of the software can overcome restrictions to the designer's creativity imposed by traditional methods.
CAD/CAM software can be used to generate machining data for shoe sole models and moulds. Shoe sole mould makers are able to strengthen their capabilities in mould design and production techniques to meet market demands for shorter product life cycles, quality improvement, and handling versatile pattern designs. This especially helps sports shoe producers to manufacture products rapidly and to introduce them earlier than their competitors.
3D CAD/CAM is the core technology for shoe sole moulds in the footwear industry and is developing towards specialization.
The benefits of CAD/CAM in mould manufacturing are: total modeling for rapid generation of design concepts and variations; reverse engineering from existing models or parts; easy design modification and morphing capability; completely accurate designs regardless of complexity; group grading of soles and uppers; advanced decorating techniques; realistic on-screen visualization; and rapid generation of moulds from product designs.
New Technology in Last Design
The first stage in the footwear manufacturing process is the production of the last. In pre-First World War Europe, lasts were often made from cast iron. As the war started to use up significant amounts of metal, wood was used more often and became the preferred material from 1919. This was often maple, sourced from Canadian forests that in many cases were owned by the last
There was no significant further change in the way lasts were made until the Second World War when the first commercial plastics started to be made. Following the end of the war, brittle thermoplastics were used to make lasts until the early 1960s. At that time, polyethylene was used for the first time, which proved to be a durable and tough material. Later, injection moulding speeded up the process, with a roughly-shaped block being turned down to an accurate last. Between 50 per cent and 60 per cent of the material was cut away during this process, but this was reusable.
Today, manufacture of lasts is a fast process. Computerized digitizing allows for the scanning of a model last so it can be reproduced accurately on the screen. The software can be used to manipulate the last in digital form, altering such elements as the heel height or adding an allowance for an insock. Data stored in a program can be used to cut accurate lasts quickly, with modern machinery allowing a number of different sizes to be formed at the same time. In addition, digitized last information can be shared by e-mail between last manufacturers around the world. The last making was once a craft needing the trained skills of a foundry worker and a carpenter.
At the beginning of the 20th century, cast iron lasts were made in a number of sections which were then often fixed together with interlocking pins. This allowed for the last to be taken apart in order to remove it from the partly-finished footwear without causing too much damage. Wooden lasts also were designed to be broken down, with removable 'scoop blocks' held in place by screws or brass springs. Today, plastic lasts are normally hinged to allow removal after the shoemaking process, although in the Far East, lasts are very often made of solid polyethylene to speed up the process.
In the early part of the 20th century, a well-made last would stay in use for 25 years and may have remained in an individual shoe being manufactured for three to six months. Because of this, a lot of lasts were needed. Today, a typical shoe stays on a last for a maximum of 20-30 minutes, due to the use of a heat-setting process during footwear production.
The Arrival of Alternatives to Leather
Animal skins have long been used by man as a protective covering. When skins were first tanned to produce leather, this new material combined a level of water resistance with good insulation and wind resistance, water vapour permeability and high absorbency, as well as being flexible enough to be formed and set into the desired shape.
Demand for good-quality leather, along with rumors of a potential shortage, led some companies to explore the possibility of producing an affordable alternative to this traditional material which could match the properties of leather. After the Second World War, a wide range of synthetic materials derived from the petrochemical industry appeared on the market. Inexpensively made, these had consistent properties. An early attempt to produce a leather-like material involved bonding a textile base to a polymeric coating. One of the first of these was PVC polymer coated fabrics (PVCCFs), which gave an imitation of the flesh and grain of leather. Such early materials had good abrasion resistance, but low water vapour permeability, poor flex crack resistance, and were cold to the touch.
Polyurethane coated fabrics (PUCFs) were developed in the 1960s and were an improvement on PVCCF. Originally, the materials were made by casting a polyurethane film, which was then stuck to the fabric base with an adhesive tie coat. These materials had more of the feel and appearance of leather, and also had a degree of water vapour permeability.
Further advances were made by using a brushed fabric as the substrate to give improved appearance and handle. One of these developments was coagulated PUCF, in which an organic solvent solution of PU was applied to a brushed fabric. It was then immersed in a non-solvent for coagulation, which resulted in the formation of a porous structure. This increased both the flexibility and water vapour permeability and gave a more leather-like appearance.
Poromerics (microporous synthetic leather substitutes) were developed in the 1960s and 1970s and were intended to be an improvement over coated fabrics. They were defined by SATRA at their introduction as 'a man-made shoe upper material, which is generally similar in nature and appearance to leather and, in particular, has comparable water vapour permeability'.
The application of coated fabrics was limited by the properties of the knitted or woven base fabrics. Poromerics used a nonwoven fabric impregnated with the polymer (usually PU), thus producing a more leather-like material. A wide range of poromerics with diverse structures was developed. The nonwoven substrate offered the closest simulation of the fibre structure of leather but required significant levels of binder. The aim was to increase the degree of interweaving and reduce the need for impregnation. Advances continue, with the development of micro-denier fibres, which are being used to produce materials with characteristics much closer to leather.
Later developments include the use of hydrophilic fibres to enhance comfort by producing more absorbent materials, permeable but abrasion-resistant topcoats to mimic the grain, new impregnation techniques, hydrophilic PU formulations and water-based systems.
As well as being selected for the majority of footwear uppers, leather had been the material of choice for solings until it encountered serious competition from rubber in the 1930s. At first, soles were cut from natural crepe rubber, a material formed from natural latex tapped from rubber trees, which has low levels of resistance to solvents and oils but is both durable and flexible.
Quite soon thereafter, units were being made from vulcanized natural rubber compounds formed using heat and pressure. Vulcanized synthetic rubbers such as styrene-butadiene rubber were then developed, as was rubber reinforced with high-styrene resins (resin rubbers), which provided hard, thin sheet solings that were leather-like in both feel and appearance.
In the 1960s, thermoplastic solings began to be developed. The first of these, PVC (polyvinyl chloride) and TR (thermoplastic rubber), allowed sole production with faster and cheaper processes than were required by vulcanized rubber. Polyurethane (PU) solings were introduced at the end of the 1960s. Most familiar in reaction-moulded lightweight microcellular form, polyurethane is also used in thermoplastic grades (TPU). Since the late 1970s, microcellular EVA (ethylene vinyl acetate) in cross-linked form has proved popular as a lightweight soling material. Developments during the last two decades of the 20th century saw the introduction of soft vulcanized rubber ('latex' rubber) as an alternative to TR, and polyolefin elastomers (POE), elastomeric forms of polypropylene mixed with ethylene-propylene rubber.
Developing New Machinery
The demands made by innovative designers of modern footwear have forced the development of new technology, from the introduction of large automatic footwear-moulding machines to improvements in the quality and strength of some of the smallest elements of the shoemaking process, such as the needles used in stitching and threads which also have more colour resistance than those used in years gone by.
There were a number of ingenious and quite sophisticated shoemaking machines invented by 1910. These included various heel building and heel attaching machines, stiffener moulders, sole moulders, finishing machines, buttonhole sewing machines, eyeletters and skivers. To a greater or lesser degree, these processes have remained very similar even into the 21st century. After cement sole attaching systems were introduced in the mid-1920s, various sole and shoe bottom roughing and cementing machines were developed, as well as a wide variety of attaching presses.
Between 1950 and 1960, high-pressure rubber moulding and vulcanizing machines, combined with the introduction of the pre-finished sole, as well as Louis heel and sole units, made considerable impact on the footwear industry.
The decade leading up to 1970 saw the introduction of PVC injection moulding systems, which were followed by the polyurethane reaction injection moulding (RIM) process. The arrival of moist heat setting, invented by SATRA (and for which the Technology Centre received the Queen's Award for Industry in 1969), dramatically reduced the setting time, and hence the number of lasts required, and is recognized as one of the great landmarks in footwear manufacture.
In the field of upper preparation, the wider use of man-made materials led to the use of travelling head cutting presses and, in turn, to processes involving high-frequency cutting, welding, and embossing.
In lasting, the introduction of back-part moulding and seat lasting machines, accompanied by developments in forepart pulling and lasting machines, both now with built-in hot-melt cement systems, has also done much to alter the look of the modern shoe factory.
In recent years, computerized machines controlling such processes as pattern cutting and decorative stitching are very common around the world. Little had altered in stitching machinery for more than half of the 20th century. Up until the 1970s, operatives used electric clutch-driven machines, which took great skill and experience to achieve the correct speed. Things changed in the 1970s when the first electronic stitching machines were introduced, allowing the operator to vary the stitching speed by using a foot pedal.
Testing Comes of Age
Chemical testing of footwear and components plays a vital role in the production of well-made shoes and boots. Perhaps surprisingly, a laboratory from the 1940s would have looked little different from one in the 1960s, with traditional wet chemistry, using burettes, flasks, and Bunsen burners being the order of the day.
Things started to change in the mid-1960s, with the introduction of the first infrared testing equipment. Many new test methods, previously impractical to perform, were developed during this period, taking advantage of the availability of more sophisticated analysis techniques. At last, polymers could be accurately identified, as could surface contaminants. Such quickly-gained knowledge brought impressive benefits, for example the improvement of chemical adhesion, and, in the mid-1980s, chemical testing was further revolutionized with the introduction of chromatography. Bigger and better equipment developed mainly in the pharmaceutical and petrochemical industries quickly found application in footwear and leather testing.
One of the noticeable changes in chemical testing today is the ability to detect incredibly minute quantities of certain substances. Twenty years ago, heavy metals could be identified to 0.01 per cent. Modern, highly-sensitive equipment can today find heavy metals in parts per million. Also, whereas analysis of organic chemicals was previously very rudimentary, the detection of pesticides, fungicides, antioxidants, dyestuffs and flame retardants is now normal practice, both qualitatively and quantitatively.
SATRA's work in the field of chemical testing (particularly on discoloration in footwear and detection of banned chemicals) continues to be of great help to our members. "SATRA enables members to stay abreast of current chemical tests, and we are a world leader in test expertise," says Richard Turner, who helped develop SATRA's chemical and analytical technology facility before his recent retirement. "We have the best restricted-substances list in the world and are viewed by many as the 'fount of all knowledge' when it comes to such checks.
"Some types of analysis, such as for extractable fat in leather, still use traditional wet test methods, and it is likely that technology will become even more sophisticated in the future," he continues. "Legislation is getting ever tighter, with some tests looking for results in parts per billion!" Physical testing of whole footwear and components has also improved beyond recognition in recent years. From its establishment in 1919, SATRA has been identifying and solving testing problems faced by footwear manufacturers. In recent decades, modern technology has superseded simple mechanical testing of many items, providing access to computerized tests and giving exceptionally accurate results. Sophisticated whole-shoe tests, such as the Advanced Moisture Management Test (AMMT) and PEDATRON sole abrasion test have been developed by SATRA, providing rapid analysis of footwear problems that previously took months of wear trials to establish. SATRA remains at the forefront of test machinery development and continues to introduce new developments into the footwear industry.
4.3. Factories on the Move
For most of the 20th century, the main footwear-producing companies were located in Europe and the USA. Whilst a small proportion of the overall global shoe production came from Asia, the traditional strongholds of Italy, France, the UK, Spain, the USA, and Germany produced the majority of footwear until the early 1970s. Then, India, South Korea, and Taiwan opened up to Western-style mass production of high-quality leather goods, followed soon afterwards by China.
Conclusions
Advanced technologies are transforming the areas of footwear design, footwear construction, and the maintenance and operation of footwear production technology. New tools and techniques have the potential to achieve cost savings and productivity improvements, as well as to enable new developments in the footwear sector. There is a general feeling among footwear industrialists that much of the future growth and development of the sector will depend upon how effectively these new technologies are adopted.
The basic purpose of this paper was to review the areas in which advanced technologies can significantly affect the way the footwear industry operates. Advanced technology is also responsible for productivity improvement: production now runs at a much faster pace, the quality as well as the quantity of the product is improved, and the manpower requirement in the industry is reduced. | 2020-12-10T09:03:16.887Z | 2016-01-01T00:00:00.000 | {
"year": 2016,
"sha1": "03fc20822eb7fd0b495153f19be269ad49325302",
"oa_license": "CCBY",
"oa_url": "https://www.granthaalayahpublication.org/journals/index.php/granthaalayah/article/download/IJRG16_C12_209/2291",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "eed4994c3a9117e6f6ddb256a47d4d46f88cd74e",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": []
} |
104294228 | pes2o/s2orc | v3-fos-license | Individual Molecular Dynamics of an Entangled Polyethylene Melt Undergoing Steady Shear Flow: Steady-State and Transient Dynamics
The startup and steady shear flow properties of an entangled, monodisperse polyethylene liquid (C1000H2002) were investigated via virtual experimentation using nonequilibrium molecular dynamics. The simulations revealed a multifaceted dynamical response of the liquid to the imposed flow field, in which entanglement loss leading to individual molecular rotation plays a dominant role in dictating the bulk rheological response at intermediate and high shear rates. Under steady shear conditions, four regimes of flow behavior were evident. In the linear viscoelastic regime (γ̇ < τ_d^-1), orientation of the reptation tube network dictates the rheological response. Within the second regime (τ_d^-1 < γ̇ < τ_R^-1), the tube segments begin to stretch mildly and the molecular entanglement network begins to relax as flow strength increases; however, the dominant relaxation mechanism in this region remains the orientation of the tube segments. In the third regime (τ_R^-1 < γ̇ < τ_e^-1), molecular disentangling accelerates and tube stretching dominates the response. Additionally, the rotation of molecules becomes a significant source of the overall dynamic response. In the fourth regime (γ̇ > τ_e^-1), the entanglement network deteriorates such that some molecules become almost completely unraveled, and molecular tumbling becomes the dominant relaxation mechanism. The comparison of the transient shear viscosity, η⁺, with the dynamic responses of key variables of the tube model, including the tube segmental orientation, S, and the tube stretch, λ, revealed that the stress overshoot and undershoot in steady shear flow of entangled liquids essentially originate from, and are dynamically controlled by, the S_xy component of the tube orientation tensor, rather than the tube stretch, over a wide range of flow strengths.
Introduction
The study of flow properties of polymeric solutions and melts has a rich history of perplexing the physicists and engineers who have endeavored to understand and model the many and varied physical responses of these complex fluids to an imposed flow field. In particular, the description of fast flows of macromolecular fluids has proven to be a difficult challenge. Although many continuum level theories have proven capable of describing gross rheological data in the linear and weakly nonlinear viscoelastic flow regimes (i.e., at low to intermediate values of the strain rate relative to a characteristic relaxation time of the fluid), most of these have not been able to provide a quantitative description of the flow properties of solutions and melts at high flow strength. There are many possible reasons one could cite to explain this state of dysfunction, but the overall reason is abundantly clear: for polymeric fluids experiencing strong flow conditions, all of the physical and dynamical phenomena occurring within these materials have not been understood and accounted for in the prevailing mathematical models. Developing reliable mathematical models necessarily depends upon complementary experimentation. A debilitating feature of rheological experimentation, however, is that these seemingly simple experiments typically only provide bulk-scale measurements that have effectively been averaged over macroscopic length and time scales. As a consequence, any dynamic behavior that is of much shorter length and time scale than those of the measuring instrument are effectively washed out of the system response, even though they contribute to the overall response. Therefore, for much of the 20th century, rheologists had little in the way of small length and time scale information to guide attempts at improved mathematical modeling.
The 21st century is proving to be a golden age of rheological discovery. New experimental methods have been developed which are beginning to tap into small time and length scale phenomena that have a dramatic impact on the bulk rheological response of a polymeric liquid, particularly under conditions of strong flow. Furthermore, the present century has seen the rise of a new form of scientific exploration; i.e., virtual experimentation. Advances in computational algorithms and efficiency have led to a new paradigm in experimentation that, under the right circumstances, can lead to a powerful new means to probe the small length and time scale phenomena that dominate the bulk rheological responses of polymeric fluids under strong flow conditions.
The primary advantage of virtual experimentation of an atomistically detailed polymer chain over experiment is that every chain within the sample can be examined individually, not simply the bulk rheological or microstructural response. This allows much more detailed information to be gleaned from the simulation with respect to the experiment, as statistically meaningful correlations can be established via ensemble averaging of the dynamical behavior of each individual chain. Additionally, simulations are readily amenable to topological analysis, extending equilibrium properties such as tube diameter, primitive path length, and number of entanglements to nonequilibrium flow situations [1][2][3][4][5][6][7]. Certainly, bulk-averaged properties, such as the conformation and stress tensors, can still be calculated, but also with the ability to examine the effects of short timescale individual chain dynamics upon them. Ultimately, more and better information at the microscopic scale should lead to better rheological and microstructural models of polymeric liquids under flow.
Recent evidence collected via virtual experimentation of monodisperse atomistic melts has demonstrated that a flow-induced disentanglement of polymer macromolecules occurs at high strain rates in steady shearing flow. This reduction in interchain constraints leads to the onset of individual molecular retraction and rotation cycles, which occur within oriented tube-like structures composed of the highly-extended surrounding chain molecules. Eventually, the tube network disintegrates as the chains become effectively disentangled, allowing them to tumble with characteristic frequencies similarly to corresponding macromolecules in dilute solution. This new phenomenon has been observed via nonequilibrium molecular dynamics (NEMD) simulations of molten polyethylenes in the unentangled and moderately-entangled molecular-weight regimes (i.e., polyethylenes ranging up to C 700 H 1402 ) [1][2][3][4][8][9][10][11][12][13][14][15]. This unexpected observation from atomistic simulations has already been hypothesized to explain some of the difficulties that manifest in flow models for high strain-rate flows [16][17][18].
In the present contribution, prior results of unentangled (liquids ranging in molecular weight roughly up to C 250 H 502 ), mildly entangled (C 400 H 802 ), and moderately entangled (C 700 H 1402 ) polyethylene melts are extended to a highly entangled system, C 1000 H 2002 , thus completing the entire suite of virtual experiments of flexible, monodisperse linear macromolecular fluids ranging from unentangled alkane liquids to highly entangled polyethylene melts. Hence this publication presents the final piece of the puzzle to those that preceded it, providing a full description of the rich, complex dynamical behavior and the underlying physical mechanisms that give rise to it as macromolecular chain length increases from several carbon units up to 1000. Moreover, this work extends prior studies that were focused on steady-state dynamics to the transient response of these entangled liquids under startup of shear conditions. The data and analysis presented in the remainder of this article will help enable physicists and engineers to develop new and improved models for the bulk rheological behavior of these macromolecular fluids covering all the relevant length and time scales.
Simulation Methodology
Equilibrium and nonequilibrium molecular dynamics simulations of a monodisperse, linear, C 1000 H 2002 melt were performed in the NVT ensemble at a constant density of 0.766 g/cm 3 (corresponding to a pressure of 1 atm) and constant temperature of 450 K. Four different rectangular simulation cells were chosen for different shear rate ranges in order to minimize the computational cost by optimizing the simulation box size and number of particles. Table 1 summarizes the cell sizes in various directions as well as the number of particles and applicable Wi range. In the nonlinear viscoelastic regime (Wi > 1), the box dimension in the flow direction (x) was larger than the dimensions in the gradient (y) and neutral (z) directions to ensure minimal system size effects at high shear rates where chains orient and stretch in the direction of flow. These dimensions were chosen based on the same considerations in terms of the chain end-to-end distance at different Wi which were employed for a shorter C 700 H 1402 chain liquid in prior work [2]. The smallest simulation cell, containing 20,000 particles, was equilibrated for more than 8 times the longest relaxation (disengagement) time before any data were gathered for analysis. The simulation cells containing 40,000 and 60,000 particles were created by replicating the equilibrated small simulation cell respectively once and twice in the x-direction, then equilibrated for one disengagement time. The longest cell was created by replicating the equilibrated cell containing 60,000 particles, twice in the x-direction, and then equilibrated for 0.8 disengagement time. It should be mentioned that the transient data were obtained using only a single independent initial equilibrium configuration to minimize the computational cost; however, this configuration varied from one simulation cell size to another-see Table 1. Although ideally such data should be collected using more than one independent initial configuration at each Wi, based on prior experience, this has only a slight effect on the data presented in this work, considering the sufficiently large number of particles in the simulation cells. The Siepmann-Karaboni-Smit (SKS) united-atom potential model [19] was used to quantify the energetic interactions between the atomistic components of the polyethylene liquid. This is the same potential model employed in many other prior simulation studies [1][2][3]8,[10][11][12][13][14]17,18,[20][21][22][23] to represent energetic interactions between either -CH 3 for the end-groups of the chains or -CH 2 -groups for interior carbon atoms along the chain backbone. (Please refer to one of the references cited above for a detailed discussion of the SKS model equations and parameters.) The NEMD equations of motion were used to perform the NEMD simulations, which were maintained at a constant temperature of 450 K using a Nosé-Hoover thermostat [24][25][26][27][28][29][30][31]. The set of evolution equations for the particle positions and momenta were integrated within the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) environment, which is implemented using the p-SLLOD equations of motion [29][30][31]. (Note that for steady-state and startup shear flow as considered herein the SLLOD and p-SLLOD algorithms are the same.) Boundary conditions were periodic at all box surfaces with a deforming simulation cell in the x direction. 
The equations were integrated using the reversible-Reference System Propagator Algorithm (r-RESPA) [32] with two different time steps. The long time step was 4.70 fs, which was used for the slowly varying nonbonded Lennard-Jones interactions, and the short time step was 1.176 fs (one-fourth of the long time step) for the rapidly varying forces, including bond-bending, bond-stretching, and bond-torsional interactions. The relaxation time of the thermostat was set equal to 100 times the long time step. These time steps are longer than those used in many of the prior studies [1-4,8,11-14,17,18,21,22]; however, a series of test simulations was performed at various Wi to ensure that the new (longer) time steps produced results statistically equivalent to those obtained with the prior (shorter) time steps. Furthermore, these time steps have been used successfully in recent NEMD studies of planar elongational flows of entangled polyethylene melts [20,23]. Without this modification of the time steps, the simulations reported in this article would have been computationally intractable.
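The multiple-time-step idea behind r-RESPA can be illustrated with a minimal sketch. The Python toy integrator below is not the LAMMPS implementation used in these simulations; it only shows the structure of a two-level scheme in which a soft "slow" force is applied with the long time step and a stiff "fast" force with the short time step (ratio 4, as above). The force expressions, mass, and step sizes are arbitrary placeholders.

```python
# Minimal sketch of a two-level r-RESPA velocity-Verlet integrator (toy 1D system).
# The stiff "bonded" force is integrated with the short time step and the soft
# "nonbonded" force with the long time step (ratio 4). All numbers are placeholders.

def f_fast(x):
    return -100.0 * x          # stiff harmonic "bond" force (fast)

def f_slow(x):
    return -0.5 * x**3         # soft anharmonic "nonbonded" force (slow)

def respa_step(x, v, m, dt_long, n_inner=4):
    dt_short = dt_long / n_inner
    v += 0.5 * dt_long * f_slow(x) / m          # half-kick with the slow force
    for _ in range(n_inner):                    # inner loop with the fast force
        v += 0.5 * dt_short * f_fast(x) / m
        x += dt_short * v
        v += 0.5 * dt_short * f_fast(x) / m
    v += 0.5 * dt_long * f_slow(x) / m          # closing half-kick with the slow force
    return x, v

x, v, m = 1.0, 0.0, 1.0
for _ in range(1000):
    x, v = respa_step(x, v, m, dt_long=0.01)
print(x, v)
```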
A wide range of Weissenberg numbers was examined, from Wi = 0 (the quiescent system) through the interval 0.01 ≤ Wi ≤ 11,700, corresponding to shear rates within the range 2.2 × 10^3 s^-1 ≤ γ̇ ≤ 2.2 × 10^9 s^-1. The topological analysis was performed using the Z1 code developed by Kröger [5], which reduces atomistic configurations to a primitive path network in which the chains are not allowed to cross each other as the algorithm simultaneously minimizes the contour length of each polymer molecule [6]. This method uses geometrical rather than dynamical algorithms to minimize the contour lengths of primitive paths in the most computationally efficient manner. The code further defines the positions of kinks along the 3-dimensional primitive path of each chain; the number of kinks is assumed to be roughly proportional to the number of entanglements per chain. Results of the code can be used to interpret other important reptative parameters, such as the effective tube diameter and entanglement strand length. The Z1 code has been compared with other topological analysis techniques by Shanbhag and Kröger [7].
Quiescent Properties
Equilibrium properties of the system can be calculated from the simulation results and compared with the predictions of reptation theory. The ensemble average squared end-to-end distance, ⟨R²⟩, and squared radius of gyration, ⟨R_g²⟩, were calculated as 20,107 Å² and 3353 Å², respectively, directly from the equilibrium simulation data. The theoretical fully extended chain end-to-end distance, |R|_max, for a C1000H2002 molecule is 1290.2 Å. These values may be used to approximate the Kuhn length as b = ⟨R²⟩/|R|_max = 15.58 Å, and the number of Kuhn segments as N = ⟨R²⟩/b² = 82.79 ≈ 83. Entanglement network properties were evaluated using the Z1 code [5]. Specifically, the average primitive chain contour length, ⟨L⟩ = 508.6 Å, was obtained from this analysis. These basic properties of the entangled liquid can be used in conjunction with reptation theory to estimate other (theoretical) system properties. The ensemble average entanglement density is thereby estimated as Z = ⟨R²⟩/a² (Equation (1)), where a is the tube diameter. From this expression, Z = 12.9 and a = ⟨L⟩/Z = ⟨R²⟩^(1/2)/Z^(1/2) = 39.5 Å. All of these values are in good agreement with the values estimated for C400H802 and C700H1402 molecules in prior work [2,3,18]. The entanglement molecular weight for polyethylene at 443 K of M_e = 1150 g/mol was reported by Fetters et al. [33], which can be used to estimate an experimental entanglement density of Z = M/M_e = 12.9 and a tube diameter a = ⟨R²⟩^(1/2)/Z^(1/2) = 40.6 Å. These values are in excellent agreement with the simulation results. The diffusivity of the liquid can be evaluated from the slope of the chain center-of-mass mean-squared displacement (MSD) versus time; according to its definition, D_G is 1/6 of this slope at long times, D_G = lim_(t→∞) ⟨[R_G(t) − R_G(0)]²⟩/(6t), where R_G(t) is the position of the chain center of mass at time t. Using this method, the diffusivity of C1000H2002 is calculated as 1.28 × 10^-12 m²/s. Note that this value makes it possible to calculate a key model parameter of the theory, the friction coefficient, ξ (Equation (3)) [34], where k_B is the Boltzmann constant and T is the absolute temperature. According to reptation theory, the Rouse and disengagement timescales are governed by the standard tube-model expressions (Equation (4)) [34]. Substituting ξ from Equation (3) into Equation (4), τ_d can be expressed as a function of D_G (Equation (5)); hence, the theoretical value of the disengagement time (i.e., according to the equations of reptation theory after D_G has been estimated from the simulations) is calculated as 5305 ns. Note that from Equations (1), (4) and (5), the ratio τ_d/τ_R = 3Z, which leads to a theoretical Rouse time of 137 ns. The entanglement time is governed by the reptation-based expression given in Reference [3], which yields a value of 6.4 ns. This is very close to the values calculated for the C400H802 and C700H1402 liquids (5.1 ns and 6.4 ns, respectively). These values are consistent with theoretical arguments suggesting that the entanglement time is independent of the molecular weight of the polymeric liquid. The characteristic relaxation times can also be estimated directly from the equilibrium simulation results using the characteristic breaks in the segmental mean-square displacement (MSD) plot versus time [34]. The segmental MSD is defined as φ(t) = ⟨(r_n(t + τ) − r_n(τ))²⟩, where r_n is the position vector of the n-th monomer (i.e., the n-th -CH2- unit). In order to minimize chain-end effects, only the 500 monomers in the middle of each chain were included in these calculations. The details of the calculations are explained in prior publications [2,3].
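A minimal sketch of the simple arithmetic in this subsection is given below, assuming only the values quoted in the text (⟨R²⟩, |R|_max, ⟨L⟩) and a placeholder center-of-mass MSD array standing in for the simulation output; it reproduces the Kuhn length, number of Kuhn segments, entanglement density, tube diameter, and the diffusivity obtained from one-sixth of the long-time MSD slope.

```python
import numpy as np

# Quiescent-property arithmetic using the values quoted in the text (units: Angstrom, ns).
R2    = 20107.0    # <R^2>, ensemble-average squared end-to-end distance
R_max = 1290.2     # fully extended end-to-end distance for C1000H2002
L_pp  = 508.6      # <L>, average primitive-path contour length (Z1 analysis)

b   = R2 / R_max               # Kuhn length ~ 15.6 A
N_k = R2 / b**2                # number of Kuhn segments ~ 83
Z   = L_pp**2 / R2             # entanglement density, since <L> = Z a and <R^2> = Z a^2
a   = R2 / L_pp                # tube diameter ~ 39.5 A

# Diffusivity from the center-of-mass MSD: D_G = slope/6 at long times.
# t_ns and msd_A2 are placeholders for the simulation output.
t_ns   = np.linspace(0.0, 5000.0, 501)
msd_A2 = 6.0 * 0.128 * t_ns    # fabricated example with D_G = 0.128 A^2/ns = 1.28e-12 m^2/s
slope  = np.polyfit(t_ns[len(t_ns) // 2:], msd_A2[len(t_ns) // 2:], 1)[0]
D_G    = slope / 6.0           # A^2/ns

print(f"b = {b:.2f} A, N = {N_k:.1f}, Z = {Z:.1f}, a = {a:.1f} A, D_G = {D_G:.3e} A^2/ns")
```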
Figure 1 displays these plots for very short times (a) and long times (b). As shown in the figure, the disengagement, Rouse, and entanglement times turn out to be, respectively, 5834 ns, 194 ns, and 2.7 ns. Both the Rouse and disengagement times are mildly overpredicted as compared to the theoretical values. Also, the ratio τ_d/(3τ_R) = 10, which is smaller than the expected theoretical value of Z = 12.9; this suggests that the Rouse time is significantly overpredicted by this method. On the other hand, τ_e is underpredicted compared to the theoretical value; however, it is in good agreement with the entanglement times calculated for the C400H802 and C700H1402 melts using the same method.
Another robust method for direct calculation of the disengagement time from the simulation data is to fit a sum of exponential functions to the autocorrelation function of the chain end-to-end vector, ⟨u(τ)·u(τ + t)⟩ = Σ_(i=1..p) c_i exp(−t/τ_i), where the longest value of τ_i is considered as the disengagement time, p is the minimum number of exponential terms (5 in this case) that results in the best fit (i.e., the closest coefficient of determination, R-squared, to unity, using a nonlinear least-squares method), and the c_i are fitting constants of order unity. The disengagement time, based on this method, is calculated to be 5270 ns, which agrees very well with the theoretical prediction (5305 ns).

The characteristic relaxation times of the C400H802, C700H1402, and C1000H2002 liquids are plotted as functions of chain length, together with their relevant power-law fittings and exponents; the data for the C400H802 and C700H1402 liquids were obtained from prior work [2,3]. These plots show that the power-law exponents for the disengagement time calculated from either the theoretical method or the fitting method are about 3.3 ± 0.1, in good agreement with experimental measurements for entangled polymers. This suggests that all physical phenomena, including contour length fluctuations (CLF), constraint release (CR), and of course reptation, are captured well by the simulations under quiescent conditions. One might expect a power-law exponent of 3.0 for the theoretical values of the disengagement time based on reptation model predictions alone; however, it should be noted that although Equations (3)-(5) were used for calculation of the theoretical characteristic times, the diffusivity (or equivalently the friction coefficient) was calculated from the simulation results and consequently includes all important physical phenomena, as explained earlier. In fact, the power-law exponent for the diffusivity itself is −2.3, in excellent agreement with experimentally observed values [34,35]. The same analysis is valid for the power-law exponent of the theoretical Rouse time, which scales with an exponent of 2.2 rather than the theoretical exponent of 2, which again should be attributed to CLF and CR effects [2]. Table 2 summarizes the results of the calculations for the equilibrium characteristic relaxation times of the C1000H2002 melt obtained from the theoretical and MSD methods. In the remainder of this article, we use the values τ_d = 5270 ns (exponential method), τ_R = 137 ns (theoretical method), and τ_e = 6.4 ns (theoretical method) for the characteristic time scales of the C1000H2002 liquid (unless otherwise noted).
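A hedged sketch of the multi-exponential fit described above is given below, using scipy.optimize.curve_fit on a synthetic stand-in for the end-to-end vector autocorrelation function; the number of terms, initial guesses, and data are illustrative assumptions, not the actual fitting code used for the melt.

```python
import numpy as np
from scipy.optimize import curve_fit

def multi_exp(t, *params):
    # params = (c1, tau1, c2, tau2, ...); the longest tau_i is taken as tau_d
    out = np.zeros_like(t)
    for c, tau in zip(params[0::2], params[1::2]):
        out += c * np.exp(-t / tau)
    return out

t   = np.linspace(0.0, 20000.0, 400)                          # ns
acf = 0.8 * np.exp(-t / 5270.0) + 0.2 * np.exp(-t / 500.0)    # synthetic stand-in data

p = 2                                                         # number of exponential terms
guess = []
for i in range(p):
    guess += [1.0 / p, 10.0 ** (2 + 2 * i)]                   # initial (c_i, tau_i) guesses

popt, _ = curve_fit(multi_exp, t, acf, p0=guess, maxfev=20000)
tau_d = max(popt[1::2])
print(f"estimated disengagement time: {tau_d:.0f} ns")
```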
Steady-State Structural and Topological Properties
The steady-state microstructural and topological properties of a C1000H2002 melt undergoing simple shear flow are qualitatively very similar to those of the C400H802 and C700H1402 liquids, which were discussed in detail in prior publications [2-4,18]. These results are presented concisely herein; interested readers can refer to the cited references for more comprehensive discussions. Overall, steady-state shear properties of the C1000H2002 melt exhibit four distinct regions of behavior (γ̇ < τ_d^-1, τ_d^-1 < γ̇ < τ_R^-1, τ_R^-1 < γ̇ < τ_e^-1, and γ̇ > τ_e^-1), as noted previously for the C700H1402 liquid [2].

The probability distribution functions (PDFs) of the normalized end-to-end distance and the chain size (measured in terms of ensemble averages of chain end-to-end distance and six times the radius of gyration, respectively) are displayed for various values of Wi in Figure 3a,b. In the linear viscoelastic regime (Wi ≤ 1), the PDFs are Gaussian and remain essentially unchanged from the quiescent state. The ensemble averages of the squared end-to-end distance and (6 times the) radius of gyration also remain constant and almost equal to each other in this regime. This suggests that the flow is too weak to significantly perturb the global molecular sizes. Keep in mind that the timescale of the flow is larger than the disengagement time (i.e., γ̇ < τ_d^-1), implying that the constituent macromolecules have ample time for diffusive action to maintain their quiescent configurational properties even though the overall tube network begins to orient along a preferred direction in the shear plane relative to the direction of flow. Note that the ratio ⟨R²⟩/⟨R_g²⟩ approaches the theoretical value of 6 for long flexible Gaussian chains. Figure 3c displays the ensemble average orientation angle, ⟨θ⟩, as a function of Wi. ⟨θ⟩ is calculated as the angle between the principal eigenvector of the ensemble average of the unit end-to-end vector dyadic product, ⟨u_i u_i⟩, and the flow (x) direction. The orientation angle decreases from the zero-shear-rate limit of 45° (not shown in the figure) to about 30° at Wi = 1. Finally, the tube stretch is shown as a function of Wi in Figure 3d. The tube stretch is defined as the ratio λ = ⟨L⟩/L_0, where L_0 is the quiescent primitive path length. Both ⟨L⟩ and L_0 are calculated using the Z1 code. No chain stretch is observed in the linear viscoelastic region, as expected.

As the flow enters the weakly nonlinear regime, τ_d^-1 < γ̇ < τ_R^-1 (or equivalently 1 < Wi ≤ 38), the orientation angle drops dramatically to values smaller than 5° and plateaus around 1-2° at higher Wi. The PDF of the end-to-end distance begins deviating from the equilibrium Gaussian distribution by developing a tail at higher values of |R|/|R|_max, indicating that a portion of the macromolecules have become partially extended by the applied flow. Notably, the PDF peak is still approximately at the same location as the equilibrium distribution, which suggests that the overall conformation of a significant number of chains has not yet been perturbed. The growth of molecular size and the deviation from Gaussian behavior can also be inferred from Figure 3b, especially for Wi > 10 where ⟨R²⟩ and 6⟨R_g²⟩ begin to diverge. (Note that there is no theory which indicates these two quantities are equivalent under flow conditions.) Interestingly, the tube network also begins to extend moderately in this shear-rate region (Figure 3d). This is an important observation because it contradicts the common notion of tube-based models that no stretching occurs for γ̇ < τ_R^-1. Quantitatively, Figure 3d indicates that tubes are stretched about 16% at γ̇ ~ τ_R^-1, which is not negligible although just a fraction of the maximum theoretical tube stretch, λ_max = 2.77.
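The orientation-angle definition used for Figure 3c can be written compactly as a short sketch: average the unit end-to-end dyad over all chains, diagonalize it, and take the angle between the principal eigenvector and the flow (x) axis. The array of end-to-end vectors below is a synthetic placeholder, not trajectory data.

```python
import numpy as np

def orientation_angle(end_to_end):
    # <theta>: angle between the principal eigenvector of <u u> and the flow (x) direction
    u = end_to_end / np.linalg.norm(end_to_end, axis=1, keepdims=True)
    dyad = np.einsum('ni,nj->ij', u, u) / len(u)          # ensemble average <u u>
    evals, evecs = np.linalg.eigh(dyad)
    principal = evecs[:, np.argmax(evals)]                # principal eigenvector
    cos_theta = abs(principal[0])                         # projection on the flow (x) axis
    return np.degrees(np.arccos(cos_theta))

rng = np.random.default_rng(0)
fake_R = rng.normal(size=(1000, 3)) + np.array([2.0, 0.5, 0.0])   # synthetic, flow-biased
print(f"<theta> = {orientation_angle(fake_R):.1f} degrees")
```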
As the flow enters the weakly nonlinear regime, −1 <̇< −1 (or equivalantly 1 < ≤ 38), the orientation angle drops dramatically to values smaller than 5° and plateaus around 1-2° at higher . The PDF of the end-to-end distance begins deviating from the equilibrium Gaussian distribution by developing a tail at higher values of |R| |R| max ⁄ , indicating that a portion of the macromolecules have become partially extended by the applied flow. Notably, the PDF peak is still approximately at the same location as the equilibrium distribution, which suggests that the overall conformation of a significant number of chains has not yet been perturbed. The growth of molecular size and the deviation from Gaussian behavior can also be inferred from Figure 3b, especially for > 10 where 〈 2 〉 and 6〈 2 〉 begin to diverge. (Note that there is no theory which indicates these two quantities are equivalent under flow conditions.) Interestingly, the tube network also begins to extend moderately in in this shear-rate region (Figure 3d). This is an important observation because it contradicts the common notion of tube-based models that no stretching occurs for ̇< −1 . Quantitatively, Figure 3d indicates that tubes are stretched about 16% at ̇− 1 , which is not negligible although just a fraction of the maximum theoretical tube stretch, max = 2.77. The third shear-rate regime of dynamical behavior is the range −1 <̇< −1 (approximately 40 ≤ < 800). Within this region, vorticity excursions start playing an important role in the system properties. Brownian fluctuations caused by the vorticity of the shear field lead to random excursions of the chain ends outside of the confining tubes; some of these excursions, especially those with shearplane projections that possess negative orientation angles relative to the flow direction, induce The third shear-rate regime of dynamical behavior is the range τ −1 R < . γ < τ −1 e (approximately 40 ≤ Wi < 800). Within this region, vorticity excursions start playing an important role in the system properties. Brownian fluctuations caused by the vorticity of the shear field lead to random excursions of the chain ends outside of the confining tubes; some of these excursions, especially those with shear-plane projections that possess negative orientation angles relative to the flow direction, induce rotation and retraction quasi-periodic tumbling cycles of the individual molecules at moderate and high shear rates similar to those observed in previous work [2,3,8,14,18,22]. A typical cycle begins as a chain molecule stretches and aligns in the flow direction (see Figure 4c). At this point, due to the flow vorticity, chain ends fold backward along the spine of the molecule and slide toward the middle of the chain until the molecule collapses into a compressed configuration. Then the orientation of the chain flips as the chain ends cross and the molecule unravels until it adopts a stretched conformation again that concludes a half cycle. At the lower end of this range (Wi = 40 to 100), the cycle is very irregular, almost chaotic. Here, the macromolecules will reside in the compressed state for a long period of time (see Figure 4a). Under this condition, the chain ends, which are typically very close to each other, exhibit a wagging behavior due to Brownian motion, passing each other back and forth multiple times before the molecule begins to reextend. 
As a consequence, the orientation angle of the chain end-to-end vector, θ_ete, oscillates haphazardly between −90° and 90°, as evident in Figure 4a. The orientation angle of the chain primary axis, θ_pa, however, does not oscillate as much as θ_ete, suggesting that the body of the molecule does not wag like a solid object. (The primary axis of the molecule is defined as the eigenvector corresponding to the largest eigenvalue of the molecule's gyration tensor.) Yet θ_pa changes rapidly between positive and negative values when the molecule is in a collapsed and highly compressed state, indicating that the coiled chains wag for some indefinite period of time before they begin to unravel.
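For concreteness, a minimal sketch of how θ_ete and θ_pa could be evaluated for a single chain configuration is given below; it assumes the signed angles are measured in the shear (xy) plane, consistent with the −90° to 90° range quoted above, and uses a toy random-walk configuration in place of real trajectory data.

```python
import numpy as np

def signed_plane_angle(vec):
    # angle between the xy-projection of 'vec' and the flow (x) axis, folded into (-90, 90] deg
    ang = np.degrees(np.arctan2(vec[1], vec[0]))
    if ang > 90.0:
        ang -= 180.0
    elif ang <= -90.0:
        ang += 180.0
    return ang

def chain_angles(coords):
    r_ete = coords[-1] - coords[0]                        # end-to-end vector
    centered = coords - coords.mean(axis=0)
    gyration = centered.T @ centered / len(coords)        # gyration tensor
    evals, evecs = np.linalg.eigh(gyration)
    primary = evecs[:, np.argmax(evals)]                  # primary (longest) axis
    return signed_plane_angle(r_ete), signed_plane_angle(primary)

coords = np.cumsum(np.random.default_rng(1).normal(size=(1000, 3)), axis=0)  # toy random walk
theta_ete, theta_pa = chain_angles(coords)
print(f"theta_ete = {theta_ete:.1f} deg, theta_pa = {theta_pa:.1f} deg")
```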
In the upper portion of the range τ_R^-1 < γ̇ < τ_e^-1 (i.e., 100 < Wi < 800), the dynamical behavior of the macromolecules is much more regular and resembles the tumbling behavior observed in prior work [2,3,8,14,18,22]. During a typical cycle, the chain end-to-end distance varies dramatically from high values associated with the stretched configurations to values that are even smaller than the average equilibrium end-to-end distance. This is manifested in the wide, non-Gaussian, bimodal probability distribution function in this flow regime, as displayed in Figure 3a. Specifically, the peak at low values of |R|/|R|_max shifts to the left as Wi increases and occurs at extensions smaller than the equilibrium peak, indicating the increasing population of the collapsed configurations during the course of the tumbling cycle. At the same time, the ensemble average molecule size (Figure 3b) and tube stretch (Figure 3d) increase with Wi in this flow region. Based on theoretical arguments, this is the region wherein tube stretch becomes significant. As mentioned earlier, Figure 3d shows that tube stretching begins at lower flow strength than theoretically expected; however, tube stretch in the third flow region is apparently of a different nature than within the second flow regime. In the third region, λ scales as Wi^0.04, whereas this power-law exponent is 0.03 in the second range. This suggests that the tube stretch is influenced by the tumbling dynamics of the individual macromolecules, and that it has an influential contribution to the shear stress and constitutes a major relaxation mechanism in this intermediate flow strength regime. Note that the time-average orientation angle of the molecules is very close to its plateau value in the third region and does not change significantly, indicating that the chain end-to-end vectors are almost completely aligned in the flow direction on the molecular length scale, although not necessarily on the tube segment length scale.
The fourth and final flow regime is the strong flow region where γ̇ > τ_e^-1, approximately Wi > 800. Although the molecules continue to stretch in this region (Figure 3b,d), the molecular size and the tube stretch ultimately attain plateau values, which are significantly smaller than their corresponding maximum theoretical values. The tube stretch profile has an inflection point around γ̇ ~ τ_e^-1 where the curvature changes from positive to negative. This signals a new regime where the tube stretch becomes saturated as chain rotation becomes the more dominant dynamic mechanism. The shape of the end-to-end distance distribution curve is also very different in this high Wi region compared to that at lower Wi regimes. Specifically, the distributions become relatively flat, with a characteristic rotational peak at low |R|/|R|_max and a stretch peak that emerges at very high Wi. (See Figure 3a. The stretch peaks can also be easily recognized in the C400H802 and C700H1402 systems [18].) These flat distributions, which become wider as Wi increases, are attributed to the more regular molecular rotation cycles at very high shear rates, as discussed by Nafar Sefiddashti et al. [3]. The skewed distributions within the intermediate Wi regime suggest that during a rotation cycle individual molecules spend on average a longer time at collapsed (or less stretched) configurations than they do at relatively stretched configurations [18], or that some of the chains have not yet stretched enough to begin their rotation cycles (see Figure 4a). Both cases lead to unbalanced lifetimes for various configurations, and consequently irregular rotation cycles. Within the high Wi regime, on the other hand, molecules undergo more regular periodic cycles.
Hence various configurations between a highly stretched chain and a tightly packed coil have fairly similar lifetimes or probabilities (see Figure 4b), which manifest in the flat probability distribution of the end-to-end distance [18]. Figure 5 displays the entanglement network properties of the C1000H2002 melt at various Wi. The ensemble average entanglement density and the probability distribution function for the entanglement density are displayed in Figure 5a,c. Figure 5b shows the tube diameter, determined as the step length of the primitive path, a = ⟨L⟩/(⟨Z_k⟩/2) [2,3]. The probability distribution function of the primitive path contour length is also shown in Figure 5d for various values of Wi. Note that the primitive path contour length, ⟨L⟩, is essentially commensurate with the tube stretch, λ (see Figure 3d), which is the normalized primitive path contour length. These plots show that, within the linear viscoelastic regime, the entanglement network is practically unperturbed as compared to quiescent conditions. Specifically, the entanglement density and tube diameter do not change as the flow strength increases. The probability distribution function for the entanglement density, P(Z_k), follows a Poisson distribution and is independent of Wi in this regime. P(L_pp) exhibits a similar behavior, except that it follows a Gaussian distribution. As Wi increases and the shear rate enters the nonlinear viscoelastic regime, the tube network begins to lose entanglements. Notably, there is no sharp boundary between the second (1 < Wi ≤ 38) and the third (58 ≤ Wi < 800) flow regimes, as discussed for the structural properties of the system. Rather, there is an initial stage of convective constraint release (CCR) wherein the chains disentangle at a moderate rate in the region 1 < Wi < 500, such that ⟨Z_k⟩ ~ Wi^-0.07. Accordingly, the tube diameter increases moderately in this region. The probability distribution function for the entanglement density, P(Z_k), shifts to the left with increasing Wi as the chains disentangle. The shape of the distribution, however, remains approximately similar to that of the linear viscoelastic regime and still follows a Poisson distribution. On the other hand, P(L_pp) shifts to the right and becomes wider (i.e., with a higher standard deviation); nevertheless, the distribution continues to follow a Gaussian distribution. These results suggest that, although by the end of this flow regime the system loses about 30% of its entanglements, the nature of the entanglement network does not change radically. Note that even at the highest shear rate within this regime, none of the chains has lost all of its entanglements. For instance, the curve for Wi = 117 in Figure 5c shows that all molecules possess 5 or more kinks.
At higher flow strength (i.e., Wi ≥ 585), the entanglement density begins to drop dramatically as Wi increases, with a power-law scaling of ⟨Z_k⟩ ~ Wi^-0.35. The tube diameter also increases substantially in this region, such that at Wi = 2340 the tube diameter grows almost as large as the molecular radius of gyration. This means that a molecule could effectively diffuse as far as its own size without feeling the confining tube, which essentially calls into question the existence of the tube and of an entangled system under these conditions. These subtleties can be understood by examining the probability distribution of the entanglement density. Figure 5c shows that the distributions begin to deviate somewhat from Poisson distributions. More importantly, these distributions suggest that, unlike before, in this flow region some of the molecules have lost all their entanglements and have become virtually unentangled. The distribution of the primitive path contour length also deviates considerably from the Gaussian distribution in this region. All of these observations suggest that the entanglement network is effectively destroyed by the strong flow. This also might explain why the system behavior at such high shear rates resembles that of a dilute solution, as has been argued for the shorter chain C400H802 and C700H1402 liquids [3,18]. In this regime, the tumbling cycles are comparatively more regular, similar to those of dilute solutions. Tube stretch approaches its plateau value as macromolecular tumbling becomes the dominant dynamic mechanism.
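The statement that P(Z_k) remains approximately Poisson at low and intermediate Wi can be checked with a short sketch like the following, which compares the empirical distribution of per-chain kink counts against a Poisson law with the same mean; the kink counts here are drawn synthetically with the quiescent mean of 12.9 rather than taken from the topological analysis output.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(2)
kinks = rng.poisson(lam=12.9, size=2000)                  # stand-in for per-chain Z_k values

mean_Z = kinks.mean()
values, counts = np.unique(kinks, return_counts=True)
empirical = counts / counts.sum()
theoretical = poisson.pmf(values, mu=mean_Z)              # Poisson law with the sample mean

for v, e, th in zip(values[:5], empirical[:5], theoretical[:5]):
    print(f"Z_k = {v:2d}: observed {e:.3f}, Poisson {th:.3f}")
```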
An important characteristic of the entanglement network is the tube orientation tensor, S, which is one of the principal variables of many tube-based constitutive models. Figure 6 displays the nonzero components of S as functions of Wi at steady state obtained from the NEMD data for the C1000H2002 melt. The average orientation tensor of the tube segments in this figure is defined as S = ⟨u_t u_t⟩, where u_t is the unit end-to-end vector of an entanglement strand: knowing the positions of the entanglements (kinks) along the chain from the Z1 code, the end-to-end vectors of the entanglement strands can be easily identified, and the appropriate ensemble averages of the components of the orientation tensor can then be readily calculated from the NEMD data. The shear component, S_xy, begins to increase at very low Wi within the linear viscoelastic regime. This segmental orientation leads to an increase in the shear stress in this region, in agreement with theory. S_xy passes through a maximum in the range 3 < Wi < 12, which is somewhat higher than the theoretical prediction of Wi ~ 1. At higher shear rates, S_xy decreases almost monotonically. Such behavior can lead to excessive shear thinning, as observed in versions of the tube model that do not incorporate CCR, especially within the shear rate range τ_d^-1 < γ̇ < τ_R^-1, or the plateau region, where the tube stretch is insignificant. In fact, models that incorporate CCR predict a nearly constant S_xy, and consequently a constant shear stress, in the plateau region, in agreement with typical experiments. Hence, the decrease in S_xy observed in the NEMD data calls into question the theoretical mechanism of CCR in some tube-based models like MLD. The diagonal components of S remain nearly constant in the linear viscoelastic regime and then diverge from their equilibrium value (~0.33) as Wi increases. At very high shear rates, i.e., γ̇ > τ_e^-1, the rate of change in these components increases significantly. This is the shear rate range wherein the entanglement network begins to disintegrate. Generally, the features of these plots are qualitatively similar to those of λ and ⟨Z_k⟩ (see Figures 3d and 5a), which could be indicative of an inherent connection between λ and ⟨Z_k⟩ and the normal components of the tube segmental orientation tensor. This is specifically important from a modeling perspective, as it suggests that the evolution equations for the tube stretch and entanglement density should be expressed in terms of the diagonal components of S rather than the shear component.
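A minimal sketch of the tube orientation tensor calculation described above is given below, assuming the kink (entanglement) coordinates along each primitive path are available from a topological analysis; the strand unit vectors are formed from consecutive kinks and their dyads averaged over all strands and chains. The input paths here are synthetic placeholders.

```python
import numpy as np

def orientation_tensor(kink_positions):
    # S = <u_t u_t>, averaged over all entanglement strands of all chains.
    # 'kink_positions' is a list of (n_points_i, 3) arrays of kink coordinates per chain,
    # with the two chain ends included, so consecutive points bound one strand.
    S = np.zeros((3, 3))
    n_strands = 0
    for path in kink_positions:
        strands = np.diff(path, axis=0)                   # strand end-to-end vectors
        u = strands / np.linalg.norm(strands, axis=1, keepdims=True)
        S += np.einsum('ni,nj->ij', u, u)
        n_strands += len(u)
    return S / n_strands                                  # trace(S) = 1 by construction

rng = np.random.default_rng(3)
fake_paths = [np.cumsum(rng.normal(size=(14, 3)), axis=0) for _ in range(200)]
S = orientation_tensor(fake_paths)
print("S_xy =", round(S[0, 1], 4), " diag =", np.round(np.diag(S), 3))
```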
The timescales displayed in Figure 7 are calculated by fitting the autocorrelation function data of the end-to-end vector with the functional form ⟨u_i(τ)·u_i(τ + t)⟩ = exp(−t/τ_d) cos(2πt/τ_rot). Hence, τ_d is the decorrelation time of the end-to-end vector, which is equal to the longest relaxation time (i.e., the disengagement time under quiescent conditions and within the linear viscoelastic regime). τ_rot quantifies the period of the rotation and retraction cycle of the macromolecules, assuming that the cycles are quasi-periodic. A characteristic time for the tumbling period can be defined conceptually as τ_r = τ_rot/2π [2], displayed as diamonds in Figure 7.
Figure 7 indicates that τ_d does not change significantly within not only the linear viscoelastic region (Wi ≤ 1) but also in the nonlinear regime for 1 < Wi ≤ 12. At higher shear rates, the longest relaxation time decreases with a power-law exponent of −0.71 ± 0.06; this is consistent with the scaling exponents of the C400H802 and C700H1402 liquids at high shear rates [2,3]. Unlike for the C1000H2002 liquid, the relaxation times of C400H802 and C700H1402 decreased with shear rate at all Wi > 1, and hence a separate power-law exponent for the τ_d^-1 < γ̇ < τ_e^-1 regime was reported in prior work [2,3,18]. Nevertheless, a separate scaling factor for low Wi appears to be irrelevant here. This is perhaps caused by the higher entanglement density of the C1000H2002 melt, which possibly delays any meaningful change in the relaxation time until approximately Wi = 10, when ⟨Z_k⟩ begins to decrease (see Figure 5a). τ_rot also exhibits a power-law behavior, scaling as γ̇^(-0.7±0.07) with flow strength. Although this value is slightly smaller in magnitude than those of the C400H802 and C700H1402 melts (−0.78 and −0.75, respectively), they are all in reasonable agreement within statistical bounds. The ratio τ_rot/τ_d averages about 7.3 over all Wi ≥ 50, which is reasonably close to 2π, similarly to the prior cases [2,3]. This suggests that a single timescale, one associated with the period of the molecular tumbling cycles, is the sole configurational relaxation mechanism of the C1000H2002 chains for Wi ≥ 50.
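The damped-cosine fit that yields the τ_d and τ_rot values in Figure 7 can be sketched as follows, again with scipy.optimize.curve_fit and a synthetic autocorrelation trace standing in for the simulation data; the numerical values are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

def damped_cosine(t, tau_d, tau_rot):
    # fit form for the flowing system: exp(-t/tau_d) * cos(2*pi*t/tau_rot)
    return np.exp(-t / tau_d) * np.cos(2.0 * np.pi * t / tau_rot)

t   = np.linspace(0.0, 200.0, 800)                             # ns
acf = damped_cosine(t, 30.0, 190.0) \
      + 0.01 * np.random.default_rng(4).normal(size=t.size)    # synthetic placeholder

(tau_d_fit, tau_rot_fit), _ = curve_fit(damped_cosine, t, acf, p0=(50.0, 150.0))
tau_r = tau_rot_fit / (2.0 * np.pi)                            # characteristic tumbling time
print(f"tau_d = {tau_d_fit:.1f} ns, tau_rot = {tau_rot_fit:.1f} ns, tau_r = {tau_r:.1f} ns")
```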
Rheological Response
Figure 8 displays the steady-state rheological properties of the C1000H2002 liquid as functions of Wi. As expected, the shear stress scales as γ̇ in the linear viscoelastic regime; however, at higher shear rates, the system's response is quite different from typical experimental observations, as evident from Figure 8a. Specifically, the shear stress passes through a maximum in the shear rate range 3 < Wi < 12 and a subsequent minimum in the range 58 < Wi < 117, in contradiction with the experimentally observed plateau region where the shear stress remains approximately constant or increases slightly as shear rate increases, usually within the shear rate range τ_d^-1 < γ̇ < τ_R^-1.

Considering the uncertainties of the calculations, it appears that the local maximum and minimum in the shear stress profile occur roughly at about γ̇ ~ τ_d^-1 and γ̇ ~ τ_R^-1, respectively, and the shear stress surpasses the local maximum value at a shear rate of approximately τ_e^-1. This possibly implies that the flow is unstable over a fairly wide range of shear rates. Such behavior is enticingly consistent with the discussion of Doi and Edwards (see Figure 7.22 of Reference [34]) concerning the DE model predictions at high shear rates, who argued that the power-law exponent of the shear stress is very sensitive to the relaxation spectrum of the linear relaxation modulus. They argued that the absolute value of the exponent becomes smaller (closer to zero) as the relaxation spectrum becomes broader. Therefore, the shear stress should be approximately independent of the shear rate for polydisperse samples that are commonly used in experiments (hence the plateau), whereas a maximum in the shear stress profile could result from a completely monodisperse sample. Nevertheless, even for monodisperse samples, multiple relaxation processes tend to broaden the relaxation spectrum and weaken the shear rate dependence of the stress. However, as evident from Figure 7, the number of timescales becomes effectively unity for Wi ≳ 50. Based on the DE model, σ_xy ~ γ̇^-0.5 for τ_d^-1 ≪ γ̇ ≪ τ_R^-1, and as γ̇τ_R becomes close to unity, the shear stress increases due to tube stretching [34]. This implies that if the number of entanglements is not large enough (i.e., τ_d/τ_R is not high enough), the shear rate dependence weakens. In Figure 8a, σ_xy ~ γ̇^-0.2 for 12 < Wi < 58, which is consistent with this argument. The plateau region in the shear stress profile has also been postulated to result from the onset of the molecular tumbling cycles that begin to manifest in this shear rate regime [2,3]. This hypothesis led to further investigations which indicated the possibility that shear banding, caused by the molecular periodicity, was a possible cause of the experimentally observed plateau in the shear stress profile [36-38]; however, it is unlikely that shear banding occurs in the present simulations since the p-SLLOD equations of motion impose a homogeneous linear velocity profile throughout the simulation cell in the NEMD simulations. That being said, recent DPD simulations have demonstrated shear banding in monodisperse polymers in the same range of molecular weight where the flow curve is non-monotonic [36-39]. For γ̇ > τ_e^-1, the shear stress scales as γ̇^0.3. The power-law exponents for the C400H802 and C700H1402 melts over the same range of shear rates are approximately −0.5 and −0.4, respectively [2,3], which suggest a molecular weight dependence of the shear stress at these high shear rates. Figure 8b,c show the first normal stress coefficient, Ψ_1 = N_1/γ̇², and the second normal stress coefficient, Ψ_2 = N_2/γ̇², where N_1 = σ_xx − σ_yy and N_2 = σ_yy − σ_zz. Both coefficients exhibit strong shear thinning behavior in the nonlinear regime, with power-law exponents of −1.7 and −1.8, in agreement with those of the C700H1402 melt [2]. The ratio −Ψ_2/Ψ_1 ranges over 0.04 < −Ψ_2/Ψ_1 < 0.27 in the nonlinear regime, again in reasonable agreement with the C700H1402 melt [2] and typical experimental values [40].
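The viscometric definitions used in Figure 8 reduce to simple arithmetic on the stress tensor, as in the sketch below; the stress components and shear rate are hypothetical placeholder values, not simulation results.

```python
import numpy as np

# N1 = sigma_xx - sigma_yy, N2 = sigma_yy - sigma_zz, Psi_1 = N1/gamma_dot^2, Psi_2 = N2/gamma_dot^2
sigma = np.array([[3.2, 0.9, 0.0],
                  [0.9, 1.1, 0.0],
                  [0.0, 0.0, 1.5]])        # hypothetical steady-state stress tensor (MPa)
gamma_dot = 2.2e6                          # hypothetical shear rate (1/s)

N1 = sigma[0, 0] - sigma[1, 1]
N2 = sigma[1, 1] - sigma[2, 2]
Psi1 = N1 / gamma_dot**2
Psi2 = N2 / gamma_dot**2
eta = sigma[0, 1] / gamma_dot              # shear viscosity from sigma_xy

print(f"N1 = {N1:.2f} MPa, N2 = {N2:.2f} MPa, -Psi2/Psi1 = {-Psi2 / Psi1:.2f}")
```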
Transient Behavior
The time-dependent microstructural and rheological properties of the C 1000 H 2002 melt were investigated under startup of simple shear flow, similarly to those of the C 700 H 1402 liquid presented in a prior publication [17]. Figure 9 displays the transient shear viscosity, S_xy, λ, and Z_k as functions of time for various Wi obtained from the NEMD simulations; the data are displayed at three Wi, chosen to represent the three distinct nonlinear viscoelastic flow regimes: Wi = 12 (a), Wi = 58 (b), and Wi = 1170 (c). The data for the transient viscosity and S_xy have been smoothed using a running time average over a number of successive sample times spanning 0.05-0.1 relaxation times at the corresponding Wi, as represented by the circles in Figure 7. It should be noted that λ is very sensitive to the box shape when calculated using the Z1 code; since the box shape continuously changes during the simulation due to the Lagrangian rhomboid periodic boundary conditions, it is difficult to calculate the transient tube stretch with the Z1 code. A solution to this problem is to calculate the tube stretch only at time steps when the box is rectangular or only slightly (e.g., less than 5%) tilted. The tube stretch profiles displayed in Figure 9 were obtained using this method. A major disadvantage of this method is that it significantly reduces the resolution of the data, which could lead to the loss of important dynamical features, such as an overshoot or undershoot. However, unlike the tube stretch, the entanglement density is not very sensitive to the simulation box shape; since the entanglement density has essentially the same dynamics as the primitive path contour length (and, equivalently, the tube stretch; see Figure 9), it can be used to estimate the overshoot and undershoot times of the tube stretch. Note that an overshoot in tube stretch corresponds to an undershoot in entanglement density, and vice versa. The dynamics of Z_k are similar to those of λ, except that a minimum in Z_k corresponds to a maximum in λ. Note that in panel (c) of Figure 9 there appear small gaps in some of the data profiles at long times where simulation data was accidentally deleted; since these data points had no bearing on the present discussion, the simulations were not repeated.
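As a minimal illustration of the running time average used to smooth the transient data, the sketch below applies a centered moving average to a synthetic noisy startup curve; the window length and the test signal are assumptions for demonstration only, not the actual smoothing parameters.

```python
import numpy as np

def running_time_average(signal: np.ndarray, window: int) -> np.ndarray:
    """Smooth a noisy transient signal with a centered running average.

    `window` is the number of successive samples averaged; in the text it
    corresponds to roughly 0.05-0.1 relaxation times at the given Wi.
    """
    kernel = np.ones(window) / window
    # mode="same" keeps the output aligned with the input time grid.
    return np.convolve(signal, kernel, mode="same")

# Illustrative use on a synthetic noisy startup curve (not simulation output).
t = np.linspace(0.0, 10.0, 2000)
eta_plus = (1.0 - np.exp(-t)) * (1.0 + 0.3 * np.exp(-0.5 * t) * np.sin(3 * t))
noisy = eta_plus + np.random.normal(scale=0.05, size=t.size)

smoothed = running_time_average(noisy, window=50)
print("max of smoothed transient viscosity:", smoothed.max())
```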
The transient first and second normal stress differences are shown in Figure 10 for various Wi as functions of time. This figure also displays the tube orientation tensor differences S_xx − S_yy, corresponding to N_1, and S_zz − S_yy, corresponding to −N_2, for comparison. Note that the data have been smoothed using the same method as discussed above.

Figure 10. Transient first (left panels) and second (right panels) normal stress differences as well as their corresponding tube orientation tensor differences as functions of time upon startup of shear flow. Normal stress differences are normalized with respect to the plateau modulus. Weissenberg numbers are 12, 58, and 1170 from top to bottom rows, respectively.
It is evident from Figures 9 and 10 that the transient viscosity and normal stresses are in qualitative agreement with typical experimental data. Specifically, except for N_2 at Wi = 12, they all exhibit an overshoot for Wi ≥ 12 before they attain steady state. Additionally, the overshoot in shear viscosity is followed by an undershoot, at least for shear rates of approximately τ_R⁻¹ and above, again in agreement with typical experiments. These overshoots and undershoots (if any) also occur in the entanglement network variables, as shown in Figures 9 and 10. These figures make it possible to compare the dynamics of the stress tensor with those of the tube variables (i.e., the tube segmental orientation tensor S and the tube stretch λ) to investigate the origins of these phenomena, as discussed in the next section. It is worth mentioning that steady or transient shear banding might occur in the range of shear rates below approximately τ_e⁻¹, where the flow curve is non-monotonic. This phenomenon cannot be investigated here due to the use of the p-SLLOD equations of motion, as discussed in Section 3.2.2. As a consequence, the quantities presented in this section could be affected, assuming shear banding occurs. However, we do not expect a significant change, especially in ensemble-averaged quantities such as the stress tensor and tube variables. Cao and Likhtman [41] compared the startup shear behavior of entangled melts obtained from NEMD simulations using the SLLOD equations and a Langevin thermostat with those of boundary-driven DPD simulations. These comparisons suggested that the ensemble average shear stresses obtained from these two methods were consistent (although not identical) despite the presence of shear banding at the examined shear rates.
Stress Overshoot and Undershoot
From Figure 9, it appears that the dynamic responses of the shear viscosity and S_xy are roughly synchronized over a wide range of Wi. Specifically, the overshoot and undershoot of the transient viscosity (if any) occur approximately at the same time as those of the S_xy component of the tube orientation tensor. On the contrary, tube stretch and entanglement density respond to the applied flow field with a notable lag compared to η⁺ and S_xy. It is worth noting that the displayed Wi values represent various flow regimes: Wi = 12 lies in the Wi_R < 1 regime where tube stretch is negligible; Wi = 58 is within the regime where tube stretch is significant; and Wi = 1170 is within the regime where molecular tumbling is dominant. It should also be noted that this classification is based on the steady-state responses and might not necessarily remain valid in transient situations. For instance, whereas the tube stretch is minor at Wi_R < 1, it could exhibit an overshoot in transient situations. Although the magnitude of the shear viscosity (and stress) is a function of both the tube orientation, S_xy, and the stretch, λ, these plots suggest that the dynamics of the shear viscosity are mainly influenced by the tube segment orientation, S_xy, which indicates that the principal origin of the stress overshoot and undershoot is possibly tube segmental orientation. These plots also show that there is no significant undershoot in λ (or equivalently, an overshoot in Z_k) at any Wi. This observation, which also applies to other shear rates (not shown in Figure 9), practically rules out tube stretch as the origin of the stress undershoot at high shear rates. It is also evident from Figure 10 that the dynamics of N_1 and N_2 are in good agreement with their corresponding components of the tube orientation tensor, i.e., S_xx − S_yy and S_yy − S_zz, respectively, suggesting that the overshoot in the normal stresses arises from the tube segment orientation. Figure 11 shows the overshoot (panel (a)) and undershoot (panel (b)) times for the transient viscosity and the S_xy component of the tube orientation tensor as functions of Wi. It also displays the undershoot time for the entanglement density in both panels for comparison. Note that an undershoot in Z_k corresponds to an overshoot in λ, as discussed before. It is evident that the transient viscosity overshoot and undershoot times effectively overlap with those of S_xy at all Wi < 585. At higher Wi, although these two curves appear to diverge, the difference between the two times is not significant, considering the error associated with extracting these small values from the noisy data shown in Figure 9. On the other hand, there is a significant difference between the undershoot time in Z_k and either the overshoot or undershoot times of the shear viscosity. These results again imply that both the stress overshoot and undershoot originate from similar phenomena in the tube segmental orientation. This conclusion is in agreement with observations for a C 700 H 1402 melt at high shear rates [17]. It also agrees with the results of Cao and Likhtman [42] for unentangled and mildly entangled systems, indicating that the origin of the stress overshoot at low shear rates is the orientation of the tube network rather than chain stretching. Jeong et al. [9] also attributed the stress overshoot to the segmental orientation over a wide range of flow strengths for a mildly entangled C 400 H 802 polyethylene melt.
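For concreteness, a small Python sketch of how overshoot and undershoot times can be extracted from a smoothed transient signal is given below; the peak-and-dip detection logic and the synthetic test curve are assumptions for illustration, not the analysis code used for Figure 11.

```python
import numpy as np

def overshoot_undershoot_times(t: np.ndarray, y: np.ndarray):
    """Return (t_overshoot, t_undershoot) of a smoothed transient signal.

    The overshoot is taken as the global maximum; the undershoot as the
    minimum occurring after the overshoot, provided the signal dips below
    its final (steady-state) value there. Returns None for an absent feature.
    """
    i_over = int(np.argmax(y))
    t_over = t[i_over] if y[i_over] > y[-1] else None

    t_under = None
    if t_over is not None and i_over < y.size - 1:
        tail = y[i_over:]
        i_under = i_over + int(np.argmin(tail))
        if y[i_under] < y[-1]:
            t_under = t[i_under]
    return t_over, t_under

# Illustrative synthetic startup curve with an overshoot followed by a
# shallow undershoot (placeholder data, not NEMD output).
t = np.linspace(0.0, 20.0, 4000)
y = 1.0 + 0.4 * np.exp(-0.6 * t) * np.sin(1.2 * t)
print(overshoot_undershoot_times(t, y))
```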
However, unlike the current results, they did not observe a clear overshoot in the primitive path contour length (and hence in the tube stretch) even at very strong flow fields. This may be due to the relatively low entanglement density of the C 400 H 802 molecules used in their NEMD simulations. Masubuchi et al. [43] also investigated the origin of the stress undershoot at high shear rates using primitive chain network simulations. They examined the segmental orientation, the tube stretch, and the ensemble average squared sine of the chain end-to-end orientation angle (representing the tumbling motion) and showed that all these variables exhibit undershoots, although not synchronized with the shear stress. Masubuchi et al. [43] concluded that their results supported the mechanism proposed by Costanzo et al. [16].

Figure 12 shows the corresponding overshoot (panel (a)) and undershoot (panel (b)) strains, γ = γ̇t, with the strain at the undershoot time for the entanglement density again shown in both panels for comparison. The agreement between the η⁺ and S_xy curves in the region Wi < 585 is not surprising, considering the results of Figure 11 and how the strain is calculated. The important point to notice is that up to very high Wi the overshoot in S_xy consistently occurs at about γ = 2. This suggests that, regardless of flow strength, the material deforms affinely during the initial 2 strain units until S_xy attains a maximum. The experimental value of the strain at the stress overshoot is also about 2 at low shear rates; however, it shifts to higher strains as the shear rate exceeds τ_R⁻¹ [45]. This does not, however, appear to be the case at later times, when S_xy passes beyond its minimum, especially at intermediate and high Wi.
The discussion concerning the overshoot and undershoot dynamics in the last few paragraphs should not lead to misinterpretation about the role of tube stretch in the stress overshoot and undershoot. Figure 13a shows the magnitudes of the overshoots in the normalized shear stress, σ_xy^os, and the tube orientation, S_xy^os, versus Wi. The shear stress is normalized with the plateau modulus, G_N^0. Figure 13 also displays the magnitude of the tube stretch at the time of the stress overshoot. Note that this quantity is different from the magnitude of the tube stretch overshoot. It is evident from this figure that for Wi < 585 the shear stress closely mimics S_xy^os, while the tube stretch is fairly close to its equilibrium value of unity, or only mildly greater. This suggests that in this region tube stretch has a minor or negligible contribution to the stress overshoot, σ_xy^os. At higher shear rates, whereas S_xy^os appears to saturate and remain roughly constant, σ_xy^os increases quickly as Wi increases. The tube stretch in this region also begins to increase and diverge from its equilibrium value. This indicates that although the dynamics of the stress overshoot are essentially controlled by the tube segmental orientation (as discussed in the preceding paragraph), its magnitude is significantly influenced by the tube stretch at high flow strength. A similar argument can be made for the stress undershoot; see Figure 13b. This conclusion can be rationalized by hypothesizing that the tube stretch itself originates from the tube orientation or another dynamic variable. It is, however, immediately evident from Figure 9 that the S_xy component could not be that variable, considering the significant differences between the features of the S_xy and λ plots in this figure.

Figure 14 shows the undershoot time for the entanglement density as well as the overshoot times for the ensemble average squared end-to-end distance, ⟨R²⟩, and the normal (diagonal) components, S_xx, S_yy, and S_zz, of the tube orientation tensor. Overall, this figure shows that the overshoot times of these variables roughly overlap within a wide range of Wi, including the mildly to strongly nonlinear viscoelastic flow regimes. Specifically, there is good agreement between the undershoot time of Z_k and that of ⟨R²⟩. It should be emphasized that ⟨R²⟩ is essentially the trace of the ensemble average chain conformation tensor and represents the overall average extensional state of the molecules. Figure 14 implies that the entanglement network, and hence the tube stretch dynamics, are mainly influenced by the diagonal components of the orientation tensor, or by the overall extensional properties of the molecules, rather than by the shear component.
Figure 14. Comparison of the undershoot time for the entanglement density with the overshoot times for the ensemble average squared end-to-end distance, ⟨R²⟩, and the normal (diagonal) components, S_xx, S_yy, and S_zz, of the tube orientation tensor.
Conclusions
Transient and steady-state dynamic responses of an entangled C 1000 H 2002 polyethylene melt were examined via virtual experimentation using NEMD simulations. Under quiescent conditions, reptation theory could explain the equilibrium properties fairly well. Under steady shear flow conditions, four flow regimes were recognized, in agreement with prior results for the moderately and mildly entangled C 700 H 1402 and C 400 H 802 liquids [2,3]. The first regime was the linear viscoelastic regime (γ̇ < τ_d⁻¹), where most of the structural and topological properties of the system remain unperturbed compared to the quiescent conditions. Orientation effects dominated the rheological response in this flow regime, although they are quite weak. In the second regime (τ_d⁻¹ < γ̇ < τ_R⁻¹), the molecules began to align with the flow direction and a significant degree of chain orientation was observed as Wi increased. Additionally, the tube segments began to stretch mildly, and the chain molecules partially unraveled and disentangled as the flow strength increased. However, the dominant relaxation mechanism in this region was the orientation of the tube segments. In the third regime (τ_R⁻¹ < γ̇ < τ_e⁻¹), while on average the chains were fully aligned with the flow direction, the molecular disentangling continued and tube stretching dominated the rheological response. Additionally, the rotation of molecules became a significant source of the overall system dynamics. In the fourth regime (γ̇ > τ_e⁻¹), the chain stretching decelerated and the tube stretch approached a plateau value. At the same time, flow-induced disentanglement continued and the entanglement network began to deteriorate, such that some molecules became completely devoid of entanglements. Molecular tumbling, on the other hand, gradually became the dominant relaxation mechanism, and the molecular configurations followed more regular cycles when compared to similar behavior at lower flow strength.
The comparison of the transient shear viscosity, η⁺, with the dynamic responses of key variables of the tube model, including the tube segmental orientation, S, and the tube stretch, λ, revealed that the stress overshoot and undershoot in steady shear flow of entangled liquids originate from, and are dynamically controlled by, the S_xy component of the tube orientation tensor, rather than the tube stretch λ, over a wide range of flow strengths (including shear rates faster than τ_R⁻¹). Nevertheless, the magnitude of the stress is significantly affected by λ at high shear rates. | 2019-04-10T13:03:23.047Z | 2019-03-01T00:00:00.000 | {
"year": 2019,
"sha1": "b37408876ebf752912a66b994536f7d8a5028145",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4360/11/3/476/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b37408876ebf752912a66b994536f7d8a5028145",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
255986657 | pes2o/s2orc | v3-fos-license | Improved reference genome of the arboviral vector Aedes albopictus
The Asian tiger mosquito Aedes albopictus is globally expanding and has become the main vector for human arboviruses in Europe. With limited antiviral drugs and vaccines available, vector control is the primary approach to prevent mosquito-borne diseases. A reliable and accurate DNA sequence of the Ae. albopictus genome is essential to develop new approaches that involve genetic manipulation of mosquitoes. We use long-read sequencing methods and modern scaffolding techniques (PacBio, 10X, and Hi-C) to produce AalbF2, a dramatically improved assembly of the Ae. albopictus genome. AalbF2 reveals widespread viral insertions, novel microRNAs and piRNA clusters, the sex-determining locus, and new immunity genes, and enables genome-wide studies of geographically diverse Ae. albopictus populations and analyses of the developmental and stage-dependent network of expression data. Additionally, we build the first physical map for this species with 75% of the assembled genome anchored to the chromosomes. The AalbF2 genome assembly represents the most up-to-date collective knowledge of the Ae. albopictus genome. These resources represent a foundation to improve understanding of the adaptation potential and the epidemiological relevance of this species and foster the development of innovative control measures.
Background
Climate change, urbanization, and increased international mobility are predicted to further increase the spread of the highly invasive mosquito Aedes albopictus and to severely exacerbate the risk and burden of Aedes-transmitted human pathogens, primarily the dengue, Zika, and chikungunya viruses, but also the veterinary-relevant parasite Dirofilaria immitis [1,2]. As a consequence, nearly a billion people could face their first exposure to arboviral transmission within the next century, especially in subtropical and temperate regions of the world, including Europe [2].
The initial genome assembly of Ae. albopictus (AaloF1) from the Chinese Foshan strain represented a fundamental achievement for the genetic characterization of this mosquito [3]. From this analysis, based solely on the assembly of short DNA sequence reads, the genome of Ae. albopictus appears to be the largest mosquito genome sequenced to date (1.9 Gb). However, due to very high levels of repetitive DNA and reliance on short-read sequencing, AaloF1 remains highly fragmented with more than 150,000 scaffolds, limiting its utility.
Results
Using a cytofluorimetric approach, we estimated the genome length of Ae. albopictus to be similar to that of Ae. aegypti, between 1.190 and 1.275 Gb, across populations from the native home range (Thailand, Malaysia, Singapore), old-colonized regions (La Reunion Island), and recently invaded areas (Italy, the USA, and Mexico) (Fig. 1a).
To foster continuity, we chose to use the Foshan strain for further genome study. After six consecutive rounds of single sister-brother matings, we extracted high-molecular-weight DNA from forty sibling mosquitoes. We then generated approximately 82 Gb of PacBio single-molecule long reads with a mean read length of 10 kb and an N50 length of 18 kb (N50 length: half of the data comprises sequences of this length or longer). Additionally, we prepared a Hi-C proximity ligation library from ten adult mosquitoes and collected 135 Gb of Illumina reads. We assembled the long-read PacBio data with Canu [6] and polished the resulting contigs with Arrow (https://www.pacb.com/products-and-services/analytical-software/smrt-analysis/) using the raw PacBio signal data. This initial assembly totaled 5.17 Gb, far exceeding the expected haploid genome size (~1.25 Gb), suggesting the presence of alleles that failed to collapse in the assembly. We hypothesized that this was due to high levels of heterozygosity in the pool of sequenced mosquitoes, resulting in multiple allelic variants assembled separately, as has been previously noted in long-read assemblies [7]. To partition this initial assembly into primary and alternative contig sets, we analyzed contig alignments and depth of coverage with Purge Haplotigs [7], along with BUSCO single-copy orthologs [8], to determine which contigs were likely to be redundant and should be designated as alternative alleles. Haplotig purging reduced the size of the primary assembly by nearly half, to 2.54 Gb, which was then scaffolded with the Hi-C data using SALSA2 [9].
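As an aside on the N50 metric quoted above, the following sketch shows one way to compute it from a list of contig or scaffold lengths; the example lengths are made up for illustration and are not values from this assembly.

```python
def n50(lengths):
    """Return the N50: the length L such that sequences of length >= L
    together contain at least half of the total assembly size."""
    ordered = sorted(lengths, reverse=True)
    half = sum(ordered) / 2.0
    running = 0
    for length in ordered:
        running += length
        if running >= half:
            return length
    return 0

# Illustrative scaffold lengths in bp (made-up numbers, not AalbF2 data).
scaffolds = [55_700_000, 40_000_000, 12_000_000, 5_000_000, 1_000_000]
print("assembly size:", sum(scaffolds))
print("N50:", n50(scaffolds))
```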
The final primary assembly, which we call AalbF2, consists of 2197 scaffolds with an N50 length of 55.7 Mb (Additional file 2: Table S1). This represents a continuity increase of two orders of magnitude compared to the AaloF1 scaffold N50 of 201 kb [3]. This significant increase in continuity provides a more complete view of the genomic organization of Ae. albopictus and allows for a more accurate annotation of gene structures.
Analyses of single-copy orthologs via BUSCO [8] in AalbF2 showed an 8.3% increase in the percentage of complete, single-copy BUSCOs with respect to AaloF1 (Additional file 2: Table S1). AalbF2 has a BUSCO completeness of 93.2%, with an estimated 14.0% duplication. Additionally, using Barrnap (https://github.com/tseemann/barrnap), the number of ribosomal RNA gene sequences was estimated to be 484 in AalbF2 (compared to 22 in AaloF1), a value close to the number (430) independently estimated from an Ae. albopictus haploid genome [10]. The rate of alignment of DNA and RNA sequencing data from published resources [11][12][13] and the percentage of properly paired reads were also analyzed and confirmed the quality and continuity of AalbF2 (Additional file 2: Table S1). The higher continuity of AalbF2 is also shown by the annotation of transposable elements (TE), which amount to 55.03% of the genome size, a value comparable to that of the most recent assembly of the Ae. aegypti genome, AaegL5 (Additional file 2: Table S2).

Fig. 1 Size of the Aedes albopictus genome and physical map. a Cytofluorimetric-based estimates of the genome size of Ae. albopictus strains, including Foshan and Rimini, from which genome assemblies were derived based on short-read Illumina sequencing [3,4], and Ae. albopictus wild-collected samples from the native home range (Malaysia, Singapore, Thailand), an old-colonized region (La Reunion), and newly invaded areas (the USA, Mexico, Italy). The Ae. albopictus genome size is estimated to be in the range of 1095-1299 Mb, comparable to or slightly larger than that of Ae. aegypti (1066-1309 Mb) [5]. b Physical genome map of Ae. albopictus based on 50 DNA probes hybridized in situ to mitotic chromosomes. Chromosomes and chromosome arms are indicated by numbers 1, 2, and 3 and letters p and q, respectively. Chromosome divisions and subdivisions are shown on the left sides of the idiograms. Scaffolds are indicated by arrows or lines. Arrows indicate orientations of the scaffolds. SC stands for scaffold; rDNA stands for ribosomal locus. c Examples of fluorescence in situ hybridization. Chromosomal locations of transcripts XM_019675405 and XM_020077126 from scaffolds 4 and 48, respectively; rDNA, polyphenol oxidase (PPO) gene clusters, and the largest viral integration in the genome (Canu-Flavi19) are demonstrated. Transcripts are indicated on the figure by the last four digits of their accession numbers. d Schematic illustration of the chromosomal locations of the PPO cluster triplication in the new assembly of Ae. albopictus (GCF_006496715.1). Comparative genomics analysis of the synteny in the AaegL5 genome reveals an array of 6 genes localized in a region of 123.44 kb on chromosome 2 (2:199,230,485-199,353,929) that was locally duplicated twice in Ae. albopictus, resulting in 18 PPO genes. The PPO gene cluster array on chromosome 2 of Ae. aegypti includes AAEL015116 (PPO1), AAEL015113 (PPO2), AAEL013492 (PPO5), AAEL013493 (PPO7), AAEL013501 (PPO4), and AAEL013496 (PPO8).
The original, unfiltered, and unsplit assembly (main and alternative scaffolds) had a BUSCO completeness of 97.6% with 81.8% duplication, indicating that the majority of genes were represented in the combined assembly by more than one allele. Despite promising improvements to long-read sequencing methods that have enabled genome assembly from a single Anopheles coluzzii mosquito [14], the larger genome size of Aedes spp. mosquitoes (i.e., 1.2 Gb vs. 280 Mb for An. coluzzii) required pooling of heterozygous individuals and removal of haplotypic duplications prior to the creation of haploid reference scaffolds [5].
A total of 26,856 protein-coding sequences were predicted in AalbF2 through the NCBI Eukaryotic Genome Annotation Pipeline (https://www.ncbi.nlm.nih.gov/genome/annotation_euk/process/). To help distinguish between artifacts and genuine gene duplications, which are resistant to proper assembly, and to mitigate the effect of heterozygosity in the original pooled DNA, we developed a pipeline based on the assumptions that selection acts mainly on the coding sequence of a gene and that homology between highly related paralogs drops in the flanking untranslated sequences (Additional file 1: Fig. S1). To perform the analysis, we compared 500 bp or 1000 bp of the flanking regions at the 5′ and 3′ ends of all candidate gene duplicates with an all-against-all BLASTn search at an e-value of 1 × 10−40 for each flanking region. We found 1329 (8.05% of the total) genes with high similarity within 500 bp of their 5′ and 3′ flanking regions, mapping to 452 of the 2196 scaffolds (Additional file 3). When we considered the extended 1000-bp regions, the number of candidate duplicates was lower (808, mapping to 300 scaffolds, 4.89% of the total). Most of these artifacts involved a single duplicated gene (twins), and the number decreased with increasing copy number. A list of gene duplications that are likely to be artifacts of the assembly is available for future reference in Additional file 3.
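A minimal sketch of the flanking-region logic is shown below. It assumes gene coordinates and genome sequences are already loaded, and it stands in for the BLASTn step with pre-computed sets of flank pairs that passed the similarity cutoff; the function names, data structures, and thresholds are illustrative, not the exact pipeline used here.

```python
from typing import Dict, Set, Tuple

# Hypothetical inputs: genome sequences keyed by scaffold name, and gene
# coordinates as (scaffold, start, end) with 0-based half-open intervals.
Genome = Dict[str, str]
Gene = Tuple[str, int, int]

def extract_flanks(genome: Genome, gene: Gene, flank: int = 500) -> Tuple[str, str]:
    """Return (5'-flank, 3'-flank) of a gene, truncated at scaffold ends."""
    scaffold, start, end = gene
    seq = genome[scaffold]
    upstream = seq[max(0, start - flank):start]
    downstream = seq[end:end + flank]
    return upstream, downstream

def suspicious_duplicates(flank_hits_5p: Set[Tuple[str, str]],
                          flank_hits_3p: Set[Tuple[str, str]]) -> Set[Tuple[str, str]]:
    """Flag gene pairs whose 5' AND 3' flanks both align with high similarity.

    The two input sets are assumed to hold gene-id pairs (stored in a
    consistent, e.g., sorted, order) that passed an external all-against-all
    BLASTn of the extracted flanks at a chosen e-value cutoff; building the
    sets from BLAST tabular output is not shown here.
    """
    return flank_hits_5p & flank_hits_3p

# Toy example: genes A and B share both flanks, so the pair is flagged as a
# likely assembly artifact rather than a genuine duplication.
genome = {"scaf1": "ACGT" * 600}
print(len(extract_flanks(genome, ("scaf1", 1000, 1500))[0]))  # 500
hits_5p = {("geneA", "geneB"), ("geneA", "geneC")}
hits_3p = {("geneA", "geneB")}
print(suspicious_duplicates(hits_5p, hits_3p))
```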
A significant improvement of AalbF2 is that more than 50% of the genome assembly is contained within the 13 largest scaffolds (i.e., L50 = 13; L75 = 58; Additional file 1: Fig. S2, Additional file 2: Table S1). We developed a physical map of the Ae. albopictus genome using in situ hybridization on mitotic chromosomes, covering 57% of the genome assembly by targeting twenty of the largest genomic scaffolds and three minor scaffolds (Additional file 2: Table S3, Fig. 1b). A total of 4, 9, and 10 scaffolds were assigned to chromosomes 1, 2, and 3, respectively. The probes for transcripts from scaffolds 15, 48, and 55 hybridized to positions already covered by other large scaffolds. The positions of all tested transcripts were consistent with their positions in the Ae. aegypti genome, which is assembled into chromosome-size scaffolds, providing an independent confirmation of the accuracy of the in situ hybridization results [5]. Based on probe mapping to the Ae. aegypti genome and homology between the Ae. aegypti and Ae. albopictus chromosomes (Additional file 1: Fig. S2), we bioinformatically assigned the 58 longest scaffolds, covering 75% of the genome, to Ae. albopictus chromosomes (Additional file 2: Table S4).
Cytogenetic comparison (Table 1) between Ae. albopictus and Ae. aegypti demonstrated that the total chromosome length is 4.9 μm, or 16.4%, longer in Ae. albopictus (P < 0.0001), which suggests a slightly larger genome size in this species, consistent with the cytofluorimetric estimates. Chromosome proportions, such as relative chromosome and arm lengths, also differed between the two species. In Ae. albopictus, chromosome 1 was relatively shorter and chromosome 2 relatively longer than in Ae. aegypti.
Besides positioning and orienting the largest scaffolds, we physically mapped the 18S rDNA and other genomic features (e.g., viral integrations and representative immunity genes) described below (Fig. 1c, d). The 18S rDNA mapped to the secondary constriction region in 1q22. The intensity of the signal varied significantly among chromosomes from individual mosquitoes, suggesting variation in the number of ribosomal genes.
The landscape of endogenous viral elements
The genome of Ae. albopictus harbors hundreds of integrated sequences from nonretroviral RNA viruses, called nonretroviral endogenous viral elements (nrEVE) or nonretroviral integrated RNA virus sequences (NIRVS) (Palatini et al.). Taking advantage of the contiguity of AalbF2 and using a viral database composed of 1563 viral species (Additional file 4), we revised the annotation of nrEVEs, while also providing correspondence with the viral integrations previously annotated in AaloF1 (Additional file 5, Additional file 2: Table S5). Additionally, we used the identified viral integrations to screen the alternative assembly (NCBI accession GCA_006496715.1) and found alternative nrEVE alleles (Additional file 2: Table S6), confirming that the haplotig purging applied to the initial assembly effectively moved haplotypic variants into the alternative assembly. We confirmed that the majority of nrEVEs in the Ae. albopictus genome have similarities to known insect-specific flaviviruses (ISFs) and rhabdoviruses (Fig. 2a, b), tend to map within 10 kb of each other, generating clusters of often rearranged or duplicated sequences (Additional file 1: Fig. S3), and are in tight association with transposable elements (TE), primarily Gypsy and Pao LTR elements (Fig. 2c). This association appears to be driven by the enrichment of LTR retrotransposons in piRNA clusters (Additional file 1: Fig. S3).
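To make the proximity analysis concrete, the sketch below finds the nearest annotated TE upstream and downstream of an nrEVE from sorted interval lists; it ignores overlapping annotations for brevity, and all names and coordinates are hypothetical.

```python
import bisect
from typing import Dict, List, Tuple

def closest_te(nreve: Tuple[str, int, int],
               te_by_scaffold: Dict[str, List[Tuple[int, int, str]]]):
    """Return the nearest annotated TE upstream and downstream of an nrEVE.

    TEs are given per scaffold as (start, end, family) tuples sorted by
    start coordinate; coordinates are illustrative 0-based intervals.
    """
    scaffold, start, end = nreve
    tes = te_by_scaffold.get(scaffold, [])
    starts = [s for s, _, _ in tes]
    i = bisect.bisect_left(starts, start)
    upstream = tes[i - 1] if i > 0 else None
    downstream = tes[i] if i < len(tes) else None
    return upstream, downstream

# Toy annotations (not real AalbF2 coordinates).
tes = {"scaf2": [(1_000, 3_500, "Gypsy"), (9_000, 12_000, "Pao"), (40_000, 41_500, "helitron")]}
print(closest_te(("scaf2", 5_000, 6_300), tes))
```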
The largest viral integration (Canu-Flavi19) in AalbF2 reached 6593 bp and encompassed all of the structural proteins, the entire NS1 and NS2, and part of the NS3 and NS5 proteins of the 11,064-bp genome of Aedes flavivirus, with 97.63% identity (Additional file 5). This viral integration was mapped by in situ hybridization to chromosome 2q close to the telomere, confirming that it is integrated within the genome (Fig. 1b). Signals were also found in the centromeres of all three chromosomes, probably because these regions contain nrEVEs with sequence similarity to Canu-Flavi19 (Fig. 1b).
Using genome engineering approaches targeting both viral and mosquito genomes, selected nrEVEs were shown to exert antiviral activity against cognate viral infections [15,16]. These results suggest that nrEVEs are heritable immunity sequences, which implies that their distribution patterns may differ across geographic populations depending on viral exposure [17][18][19]. To address whether viral integrations different from those annotated in AalbF2 can be characterized in wild-caught mosquitoes (hereafter called novel nrEVEs), we collected and sequenced the genomes of 24 adult females from Tapachula (Mexico) and Tampon (La Reunion island), where several arboviruses are endemic. By using Vy-Per [20] followed by ViR [21], we identified one and two novel viral integrations in samples from Tapachula and Tampon, respectively, plus a novel viral integration common to both populations (Fig. 2d). Two of these novel viral integrations (nrEVEnew-3 and nrEVEnew-4) have similarities to AeFV, one (nrEVEnew-2) to CFAV, and the other (nrEVENew-1) to KRV (Fig. 2d). All these novel viral integrations were molecularly validated by designing specific PCR primers (Additional file 2: Table S7, Additional file 1: Fig. S3B). Novel viral integrations were more frequent in mosquitoes from Tampon than from Tapachula (Fig. 2d). Additionally, two of the Tampon novel viral integrations had 90% amino acid identity with AeFV and CFAV, respectively (compared to the 72% average identity for annotated Flavi-EVEs), suggesting recent integration events. This result correlates with the invasion history of Ae. albopictus out of its native home range in Asia. Before the aggressive global invasion of Ae. albopictus, which started roughly 50 years ago, Ae. albopictus had reached the islands of the Indian and Pacific Oceans from South East Asia in the eighteenth to early twentieth centuries [22]. Thus, mosquitoes from La Reunion Island are considered "old" and have maintained large populations [23,24]. In contrast, Ae. albopictus was first detected in Tapachula in 2002, likely a secondary invasion from the USA or Italy [25,26].

Fig. 2 a Each dot is a viral integration, plotted according to its length and color-coded based on its viral origin. nrEVEs range in length from 131 to 6593 nt, with an average of 1289 nt. The arrow points to Canu-Flavi19, the longest nrEVE. b Scatter plot representing the amino acid identity of each nrEVE and its best hit retrieved by blastx searches against the NR database, grouped by viral family. The average is shown by a line. Red dots are the novel viral integrations discovered in wild-caught mosquitoes. c Bar plots showing the type of the closest transposable element identified upstream and downstream of each nrEVE. Viral integrations are classified based on their viral origin as shown in a. d Scheme of the novel viral integrations identified in the genomes of wild-collected mosquitoes with respect to a Flavivirus genome and their frequency of occurrence in mosquitoes from Tampon (Reunion) and Tapachula (Mexico).
Distribution and structure of piRNA clusters

PIWI-interacting RNAs (piRNAs) are mostly known for their role in immunity against TEs in the germline [27]. This is best studied in the model organism Drosophila melanogaster. However, in Aedes spp. mosquitoes, the piRNA pathway has acquired additional functions in antiviral immunity and can use viral RNAs as a substrate for piRNA production [28]. Most piRNAs are derived from large genomic regions termed piRNA clusters. These clusters represent a memory of past transposon invasions and confer immunity against these elements, as piRNAs processed from transposon remnants within clusters can target active transposons encoded elsewhere in the genome [27].
Using the preceding AaloF1 genome assembly, a previous study reported 643 clusters with a maximum length of 10 kb [29]. However, piRNA clusters can span up to several hundred kilobases [30,31]; therefore, a more continuous genome assembly can improve the annotation of these genomic regions. We used small RNA libraries generated from somatic tissues (female carcass) as well as germline tissues (ovaries) to annotate 1441 piRNA clusters with an average size of 10.911 kb (SD 634.885 kb; max: 139.92 kb) (Additional file 1: Fig. S4, Additional file 6), covering 0.62% of the genome. This is comparable to piRNA clusters annotated with the same approach in Ae. aegypti (Fig. 3a). In contrast, using the same annotation pipeline on the highly fragmented Ae. albopictus AaloF1 genome assembly, we recovered nearly twice as many (2467) but much smaller clusters (average size, 5.923 kb; SD, 306.239 kb; max, 64.225 kb) (Fig. 3a, b). Only a comparably small fraction (31.8% and 47.3%) of all piRNAs in the germline and soma, respectively, was included in piRNA clusters in AalbF2, while this fraction was nearly twice as large in Ae. aegypti (Fig. 3a). This is likely accounted for by the 14% duplication still present in the assembly, leading to the exclusion of piRNA clusters without or with only very few uniquely mapping piRNAs, the presence of which was used as a criterion to annotate piRNA clusters. Consequently, when only unambiguously mapping piRNAs are considered, the fraction of piRNAs included in clusters increases to 59.1% and 72.9% in germline and soma, respectively.
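A small sketch of how the fraction of cluster-derived piRNAs can be computed from mapped read positions and cluster intervals is given below; the interval representation and the toy data are assumptions for illustration, not the annotation pipeline itself.

```python
import bisect
from typing import Dict, List, Tuple

def fraction_in_clusters(clusters: Dict[str, List[Tuple[int, int]]],
                         reads: List[Tuple[str, int]]) -> float:
    """Fraction of mapped piRNA reads whose 5' position falls inside an
    annotated cluster. Clusters are non-overlapping, sorted (start, end)
    intervals per scaffold; reads are (scaffold, position) pairs."""
    starts = {scaf: [s for s, _ in ivals] for scaf, ivals in clusters.items()}
    inside = 0
    for scaf, pos in reads:
        ivals = clusters.get(scaf)
        if not ivals:
            continue
        i = bisect.bisect_right(starts[scaf], pos) - 1
        if i >= 0 and ivals[i][0] <= pos < ivals[i][1]:
            inside += 1
    return inside / len(reads) if reads else 0.0

# Toy data (not real annotations): two clusters and four mapped reads.
clusters = {"scaf1": [(1_000, 12_000), (50_000, 64_000)]}
reads = [("scaf1", 5_000), ("scaf1", 20_000), ("scaf1", 51_000), ("scaf2", 10)]
print(fraction_in_clusters(clusters, reads))  # 0.5
```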
The vast majority of clusters display piRNA expression biased towards one strand, and only approximately one fifth of all clusters are expressed from both strands (see exemplary clusters in Fig. 3c). Such dual-strand clusters were mostly expressed in the germline (Additional file 1: Fig. S4). Interestingly, relative piRNA expression from clusters varied substantially between somatic and germline tissues, with some clusters showing soma-dominant expression and others being predominantly expressed in the germline. Blood feeding had little impact on cluster expression. Analysis of publicly available small RNA libraries derived from the widely used Ae. albopictus C6/36 and U4.4 cell lines showed piRNA production from both somatic and germline clusters (Additional file 1: Fig. S4).
While piRNA clusters are highly enriched with transposable elements in fruit flies [31], this is not the case in Ae. aegypti mosquitoes [32], even though their genomic transposon content is much higher. Comparably, only a minority of Ae. albopictus piRNAs were derived from repetitive elements [29], and piRNA clusters were slightly depleted of all repetitive sequences except for helitrons and LTR retrotransposons (Fig. 3c). Interestingly, nrEVEs were enriched compared to the rest of the genome (Additional file 1: Fig. S3), and 138 out of 456 elements overlapped with piRNA clusters, suggesting strong evolutionary pressure to integrate viral sequences into piRNA clusters and/or to maintain nrEVEs in piRNA-producing loci.
miRNA annotation
Small noncoding RNA pathways contribute to important biological and cellular processes like development, differentiation, and immunity. MicroRNAs (miRNAs) are an endogenous class of small regulatory RNAs that are crucial for post-transcriptional regulation of gene expression [33]. MiRNAs are processed from precursor hairpin structures (pre-miRNAs) which are present in the genome as single-copy loci or, due to gene duplication, as multiple copies of the same miRNA. A comprehensive inventory of Ae. albopictus miRNAs is an important resource for investigating small RNA function in vector biology and mosquito antiviral immunity. The official depository of miRNA genes across all species, miRbase [34], does not currently include Ae. albopictus miRNAs. Therefore, to annotate miRNA genes in AalbF2, we used the miRDeep2 algorithm [35] on data from small RNA libraries as described above, comprising more than 23 million miRNA-sized 18-24-nt reads. The majority of reads were derived from carcass samples, which is expected as small RNA libraries prepared from ovary samples are more biased towards piRNAs. Initially, miRDeep2 predicted 473 pre-miRNA loci in AalbF2, which was reduced to 229 loci representing 121 distinct pre-miRNA species (Additional file 7) after manual inspection and application of stringent prediction criteria. Among these predictions, 92 represent miRNAs previously annotated in the Ae. aegypti genome, three were predicted based on conservation to miRNAs in other insect species, and 26 were entirely novel miRNA genes. Using these predictions, we characterized the expression of miRNAs in ovaries and carcasses and analyzed changes induced by blood feeding. We found that most highly abundant miRNAs show a similar expression pattern between ovaries and carcass (Fig. 3e). Yet, a group of miRNAs, including miR-92a/b, miR-309a, miR-989, miR-2941, miR-2946, and a newly predicted miRNA, miR-new5, were highly abundant (> 1000 reads per million miRNAs; rpmm) exclusively in the ovary samples (Fig. 3e). These findings are coherent with previous studies that identified the clustered miRNAs miR2941/2946 to be specifically expressed in Ae. aegypti ovaries [36]. miR-989 is known to be among the most abundant miRNAs in mosquito ovaries, both in Anopheline and Aedes spp. mosquitoes [37,38]. Similarly, miR-309 was found to be predominantly expressed in Ae. aegypti ovary tissue and was furthermore shown to be strongly induced upon blood feeding in both Aedes and Anopheles spp. mosquitoes [39,40]. When comparing sugar- and blood-fed Ae. albopictus, we observe a similar induction of miR-309a upon blood feeding (Fig. 3f). Likewise, miR-286b and miR-375, which we find to be strongly induced upon blood meal, have previously been shown to be upregulated after blood meal in Anopheles stephensi and Ae. aegypti, respectively [40,41], indicating that an orchestrated miRNA response to blood feeding is conserved between different mosquito species. We noted that most newly predicted miRNAs are predominantly expressed in ovary tissue (Fig. 3e), which likely reflects a sampling bias of previous studies that did not deep sequence and predict miRNAs from dissected ovary samples. Some of these predicted miRNA species are relatively highly abundant and are differentially expressed upon blood feeding, suggesting important functions in the physiological processes that are induced upon blood meal.

Fig. 3 Small non-coding RNA annotation in AalbF2. a Summary statistics on annotated piRNA clusters using the genome assemblies for Ae. aegypti AaegL5 and Ae. albopictus AaloF1 or AalbF2. b Size distribution of piRNA clusters annotated with the old AaloF1 assembly or the most recent AalbF2 assembly. The density plot shows the number of clusters normalized to the total number of piRNA clusters. c Enrichment of repeat classes or nrEVEs in piRNA clusters compared to the whole genome. d log2 piRNA coverage on an exemplary uni-strand (left panel) or dual-strand (right panel) piRNA cluster (given as piRNAs per million mapped reads [rpm]). Annotated genes are indicated with arrows, repeat features, and nrEVEs with gray or red boxes for positive or negative strands, respectively. e miRNA abundance in Ae. albopictus carcass and ovary samples. Counts for individual miRNAs were normalized to the total number of miRNAs in each dataset and expressed as log2-transformed reads per million miRNAs (rpmm) + 1. The mean of two independent libraries for each condition is shown. The highly abundant miR-34 and selected miRNAs with high expression in the ovary, but not carcass, are indicated. f Fold induction of miRNA levels in blood-fed ovary samples compared to sugar-fed samples. The basal expression of each miRNA in the sugar-fed samples is indicated in gray scale. Only miRNAs with an induction ≥ 5-fold are shown. Color coding in a and b represents the basis for the miRNA prediction, as indicated.
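As a concrete illustration of the rpmm normalization and the ≥ 5-fold induction filter described above, the sketch below normalizes toy miRNA counts and computes blood-fed versus sugar-fed ratios; the pseudocount and the example counts are assumptions, not values from the actual libraries.

```python
import math
from typing import Dict

def rpmm(counts: Dict[str, int]) -> Dict[str, float]:
    """Normalize raw miRNA read counts to reads per million miRNAs (rpmm)."""
    total = sum(counts.values())
    return {name: c * 1_000_000 / total for name, c in counts.items()}

def log2_rpmm_plus1(counts: Dict[str, int]) -> Dict[str, float]:
    """log2(rpmm + 1), the transformation used for the abundance heat map."""
    return {name: math.log2(v + 1) for name, v in rpmm(counts).items()}

def fold_induction(blood_fed: Dict[str, int], sugar_fed: Dict[str, int],
                   pseudocount: float = 1.0) -> Dict[str, float]:
    """Blood-fed / sugar-fed ratio of rpmm values, with a pseudocount to
    avoid division by zero for miRNAs absent in one condition."""
    bf, sf = rpmm(blood_fed), rpmm(sugar_fed)
    return {m: (bf.get(m, 0) + pseudocount) / (sf.get(m, 0) + pseudocount)
            for m in set(bf) | set(sf)}

# Toy counts (illustrative only): miR-309a strongly induced by blood feeding.
sugar = {"miR-34": 50_000, "miR-309a": 200, "miR-989": 10_000}
blood = {"miR-34": 48_000, "miR-309a": 9_000, "miR-989": 12_000}
print(round(log2_rpmm_plus1(sugar)["miR-34"], 2))
induced = {m: f for m, f in fold_induction(blood, sugar).items() if f >= 5}
print(induced)
```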
Curation of immunity repertoire
The capacity of mosquitoes to acquire, disseminate, and transmit viruses (i.e., vector competence) is a complex phenotype which is controlled by genetic elements of both the vector and the pathogen, as well as environmental variables [42]. Understanding the complex relationship between vectors and pathogens requires understanding innate immunity in mosquitoes. To catalog genes encoding the immune repertoire of Ae. albopictus, we searched with BLASTp the predicted peptides of the AalbF2 assembly using as a query 417 manually curated proteins of Ae. aegypti from ImmunoDB [43]. We combined phylogenetic comparisons and manual annotation to curate 663 putative immune-related genes encoding 979 predicted proteins, belonging to 27 functional groups ( Table 2, Additional file 8). This value is in line with that estimated in AaloF1 (521 genes), confirming the finding that the immune repertoire of Ae. albopictus is larger than that of other dipteran species [3,43]. A manual inspection of the 663 putative immune-related genes using our 5′ and 3′ flanking region pipeline identified a set of 78 suspicious genes that are distributed in half of the immune gene families (Table 2 and Additional file 8), reducing the total number of predicted immune genes to 622.
Immune system functions can be broadly categorized into three main phases: recognition, signal transduction, and effectors [43][44][45]. A detailed analysis of the immune repertoire of Ae. albopictus revealed extensive expansions in 16 of the 27 functional groups relative to Ae. aegypti. In the Toll and IMD pathways, genes involved in recognition and Toll-1/Spz signal transduction show expansion, whereas immune effectors do not display similar family-wide augmentations. Interestingly, while five cecropin (CEC) genes are known in Ae. aegypti, we only identified a single CEC gene in the new assembly. We found expansions in families involved in all immune phases of the melanization pathway [46]. The most extreme expansion event concerns the CLIP family of regulators, with 118 members compared to 67 and 56 genes reported for Ae. aegypti and An. gambiae, respectively (Table 2). Another interesting case involves the prophenoloxidase (PPO) gene family, which in Ae. aegypti includes six tandemly arrayed genes, namely PPO4, PPO8, PPO7, PPO5, PPO1, and PPO2. We found that the entire cluster of six genes has been locally duplicated twice in Ae. albopictus, resulting in 18 genes (Fig. 1d, Additional file 2: Table S8). We confirmed this triplication of the clusters using in situ hybridization (Fig. 1b). PPOs are enzymes that catalyze the production of melanin in response to infection [47]. Expansion of PPO genes is not common in insects [48], but in mosquitoes the number of genes is higher than in other insects. The high conservation of the PPO organization and order in the array in both Ae. aegypti and Ae. albopictus strongly suggests that these duplications are ancient events that occurred before the split between the two species, estimated at 71.4 Mya [3]. Future studies focusing on dissecting the functional importance of specific family expansions in Ae. albopictus may determine their significance for its biology, including vector competence and ecological adaptation.
The sex-determining M locus
In both Ae. aegypti and Ae. albopictus, sex is determined by a male-determining locus (M locus) that resides on one homolog of chromosome 1. Nix, the dominant male-determining factor, was first discovered in the M locus of Ae. aegypti [49]. We searched AalbF2 for nix and located it in an approximately 917 kb scaffold (NW_021838423.1). The nix sequence is male-specific as indicated by the chromosome quotient analysis [50] using Illumina reads obtained from male and female mosquitoes of the Foshan strain [11]. A part of the nix gene was previously identified in Ae. albopictus [49,51], and its full-length sequence was described in the assembly of the Ae. albopictus C6/36 cell line [52]. The nix gene in the AalbF2 assembly is annotated as having two exons flanking a small intron (XM_019669557.1), similar to a previous report [5]. However, there is an apparently defective copy of nix approximately 22 kb away from XM_019669557.1. This copy does not have an intact open reading frame, and its fragments showed up to 70% amino acid identity to XM_019669557.1 (Additional file 1: Fig. S5). Such a duplication has not been reported in Ae. aegypti [49]. A second gene encoding a myosin heavy chain protein named myo-sex [53] has also been shown to be located in the M locus, together with nix, in Ae. aegypti [5]. Myo-sex is required for male flight in Ae. aegypti [54]. A myo-sex homolog (XM_019707039.1 or XP_019562584.1; Additional file 1: Fig. S5) has been found in two separate contigs (NW_021838603.1 and NW_021838542.1). It is not yet clear whether the gene that encodes XP_019562584.1 is also located in the M locus in Ae. albopictus, as the chromosome quotient analysis [50] was complicated by the presence of highly similar autosomal paralogs (e.g., AALF000603 and XP_019560880).
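The chromosome quotient (CQ) logic referred to above can be sketched as follows, assuming per-sequence alignment counts from female and male read sets are already available; the cutoff of 0.05 and the toy counts are illustrative assumptions.

```python
from typing import Dict

def chromosome_quotient(female_hits: Dict[str, int],
                        male_hits: Dict[str, int]) -> Dict[str, float]:
    """CQ = female alignment count / male alignment count per sequence.

    Sequences present only on the male-determining (M) chromosome attract
    essentially no female reads, so their CQ approaches zero; autosomal
    sequences have CQ near 1 when male and female read sets are balanced.
    """
    cq = {}
    for seq, male in male_hits.items():
        if male == 0:
            continue  # cannot form a quotient without male coverage
        cq[seq] = female_hits.get(seq, 0) / male
    return cq

# Toy counts (illustrative, not Foshan-strain data).
female = {"nix_region": 3, "autosomal_gene": 1_050}
male = {"nix_region": 480, "autosomal_gene": 1_000}
cq = chromosome_quotient(female, male)
male_specific = [s for s, q in cq.items() if q < 0.05]
print(cq, male_specific)
```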
Genome-wide polymorphism and linkage disequilibrium
The level of genetic variability among populations of a given species is the substrate for evolution, which, for an invasive vector species like Ae. albopictus, includes processes of adaptation to new ecological settings, selection of resistance alleles against control tools (i.e., insecticides), and co-evolution with pathogens [55][56][57]. These are biological features important for estimating the epidemiological relevance of Ae. albopictus populations and for the design of novel genetic-based strategies of vector control [42,58]. As for the analyses of the landscape of viral integrations, we used whole-genome sequencing (WGS) data of mosquitoes from Tapachula and Tampon [12] to show the usefulness of AalbF2 in understanding the genomic diversity of Ae. albopictus populations. The genetic diversity (π) estimates for the laboratory strain are lower than those for the wild populations, which is consistent with the hypothesis of a population bottleneck in the laboratory strain (Fig. 4a). Genetic diversity is slightly higher for the invasive Mexican population than for the old population from La Reunion. Global estimates of genetic differentiation (F ST ) among the three samples range from 0.13 to 0.21, with Foshan being the most differentiated (Fig. 4b). Sliding-window analyses across the genome showed regions of high and low genetic differentiation between the two wild populations (Fig. 4c) and varying levels of genetic diversity for the two wild populations and the Foshan strain (Fig. 4d). We also derived estimates of linkage disequilibrium (LD). Across the three samples studied, the distance at which r² decays to half of its maximum value (r²Max/2) is approximately 1.3 kb (Fig. 4e). These estimates are strikingly smaller than the estimated values for Ae. aegypti, which range between 34 and 101 kb [5]. While the comparison of these LD estimates may be complicated by differences in data collection platforms (WGS for Ae. albopictus and SNP-chip for Ae. aegypti), the striking difference may reflect the different colonization histories of Ae. aegypti and Ae. albopictus populations [22,59]. Aedes aegypti experienced a slow colonization process that started in the seventh century, compared to the quick dispersal of Ae. albopictus over the past 50 years, which resulted in genetic admixture among the invasive populations [23,24,60]. The age of mutations can also affect LD, with younger mutations giving higher LD values; it is possible that SNP-chip data and WGS data differ in the average age of the mutations they capture, as SNPs are estimated across the whole genome without prior ascertainment in WGS approaches [61,62]. The improved continuity of AalbF2 enhances our ability to understand the spatial context of genetic signals and long-range patterns.

Fig. 4 Genome-wide polymorphism in Aedes albopictus. a Mean nucleotide diversity. b Global F ST ; the mean and standard deviation were calculated from sliding-window analysis. c F ST estimates between the two wild populations, Tapachula (green) and Tampon (blue). F ST estimates were measured across the whole genome with sliding windows of 50 kb and 10-kb steps. Scaffolds that have been assigned to chromosomes 1, 2, and 3 are on the left side of the plot. The remaining unassigned scaffolds are shown on the right side. The unassigned scaffolds were placed in alphabetical order from left to right. d Overview of the pattern of nucleotide diversity across the genome. Nucleotide diversity was measured using 50-kb-long sliding windows and 10-kb steps. e Linkage disequilibrium (r²) for two wild populations and a laboratory strain. Red, green, and blue lines represent the fitting curves estimated with the ngsLD package, and shaded areas around the lines represent confidence intervals from 100 bootstraps.
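As a simple illustration of the sliding-window diversity scan shown in Fig. 4c, d (50-kb windows, 10-kb steps), the sketch below averages per-site π values in overlapping windows along one scaffold; the input format and the random toy data are assumptions for demonstration.

```python
import numpy as np

def sliding_window_pi(positions: np.ndarray, per_site_pi: np.ndarray,
                      seq_length: int, window: int = 50_000, step: int = 10_000):
    """Average per-site nucleotide diversity in sliding windows.

    `positions` are variant coordinates on one scaffold and `per_site_pi`
    their per-site diversity values; monomorphic sites contribute zero.
    Returns (window_start, mean_pi) pairs, dividing by the window length.
    """
    results = []
    for start in range(0, max(1, seq_length - window + 1), step):
        mask = (positions >= start) & (positions < start + window)
        results.append((start, per_site_pi[mask].sum() / window))
    return results

# Toy example: 200 variant sites scattered over a 200-kb scaffold.
rng = np.random.default_rng(0)
pos = np.sort(rng.integers(0, 200_000, size=200))
pi = rng.uniform(0.1, 0.5, size=200)
for start, value in sliding_window_pi(pos, pi, 200_000)[:3]:
    print(start, round(value, 6))
```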
Developmental transcriptional profile
Understanding the network of expression throughout development could provide insights into biological functions implicated in the adaptation of this invasive species to different environments and, coupled with the ability to manipulate genes and their expression, could provide the basis for studying gene function. Additionally, cis-regulatory elements that guide expression in a tissue- or time-specific manner could be identified from the analyses of transcriptional profiles and be co-opted in novel genetic-based strategies of vector control. AalbF2 and its predicted gene models served as the basis to establish a comprehensive global view of gene expression dynamics throughout Ae. albopictus development, taking advantage of recently produced Illumina RNA sequencing (RNA-seq) data from 47 unique samples representing 34 distinct stages of mosquito development [63] (Fig. 5a). These RNA-seq data amounted to 1.56 billion reads, corresponding to a total sequence output of 78.19 Gb (Additional file 9). A total of 94.1% of the reads were mapped to AalbF2. The number of spliced alignments increased substantially, from 39,991,260 in the assembly of the C6/36 Ae. albopictus cell line (canu_80X_arrow2.2, 17) to 56,243,825 in AalbF2 (40.64% increase), again confirming a more complete annotation in AalbF2 (Fig. 5b). The number of uniquely mapped reads also increased significantly, likely due to the removal of extensively duplicated regions found in the C6/36 assembly [52]. The analyses of gene expression profiles across all developmental time points showed that the number of expressed genes (transcripts per million ≥ 1) gradually increases through embryogenesis, reaching its highest peak at 68-72 h (Additional file 10). As previously observed, there is an increase in the number of expressed genes during the early pupal stages, and the male germline expresses the highest number of genes among all samples [63]. After a blood meal, female mosquitoes undergo a series of physiological changes to support oogenesis. In PBM ovaries, the number of genes expressed in the female germline changes dramatically from 12 to 36 h.
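For reference, a minimal sketch of the TPM calculation behind the "expressed gene" threshold (TPM ≥ 1) is given below; the counts and gene lengths are placeholders, not data from the developmental RNA-seq samples.

```python
import numpy as np

def tpm(counts: np.ndarray, lengths_kb: np.ndarray) -> np.ndarray:
    """Transcripts per million from raw read counts and gene lengths (kb)."""
    rpk = counts / lengths_kb              # reads per kilobase
    return rpk / rpk.sum() * 1_000_000     # scale so the sample sums to 1e6

# Toy data for one sample (illustrative counts and gene lengths).
counts = np.array([0, 10, 250, 3_000, 120_000], dtype=float)
lengths_kb = np.array([1.2, 0.8, 2.5, 1.0, 3.0])

values = tpm(counts, lengths_kb)
expressed = int((values >= 1).sum())   # "expressed" threshold of TPM >= 1
print(values.round(2), expressed)
```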
Pairwise correlation analysis revealed that almost every developmental stage is most highly correlated with its adjacent stage and is very similar to what was previously found (Additional file 1: Fig. S6) [63]. To visualize the various patterns of gene expression and the relationships between the samples, hierarchical clustering and principal component analyses were performed (Additional file 1: Fig. S6).
Based on these analyses, embryos, PBM ovary, pupa, larva, and PBM female carcass samples tend to cluster closer together, which is expected since their gene expression profiles are similar, as these are developmentally related samples. Two notable exceptions are the male testes and the early embryos (0-1 h, 0-4 h, and 4-8 h), the latter likely due to transcripts related to the maternal-to-zygotic transition (Additional file 1: Fig. S6). The male testes sample clusters away from all other samples, reflecting the distinct transcriptional profile of this sample relative to all others sequenced (Additional file 1: Fig. S6).
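A minimal sketch of the sample-level correlation, hierarchical clustering, and PCA described above is given below; the choice of SciPy/scikit-learn and of log2(TPM + 1) as input is ours, since the text does not state the exact tooling.

```python
# Sketch: pairwise sample correlation, hierarchical clustering and PCA of
# developmental RNA-seq samples from a log-transformed TPM matrix.
# Library choices are assumptions; the input file name is illustrative.
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform
from sklearn.decomposition import PCA

tpm = pd.read_csv("aalbf2_tpm_matrix.tsv", sep="\t", index_col=0)
log_tpm = np.log2(tpm + 1)

# Pairwise Pearson correlation between samples (adjacent stages should correlate most)
sample_corr = log_tpm.corr(method="pearson")

# Hierarchical clustering of samples on (1 - correlation) distance
condensed = squareform(1 - sample_corr.values, checks=False)
link = linkage(condensed, method="average")
dendrogram(link, labels=sample_corr.columns.tolist(), no_plot=True)

# PCA with samples as observations and genes as features
pca = PCA(n_components=2)
coords = pca.fit_transform(log_tpm.T.values)
print("Variance explained:", pca.explained_variance_ratio_)
```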
Discussion
AalbF2 and its associated gene set, databases of nrEVEs, miRNAs, and piRNA clusters are collective resources that will enable great advances in Ae. albopictus biology. Additionally, we developed the first physical map of Ae. albopictus, which consists of fifty DNA markers that cover the largest genomic scaffolds, rDNA, PPO gene clusters, and the largest viral integration in the genome. Overall, FISH data were consistent with the assembled genome, confirming its large-scale structural accuracy. Combining in situ and bioinformatic approaches, we anchored 58 scaffolds, whose summed length makes up 75% of the genome, to the Ae. albopictus chromosomes. Analyses of mitotic chromosomes also showed that the Ae. albopictus chromosomes are slightly longer than those of Ae. aegypti, which is consistent with cytofluorimetry results.
Small RNA analyses identified 121 miRNAs including 26 novel miRNAs, some of which are strongly induced upon blood feeding, suggesting important functions for these miRNAs in reproduction and development. piRNA cluster annotation has provided a high-confidence set of piRNA clusters, setting the stage for their inactivation or modification to understand their functions and to explore avenues to exploit them to prevent arbovirus transmission. Moreover, the strong enrichment of newly annotated nrEVE sequences in piRNA clusters provides fuel for the hypothesis that they may provide a potential inherited antiviral defense system [17,18,28]. Curation of immunity gene annotation, among the predicted 26,856 protein-coding sequences and the M locus, will enable insights into the immunity pathways that contribute to Ae. albopictus vector competence and provide avenues for novel genetic-based strategies of control, including those for population suppression based on gene drive systems creating male-biased populations [64]. The developmental transcriptome analysis described here demonstrates that the new genome assembly has produced a significantly more complete gene set with fewer gene duplications as compared to the previously available genome. The quantification data across developmental time points and multiple tissues will provide the community with an invaluable resource for further exploration of Ae. albopictus biology.
Mosquito samples and DNA preparation
Aedes albopictus mosquitoes of the Foshan strain are reared at the insectary of the University of Pavia as previously described [11]. We performed a single pair cross between a male and a female individual; from the progeny of this cross, we randomly picked a male and a female and made them mate. We repeated this procedure for six generations, after which we let the progeny of a single-pair mating interbreed. We used pupae from within the 2nd to 3rd generation of the inbred single pair for high-molecular-weight (HMW) DNA extraction.
[Fig. 5 caption fragment] ...(1st, 2nd, 3rd, and 4th instar larva stages) and P (yellow; pupae, early male and female, and late male and female pupa stages). b Read mapping analysis of Ae. albopictus developmental samples against the C6/36 cell line assembly (canu_80X_arrow2.2) and AalbF2 genome assemblies. The distribution reflects the percentage of fragments mapped to too many loci (maroon), fragments mapped to multiple loci (blue), and uniquely mapped fragments (dark blue). There is a significant reduction of duplication in the AalbF2 genome assembly compared to the C6/36 cell line (canu_80X_arrow2.2) genome. More transcripts fell under the uniquely mapped category in the AalbF2 genome.
We also used DNA from two wild populations, one from the city of Le Tampon on the Indian Ocean island of La Reunion, and one from Jardin Pantheon city in Mexico, North America. Whole genomic DNA from individual adult mosquito samples was extracted using the QIAGEN Blood and Tissue kit (Qiagen, Hilden, Germany) following the manufacturer's instructions. PCR-free sequencing libraries were prepared using a custom pipeline including the TruSeq DNA PCR-Free kit (Illumina) [65]. Samples were sequenced on the Illumina HiSeqX sequencing platform pooling 32 samples per flowcell at Verily Life Sciences (South San Francisco, CA), resulting in an average of 100 million 150-bp reads per sample as previously described [12].
Flow cytometry
The genome size of Ae. albopictus mosquitoes from different strains was estimated by flow cytometry as previously described [66]. Briefly, the nuclei were released from the heads of a mosquito and a Drosophila virilis standard (1C = 328 Mb) in 1 ml of cold Galbraith buffer using 15 strokes of a pestle in a 2-ml Kontes Dounce tissue grinder. The released nuclei were filtered through 40-μm nylon mesh, stained with 25 μl of 1 mg/ml propidium iodide, allowed to stain for 3 h in the cold and dark, and then scored for relative red (PI) fluorescence using a CytoFLEX flow cytometer. The 1C genome size of the sample was estimated as the ratio of the relative fluorescence of the 2C peaks of the sample and standard, multiplied by the 1C amount of DNA in the standard. A minimum of 1000 nuclei were scored under each peak. All scored peaks were symmetric with a CV below 2.0.
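The ratio calculation described above reduces to a one-line formula; the sketch below shows it with the D. virilis 1C value of 328 Mb from the text and illustrative peak fluorescence values.

```python
# Sketch: 1C genome-size estimate from flow cytometry, following the ratio method
# described above (sample 2C peak fluorescence vs. the Drosophila virilis standard,
# 1C = 328 Mb). Peak values are illustrative, not measured data.
def genome_size_1c(sample_2c_fluor, standard_2c_fluor, standard_1c_mb=328.0):
    """Return the sample 1C genome size in Mb."""
    return (sample_2c_fluor / standard_2c_fluor) * standard_1c_mb

# Example: a mosquito 2C peak sitting ~3.7x higher than the D. virilis 2C peak
print(f"Estimated 1C size: {genome_size_1c(3.7, 1.0):.0f} Mb")
```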
Pacific Biosciences library construction and sequencing
HMW DNA extraction for Pacific Biosciences sequencing was performed by the Berkeley genome facility, which also built and sequenced the libraries. To obtain HMW DNA, fresh frozen pupae (around 80 male sibling pupae) were ground using a Pyrex mortar in 2 ml ATL with 4 μl RNase. Samples were then incubated at 37°C for 30 min with a parafilm cover and gentle agitation (300 rpm). After the addition of 100-200 μl proteinase K, samples were incubated overnight at 37°C. DNA was then purified from proteins using a standard phenol-chloroform extraction protocol, followed by precipitation in 100% ice-cold ethanol. DNA was then washed with 70% ethanol at room temperature (max speed spins, 10 min). Purified DNA was resuspended in elution buffer with no EDTA, and samples were left rotating slowly overnight at 4°C to resuspend.
In situ hybridization and physical map construction
We developed a new mapping approach based on the amplification of DNA probes using cDNA instead of bacterial artificial chromosome (BAC) clones. DNA probes derived from the largest genomic scaffolds, 18S rDNA, PPO genes, and Canu-Flavi19 were mapped to the chromosomes using FISH. To identify genes that could be used for the physical mapping of the Ae. albopictus genome, transcripts of Ae. albopictus C6/36 cell lines were aligned against AalbF2 [52]. DNA fragments were amplified by PCR using a Q5 high-fidelity DNA polymerase (New England Biolabs, Ipswich, MA, USA); cDNA or genomic DNA fragments were used as templates to amplify transcript fragments or large exons, respectively. RNA was obtained from mosquito ovaries following the Zymo Research Direct-Zol DNA/RNA mini prep protocol (Zymo Research Corporation, Irvine, CA, USA). cDNA was synthesized using ~200 ng RNA and primed with oligo(dT) following the Thermo Fisher Scientific SuperScript III first-strand synthesis system protocol (Thermo Fisher, Asheville, NC, USA). Laboratory protocols for performing preparations and principal steps for FISH have been described earlier [71,72]. Transcript fragments or large exons with a minimal length of 3.8 kb were used as probes for FISH. PCR-amplified DNA was labeled with the two fluorescent dyes Cy3- or Cy5-dUTP (Enzo Life Sciences, Farmingdale, NY, USA) by nick-translation. A pair of DNA probes was hybridized simultaneously to the chromosomes [71,73]. Slides of mitotic chromosomes were prepared from imaginal discs of 4th instar larvae from the Foshan strain following the published protocols [71,72,74]. Chromosomes were stained with a YOYO-1 dye (Thermo Fisher, Asheville, NC, USA), and slides were mounted with ProLong Gold reagent (Thermo Fisher, Asheville, NC, USA). FISH results were analyzed using a Zeiss LSM 880 laser scanning microscope (Carl Zeiss Microscopy, LLC, White Plains, NY, USA) at ×600 magnification. Chromosome idiograms were developed using previously described protocols [72,74]. Chromosome proportions, such as relative chromosome length and centromeric index (relative length of the p arm), were calculated based on measurements of 60 chromosomes. The statistical analysis was performed using the JMP Pro 15 software program at 95% confidence intervals [75]. One-way ANOVA was used to calculate P values for comparisons of chromosome proportions between Ae. albopictus and Ae. aegypti. Chromosomes were subdivided into 96 bands with 4 different intensities.
Pair-wise comparison between Aedes aegypti chromosomes and Aedes albopictus scaffolds
The Ae. aegypti AaegL5 genome assembly was downloaded from VectorBase (https://www.vectorbase.org/). The first 58 scaffolds of AalbF2 (corresponding to the L75 of the assembly) were aligned to Ae. aegypti chromosomes with minimap2 [67]. Only hits with a percentage of identity higher than 40% were retained. Alignment results were summarized and visualized as a comparative genome dot plot using D-GENIES (http://dgenies.toulouse.inra.fr/).
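A minimal sketch of the 40% identity filter applied to the minimap2 alignments is shown below; it assumes PAF output and approximates identity as residue matches over alignment block length, which is an assumption rather than the exact definition used above.

```python
# Sketch: retain minimap2 hits with > 40% identity before plotting in D-GENIES.
# Identity is approximated as residue matches / alignment block length from
# PAF columns 10 and 11; the file name is illustrative.
def filter_paf_by_identity(paf_path, min_identity=0.40):
    kept = []
    with open(paf_path) as paf:
        for line in paf:
            fields = line.rstrip("\n").split("\t")
            matches, block_len = int(fields[9]), int(fields[10])
            if block_len > 0 and matches / block_len > min_identity:
                kept.append(fields)
    return kept

hits = filter_paf_by_identity("aalbf2_vs_aaegl5.paf")
print(f"{len(hits)} alignments above the identity threshold")
```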
Identification of Aedes albopictus nrEVE
The AalbF2 genome assembly was screened for integrations from nonretroviral RNA viruses using a BLAST-based approach [76]. To this purpose, a database of viral proteins was created. The database included all complete amino acid sequences belonging to ssRNA, dsRNA, and unclassified RNA viruses with a tropism for vertebrates present in the NCBI RefSeq database as of August 2018 (Additional file 4). The database was updated to include the Xinmoviridae and Phenuiviridae families. Candidate viral integrations were identified using the AalbF2 genome assembly as a query against the viral database and running the BLASTx [76] algorithm with an e value threshold of 1e−6. Resulting hits were merged and refined with the EveFinder pipeline [17]. Putative viral integrations were blasted against all proteins available in the NCBI RefSeq and nonredundant (NR) databases, and a custom pipeline was used to recognize and remove false positives, including sequences with certain homology to eukaryotic proteins. Additionally, viral integrations closer than 100 bp and derived from the same viral species were joined. Each viral integration was assigned to a viral family based on its most similar virus in the NR database. The upstream and downstream 1-kb regions of each viral integration were inspected for repeated elements using a custom script based on BLASTn and a database of Ae. albopictus repeats predicted using RepeatModeler with default settings (http://www.repeatmasker.org/RepeatModeler/). This database was used to run RepeatMasker (http://www.repeatmasker.org) with default parameters to find and classify TEs (Additional file 2: Table S2).
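The joining rule for candidate integrations (same viral species, gap below 100 bp) can be illustrated with a small interval-merging sketch; the input format is a simplified stand-in for the real EveFinder output, not the pipeline itself.

```python
# Sketch: merge candidate viral-integration hits that lie on the same scaffold,
# derive from the same viral species and are separated by < 100 bp, as described above.
from collections import defaultdict

def merge_hits(hits, max_gap=100):
    """hits: list of (scaffold, start, end, virus_species). Returns merged intervals."""
    grouped = defaultdict(list)
    for scaffold, start, end, species in hits:
        grouped[(scaffold, species)].append((min(start, end), max(start, end)))
    merged = []
    for (scaffold, species), ivals in grouped.items():
        ivals.sort()
        cur_start, cur_end = ivals[0]
        for start, end in ivals[1:]:
            if start - cur_end < max_gap:          # join hits closer than 100 bp
                cur_end = max(cur_end, end)
            else:
                merged.append((scaffold, cur_start, cur_end, species))
                cur_start, cur_end = start, end
        merged.append((scaffold, cur_start, cur_end, species))
    return merged

# Illustrative hits: 50 bp apart, same species, so they get joined
example = [("NW_1", 1000, 1600, "Flavivirus sp."), ("NW_1", 1650, 2200, "Flavivirus sp.")]
print(merge_hits(example))
```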
Whole-genome sequencing (WGS) data of mosquitoes from La Reunion and Mexico were analyzed with VyPER [20], followed by custom scripts, to check for the presence of additional viral integrations beyond those characterized in AalbF2. Bioinformatic predictions of each novel viral integration were molecularly tested by PCR using specific primers (Additional file 2: Table S7).
The correspondence between AaloF1 and AalbF2 viral integrations was analyzed using a BLASTn-based script (Additional file 2: Table S5). Viral integrations annotated in AalbF2 were also used to test whether the haplotig purging pipeline effectively moved alternative haplotypes to the secondary assembly. BLASTn was used to find hits in the secondary assembly for each viral integration, with the exception of the unclassified and Chuviridae-like nrEVEs, which are too redundant among themselves to provide reliable results. Hits were retained when at least 98% of the query length was present with a minimum percentage of identity of 95% (Additional file 2: Table S6).
piRNA cluster annotation
One-week-old female mosquitoes from the Foshan strain were provided with a 2-ml rabbit blood meal at the Institut Pasteur (Paris). A total of 60 fully engorged females were kept at 28°C and fed on a 10% sucrose solution ad libitum. Thirty females were collected at 14 and 21 days post-blood meal (PBM). Each female was dissected, the ovaries were removed from carcasses, and both ovaries and carcasses were pooled into two groups of 15 for each time point. In parallel, 60 mosquitoes were kept on a sugar diet under the same conditions and sampled as described before. Total RNA was extracted from each pool using the NucleoSpin miRNA kit (Macherey-Nagel) following the manufacturer's instructions. Extracted RNA was sent to the Beijing Genomics Institute (BGI) for sequencing. Total RNA was used for custom DNBseq library preparation and sequenced on a BGISEQ-500 to obtain 40 million SE50 (single-end, 50-bp) reads per sample. Small RNA sequencing data were deposited to the SRA (BioProject PRJNA607026).
Ambiguous (multi-mapping) reads from the small RNA-seq libraries described above were either randomly distributed over all possible mapping positions (--best --strata -M 1) or, alternatively, ambiguous mapping reads were excluded (-m 1) to obtain all uniquely mapping reads unambiguously assigned to single piRNA loci. For piRNA cluster annotation, reads in the size range from 25 to 30 bp were normalized to one million mapped piRNAs [ppm] to account for the lower amount of piRNAs relative to other sRNA classes in somatic tissues compared to the germline, and piRNAs were trimmed to their 5′ terminal nucleotide. Clusters were annotated similarly to the approach used in fruit flies [31], optimizing minimal requirements for a larger and more repetitive genome like that of Aedes albopictus (Additional file 1: Fig. S4). Briefly, the genome was scanned with non-overlapping 5-kb windows; windows with 10 or more ppm and a maximum distance of 5 kb were merged into a cluster. Clusters were then filtered for being covered by at least 5 unique ppm, mapping to at least 5 different positions. Borders of the clusters were defined by the two furthest piRNAs, and clusters that were either very small (< 1 kb) or large but only covered by few piRNAs (piRNA density < 10 ppm/kb) were excluded. We performed separate annotations for germline and soma to avoid averaging out clusters that are only expressed in one but not the other tissue and that might fall below some of the set thresholds. The final dataset of piRNA clusters was obtained by merging the two datasets, and two clusters that were exclusively determined by rRNA reads were manually excluded from the list. piRNA cluster annotation was solely guided by piRNA coverage of the respective genomic regions but did not include assumptions on nucleotide biases or strand asymmetry, as used for example for the annotation of piRNA clusters in Aedes aegypti Aag2 cells [17], as culicine mosquitoes encode developmentally relevant piRNAs without 1U bias [77].
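The window-based cluster definition above can be summarized in a short sketch; it implements the thresholds quoted in the text (5-kb windows, ≥ 10 ppm, 5-kb merge distance, ≥ 5 unique ppm, ≥ 1 kb length, ≥ 10 ppm/kb density) but simplifies border refinement and the "at least 5 distinct positions" check, so it is an outline rather than the exact pipeline.

```python
# Sketch of the piRNA cluster-calling logic described above. Input structures are
# simplified assumptions: dicts keyed by (scaffold, window_start) holding normalized
# piRNA 5'-end counts (ppm) for all reads and for uniquely mapping reads.
def call_pirna_clusters(window_ppm, unique_ppm, window_size=5_000,
                        min_window_ppm=10, max_merge_gap=5_000,
                        min_unique_ppm=5, min_length=1_000, min_density=10):
    clusters = []
    # 1. Windows passing the ppm threshold, merged when within 5 kb on the same scaffold
    passing = sorted(k for k, v in window_ppm.items() if v >= min_window_ppm)
    current = None
    for scaffold, start in passing:
        if current and current[0] == scaffold and start - current[2] <= max_merge_gap:
            current = (scaffold, current[1], start + window_size)      # extend cluster
        else:
            if current:
                clusters.append(current)
            current = (scaffold, start, start + window_size)
    if current:
        clusters.append(current)

    # 2. Filter: unique-mapper support, minimal size, piRNA density
    kept = []
    for scaffold, start, end in clusters:
        windows = range(start, end, window_size)
        uniq = sum(unique_ppm.get((scaffold, w), 0) for w in windows)
        total = sum(window_ppm.get((scaffold, w), 0) for w in windows)
        length_kb = (end - start) / 1_000
        if uniq >= min_unique_ppm and (end - start) >= min_length and total / length_kb >= min_density:
            kept.append((scaffold, start, end))
    return kept
```

Separate germline and somatic annotations would simply be two calls of this function on the respective window dictionaries, with the resulting cluster lists merged afterwards.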
The expression of clusters was confirmed and quantified using small RNA libraries not used for the initial cluster annotation. Expression was normalized to one million mapped piRNAs to compare somatic and germline tissues with different proportions of piRNAs among total small RNAs, or to million mapped small RNAs to plot coverage of the clusters. Enrichment of repeat classes was calculated as the quotient of the genomic fraction of nucleotides annotated with the respective repeat in clusters compared to the whole genome.
miRNA predictions and expression analysis
Small RNA libraries from samples collected 14 days PBM were mapped to the new AalbF2 assembly or, alternatively, the previous AaloF1 assembly with bowtie (v1.2.2) [78] without allowing mismatches. Mapped small RNA reads were size selected for 18-24 nucleotides, converted into a single concatenated fasta file comprising 23,644,778 reads and used as input for the mapping module of miRDeep2 [35]. The program (Galaxy version 2.0.0.8) was accessed through the Mississippi Galaxy instance available at http://mississippi.fr and the settings were -k 19 -m -p -r 100. The obtained output files in fasta and ARF format were used as input for the miRDeep2 module together with a list of known precursor and mature miRNA sequences from the Ae. aegypti genome, downloaded in fasta format from miRBase on 28 May 2019 [34]. In addition, precursor miRNAs from Culex quinquefasciatus, Anopheles gambiae, Drosophila melanogaster, Apis mellifera, and Bombyx mori were used as input. All other settings were left as default, and a detailed fasta output was requested. The resulting tabular output file was split into three lists: (1) known and predicted miRNAs based on the Ae. aegypti reference datasets, (2) known but unpredicted miRNAs, and (3) novel predicted miRNAs, which include predictions supported by the reference data from other insect species provided as well as entirely new predictions. The list of known miRNAs was inspected for miRNA predictions in which the known 3′ miRNA was mapped on a 5′ arm of a putative hairpin and vice versa. These isoforms generally had very low miRDeep scores compared to the true copy (3p miRNA mapped on 3′ arm and/or 5p miRNA mapped on 5′ arm) and were manually deleted from the list. From the list of known but unpredicted miRNAs, only predicted miRNAs that were supported by at least 10 mature miRNA counts were considered. Their genomic position is not provided by the miRDeep2 program and was determined using the NCBI BLASTn algorithm with the pre-miRNA sequences from the Ae. aegypti genome and the AalbF2 assembly as query and subject inputs, respectively. The list of novel miRNA predictions was manually curated using the following stringent parameters [79]. More than 80% of the mature miRNAs were required to have the same 5′ end on the precursor. More than 80% of the predicted miRNA star reads were required to start and end at nucleotide positions predicted to give rise to a characteristic Drosha/Dicer product, allowing a margin of ± 1 bp at both the 5′ and 3′ ends. Predictions that were not supported by any predicted miRNA star read were excluded unless the precursor showed high similarity to a known insect miRNA and was supported by > 1000 mapped reads. Precursors with > 1000 BLAST hits were also excluded. miRNA expression analysis was performed in the public server of the Galaxy toolshed [80] using small RNA datasets as described above. Small RNAs were mapped to the AalbF2 assembly and their genomic positions were intersected with the location of known and predicted pre-miRNAs obtained from the miRDeep2 analysis using BEDtools intersect intervals (Galaxy Version 2.29.0; settings: *same* strand, -wo, -abam). The obtained output was filtered for an overlap of small RNA reads and miRNA precursors of at least 18 bp and no more than 24 bp. The occurrence of each pre-miRNA was then counted, and raw counts were exported to Microsoft Excel. The read count per pre-miRNA was normalized to the total number of miRNA reads in each dataset and expressed as reads per million miRNAs (RPMM).
Where indicated, counts were transformed to log2(RPMM + 1). Expression data were plotted in GraphPad Prism.
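A minimal sketch of the RPMM normalization and log2 transform described above is shown below; the example counts and miRNA names are invented for illustration (the original analysis used Excel and GraphPad Prism).

```python
# Sketch: normalize per-precursor miRNA read counts to reads per million miRNAs
# (RPMM) and transform to log2(RPMM + 1). All names and numbers are illustrative.
import numpy as np
import pandas as pd

counts = pd.DataFrame({
    "pre_miRNA": ["aal-mir-1", "aal-mir-2989", "aal-novel-12"],
    "sugarfed_ovary": [120_000, 45, 300],
    "pbm14_ovary": [98_000, 5_600, 250],
}).set_index("pre_miRNA")

rpmm = counts / counts.sum(axis=0) * 1_000_000   # per-library normalization
log_rpmm = np.log2(rpmm + 1)
print(log_rpmm.round(2))
```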
Generation of RefSeq gene set annotation
The NCBI Eukaryotic Genome Annotation Pipeline was used to annotate genes, transcripts, and proteins on the primary assembly of AalbF2, Aalbo_primary.1 (accession GCF_006496715.1). Due to the highly repetitive nature of the genome, masking was done with RepeatMasker using a collection of repeats generated with RepeatModeler [52] and WindowMasker [81], and resulted in 74% of the genome being masked. Nearly 8 billion RNA-seq reads from 170 Ae. albopictus BioSamples were retrieved from SRA and aligned to the masked genome using BLAST [82] followed by Splign [83], along with 366 known RefSeq transcripts, 6046 GenBank transcripts, and 302,415 ESTs from the Aedes genus. The set of proteins aligned to the masked genome consisted of 30,044 known RefSeq proteins from Dr. melanogaster, 27,814 model RefSeq proteins from Ae. aegypti, 100,517 GenBank proteins from insects, 1084 known RefSeq proteins from Nasonia vitripennis, and 528 known RefSeq proteins from Apis mellifera. The gene models' structures and boundaries were primarily derived from these alignments. Ab initio extension and joining/filling of partial ORFs in compatible frame were performed by Gnomon (https://www.ncbi.nlm.nih.gov/genome/annotation_euk/gnomon/), using a hidden Markov model trained on Ae. albopictus where alignments did not define a complete model but the coding propensity of the region was sufficiently high to predict a coding gene with confidence. tRNAs were predicted with tRNAscan-SE:1.23 [84], and small non-coding RNAs were predicted by searching the RFAM 12.0 HMMs for eukaryotes using cmsearch from the Infernal package [85]. The annotation of the Aalbo_primary.1 assembly, Ae. albopictus Annotation Release 102 (https://www.ncbi.nlm.nih.gov/genome/annotation_euk/Aedes_albopictus/102/) or AR 102, resulted in 26,856 protein-coding genes (84% fully supported by experimental evidence, and 12% with more than 5% ab initio), 9530 non-coding genes, and 4108 pseudogenes.
Artifacts and gene duplication detection in AalbF2
To identify highly similar sequences in AalbF2 while obtaining their position in the scaffolds, we performed an all-vs-all BLASTp of the 40,086 peptide sequences with an e value of 1e−40. After excluding self-alignments, we extracted the sequences of suspected gene duplications including the 500 bp and 1000 bp in both the 5′ and 3′ flanking regions of the coding sequence. A BLASTn analysis using as queries the 500-bp gene regions and the 1000-bp gene dataset against the new assembly was then performed (Additional file 1: Fig. S1, step 3). All matches with 100% coverage over the entire sequence and 98% identity were filtered and collated into a list of candidate artifact pairs.
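The coverage/identity filter can be sketched as below for a BLASTn tabular report; the extra qlen output column and the "more than one full-length hit" heuristic are assumptions made for illustration, not the authors' exact script.

```python
# Sketch: from a BLASTn tabular report (outfmt "6 std qlen") of gene±flank sequences
# against the assembly, collect queries with more than one full-coverage hit at
# >= 98% identity; these form the candidate artifact/duplication list.
from collections import Counter

def candidate_artifacts(blast_tab, min_identity=98.0):
    full_hits = Counter()
    with open(blast_tab) as handle:
        for line in handle:
            f = line.rstrip("\n").split("\t")
            query, identity = f[0], float(f[2])
            aln_len, qlen = int(f[3]), int(f[12])       # qlen from the extra output column
            if identity >= min_identity and aln_len >= qlen:
                full_hits[query] += 1
    # One full-length hit is the gene's own locus; extra hits suggest artifacts or duplications
    return [q for q, n in full_hits.items() if n > 1]
```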
Identification of immunity genes and manual curation of their annotation
A protein sequence homology analysis pipeline was developed to identify immune-related genes in AalbF2. A dataset of 417 manually curated protein sequences from 27 immune functions of Ae. aegypti [8] was used as a query to search by BLASTp against the peptide database (GCF_006496715.1). Local alignments were selected based on an associated e value of 1e−20 and a cutoff of ≥ 60% identity. This was followed by sequence extraction and filtering of isoform sequences, and by comparative analyses and manual curation to map synteny, phylogeny, and sequence identity. Gene duplication events were detected using as a reference the orthologous immune-related genes of Ae. aegypti. We also performed a genome mapping analysis to uncover paralogs generated by tandem duplications. The evolutionary history of each expanded immune protein family was then inferred using a maximum likelihood method with the Phylogeny.fr platform [86]. The pipeline's One-Click mode setting was used as default, which includes MUSCLE for multiple alignments [87] and Gblocks for alignment curation [88]. Improvement of phylogenies was done after removing divergent and ambiguously aligned blocks from protein sequence alignments [89], and TreeDyn was used for tree drawing to reconstruct a robust phylogenetic tree from a set of sequences [90]. The nwk files obtained were edited in the iTOL platform (https://itol.embl.de/login.cgi) and exported as SVG files.
Orthogroups, orthologues, and single-copy gene clusters of immune-related genes across multiple species were defined by clustering the immune-related peptides of Ae. albopictus and the complete peptides of the Ae. aegypti (Liverpool-AaegL5 assembly), An. gambiae (PEST-AgamP4), and Dr. melanogaster (Dmel-r6.26) species. Two approaches were performed using OrthoVenn2 [91] and OrthoFinder [92]. Parameters for OrthoVenn2 considered the e value cutoff for all-to-all protein similarity comparisons and the inflation value for the generation of orthologous clusters using the Markov cluster algorithm (e value = 1 × 10−2 and inflation value = 1.5).
Analyses of the sex-determining M locus
The annotated Ae. albopictus nix transcript (XM_019669557 or LOC109397226) was used as a query to perform a BLASTn (e value cutoff 1e−5) against AalbF2. A 1436-bp mRNA sequence showed a 100% match to contig NW_021838423.1 from position 209,080 to 210,622, except for a small intron in the genomic sequence. When the Ae. albopictus NIX protein sequence (XP_019525102) was used as a query to perform tBLASTn against AalbF2, a possibly duplicated copy was found (Additional file 1: Fig. S5) in addition to the annotated LOC109397226. Although the precise beginning of the open reading frame of the duplicated copy is unclear, the duplicated copy is likely to be approximately 20 kb away from the annotated nix gene. It is not clear whether the duplicated copy is functional, as its open reading frame appears to be interrupted by premature stop codons and indels (Additional file 1: Fig. S5). The duplicated copy is significantly related only to NIX at the amino acid level. The duplication appears to have occurred a long time ago, as the previously mentioned BLASTn searches did not show a significant match between the annotated Ae. albopictus nix transcript (XM_019669557 or LOC109397226) and the duplicated copy (e value cutoff 1e−5). Male specificity of both LOC109397226 and the duplicated nix sequences was confirmed by using the chromosome quotient analysis [50] with Illumina reads obtained from Foshan strain male and female mosquitoes [11]. The Ae. aegypti myo-sex protein sequence [53] was used in a tBLASTn search to identify the Ae. albopictus homologs of myo-sex, and phylogenetic analysis was conducted using Phylogeny.fr [86,93].
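For the male-specificity check, the chromosome quotient (CQ) is the ratio of female to male alignment counts for a candidate sequence, with values near zero indicating M-locus linkage; the sketch below illustrates the idea with an invented cutoff and counts, and is not the exact implementation of the cited method.

```python
# Sketch: chromosome quotient (CQ) check for male specificity. The 0.05 cutoff and
# the read counts are illustrative assumptions, not values from the study.
def chromosome_quotient(female_alignments, male_alignments):
    if male_alignments == 0:
        return float("inf")
    return female_alignments / male_alignments

def is_male_specific(female_alignments, male_alignments, cq_cutoff=0.05):
    return chromosome_quotient(female_alignments, male_alignments) < cq_cutoff

# Example: a nix-like candidate covered by 3 female reads vs. 4,200 male reads
print(is_male_specific(3, 4200))   # True -> consistent with M-locus linkage
```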
Analyses of genome-wide polymorphism and linkage disequilibrium
We processed WGS datasets of mosquitoes from the Foshan strain, La Reunion, and Mexico [11,12] to discover single nucleotide polymorphisms (SNPs) and derive estimates of linkage disequilibrium (LD) and other population genetics parameters. Paired-end reads were aligned to AalbF2 using BWA-MEM version 0.7.17 [94]. We discarded unmapped reads as well as reads with mapping quality below a mapQ of 30 using SAMtools version 1.9 [95]. Next, we used SAMtools to merge and sort the paired-end and single-end pseudoread alignments into a single BAM file to be used in subsequent analyses. First, we used GATK version 3.8 [96] to perform realignments around indels. Second, we used Picard tools version 2.9.0 (https://broadinstitute.github.io/picard/) to remove optical and PCR duplicates. Third, we generated an uncompressed BCF using SAMtools mpileup version 1.3.1 with indel calling disabled, skipping bases with baseQ/BAQ less than 30, and with mapQ adjustment (-C) set to 30. Fourth, we converted it to a VCF file using bcftools version 1.5 (http://samtools.github.io/bcftools/bcftools.html). We filtered out low-quality SNPs with SNPcleaner version 2.4.1 [97] and removed sites that had a total depth across all individuals of less than 1500 reads or had fewer than 10 individuals with at least two reads each. Finally, additional sites were filtered out based on the default settings within the SNPcleaner script. We obtained a set of robust sites for each population comprising the sites that passed all our filtering thresholds. We restricted our analyses to these robust sites using the option -sites of ANGSD version 0.929-21 [98]. Within ANGSD, we used uniquely mapped reads with minimum map quality and base quality thresholds of 30 and 20, respectively. For linkage disequilibrium (LD) analyses we used ANGSD genotype likelihoods to directly estimate decay using ngsLD version 1.1.0 [99]. We used ANGSD to calculate global Weir and Cockerham F ST [100] between populations and diversity (π) within populations directly from the allele frequencies estimated from the sequencing read data. We obtained approximately 359 million robust sites per population during our filtering. We then performed a sliding window analysis to estimate F ST and π across all scaffolds of the new genome with 50,000-bp windows and 10,000-bp steps, with a total of 85,844 windows. We plotted windows with at least 2000 sites, with each window being a point in the plots. We estimated pairwise LD using the ngsLD package [99], which takes the uncertainty of genotype assignment into account by avoiding hard-call genotypes entirely and using genotype likelihoods (GLs). The program has two algorithms to estimate LD levels from GLs. One is a maximum likelihood approach to estimate the haplotype frequencies between pairs of sites to estimate D, D′, and r2, and the other is based on the squared Pearson correlation (r2) between expected genotypes using their posterior probabilities. All LD estimates were done with 100 bootstraps, and we tested different bin sizes until we obtained small confidence intervals. We estimated the LD pairwise comparisons for all sites and randomly picked 0.01% of the comparisons to run the ngsLD algorithms for fitting and plotting. The 0.01% sampling data points represent at least 1.5 million r2 comparisons. We used new and previously published SNP chip data from Ae. aegypti to estimate LD for this species and compare to our results [5].
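The per-site depth filter quoted above (total depth ≥ 1500 reads and ≥ 10 individuals with ≥ 2 reads) can be expressed as a small predicate; the per-individual depth input is a simplification of what SNPcleaner actually reads from the VCF.

```python
# Sketch of the per-site filtering rule described above. The depth list and the
# example below are illustrative; the real filtering was done with SNPcleaner.
def site_passes(per_individual_depths, min_total_depth=1500, min_individuals=10, min_ind_depth=2):
    total = sum(per_individual_depths)
    covered = sum(1 for d in per_individual_depths if d >= min_ind_depth)
    return total >= min_total_depth and covered >= min_individuals

depths = [30] * 54   # e.g., 54 mosquitoes at ~30x each
print(site_passes(depths))   # total 1620 reads, 54 individuals covered -> passes
```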
We generated our plots in R using the built-in functions and the R packages ggplot2 [101], Sushi [102], and qqman [103].
Developmental profile analyses
We used wild-type Ae. albopictus mosquitoes from San Gabriel Valley, located in the Los Angeles County, CA, for RNA extraction. Mosquito rearing, total RNA isolation, and RNA-seq were carried out as previously described [63]. RNA-seq libraries were aligned to AalbF2 using STAR aligner [104]. Gene models were downloaded from NCBI (GCF_006496715.1_Aalbo_primary.1_genomic.gtf) and quantified with feature-Counts [105]. Transcripts per million (TPM) and fragments per kilobase million (FPKM) values were calculated from count data using Perl scripts. All sequencing data has been made publicly available at NCBI SRA under BioProject PRJNA563095 (genomic) and PRJNA563095 (transcriptomic). | 2023-01-19T21:55:51.205Z | 2020-08-26T00:00:00.000 | {
"year": 2020,
"sha1": "4d287cb439744ca0aabd33451e16e9f6cfedd3a3",
"oa_license": "CCBY",
"oa_url": "https://genomebiology.biomedcentral.com/track/pdf/10.1186/s13059-020-02141-w",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "4d287cb439744ca0aabd33451e16e9f6cfedd3a3",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
267732639 | pes2o/s2orc | v3-fos-license | Fluoroscopic-guided hysteroscopic tubal cannulation results in high technical success and pregnancy rates comparable with the more traditional laparoscopically guided hysteroscopic tubal cannulation
Objective To compare women with proximal tubal obstruction (PTO) undergoing hysteroscopic tubal cannulation with fluoroscopic guidance vs. laparoscopic guidance. Design Retrospective cohort study. Setting All fluoroscopically-guided hysteroscopic tubal cannulations were performed in an ambulatory suite. All laparoscopically-guided hysteroscopic tubal cannulations were performed in a hospital operating room. Patients Infertile women with unilateral or bilateral PTO on hysterosalpingography who failed selective salpingography in the radiology suite and had a planned laparoscopy or hysteroscopy in the operating room for defects seen on sonohysterography were studied. Intervention All women had a Novy catheter system positioned hysteroscopically to cannulate the occluded fallopian tube(s). Women undergoing fluoroscopically guided hysteroscopic tubal cannulation (FHTC), which used contrast and C-arm pelvic imaging at an ambulatory center, were compared with those undergoing hospital-based laparoscopically guided hysteroscopic tubal cannulation (LHTC) with laparoscopic visualization. Main Outcome Measurements Tubal cannulation success; bilateral cannulation success; tubal perforations; post-FHTC non–in vitro fertilization (non-IVF) intrauterine pregnancies; days from procedure to pregnancy for non-IVF intrauterine pregnancies; and time to non-IVF pregnancy hazards ratio. Results A total of 76 infertile women undergoing either FHTC (34 women) or LHTC (42 women) between 2015 and 2019 were included. Demographic variables were similar among the 2 groups. A total of 31 (92%) of 34 of patients undergoing FHTC and 36 (86%) of 42 of patients undergoing LHTC had at least one tube successfully cannulated. In total, 30 (78%) of 34 of patients undergoing FHTC and 32 (79%) of 42 patients undergoing LHTC had all occluded tubes successfully cannulated. Tubal perforation occurred in 1 (3%) of 34 FHTC cases and 3 (7%) of 42 LHTC cases. A similar percentage of non-IVF treatment-induced intrauterine pregnancies were achieved in the FHTC and LHTC groups (10/34 [29%] vs. 12/42 [29%]). Among patients who conceived without IVF, time from procedure to pregnancy was lower in the FHTC group (101 ± 124.6 days) compared with the LHTC group (228 ± 216 days). There was a significant difference in time to pregnancy when only those who conceived were considered (hazard ratio, 9.39; 95% confidence interval, 2.42–36.51); however, there was no significant difference when all subjects regardless of pregnancy outcome were analyzed (hazard ratio, 1.48; 95% confidence interval, 0.64–3.446). Conclusion Fluoroscopically guided hysteroscopic tubal cannulation is a safe, effective, incision free procedure that results in comparable rates of tubal patency and intrauterine pregnancies as LHTC. This technique should be considered in women undergoing treatment of PTO when operative laparoscopy is not otherwise indicated.
Tubal disease is responsible for 25%-35% of female infertility, with 10%-25% of these cases because of proximal tubal obstruction (PTO) (1). In 1977, the first transcervical cannulation using selective salpingography was performed by injecting the contrast medium directly into the fallopian tube (1,2). Technological advances and further development of cannulation instrumentation allowed for the first transcervical balloon tuboplasty in the 1980's (2). In 1988, Novy et al. (3) introduced the use of transcervical cannulation of the proximal oviduct using hysteroscopic cannulation under laparoscopic guidance. Patency was demonstrated in 11 (91.7%) of 12 obstructed tubes after hysteroscopic fallopian tube cannulation. We recently reported a novel technique, fluoroscopically guided hysteroscopic tubal cannulation (FHTC), demonstrating a 90% successful cannulation rate and a 34.5% pregnancy rate without in vitro fertilization (IVF) treatment (4).
In our current study, we report on a single surgeon's contemporaneous success rates for achieving tubal patency in PTO as well as non-IVF treatment-induced pregnancy rates when comparing laparoscopic vs. fluoroscopic tubal guidance during hysteroscopic tubal cannulation and assessment. To our knowledge, this is the first study to investigate the time to non-IVF treatment-induced intrauterine pregnancy in FHTC vs. laparoscopically guided hysteroscopic tubal cannulation (LHTC).
MATERIALS AND METHODS
This retrospective study included all women who had undergone FHTC or LHTC between 2015 and 2019 during their fertility workup and treatment by a single reproductive surgeon (S.R.). Inclusion criteria were infertile women aged 18-44 years with either unilateral or bilateral PTO. We excluded subjects with bilateral distal tubal occlusion, severe male factor infertility, or other indications requiring the subject to go directly to IVF treatment. Hysterosalpingography (HSG) was performed on nearly all patients by an interventional radiologist (S.R.). If tubal occlusion was seen, then fluoroscopic selective salpingography was attempted using a curved catheter, performed without anesthesia. When fluoroscopic selective salpingography failed and when proximal obstruction was noted and no hydrosalpinx was seen, patients were offered FHTC, using fluoroscopic guidance for hysteroscopic tubal cannulation. This technique, as described below, is separate from the widely known fluoroscopic guidance for tubal cannulation often performed in the radiologic suite. When laparoscopy was otherwise indicated, patients were offered LHTC. A finding of a unilateral hydrosalpinx, a positive chlamydia serology, or a markedly damaged tube seen with PTO indicated laparoscopy with LHTC.
At the time of the study, all women undergoing FHTC had an indication for hysteroscopic tubal cannulation under fluoroscopic guidance on the basis of HSG findings. Patients underwent laryngeal mask airway anesthesia in the dorsal lithotomy position, with intravenous propofol and an inhalation agent to facilitate uterine relaxation. Then, the hysteroscope was placed in the uterine cavity, and the ostia were visualized. Indicated hysteroscopic procedures, such as polypectomy and lysis of adhesions to find the ostia, were performed before tubal cannulation. Extensive myomectomy, or extensive metroplasty, was sometimes delayed and performed after tubal cannulation to allow for completion without bleeding or intravasation. The Novy catheter system, Cook G17478 (Cook Medical, Bloomington, IN), was placed through a visualizable ostium or an obstructed ostium where an ostium was presumed to be located. The C-arm was then maneuvered into position over the uterus and fallopian tubes. A single image confirmed the location of the catheter. Hypaque contrast dye (Amersham Health, Inc., Princeton, NJ) was then injected under real-time fluoroscopic imaging. If no contrast was seen entering the fallopian tube, then tubal cannulation was performed with a 3-French inner catheter and a 2-French inner guidewire snaked through the outer catheter and inserted laterally through the intramural and into the isthmic portions of the fallopian tube using direct wire visualization. Repeat contrast injection was performed. If perforation was suggested by contrast spillage directly intraperitoneally and around the uterine cornual region without visualization of the fallopian tube, then the procedure was halted on that side. When the contrast flowed into the fallopian tube without suggestion of perforation, we assessed for dilation-free spill and loculation.
For the LHTC group, women underwent laparoscopy for one of several indications: pelvic pain, expected endometriosis, ovarian cysts, fibroids (intramural and subserosal), suspected pelvic inflammatory disease (history of a positive chlamydia serology in association with infertility), and abnormalities on HSG (distal tubal abnormalities). For the LHTC group, all subjects underwent general anesthesia with intubation. A 5-mm port and laparoscope were inserted through the umbilicus using the open-entry technique (5). Additional 5-mm pelvic lateral ports were placed as needed. Once all indicated laparoscopic procedures were completed, all patients underwent chromopertubation using both low-pressure flow and obstruction of one tube to see when injected dye flowed in the contralateral tube. Chromopertubation was performed with a Clearview manipulator, using very dilute methylene blue. When there was bilateral tubal patency, patients would not have been enrolled in this study. When there was bilateral PTO with normal distal tubes, they underwent bilateral LHTC. If a unilateral occlusion appeared, then the patent tube was obstructed to see whether further instillation and pumping action would overcome the obstruction. If the obstruction was still not overcome, then it would be counted as a unilateral obstruction. All indicated hysteroscopic procedures were first performed; however, no procedure requiring incisions was performed before tubal cannulation. We then placed the Novy catheter into the ostium, or presumed ostial area, and injected contrast. When the injected contrast failed to show tubal patency, we placed the 2-French wire over a 3-French catheter, further trying to insert them through the intramural and into the isthmic portions of the tube using direct wire visualization. We checked for patency on removal of the wire with injected dye. If perforation was suggested by contrast spillage directly intraperitoneally and around the uterine cornual region without visualization of the fallopian tube, then the procedure was halted on that side. We considered the procedure a technical success when there was contrast flow coming out of the distal fallopian tube on the laparoscopic view.
Those in the FHTC group returned home the same day and were able to resume work the day after their procedure. The LHTC group returned home the same day of their procedure and were able to return to work 3-4 days later. All women had at least 6 months of observation for complications and pregnancy outcomes. Ongoing clinical pregnancy was defined as a fetal heartbeat on transvaginal ultrasound that persisted through the first trimester.
Procedural success, perforation rates, pregnancy rates, and time to intrauterine pregnancy were analyzed. The t-test and Wilcoxon rank-sum analysis were used to compare continuous variables, and a Fisher's exact test was used for categorical variables. Kaplan-Meier analysis was used to compare times to pregnancy. Informed consent for surgery was obtained from all patients, including an explanation of the risks and benefits of LHTC and FHTC as well as the alternative options to treat tubal factor infertility. The study was approved by the institutional review board at Brown University.
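For orientation, the sketch below shows common Python equivalents of the tests named above (t-test, Wilcoxon rank-sum, Fisher's exact, Kaplan-Meier); the original analysis software is not stated in the text, and all numbers except the 10/34 vs. 12/42 pregnancy counts are placeholders.

```python
# Sketch of the statistical comparisons described above using SciPy and lifelines.
# The days-to-pregnancy values are placeholders, not study data.
from scipy.stats import ttest_ind, mannwhitneyu, fisher_exact
from lifelines import KaplanMeierFitter

days_fhtc = [60, 90, 120, 45, 150]          # illustrative days-to-pregnancy values
days_lhtc = [200, 240, 180, 300, 220]

print(ttest_ind(days_fhtc, days_lhtc, equal_var=False))
print(mannwhitneyu(days_fhtc, days_lhtc))           # Wilcoxon rank-sum equivalent
print(fisher_exact([[10, 24], [12, 30]]))           # pregnancies vs. non-pregnancies by group

kmf = KaplanMeierFitter()
kmf.fit(durations=days_fhtc + days_lhtc,
        event_observed=[1] * 10)                    # 1 = pregnancy observed
print(kmf.median_survival_time_)
```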
RESULTS
In the FHTC group, all 34 (100%) of 34 subjects had an attempted HSG; however, one subject was unable to tolerate the procedure, thus the HSG was inconclusive and there was no confirmed occlusion. For the LHTC group, 39 (92.9%) of 42 subjects had an HSG performed, whereas all subjects had pathology indicating the need for hysteroscopy (P = .42). Of those with an HSG performed, in the LHTC group, 20 (51.3%) of 39 were found to have a unilateral occlusion, whereas 27 (81.8%) of 33 in the FHTC group had a unilateral occlusion (P = .01). Additionally, of those that had an HSG performed, the LHTC group was found to have 11 (28.2%) of 39 patients with a bilateral occlusion, whereas for the FHTC group, 6 (18.2%) of 33 patients had a bilateral occlusion (P = .33). For the LHTC group, there were 8 subjects who were found to not have an occlusion on HSG. However, all of these patients had other indications for laparoscopy. Therefore, at the time of laparoscopy, all patients underwent chromopertubation and were found to have PTOs at that time that were unresponsive to our usual noncannulation techniques for tubal spasm as presented in the methods.
For the FHTC group, 19 (55%) of 34 patients had a positive pregnancy test, including pregnancies achieved with IVF, whereas in the LHTC group, 27 (64%) of 42 patients had a positive pregnancy test (P = .46). A similar percentage of non-IVF intrauterine pregnancies were achieved in the FHTC and LHTC groups. In the FHTC group, 10 (29%) of 34 non-IVF intrauterine pregnancies were achieved, whereas in the LHTC group, there were 12 (29%) of 42 non-IVF intrauterine pregnancies (P = .94).
When analyzing pregnancy rates for those with unilateral occlusion between groups, results were similar. For the FHTC group, 7 (27%) of 26 patients had a non-IVF intrauterine pregnancy, whereas 8 (40%) of 20 patients in the LHTC group had a non-IVF intrauterine pregnancy (P = .35). In the case of bilateral occlusion, for the FHTC group, 3 (38%) of 8 patients and for the LHTC group 4 (18%) of 22 patients had a non-IVF intrauterine (IU) pregnancy (P = .28).
After tubal cannulation, the FHTC group had 4 (11.8%) of 34 subjects continue to IVF directly, whereas the LHTC group had 2 (4.8%) of 42 subjects continue to IVF directly (P = .3); the remaining patients continued with intrauterine insemination or timed intercourse. For those that were not successful with non-IVF methods and continued treatment, all 9 (100%) of 9 in the FHTC group continued to IVF, and for the LHTC group all 16 (100%) of 16 subjects continued to IVF (P = 0.1). A similar percentage of intrauterine clinical pregnancies, including patients who underwent IVF, were achieved in the FHTC and LHTC groups. In the FHTC group, 17 (50%) of 34 intrauterine pregnancies were achieved, whereas in the LHTC group there were 26 (61.9%) of 42 intrauterine pregnancies (P = .30). Of the total clinical pregnancies achieved, in the FHTC group, 3 (17.6%) of 17 pregnancies resulted in miscarriage, whereas in the LHTC group, 3 (11.5%) of 26 pregnancies resulted in miscarriage (P = .8). All miscarriages were spontaneous abortions, except for one in the LHTC group, which was induced. Two ectopic pregnancies occurred, one in the LHTC group and the other in the FHTC group.
Among patients who conceived without IVF, days from procedure to pregnancy was significantly lower in the fluoroscopically guided group (101.45 ± 124.6) as compared with the laparoscopically guided group (228.2 ± 216) (P = .01). There was a statistical difference in time to pregnancy (excluding IVF pregnancies) when considering only those that successfully conceived (hazard ratio, 9.39; 95% confidence intervals [CIs], 2.42-36.51). However, there was no statistical difference in time to pregnancy when all subjects regardless of pregnancy were analyzed (hazard ratio, 1.48; 95% CI, 0.64-3.446). Additionally, the calculated relative risk for non-IVF intrauterine pregnancies is 1.01 (95% CI, 0.75-1.35), thus indicating no significant difference among the 2 groups.
Among patients who conceived with IVF, days from procedure to pregnancy was lower in the fluoroscopically guided group (71.64 ± 52.71) as compared with the laparoscopically guided group (151.42 ± 85.2) (P = .0013). There was a statistical difference in time to pregnancy, including IVF pregnancies, when considering only those that successfully conceived (hazard ratio, 2.907; 95% CI, 1.52-5.57). However, there was no statistical difference in time to pregnancy when all subjects regardless of pregnancy were analyzed (hazard ratio, 1.14; 95% CI, 0.62-2.11).
DISCUSSION
This study compares laparoscopically guided tubal cannulation with the novel procedure of fluoroscopically guided hysteroscopic tubal cannulation. It is evident that LHTC and FHTC are comparable in terms of tubal patency, perforations, and IU pregnancy rate, and there is a shorter time to pregnancy when employing FHTC. This may be because of confounding variables, such as degree of tubal disease or endometriosis, or may be because of quicker recovery time with hysteroscopy than laparoscopy. The shorter time to pregnancy is unlikely to be because of the fluoroscopic guidance itself. Further, FHTC can be conveniently performed at the time of hysteroscopy, which many fertility patients require because of uterine pathology (4). Hysteroscopy with tubal cannulation is relatively simple to perform, as most reproductive endocrinology and infertility physicians have extensive experience with hysteroscopic techniques. Additionally, the C-arm roentgenogram technology is simple to employ by reproductive endocrinology and infertility physicians or a radiology technician. This benefit is in conjunction with the fact that FHTC is less invasive and, further, that hysteroscopic guidance has been shown to produce lower pain scores than laparoscopic guidance in other fertility procedures (6). Fluoroscopically guided hysteroscopic tubal cannulation would be preferred when hysteroscopy is indicated but laparoscopy is not necessary. Although fluoroscopically guided tubal cannulation at the time of HSG is still the least invasive option, it is rarely offered (7). Fluoroscopically guided hysteroscopic tubal cannulation is the preferred option if the fluoroscopic selective salpingography fails or the patient requires hysteroscopy for another indication. However, its use may be limited by the availability of a C-arm at hysteroscopic surgery centers.
In our LHTC group, tubal patency, conception rate, and time to pregnancy were comparable with other studies. Prior studies have shown a recanalization rate of 80% with LHTC and up to 100% with all fluoroscopic radiologic techniques (8,9). Further studies have shown an overall conception rate of 33% using LHTC (10). In our study, LHTC achieved an 86% recanalization rate and a 29% intrauterine pregnancy rate; with FHTC we achieved 92% tubal patency and a 29% non-assisted reproductive technology IU pregnancy rate. Other studies have shown a 27% clinical pregnancy rate after general tubal catheterization for unilateral and bilateral PTO (11,12). Thus, when using LHTC, we demonstrated success similar to what is currently in the literature.
In addition to comparing procedural approaches to PTO, it is also important to consider whether any PTO procedure is an appropriate alternative to IVF. Currently, there are no trials comparing pregnancy rates after tubal surgery with IVF (11). However, it may be beneficial to compare some of the advantages and disadvantages. Fluoroscopically guided hysteroscopic tubal cannulation provides a less costly fertility treatment option for patients with PTO when compared with IVF. The charge to the patient for the FHTC portion of the surgery is $1,500, whereas the average cost and charge to the patient for IVF is $15,000 per cycle at our institution. Another report showed the cost of tubal cannulation was $750, with a range of $500-$1,000 (10). At our institution, hysteroscopy is $1,500 and anesthesia is an additional $500, thus tubal cannulation can be up to $3,000 in total. Additionally, some couples prefer to avoid IVF and attempt natural means to conceive.
The limitations of this study include that it is retrospective, thus we cannot eliminate all possible confounders between patients who underwent FHTC and LHTC, such as the higher rates of endometriosis among those in the LHTC group.
The patients undergoing LHTC had more comorbidities, which may limit interpretation of the data (13,14). However, intrauterine pathology was compared, and the FHTC group was found to have a higher incidence of abnormalities. Furthermore, in the LHTC group all pathology encountered via laparoscopy and hysteroscopy was corrected, whereas in the FHTC group only hysteroscopically visualizable and treatable pathology was corrected. This may have provided an advantage to those in the LHTC group. An additional limitation of this study is that our patients initially underwent HSG to assess for tubal factor, which has a high false-positive rate. Although they additionally underwent selective salpingography at the time of HSG, this does not eliminate all false-positive results. Additionally, a small number of patients subsequently decided to move directly to IVF. In our study, after tubal cannulation the FHTC group had 4 (11.8%) of 34 subjects continue to IVF directly, whereas the LHTC group had 2 (4.8%) of 42 subjects continue to IVF directly (P = .3). This limited a full evaluation of the non-assisted reproductive technology pregnancy success rates after FHTC. It is unknown whether pregnancies in women with unilateral PTO were achieved through the previously blocked tube.
CONCLUSION
To our knowledge, this is the first study comparing the novel procedure of FHTC and the more traditional LHTC for success rates in the treatment of PTO by a single provider. It demonstrates that FHTC has a high technical success rate, a low perforation rate, and a successful conception rate similar to LHTC, both in our hands and when compared with prior literature. These results demonstrate that FHTC is a viable alternative to conventional laparoscopically guided hysteroscopic tubal cannulation, particularly among patients planning to delay IVF.
Martin Keltz: Conceptualization, Methodology, Investigation, Resources, Data curation, Writing - original draft, Writing - review & editing, Project administration. Sarah C. Rubin: Conceptualization, Methodology, Software, Validation, Formal analysis, Writing - original draft, Writing - review & editing, Visualization. Emma Brown: Data curation. Moses Bibi: Writing - review & editing. May-Tal Sauerbrun-Cutler: Investigation, Writing - review & editing, Project administration, Supervision.
Declaration of Interests: M.K. has nothing to disclose. S.C.R. has nothing to disclose. E.B. has nothing to disclose. M.B. has nothing to disclose. M.T.S. has nothing to disclose. | 2024-02-18T16:10:54.202Z | 2024-02-01T00:00:00.000 | {
"year": 2024,
"sha1": "04804efa055adf129ac56709d143b76d0258eb2a",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.xfre.2024.02.008",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8fdcc4fd3380c5ca2a0a791d4aec5a2306eee757",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
229370883 | pes2o/s2orc | v3-fos-license | Antifungal Activity on the Strain of Lasiodiplodia theobromae and Phytochemical Study of Ageratum conyzoides and Newbouldia laevis from the Kisangani Region / DR Congo
Aims: To extract, identify and evaluate in vitro the antifungal activity of the phytochemical groups of Ageratum conyzoides and Newbouldia laevis on the strain of Lasiodiplodia theobromae. Study Design: Exploitation of medicinal plants to combat the growth of L. theobromae, responsible for the decline of cocoa cultivation. Location and Duration of Studies: Faculty of Sciences, University of Kisangani, between April 2017 and February 2018. Methodology: The crude extracts of the dry leaves of A. conyzoides and N. laevis were tested (at 100 mg/mL). Potato dextrose agar was used as the culture medium. After chemical screening, abundant phytochemical groups were isolated and tested. Results: The aqueous, 95% ethanolic and ethereal crude extracts of A. conyzoides are more antifungal (respective percentages of inhibition PI: 80.74; 84.10 and 85.64%) than those of N. laevis (63.28; 72.64 and 75.23%). The minimum inhibitory concentration (MIC) of the aqueous crude extract of A. conyzoides is lower (25 mg/mL) than that of the ethanolic extract (50 mg/mL). Tannins are very abundant in A. conyzoides and in N. laevis. Saponins, sterols and terpenes are abundant in both plants. The extraction yields of tannins and saponins are respectively 20.67 and 2.43% in A. conyzoides and 10.47 and 2.38% in N. laevis. A. conyzoides contains gallic tannins, while N. laevis contains condensed and catechic tannins. The saponins and tannins of A. conyzoides are more antifungal (respective PI: 84.40 and 54.44%) than those of N. laevis (PI: 75.56 and 32.96%). Discussion: The saponins of A. conyzoides and N. laevis are more active on the strain of L. theobromae than the tannins. Saponins are surfactants that can destabilize the membrane structure of microorganisms, including fungi. Conclusion: The saponins of the two plants have shown a very interesting antifungal power on the strain of L. theobromae. The identification of their active molecules is ongoing.
INTRODUCTION
Cocoa cultivation is often threatened by fungi, which significantly reduce crop yield. Around the 1980s, cocoa orchards in Cameroon were damaged by brown pod rot, affecting 100% of cocoa trees in some plantations.
After several investigations, Lasiodiplodia theobromae (syn. Botryodiplodia theobromae), a common endophyte and opportunistic pathogen, was identified as being responsible [1]. This fungus has been isolated in the tropics and subtropics, including in Cameroon, India, Western Samoa and the Philippines [2][3][4][5][6]. L. theobromae has also been found in the Kisangani region in the Democratic Republic of the Congo (DRC) [7].
It is therefore essential to find the means to fight against this phytopathogen. The use of pesticides has harmful consequences on the ecosystem [8]. This can cause resistance due to genetic mutations. Hence the current attraction towards biofungicides [9]. This is because plant extracts are known for their antimicrobial and/or antifungal effects on certain phytopathogenic or zoopathogenic germs. This is particularly the case of Ageratum conyzoides and Newbouldia laevis [10][11][12][13][14][15]. Besides, plants are known to be efficient, non-polluting and accessible to everyone. The use of plant extract fungicides complies with new environmental regulations that discourage the use of synthetic fungicides [16,17].
To our knowledge, there is no study on the antifungal activity of these two plants on the strain of L. theobromae. Therefore our research team began a series of studies on the inhibitory effect of plant extracts on this fungal strain [7,12,18].
The objective of the present study is to identify the phytochemical groups contained in A. conyzoides and N. laevis and to determine the active principle responsible for the antifungal activity against the strain of L. theobromae.
Study Area
This work was carried out in the region of Kisangani, the capital of the Province of Tshopo in the DRC. This city is 428 meters above sea level and is located at 0°31'North latitude and 25°11' East longitude [19,20].
Plant Material
The plant material consists of the leaves of A. conyzoides and N. laevis collected in the Kisangani region. After their identification at the Herbarium service of the Sciences Faculty of the University of Kisangani, the leaves were dried, crushed and sieved. Ten grams of powder were macerated for 48 hours in 50 mL of solvent (water, ethanol or diethyl ether). The filtrates were evaporated to obtain the dry residue, which was used to prepare the various extract solutions. For the minimum inhibitory concentration (MIC), concentrations of 12.5, 25, 50, 100 and 200 mg/mL were used.
Chemical Screening and Extraction
Universal protocols [21-25] were used for the identification of phytochemical groups on leaves powder. Only the major groups were extracted, particularly saponins and tannins [26][27][28].
Fungal Strain
Cocoa pods with brown rot were used to isolate the strain of L. theobromae. The potato dextrose agar (PDA) medium was used according to well-known protocols [7,12].
Antifungal Activity
The antifungal activity was determined by evaluating the percentage of inhibition (PI) of mycelial growth by the plant extracts on the strain of L. theobromae, with six repeats. Twelve millilitres of PDA were poured into each 90 mm diameter Petri dish. A midline was drawn on each Petri dish. On one side the extract was applied and on the other, the 5 mm diameter mycelial implant was placed at 2.5 mm from the midline [29]. Mycelial growth was measured on either side of the midline (fungal radius, FR) every 24 hours until the Petri dish was filled. The negative control consisted of PDA on which only the mycelial implant was placed.
The PI was calculated from the fungal radii (FR) measured in the presence of the extract and in the negative control. Standard deviations over the six repeats are represented by error bars on the histograms.
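A minimal sketch of this calculation is given below, assuming the standard dual-culture definition PI = (FRcontrol − FRextract) / FRcontrol × 100; the radius values are illustrative, not data from this study.

```python
# Minimal sketch of the percent-inhibition (PI) calculation, assuming the
# standard dual-culture definition PI = (FRc - FRt) / FRc * 100, where FRc is
# the fungal radius on the control plate and FRt the radius facing the extract.
# Radii (mm) for the six repeats are illustrative values, not measured data.
from statistics import mean, stdev

def percent_inhibition(control_radii, treated_radii):
    """Return mean PI (%) and its standard deviation over paired repeats."""
    pis = [(frc - frt) / frc * 100 for frc, frt in zip(control_radii, treated_radii)]
    return mean(pis), stdev(pis)

control = [40.0, 41.5, 39.0, 40.5, 42.0, 40.0]   # hypothetical FRc values (mm)
treated = [6.0, 6.5, 5.5, 6.0, 7.0, 6.5]         # hypothetical FRt values (mm)
pi_mean, pi_sd = percent_inhibition(control, treated)
print(f"PI = {pi_mean:.2f} % +/- {pi_sd:.2f}")
```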
Statistical Analysis
Statistical analyses were performed using R 3.4.0 software.
[Figure: extraction yield (%) of the aqueous, ethanolic and ethereal extracts of the two plants]
The aqueous extract from the leaves of A. conyzoides gives the highest yield, 22.98%, while the ethereal extract from the leaves of N. laevis has the lowest yield, at 4.14%.
The yield of total extracts for both plants decreases from water to diethyl ether via ethanol. This is because water, due to its high polarity, extracts more of the polar compounds. Many of the constituents of these two plants are therefore likely polar [18,30].
The lower yields observed for the total extracts of N. laevis compared with those of A. conyzoides may be due to the nature of N. laevis leaves, which naturally contain many ribs. Compared with our previous work, the aqueous extract of A. conyzoides (22.98%) gives a higher yield than those of Mitracarpus villosus (20.81%) and Moringa oleifera (17.01%) [18]. On the other hand, the aqueous extract of N. laevis (8.14%) has a low yield.
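The yields discussed above follow directly from the masses involved in the maceration step (10 g of leaf powder, dry residue after evaporation); a small sketch of that calculation, with the residue masses treated as hypothetical inputs chosen to reproduce the reported percentages:

```python
# Extraction yield (%) = mass of dry residue / mass of leaf powder * 100.
# The 10 g powder mass comes from the maceration protocol above; the residue
# masses below are hypothetical, chosen to reproduce the reported yields.
POWDER_MASS_G = 10.0

residues_g = {
    "A. conyzoides, aqueous": 2.298,   # -> 22.98 %
    "N. laevis, ethereal": 0.414,      # -> 4.14 %
}

for extract, residue in residues_g.items():
    yield_pct = residue / POWDER_MASS_G * 100
    print(f"{extract}: {yield_pct:.2f} %")
```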
Percentage of Inhibition of Total Extracts
The PIs of the various total extracts of the plants studied on the strain of L. theobromae after two days of incubation are given in Fig. 2.
The aqueous, ethanolic and ethereal extracts of A. conyzoides have higher PIs (80.74, 84.10 and 85.64%, respectively) than those of N. laevis. The total extracts of these two plants prepared at 100 mg/mL have PIs much higher than those found in our previous work. This difference is due to the preparation method of the extracts: in our previous work, the solutions were of lower concentration [7,12].
Minimum Inhibitory Concentration
As A. conyzoides showed higher PIs, the MIC, IC 50 , IC 75 and the PI-to-MIC ratio of its total extracts were determined and are given in Table 1.
The MIC of the aqueous extract is two times lower (25 mg/mL) than that of the ethanolic extract (50 mg/mL). This trend is confirmed by the values of IC 50 and IC 75 .
These results show that the aqueous extract is more interesting than the ethanolic extract, given its lower MIC value. Morinda morindoides showed antifungal activity against Cryptococcus neoformans, with IC 50 values of 14.3 and 6.3 mg/mL for its aqueous and ethanolic extracts, respectively [31]. According to Saraka [32], the aqueous and ethanolic extracts of Mallotus oppositifolius have respective MICs of 100 and 25 mg/mL on Fusarium sp., whereas on Phytophthora sp. they are 100 and 50 mg/mL, respectively. The degree of inhibition of a fungal strain therefore depends on the nature of the plant, the concentration tested and the target fungal species.
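The IC 50 and IC 75 values reported in Table 1 can be estimated from the PI measured at the tested concentrations; the sketch below assumes simple linear interpolation between the two bracketing concentrations, and the PI values used are hypothetical, not the measurements from Table 1.

```python
# Sketch of estimating IC50 from PI measured at the tested concentrations
# (12.5, 25, 50, 100 and 200 mg/mL).  Linear interpolation between the two
# concentrations bracketing the target inhibition is assumed; the PI values
# below are hypothetical.
def interpolate_ic(concentrations, pis, target_pi=50.0):
    pairs = list(zip(concentrations, pis))
    for (c_lo, pi_lo), (c_hi, pi_hi) in zip(pairs, pairs[1:]):
        if pi_lo <= target_pi <= pi_hi:
            frac = (target_pi - pi_lo) / (pi_hi - pi_lo)
            return c_lo + frac * (c_hi - c_lo)
    return None  # target inhibition not bracketed by the tested range

concs = [12.5, 25, 50, 100, 200]            # mg/mL
pi_values = [30.0, 55.0, 70.0, 81.0, 90.0]  # hypothetical PI (%) for one extract
print("IC50 ~", interpolate_ic(concs, pi_values), "mg/mL")
```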
Phytochemical Groups
A. conyzoides contains tannins in very high abundance; saponins, sterols and terpenes in abundance; and flavonoids in trace amounts. N. laevis contains tannins in very high abundance and saponins, sterols and terpenes in abundance. Alkaloids, anthocyanins and quinones are absent from both plants.
These two plants display a similar phytochemical profile. However, the contents of tannins and flavonoids are different.
The phytochemical composition of A. conyzoides appears to be similar to that of M. villosus [18]. These results are similar to those found by other researchers for the leaves of A. conyzoides [33,34] and N. laevis [35][36][37].
The characterization of the tannins shows that A. conyzoides contains gallic tannins, whereas N. laevis contains condensed and catechic tannins.
Due to the abundance of tannins and saponins in the two plants studied and the antifungal activity of these secondary metabolites [18, 38,39], it is useful to extract these two phytochemical groups and evaluate their antifungal activities on the strain of L. theobromae. Indeed, saponins are surfactants that can destabilize membrane structure of microorganisms including fungi. They can interact with sterols, proteins and phospholipids of cell membranes of fungi leading to loss of their integrity [39].
Fig. 5. Percentage of inhibition of tannins and saponins (100 mg/mL) of A. conyzoides and N. laevis on the strain of L. theobromae after two days of incubation
Also, the PIs of the tannins and saponins are higher for A. conyzoides than for N. laevis. This confirms the results of the PIs of the crude extracts (Fig. 2) and would justify the more frequent use of A. conyzoides in traditional medicine against fungi compared with N. laevis [40,41]. According to the classification of plants with antifungal activity [29], and taking into account the PI values of its tannins and saponins, A. conyzoides ranks among the plants very active on the strain of L. theobromae [12].
Maximum Growth Time
The maximum growth times (MGT, in days) of the strain of L. theobromae in the presence of the different substances tested are given in Table 2.
This table shows that the MGT of L. theobromae is only 2 days for the control and 3 days for the solvents. In the presence of saponins, the MGT reaches 10 days for A. conyzoides and 5 days for N. laevis.
These results confirm the high antifungal activity of saponins. Indeed, when the PI is high, growth of the fungal strain is slowed, leading to an increase in MGT.
The saponins of both plants are therefore more active than the tannins. This confirms the results of the evaluation of the activity of these two phytochemical groups on the strain of L. theobromae (Fig. 5). Furthermore, the MGT values for these two secondary metabolites are higher for A. conyzoides than for N. laevis. The higher antifungal activity against the strain of L. theobromae therefore appears to be linked to the nature of the plant species.
The values of PI and MGT obtained (Fig. 5 and Table 2) show that the saponins and tannins of the two plants are more active on the strain of L. theobromae than the total aqueous, ethanolic and ethereal extracts [7,12]. Thus the isolated phytochemical groups are more active than the total extracts, which contain, apart from the saponins and tannins, other less active chemical groups that can interfere with the active ingredients.
CONCLUSION
The objective of this work was to extract, identify and evaluate the antifungal activity of the secondary metabolites of the leaves of Ageratum conyzoides and Newbouldia laevis on the strain of Lasiodiplodia theobromae. | 2020-11-26T09:06:07.211Z | 2020-11-21T00:00:00.000 | {
"year": 2020,
"sha1": "02da31fc41545450acc0b019eb728a8384a8fff3",
"oa_license": null,
"oa_url": "https://www.journalijpr.com/index.php/IJPR/article/download/30138/56554",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "bfe2baef187898ba99d9eea42aca4775cb300f7d",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
234388100 | pes2o/s2orc | v3-fos-license | Prediction of solid rocket motor performance characteristics using computational fluid dynamics and validation with experimental data
Solid propulsion is widely used in missile and space applications. Accurate prediction of specific impulse (Isp) is important for the design of solid rocket motors. These predictions can be made using empirical relations, but obtaining such relations requires a large amount of data. Renowned aerospace organizations use their own codes for prediction; these codes are not open source, and developing a code requires a large team effort along with the challenges of verification and validation. The present work is aimed at developing a numerical model for the prediction of specific impulse. The model is validated against experimental data from a static test of a solid rocket motor taken from the literature.
F = thrust in N
ṁ = mass flow rate in kg/s
A = area in m²
V = velocity in m/s
P = pressure in Pa
ρ = density in kg/m³
I sp = vacuum specific impulse
g 0 = acceleration due to gravity in m/s²
Subscripts
1 = nozzle inlet or chamber
2 = nozzle exit
3 = atmospheric or ambient
INTRODUCTION
Propulsion is the act of changing the motion of a body. The propulsive force of a solid rocket motor is obtained by ejecting propellant at high velocity. Accurate prediction of propulsion parameters is important for the design of a solid rocket motor, and CFD is a useful tool for such predictions and has a very important role in rocket propulsion. Chongankim et al. [1] explained the role of CFD in the development of rocket propulsion. The nozzle is a component that increases the performance of air-breathing and non-air-breathing engines. G. Srinivas et al. [2] analyzed the performance of convergent nozzles, with good agreement with experimental results. To achieve dependable CFD predictions, it is important that the numerical model be based on first principles; models based on correlation may not be fully reliable. Jiri Blazek et al. [3] described a flow solver useful for simulating rocket motors. The reliability of the code is of utmost importance for the analysis of flow in nozzles. Bogdon-Alexandru Belega et al. [4] used Fluent 6.3 for the analysis of a convergent-divergent nozzle, with GAMBIT 2.4 software for numerical modeling and mesh generation. Nathan Spotts et al. [5] used Metacomp CFD++ software to study compressible flow through convergent conical nozzles; the results compare well with experimental data.
Laura et al. [6] used Hopsan, a multi-domain simulation software, to analyse the functioning of a sounding rocket. The results obtained are comparable with real performance, and the simulations were further useful for improving performance and increasing the altitude a rocket can reach. Supersonic exhaust diffusers have also been analyzed using CFD. M. Srinivasa Rao et al. [7] performed numerical simulations for various rocket chamber pressures; the results compare well with experimental data. Sukanta Roga et al. [8] analyzed a scramjet combustor using CFD and identified important parameters for optimization of the injection system. K. Schomberg et al. [9] used CFD for the design of high-area-ratio nozzle contours using circular arcs; the design offers an improvement in thrust coefficient and a reduction in average length. S. Saha et al. [10] analysed combustion instability in solid rocket motors using CFD; the analysis was validated against motor test data available in the literature.
Specific impulse is one of the important characteristics of rocket propulsion. It is a measure of the fuel efficiency of the rocket, that is, the thrust imparted to the rocket per kilogram of propellant expelled. If two rocket motors have different values of specific impulse, the motor with the higher value is considered more efficient, because it produces more thrust for the same amount of propellant. Specific impulse also gives an easy way to size a motor during preliminary analysis: the rocket weight defines the required thrust, and dividing the required thrust by the specific impulse tells us how much weight flow of propellant the motor must produce. This information determines the physical size of the motor. Accurate prediction of specific impulse (I sp ) is therefore important for the design of a solid rocket motor.
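Using the nomenclature listed earlier, the standard one-dimensional thrust equation F = ṁV2 + (P2 − P3)A2 and the definition Isp = F/(ṁ g0) can be written down directly, together with the preliminary-sizing step described above (required propellant weight flow = required thrust / Isp). The sketch below is illustrative: A2 roughly corresponds to the 317.7 mm exit diameter quoted later in this paper, while ṁ, V2 and the target thrust are assumed placeholder values.

```python
# Thrust and specific impulse from the nomenclature above:
#   F   = mdot * V2 + (P2 - P3) * A2      (momentum thrust + pressure thrust)
#   Isp = F / (mdot * g0)
# Input values are illustrative placeholders, except A2, which roughly matches
# the 317.7 mm exit diameter of the test motor.
G0 = 9.80665  # m/s^2

def thrust(mdot, v2, p2, p3, a2):
    return mdot * v2 + (p2 - p3) * a2

def specific_impulse(force, mdot):
    return force / (mdot * G0)

mdot, v2 = 8.0, 2800.0             # kg/s, m/s (assumed)
p2, p3, a2 = 3.39e3, 0.0, 0.0793   # Pa, Pa (vacuum), m^2
F = thrust(mdot, v2, p2, p3, a2)
isp = specific_impulse(F, mdot)
print(f"F = {F/1e3:.1f} kN, Isp = {isp:.1f} s")

# Preliminary sizing: required propellant weight flow for a target thrust.
F_required = 30e3                  # N (assumed)
weight_flow = F_required / isp     # N/s of propellant weight
print(f"required weight flow = {weight_flow:.1f} N/s ({weight_flow / G0:.2f} kg/s)")
```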
OBJECTIVE
To perform a computational study of the rocket motor nozzle using ANSYS Fluent 14.0 to understand the flow losses and compare the results with published data.
To understand the flow losses involved in solid rocket propulsion and predict the vacuum specific impulse.
METHODOLOGY FOR CFD PREDICTIONS FOR ROCKET PROPULSION
The work flow for the CFD predictions for rocket propulsion is given below:
4.1 Selection of CFD Software
In the present paper, the computational study is performed on the rocket motor nozzle using ANSYS Fluent 14.0 to understand the flow losses involved in the nozzle and to predict the thrust and vacuum specific impulse I sp . The code solves the compressible Navier-Stokes equations using the finite volume method.
Input Parameters
The rocket motor used for testing had an overall length of 720 mm and outer diameter of 198 mm. The throat diameter is 41 mm and nozzle exit diameter is 317.7 mm. The motor had a propellant weight of 23.345 kg and operational pressure of 7.34 MPa.
4.3.1 Geometric Modelling. Using the coordinates as given in ref. [5], a 2D axisymmetric geometric model was created using AutoCAD software. The model is shown in Figure 1.
Run for Chosen Number of Iterations and Validation.
For running the iterations, the residual for each flow equation was set at 10⁻¹⁰. Initially, 10 iterations were run for hybrid initialization. Then multiples of 500 iterations were run until the residual condition was met. It was noticed that there was no further change after 3700 iterations, and the residuals were found to be within acceptable limits. The residual plots are shown in Figure 2. As shown in Figure 3, the boundary condition defined at the chamber inlet is 7.34 MPa. The CFD predicted the outlet pressure at the nozzle exit as 3.39 kPa. This is because of gas expansion in the nozzle. The expansion is shock free, and expansion waves are noticed at the start of the nozzle divergent contour.
Temperature Contour Plot.
The boundary condition for gas temperature defined at the chamber inlet is 3410 K. The gas temperature dropped from 3410 K to 938 K at the nozzle exit. This is because of gas expansion in the nozzle: the kinetic energy required for expansion comes from the thermal energy of the gas, so the temperature drops. Figure 4 shows the variation in temperature along the length of the nozzle. The propulsion characteristics predicted through CFD are compared with the experimental data; the comparison is given in Table 2. The I sp predicted using the model developed in Fluent is 305.1 s and matches the experimental data very closely, with the results within 3% accuracy. The pressure, velocity, temperature, Mach number and density plots discussed above follow the trend expected in an isentropic nozzle expansion. Thus it can be safely said that the model developed is validated and the results can be used for design purposes.
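The reported pressure and temperature drops can be cross-checked against ideal isentropic expansion, T2/T1 = (P2/P1)^((γ − 1)/γ). With a typical ratio of specific heats for solid-propellant combustion gases (γ ≈ 1.2, an assumed value, since γ is not stated here) the quoted exit temperature is reproduced to within roughly 1%:

```python
# Isentropic cross-check of the CFD exit temperature:
#   T2 / T1 = (P2 / P1) ** ((gamma - 1) / gamma)
# gamma = 1.2 is an assumed, typical value for solid-propellant combustion
# products; it is not stated in the paper.
gamma = 1.2
p1, p2 = 7.34e6, 3.39e3   # chamber and exit pressures from the text (Pa)
t1 = 3410.0               # chamber temperature from the text (K)

t2 = t1 * (p2 / p1) ** ((gamma - 1) / gamma)
print(f"isentropic exit temperature ~ {t2:.0f} K (CFD: 938 K)")
```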
CONCLUSIONS
CFD is a useful methodology for predicting rocket motor and nozzle performance. The current project reveals that the accuracy of the prediction is better than 3%, which makes it reliable for designers, scientists and the engineering community. The CFD results can be used to accurately predict other parameters such as temperature, pressure, velocity and density at any point in the nozzle. These data will help designers select appropriate materials for nozzle fabrication. Flow features like expansion waves and oblique shocks occurring inside the nozzle can be captured accurately, which will help the designer produce a divergent contour free of flow irregularities and flow separation. There is future scope for improving the accuracy by using 3D simulations, which can be taken up as further work upon availability of computational hardware. | 2020-12-24T09:04:31.157Z | 2020-12-23T00:00:00.000 | {
"year": 2020,
"sha1": "99a97753d301c25dae98fb0714744eb7debfe79f",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/998/1/012014",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "00e04b0f353c88c3b4cceed55224097c29404fa9",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Physics"
]
} |
238703230 | pes2o/s2orc | v3-fos-license | Metabolomic profiling of Burkholderia cenocepacia in synthetic cystic fibrosis sputum medium reveals nutrient environment-specific production of virulence factors
Infections by Burkholderia cenocepacia lead to life-threatening disease in immunocompromised individuals, including those living with cystic fibrosis (CF). While genetic variation in various B. cenocepacia strains has been reported, it remains unclear how the chemical environment of CF lung influences the production of small molecule virulence factors by these strains. Here we compare metabolomes of three clinical B. cenocepacia strains in synthetic CF sputum medium (SCFM2) and in a routine laboratory medium (LB), in the presence and absence of the antibiotic trimethoprim. Using a mass spectrometry-based untargeted metabolomics approach, we identify several compound classes which are differentially produced in SCFM2 compared to LB media, including siderophores, antimicrobials, quorum sensing signals, and various lipids. Furthermore, we describe that specific metabolites are induced in the presence of the antibiotic trimethoprim only in SCFM2 when compared to LB. Herein, C13-acyl-homoserine lactone, a quorum sensing signal previously not known to be produced by B. cenocepacia as well as pyochelin-type siderophores were exclusively detected during growth in SCFM2 in the presence of trimethoprim. The comparative metabolomics approach described in this study provides insight into environment-dependent production of secondary metabolites by B. cenocepacia strains and suggests future work which could identify personalized strain-specific regulatory mechanisms involved in production of secondary metabolites. Investigations into whether antibiotics with different mechanisms of action induce similar metabolic alterations will inform development of combination treatments aimed at effective clearance of Burkholderia spp. pathogens.
Growth in standard laboratory media such as LB is inadequate for modeling bacterial physiology during infection, as nutrient availability can vary markedly between infection sites and standard growth media. To overcome this challenge, the previously developed SCFM2 was used to model the physical and chemical environment of human sputum from patients with cystic fibrosis. In this study, we employed an untargeted metabolomic approach to compare the metabolomes of three different strains of B. cenocepacia, namely C5424, K56-2, and J2315, when cultured in SCFM2 and LB in triplicate (Fig. 1a). These strains are clonal and are associated with the highly transmissible, epidemic ET12 lineage of B. cenocepacia 19 . In addition, comparative metabolomic analysis was carried out in the presence and absence of trimethoprim, which is an antibiotic used clinically for treatment of Burkholderia infections 40 . This antibiotic is known to upregulate biosynthetic pathways involved in production of secondary metabolites in B. thailandensis and in the selected strains of B. cenocepacia in LB, but its effect has not been investigated in SCFM2 [41][42][43] . In order to capture a diverse range of compound classes, extractions were performed on bacterial cultures using both liquid-liquid (with ethyl acetate, EtOAc) and solid-phase extraction (SPE) methods. Extracts were then analyzed using high-resolution tandem mass spectrometry coupled with ultra-high-performance liquid chromatography (UHPLC-HRMS/MS). Metabolite features representing analytes detected at a unique m/z and retention time were extracted, aligned, and quantified using the open-source MZmine2 software, and feature-based molecular networking was performed (Fig. 1b) 44,45 .
We first compared the metabolome of each strain irrespective of media type, extraction method, and exposure to antibiotic (Fig. 2a). The largest number of unique metabolite features (20.5%) was detected exclusively in the extracts of the strain K56-2, whereas 8.5% unique features were detected in the extracts of the J2315 strain and 1.1% in the extracts of the strain C5424. While 30.2% of features were shared by all three strains, the largest number of features were shared between the strains J2315 and C5424 (31.7%). Among other factors, the similar metabotype of these two strains may be reflective of their ability to produce a brown pigment known as pyomelanin, which is not produced by the strain K56-2 46,47 . Next, we compared the metabolomes acquired using the SPE and EtOAc extraction methods separately at 48 h post-inoculum. This analysis revealed that more metabolite features were exclusively detected in extracts generated using SPE as compared to extracts generated with EtOAc (Fig. 2b). A total of 32.5% of the metabolite features were detected only with the SPE method, while 9.5% of the total features were unique to the EtOAc extraction method (Fig. 2b). Thus, the extraction method employed results in biased metabolomics comparisons when only one type of extraction strategy is employed. Lastly, we generated an UpSet plot to visualize the number of features unique to each strain under different growth conditions as well as exposure to a sub-lethal dose of the antibiotic trimethoprim after subtraction of media background (Fig. 2c) 48 . This analysis revealed that media-specific differences were the largest driver of metabolomic diversity within this study, with 878 features shared between all LB samples and 563 features shared between all SCFM2 samples. In comparison, 513 features were detected in all samples. The UpSet plot also demonstrates that exposure to trimethoprim results in a unique metabolomic response by the K56-2 strain, with 362 metabolomic features exclusively detected in SCFM2 samples, 59 exclusive to LB samples, and 44 detected in both media types. Interestingly, 456 metabolomic features were uniquely detected in all LB samples except for K56-2 with trimethoprim and 37 in all SCFM2 samples except for K56-2 with trimethoprim. While our prior studies have shown that K56-2 exhibits a unique metabolomic response to trimethoprim, our analysis demonstrates that this response is even more apparent in an environment representative of CF sputum 43 . Another pattern observed using the UpSet plot highlighted that many features are uniquely detected as shared between the pigmented J2315 and C5424 strains, with 203 features uniquely detected in LB, 90 in SCFM2, and 93 across both media types. An additional 59 features were uniquely detected in C5424 and J2315 strains cultured in LB and SCFM2 in the presence of trimethoprim, hinting at a distinct response to trimethoprim that is associated with pyomelanin production. Such responses are hallmarks of the strain-specific phenotypes of Burkholderia observed while growing under conditions encountered during infection [49][50][51][52] . Future investigations into phenotypic differences as well as differences in gene expression via transcriptomics will provide insights into the biochemical underpinnings driving these observations. Specific metabolites underlying these personalized chemotypes are discussed below.
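The shared and unique feature counts summarized above come from intersecting presence/absence calls across conditions; a minimal sketch of that bookkeeping on a hypothetical feature table (rows = features, columns = condition labels) is shown below. The column names and the toy table are illustrative, not the actual MZmine2 export from this study.

```python
# Sketch of the set intersections behind an UpSet-style summary: for each
# feature (row), record which conditions it was detected in, then count
# features per unique detection pattern.  Toy data, not the study's table.
import pandas as pd

presence = pd.DataFrame(
    {
        "LB_K56-2": [1, 1, 0, 1],
        "LB_J2315": [1, 1, 0, 0],
        "SCFM2_K56-2": [1, 0, 1, 1],
        "SCFM2_J2315": [1, 0, 1, 0],
    },
    index=["feat_1", "feat_2", "feat_3", "feat_4"],
).astype(bool)

# Detection pattern for each feature, e.g. ("LB_K56-2", "SCFM2_K56-2")
combo = presence.apply(lambda r: tuple(r.index[r]), axis=1)
print(combo.value_counts())   # number of features per detection pattern
```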
Differences in siderophore production between LB and SCFM2 cultures. Siderophores are compounds secreted by bacteria to acquire iron from the surrounding environment, and were among the metabolites which were differentially produced by B. cenocepacia strains in LB and SCFM2 in this study. In healthy mammalian hosts, the pool of free iron is limited due to poor solubility of iron in its ferric state under physiological conditions, and because the majority of iron is either located in intracellular compartments or bound by host proteins such as hemoglobin, transferrin, lactoferrin, and ferritin 53,54 . In contrast, levels of both free and ferritin-bound iron are higher in CF sputum compared to sputum from healthy hosts 55,56 . Since this cofactor is essential for many important biological processes, iron acquisition is required for survival in the host environment and can influence microbe-microbe interactions 54,57 . Siderophores which are known to be produced by Burkholderia include pyochelin, ornibactins, salicylic acid, and cepabactin 43,54,58 . Among these, B. cenocepacia have been shown to produce ornibactin and pyochelin 54 . In our study, various structural analogs of pyochelin were detected exclusively in SCFM2 in the presence of trimethoprim (Fig. 3). Pyochelin itself was not detected in this study, despite prior reports of low-level production by B. cenocepacia. These prior reports did not use mass spectrometry or NMR to report pyochelin production, but relied on fluorescence-based thin layer chromatography 59 .
Using MS n analysis, we report production of a methylated derivative of pyochelin rather than pyochelin by the B. cenocepacia strains used in this study, as described below. The feature with m/z 339.083 had a database annotation as a pyochelin methyl ester in GNPS (Supplementary Fig. S1a) 60 . However, pyochelin methyl ester has only been reported as a synthetic product generated to facilitate NMR characterization, and there is no available biosynthetic evidence that would support methylation on the carboxylic acid to produce an ester 61 . This feature was detected exclusively in cultures grown in SCFM2 supplemented with trimethoprim in both the K56-2 and C5424 strains, albeit with lower levels observed in C5424 (Fig. 3a). Comparisons with the GNPS library MS 2 spectra of pyochelin revealed that the fragment peaks with m/z 120.045, 180.048, and 190.032 are shared between the two molecules while others were shifted by 14.015 Da (-CH 2 ), supporting the annotation as a methylated analog of pyochelin (Supplementary Fig. S1b). A structure search of pyochelin in SciFinder revealed thiazostatin as a potential candidate. Thiazostatin A/B (1) are previously reported stereoisomeric natural products that are related to pyochelin by an additional C4″-methylation of the thiazolidine ring (Fig. 3a) 62,63 . This thiazolidine C4″-methylation has also been observed in structurally homologous metabolites including isopyochelin, watasemycin, and yersiniabactin [64][65][66] . By MS 2 analysis alone, the location of the methylation cannot be unambiguously determined, although the mass shift of the fragment ion with m/z 146.027 in the MS 2 spectrum of pyochelin to m/z 160.043 in our unknown metabolite's MS 2 spectrum suggests it is found on the terminal thiazolidine ring. To confirm the position of the methyl group, we conducted MS 3 analysis on the fragment ion with m/z 186.058. This analysis revealed that fragmentation of 186.058 yields an MS 3 ion with m/z 158.063, which indicates methylation on the thiazolidine ring rather than at the carboxylic acid (Supplementary Fig. S2). As this detected metabolite is likely produced by the same biosynthetic gene cluster as pyochelin, it is likely that the stereochemistry is also the same. Thus, this feature is putatively annotated as enantiothiazostatin, labeled in Fig. 3 as methylated. Thiazostatin A/B have been found to display antioxidant activity, and further screening is necessary to determine if the detected compound possesses additional bioactivities 62 . Using MASST through the GNPS platform, we searched the MS 2 spectrum of this compound against all public spectral datasets (Supplementary Fig. S3) 67 . This MASST search found dataset matches in extracts of Pseudomonas spp. grown in vitro, as well as in datasets analyzing the metabolomes of humans with various inflammatory diseases (including CF, diabetes, irritable bowel syndrome, rheumatoid arthritis, and HIV). Detection of this molecule in humans with inflammatory disease raises the possibility that this molecule may be important in influencing microbiome structure in the host, although further studies will be needed to explore whether this observation is truly associated with any biological significance. Another metabolite with m/z 307.021 was detected exclusively in SCFM2 medium extracts in both the B. cenocepacia strains K56-2 and J2315.
In our molecular network, this feature matched with 2′-(2-hydroxyphenyl)-4′-thiazolyl-2,4-thiazolinyl-4-carboxylic acid (HPTzTn-COOH, 2) in the GNPS spectral library, which was further verified by manually comparing experimental MS 2 spectra with previously published MS 2 spectra (Fig. 3a, Supplementary Fig. S4a) 68 . Like pyochelin, HPTzTn-COOH is a siderophore that is dependent on salicylic acid and cysteine as biosynthetic precursors and is capable of chelating Fe 3+ in addition to other metal ions including Al 3+ , Ni 2+ , and Ca 2+68,69 . HPTzTn-COOH was detected exclusively in SCFM2 medium, and primarily in the K56-2 strain, although also at low levels in J2315 (Fig. 3a). This metabolite exhibited increased production in the presence of trimethoprim for the K56-2 strain and was not detected in the absence of trimethoprim for the J2315 strain.
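The annotation logic used above rests on simple exact-mass arithmetic: fragment ions shared between the unknown and the pyochelin library spectrum are unchanged, while others shift by 14.0157 Da (one CH2), pointing to a methylated analog. A small sketch of that comparison follows; the matching tolerance and the full peak lists are illustrative (only the m/z values quoted in the text are taken from the study).

```python
# Compare two MS2 fragment lists and flag peaks that are shared versus peaks
# shifted by one CH2 (14.0157 Da), the reasoning used above to support a
# methylated-pyochelin annotation.  Peak lists and tolerance are illustrative.
CH2 = 14.0157
TOL = 0.005  # Da, assumed matching tolerance

pyochelin_frags = [120.045, 146.027, 180.048, 190.032, 172.043]  # 172.043 hypothetical
unknown_frags   = [120.045, 160.043, 180.048, 190.032, 186.058]

def close(a, b, tol=TOL):
    return abs(a - b) <= tol

for frag in unknown_frags:
    if any(close(frag, ref) for ref in pyochelin_frags):
        label = "shared with pyochelin"
    elif any(close(frag, ref + CH2) for ref in pyochelin_frags):
        label = "pyochelin fragment + CH2"
    else:
        label = "unmatched"
    print(f"{frag:9.3f}  {label}")
```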
Next, a feature with m/z 222.022 was detected exclusively in K56-2 cultures grown in SCFM2 medium in the presence of trimethoprim (Fig. 3b, Supplementary Fig. S4b). This feature was annotated as aeruginoic acid (3), which is a shunt product in the pyochelin biosynthesis pathway observed in Pseudomonas and Burkholderia spp. 70,71 . Aeruginoic acid is the oxidized form of aeruginaldehyde (also known as the integrated quorum sensing signal, or IQS), which has been proposed as a "fourth QS molecule" in P. aeruginosa, although this claim has been disputed 72,73 . Due to the presence of the reactive aldehyde moiety in aeruginaldehyde, it reacts with complex natural products such as malleonitrone or mindapyrrole B 74,75 . Interestingly, pyochelin has been shown to spontaneously undergo cleavage and subsequent transformation into aeruginaldehyde when incubated in buffer solution at 30 °C 74 . Another feature, detected under the same conditions as aeruginoic acid, was annotated as aerugine (4) (Fig. 3b, Supplementary Fig. S4c). Aerugine has been previously isolated from Pseudomonas fluorescens and exhibits selective antifungal activity 76 . Recent work by Kaplan et al. has indicated that aeruginaldehyde, aerugine, and aeruginoic acid have iron-binding activity, with aeruginoic acid binding to Fe 3+ with a 1:1 ratio, aeruginaldehyde binding with a 2:1 ratio, and aerugine binding with a 3:1 ratio 71 . Unlike pyochelin, aeruginoic acid has a specific affinity for iron compared to other biologically relevant metals 71 . The largest abundance of methylated pyochelin is observed in the strain K56-2, which is likely why these intermediates are also detected in the strain K56-2.
In contrast to metabolites from the pyochelin pathway described above, the ornibactin class of siderophores was detected in the culture extracts of LB medium and not detected in culture extracts of SCFM2 ( Supplementary Fig. S5). The SCFM2 medium is prepared by adding 3.60 µM iron in the form of Fe 3 SO 4 , which was observed to be the average iron concentration present in expectorated sputum collected from CF patients 23 . Recently, it has been reported that commercial sources of mucin can be contaminating sources of iron that lead to altered siderophore production in P. aeruginosa cultured in SCFM2 22 . To determine whether the concentration of iron might account for the differential production of siderophores observed between SCFM2 and LB media, inductively coupled plasma mass spectrometry (ICP-MS) was performed on media aliquots. ICP-MS analysis revealed that SCFM2 contained a mean iron concentration of 5.25 µM (standard deviation of 0.52 µM) while LB contained a similar mean concentration of 4.73 µM (standard deviation of 0.62 µM) ( Supplementary Fig. S6). Therefore, factors other than iron availability likely play a role in the expression of the pyochelin and ornibactin biosynthetic gene clusters 11,25 . QS systems have been previously implicated in regulating production of both pyochelin and ornibactin, with the CepR transcriptional regulator repressing production of ornibactin and the CepR2 regulator activating production of pyochelin in B. cenocepacia H111 77 . In our experiment, production of siderophores was observed to be dependent on strain and nutritional environment. The production of metabolites from the pyochelin pathway was further induced by the antibiotic trimethoprim, while ornibactins were not. Thus, a specific chemical cue in SCFM2 in presence of trimethoprim might play a role in induction of pyochelin production under these conditions and warrants detailed investigation in the future with knockout strains that lack QS circuitry. This observation is noteworthy as pyochelin production by B. cepacia was previously suggested to be correlated with morbidity and mortality in patients with CF 78 . Mechanistic investigations into selective induction of siderophore biosynthesis pathways are critical to understanding their relevance to infections, and as such require further inquiry.
N-acyl-homoserine lactone production in LB compared to SCFM2. QS mediates bacterial response to changing environmental conditions through cell-density dependent global changes in gene expression 79,80 . Burkholderia can utilize QS to coordinate various metabolic processes and modulate cellular phenotypes such as swarming, aggregation, spatial structuring, and biofilm formation 81,82 . QS also plays an important role in infection by regulating genes involved in virulence, and so establishing how the external environment influences production of QS signals is important to understanding their role in pathogenesis [83][84][85] . The CepIR and CciIR QS systems have been described in B. cenocepacia strains, which primarily produce and sense N-octanoyl-homoserine lactone (C8-AHL, 5) and N-hexanoyl-homoserine lactone (C6-AHL), respectively. In addition, an orphan LuxR-type regulator called CepR2, which is antagonized by C8-AHL, is also present in these strains 77,[86][87][88] . The gene for the orphan CepR2 regulator lacks an adjacent gene required for synthesis of a cognate N-acyl-homoserine lactone. In a previous untargeted metabolomic experiment, we observed production of a wide diversity of AHLs by Burkholderia spp. grown in LB medium, as well as their corresponding acyl-homoserine products formed by hydrolysis of the lactone ring (referred to hereafter as "hydrolyzed AHLs") 43 . In this study, we queried whether these signals are differentially detected in SCFM2 compared to LB media. The C8-AHL (m/z 228.160, 5) was detected in both LB and SCFM2 media, and its hydrolyzed form (6) was detected in LB medium alone (Fig. 4). Additionally, we detected hydrolyzed C13-AHL (7), hydrolyzed C13-AHL:1db (8), the sodium adduct of hydrolyzed C13-AHL (9), and the sodium adduct of hydrolyzed 3-OH C13-AHL (10) exclusively in SCFM2 when trimethoprim was present (Fig. 4). Thus, the production of these C13-AHLs was induced by trimethoprim only in SCFM2. Detection of C8-AHL, hydrolyzed C8-AHL and hydrolyzed C13-AHL was confirmed using commercial AHL standards. These AHL standards were treated with sodium hydroxide to promote hydrolysis of the lactone ring to verify the detection of hydrolyzed AHLs (Supplementary Fig. S7). Naturally produced AHLs with an odd number of carbons in the acyl sidechain are relatively rare, and their functions are not well-characterized 89 . In a previous untargeted metabolomics study of 10 different Burkholderia strains grown in LB medium, we detected hydrolyzed 3-oxo-C13-AHL:1db exclusively in extracts of B. thailandensis E264 43 . To our knowledge, the present study represents the first description of C13 AHLs being produced by B. cenocepacia. Further studies will be needed to explore the function and biochemical basis for production of C13-AHLs by B. cenocepacia in SCFM2 medium, and identify the mechanism by which trimethoprim upregulates the production of this AHL.
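Detecting the "hydrolyzed AHLs" discussed above amounts to looking for the intact lactone mass plus one water (ring opening adds 18.0106 Da); a small sketch of that mass arithmetic, using the C8-AHL [M+H]+ value quoted in the text and leaving the other homologs as illustrative placeholders:

```python
# Expected [M+H]+ of a hydrolyzed AHL = [M+H]+ of the intact lactone + H2O,
# since lactone ring opening adds one water (18.0106 Da).  The C8-AHL m/z is
# the value quoted in the text; other homologs are illustrative placeholders.
H2O = 18.0106

ahl_mh = {
    "C8-AHL": 228.160,   # from the text
    # "C13-AHL": ...,    # illustrative slot for other homologs
}

for name, mz in ahl_mh.items():
    print(f"hydrolyzed {name}: expected [M+H]+ ~ {mz + H2O:.3f}")
```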
Fragin and pyrazine secondary metabolites. Fragin (11) is a diazeniumdiolate metallophore with antifungal activity that is produced by Burkholderia and Pseudomonas spp. 90 . In a previous untargeted metabolomics study, we reported significantly increased production of fragin and its structural analogs in B. cenocepacia K56-2 cultures grown in LB medium when supplemented with the antibiotic trimethoprim 43 . Fragin production was only observed in the K56-2 strain, despite the fact that the ham gene cluster responsible for fragin biosynthesis is identical in the closely related J2315 and C5424 strains. This result highlighted that even genetically similar Burkholderia strains can exhibit markedly different responses to external stimuli, such as trimethoprim exposure. In the current study, fragin was detected in both media conditions, but the majority of other nodes in the cluster (11/17) were detected exclusively in either LB or SCFM2 media (Fig. 5a). These analogs differ in the length of the acyl group added by the HamF enzyme, likely a result of differential availability of compounds containing variable acyl chain lengths which act as substrates for HamF. The MS 2 spectra of these analogs showed a characteristic fragment corresponding to the loss of the NO group (29.998 Da), which was also observed in the MS 1 spectrum as an in-source fragment 43 . Fragin analogs with differential production in the two media types include nodes with m/z 302.243 (and the corresponding in-source fragment with m/z 272.246), 316.223 (in-source fragment with m/z 286.225), and 318.239 (in-source fragment with m/z 288.240) (Fig. 5a). We discovered another cluster in our molecular network containing several molecules which, like fragin, are exclusively detected in K56-2 samples and show increased production in the presence of trimethoprim (Fig. 5b). These metabolites were reported by our group in a previous study as unknown metabolites 43 . Here we employed the recently developed CANOPUS tool to classify these unknown compounds into ClassyFire chemical classes, leading to their annotation as pyrazines 91,92 . This information led us to conduct a literature search of pyrazines produced by Burkholderia, enabling annotation of these metabolites as pyrazine N-oxides (PNOs), which was verified using standards provided by Li and colleagues (Supplementary Figs. S8, S9) 93 . The expression of the pvfB and pvfC genes in the animal pathogen Pseudomonas entomophila L48 and the plant pathogen Pseudomonas syringae pv. syringae UMAF0158 was previously shown to lead to production of a family of PNOs, namely PNO B (12, m/z 181.135), PNO A (13, m/z 197.126), and dPNO (14, m/z 199.145) 93 . The pvfB and pvfC genes are homologous to the Burkholderia genes hamC and hamD present in the biosynthetic gene cluster of fragin. Disruption of the pvfA-D cluster in the animal pathogen Pseudomonas entomophila L48 and the plant pathogen Pseudomonas syringae pv. syringae UMAF0158 was shown to significantly reduce the virulence of these strains 93 . Subsequent studies found that the pvf gene cluster is involved in synthesis of a signaling molecule which regulates production of small molecule virulence factors such as monalysin in P. entomophila and mangotoxin in P. syringae [94][95][96] . Both dPNO and PNO B appeared in the feature-based molecular network, while PNO A did not due to low abundance. However, a node corresponding to PNO A was observed when the data were analyzed with a classical molecular network (Fig. 5b).
Two additional nodes in this network were putatively annotated as 2-isopropyl-3-methoxypyrazine (m/z 153.102, 15) and 2,5-diisopropylpyrazine (m/z 165.138, 16) based upon available literature of bacterially produced pyrazines (Fig. 5b, Supplementary Fig. S9) 97 . The MS 2 spectra of these related molecules did not have fragment ions in common, and so molecular networking methods alone failed to highlight the structural relatedness of these three compounds. Nevertheless, CANOPUS enabled us to independently predict that each of these compounds was a pyrazine, ultimately leading to their annotation. These observations reveal that the ham gene cluster that is responsible for fragin biosynthesis also leads to production of PNOs in B. cenocepacia. PNO production was also induced by the addition of trimethoprim in both the SCFM2 and LB culture media. Thus, unlike the siderophores described above which showed media-dependent induction by trimethoprim, fragin and PNOs are similarly produced in the presence of the antibiotic trimethoprim in both LB and SCFM2. The production of fragin and pyrazines including PNOs in the presence of trimethoprim in both SCFM2 and LB media highlights that antibiotics can serve as signaling molecules capable of significantly modulating expression of the genes encoding virulence factors across multiple nutritional environments.
Differential detection of lipids in LB and SCFM2. The overall lipid composition of bacterial cells has been reported to be influenced by several local environmental factors, including pH, nutrient availability, oxygen levels, temperature, and buildup of metabolic waste products 98 . Several clusters with hits to lipids in the GNPS spectral database were detected at higher levels in bacterial extracts grown in SCFM2, including hopanoids, phytomonic acid, monounsaturated monoacylglycerols (MGs), and phosphatidylethanolamines (PEs), described below (Fig. 6). Annotation of the hopanoid cluster was performed by first searching for candidate features with m/z calculated for known bacterial hopanoids. Identified candidate features were then verified by comparing experimental MS 2 spectra to spectra previously published in the literature 99,100 . Once annotations were supported through spectral matching, these features were used to further propagate annotations to connected nodes within the molecular network based on mass differences. Hopanoids are pentacyclic triterpenoids frequently found in bacterial membranes 101,102 . These polycyclic lipids are structurally analogous to eukaryotic sterols and are thought to have similar functions in regulating fluidity, permeability, and stabilization of bacterial membranes. Hopanoid biosynthesis has been previously reported in B. cenocepacia, where they are involved in resistance to low pH and antibiotics while also being important for swimming and swarming motility 101,102 . A metabolite feature with m/z 708.540 is annotated as bacteriohopanetetrol (BHT) cyclitol ether (17) and was detected only in extracts of SCFM2 (Fig. 6a, Supplementary Fig. S10a) 99,102 . Production of BHT cyclitol ether has been previously reported in B. cenocepacia, and our predicted annotation was supported by comparing experimental MS 2 spectra with previously published spectra available in the literature 99,102 . In addition, we annotated the node with m/z 706.526 as unsaturated BHT cyclitol ether (18) (Fig. 6a). Both Δ 6 and Δ 11 monounsaturated BHT analogs have been previously characterized in bacteria 100 . Although our mass spectrometry analysis alone is insufficient to pinpoint the location of this unsaturation, we have putatively annotated this feature as bacteriohop-6-enetetrol cyclitol ether (18) since unsaturation at this location has been reported in B. cepacia strains 102,103 . This feature was detected in all three B. cenocepacia strains when grown in SCFM2 medium with the highest intensity observed in the K56-2 strain, but when cultured in LB it was only detected in J2315 cultures at low levels. Annotation of another feature with m/z 724.535 is consistent with a gain of a hydroxyl group from BHT cyclitol ether (17). This feature was exclusively detected in K56-2 strains grown in SCFM2 medium and the associated MS 2 spectra indicate this feature is likely bacteriohopanepentol (BHP) cyclitol ether (19) (Supplementary Fig. S10b). BHP derivatives have been fully characterized only in Acetobacter spp., Azotobacter vinelandii, and Nostoc spp., although our observation is supported by prior evidence suggesting BHP derivatives are produced by B. cepacia strains as well [103][104][105][106] . The feature with m/z 722.520 was detected in all three B.
cenocepacia strains (largely in K56-2 samples) grown in SCFM2 medium, which is presumably bacteriohop-6-enepentol cyclitol ether (20) although the location of unsaturation cannot be unambiguously determined as mentioned above. Next, a feature with m/z 750.551 was detected exclusively in SCFM2 containing cultures with a mass difference of 42.011 (C 2 H 2 O) from BHT cyclitol ether representative of acetylation. The final feature in the hopanoid cluster with m/z of 748.543 is annotated as unsaturated analog of the acetylated BHT cyclitol ether (m/z 750.551), with the unsaturation most likely occurring at the C6 position. Another metabolite in the lipid family is annotated as phytomonic acid (21) based on the spectral match to the MS 2 spectrum in the GNPS database. This annotation was confirmed using a commercial analytical standard (Fig. 6b, Supplementary Fig. S11a). Phytomonic acid was detected in both LB and SCFM2 media, although consistently higher levels were detected in SCFM2 medium for all three strains ( Supplementary Fig. S11b). Production of cyclopropane acids have been previously reported in B. multivorans 107 . Cyclopropane fatty acids such as phytomonic acid are suggested to regulate membrane fluidity and stability and have been shown to increase extracellular survival in acidic or under conditions of high osmolarity 108,109 . These lipids also are major components of the membranes of intracellular pathogens such as Brucella abortus and Mycobacterium tuberculosis 108 (Fig. 6c, Supplementary Fig. S12). For both 19:1 MG (22) and 17:1 MG (24), two unique metabolic features were detected with identical masses and MS 2 patterns but slightly different retention times, possibly corresponding to distinct cis/trans isomers. Burkholderia spp. are known to produce the lipase LipA, which yields monoacylglycerols as a product during degradation of di-and tri-acylglycerides 110,111 . In B. cenocepacia, production of the LipA lipase is induced as part of the CepIR quorum sensing system 86,112 . Due to the ability of monoacylglycerols to destabilize bacterial cell membranes, several have been reported to demonstrate antimicrobial activities which vary based on chain length and degree of unsaturation 113,114 .
Finally, several phosphatidylethanolamines (PEs) were detected in this study, all consistently detected under similar conditions. In Gram-negative bacteria, the inner leaflet of the outer membrane is made up of phospholipids, of which PEs are the major component 115 (Fig. 6d) 116 . Production of 2-OH-PEs has been observed in B. cepacia, and is reported to increase during stress under growth at high temperature 117 .
Differential metabolism of the antibiotic trimethoprim between LB and SCFM2. Several compounds that were differentially detected between growth in SCFM2 and LB were found to be related to trimethoprim as evidenced by MS 2 spectral similarity. As previously described, MS2LDA was used to discover metabolites that contained trimethoprim as a substructure. Thus, being higher in molecular weight than trimethoprim, these metabolites represent biochemical transformations of trimethoprim itself by B. cenocepacia bacteria 43 . In this study, MS2LDA Mass2Motif 543 was annotated as a trimethoprim substructure. We compared the presence of these metabolites in the extracts of bacteria cultured in LB and SCFM2 (Fig. 7). Metabolism of trimethoprim was carried out by only the pigmented strains J2315 and C5424 and not by the non-pigmented K56-2 strain, as reported previously 43 . We generated a volcano plot of molecules containing the trimethoprim motif to visualize compounds which were differentially detected across the two media conditions (Fig. 7a). This analysis revealed that the majority of the trimethoprim metabolites showed significantly higher production in SCFM2 as compared to LB. Structural characterization of these metabolites will provide insight into the pathways used by pigmented Bcc strains to metabolize xenobiotic compounds like the antibiotic trimethoprim and will facilitate future studies exploring how biotransformation will impact antibacterial activity.
Conclusion
It is well established that environmental conditions are major drivers of secondary metabolite production in bacteria. Therefore, selecting an appropriate culture medium for in vitro bacterial growth is crucial for designing metabolomic studies relevant to the biological system of interest. In this study, we characterized the metabolomic profiles of three clinical B. cenocepacia isolates in SCFM2 and LB media. Sublethal concentrations of trimethoprim have previously been shown to induce secondary metabolite production in Burkholderia spp. Thus, culturing was also performed with and without this antibiotic to understand how trimethoprim-induced metabolomic responses vary under CF-relevant environmental conditions 42,43 . We demonstrate considerable metabolic variability between SCFM2 and LB media. In particular, we report that growth in SCFM2 medium upregulates production of pyochelin-type siderophores in the presence of trimethoprim while downregulating production of ornibactin siderophores compared to LB medium. We also note that AHL quorum sensing signals are differentially produced between the two media, with C13-AHLs exclusively detected in SCFM2 in the presence of the antibiotic trimethoprim. Thus, trimethoprim induces both the production of pyochelin-type siderophores and a specific AHL signal in SCFM2 medium and not in LB. Moreover, we observed that metabolism of trimethoprim itself is significantly upregulated in SCFM2 medium compared to LB. Finally, we show that several lipid families (including hopanoids, cyclopropane fatty acids, monoacylglycerols, and phosphatidylethanolamines) exhibit increased production in SCFM2 compared to LB. While further work is needed to fully understand the biochemical mechanisms underlying these findings, this study provides insight into variable production of secondary metabolites by Burkholderia cenocepacia spp. and is important for delineating personalized metabotypes of different strains of the same species in an in vitro model of CF. Even though metabolomics approaches have been significantly advanced in the last decade enabling detection of low-abundance metabolites, it remains a challenge to directly detect and identify a diversity of pathogen-specific chemical signals in clinical samples such as human sputum since host biomass typically vastly exceeds microbial biomass. Thus, interrogation of pathogenic strains isolated from infection sites in simplified model systems capable of inducing chemical signals observed during infection serves as an excellent discovery tool. Characterizing metabolomic profiles of B. cenocepacia strains isolated from infection sites and cultured in SCFM2 medium in the presence of antibiotics can become a valuable approach to guide appropriate use of combination treatments for effective pathogen clearance.
Methods
Bacterial strains. Burkholderia cenocepacia C5424, K56-2, and J2315 clinical isolates were used for culturing during this study 59,118,119 . These strains belong to the highly transmissible and epidemic ET12 lineage, wherein the J2315 and K56-2 strains are clonally related. As reported, the C5424 and J2315 strains produced observable amounts of the pigment pyomelanin whereas the K56-2 strain did not 46 .
Media formulation and growth conditions. SCFM2 medium was prepared as previously described by Turner and co-workers 31 . All culturing was performed following previously established methods 43 . Briefly, B. cenocepacia C5424, J2315, and K56-2 frozen glycerol stocks were streaked onto LB agar plates, and then incubated overnight at 37 °C. Subsequently, 5 mL of LB medium was inoculated using colonies from the overnight incubated plates, which was then incubated overnight at 37 °C while shaking at 250 rpm. For liquid-liquid extractions, EtOAc was added to whole cultures and mixed every 30 min for 2 h before centrifuging at 2000×g for 3 min. The organic layer (top) was aspirated off with a glass pipette and transferred to a scintillation vial before drying in vacuo with a centrifugal evaporator. For solid-phase extractions, cultures were centrifuged at 10,000×g for 10 min and decanted to remove supernatant fluid. SPE columns were initially washed with 10 mL of 100% MeCN, subsequently equilibrated with water (10 mL) and then loaded with the culture supernatant. Analytes were then sequentially eluted from the column using 5 mL each of 20%, 50% and 100% MeCN. These three fractions were pooled together and dried in vacuo with a centrifugal evaporator.
Ultra-high-performance liquid chromatography tandem mass spectrometry (UHPLC-MS/MS) data acquisition. Dried extracts were reconstituted prior to analysis. MS n analyses of compound (1) were performed using a Waters Corporation Cortecs UPLC T3 column (2.1 × 150 mm, 1.6 µm particle size) coupled to a high-resolution accurate mass Orbitrap ID-X tribrid mass spectrometer. The chromatographic method for sample analysis involved elution with water containing 0.1% formic acid (mobile phase A) and MeCN containing 0.1% formic acid (mobile phase B) using the following gradient program: 0 min 95% A; 0.5 min 95% A; 6 min 0% A (curve 7); 9.4 min 0% A; 9.5 min 95% A; 11 min 95% A. The flow rate was set at 0.4 mL/min. The column temperature was set to 40 °C, and the injection volume was 1 µL. The Orbitrap ID-X is a tribrid spectrometer that utilizes quadrupole isolation with dual detectors, an orbitrap and an ion trap, with a maximum resolving power of 500,000 full width at half maximum (FWHM) at m/z 200 and mass accuracy of < 1 ppm. The heated electrospray ionization (HESI) source was operated at a vaporizer temperature of 275 °C, a spray voltage of 3.5 kV, and sheath, auxiliary, and sweep gas flows of 40, 8, and 1 in arbitrary units, respectively. The instrument acquired full MS data in the 100-1000 m/z range in positive ionization mode at 30,000 resolution. MS 3 data were collected with an MS 1 isolation window of 0.8 m/z, HCD activation of 30% ± 50%, an MS 2 isolation window of 1.8 m/z, MS 2 HCD activation of 40%, and product ion detection in the orbitrap at 30,000 resolution.
Data processing, feature-based molecular networking, and feature annotation. The LC-MS/MS data presented in this manuscript are deposited in the online repository MassIVE and publicly available (MSV000087793). The Compass DataAnalysis software was used to convert all the raw spectral files (.d format) to centroided, lock-mass corrected format (.mzXML) for downstream analyses. The converted spectral files were uploaded to MZmine2 (v.2.5.1) for feature detection 44 . Data files were batch processed and filtered using an MS 1 mass detection signal threshold of 5.0 × 10 2 counts. The following parameters were applied: (i) chromatogram builder (minimum time span: 0.05 min; minimum intensity of the highest data point in the chromatogram: 1.5 × 10 3 ; m/z tolerance: 10 ppm); (ii) chromatogram deconvolution (local minimum search, m/z range for MS 2 scan pairing: 0.025 Da; retention time range for MS 2 scan pairing: 0.2 min); (iii) isotopic peaks grouper (m/z tolerance: 10 ppm; retention time tolerance absolute: 0.1 min; maximum charge: +3; monotonic shape: true; representative isotope: most intense); (iv) join aligner (m/z tolerance: 10 ppm; retention time tolerance: 0.1 min); (v) peak finder (intensity tolerance: 10%; retention time tolerance (absolute): 0.1; m/z tolerance: 10 ppm). A table with ion intensities for each feature was exported (.csv format) for statistical analyses and the "Export for SIRIUS" module was used to generate an .mgf file for batch analysis with SIRIUS 4. Additionally, the "Export for GNPS" module was used to convert and export the feature quantification table (.csv format) and the corresponding list of MS 2 spectra linked to MS 1 features (.mgf format) needed to generate a feature-based molecular network. The feature quantitation table and .mgf file were submitted to the Global Natural Products Molecular Networking (GNPS) platform along with a metadata file (.txt format) containing sample information, and a feature-based molecular network was created 45 .
The molecular network and parameters used can be accessed using the following link: https://gnps.ucsd.edu/ProteoSAFe/status.jsp?task=a80b05b26a1747cfa6caf7f9446edd75. Briefly, the data were filtered by removing all MS/MS peaks within ± 17 Da of the precursor m/z. A parent mass tolerance of 0.01 Da and an MS/MS fragment ion tolerance of 0.05 Da were applied to create consensus spectra. The network was created with the edges filtered to have a cosine score above 0.7 and at least 4 matched peaks, and edges connecting two nodes were set to be kept in the network if each of the nodes appeared in each other's respective top 10 most similar nodes. All the mass spectra in the generated network were queried against the spectral libraries available on GNPS. The matched experimental and library spectra were set to have a similarity score above 0.7 and more than 4 matched peaks. This network was exported to Cytoscape for visualization, and nodes appearing in media or solvent blanks were removed for clarity (unless otherwise indicated) 120 . Boxplots were constructed for features of interest using the Plotter Dashboard (v.0.4) available on the GNPS platform. The UpSet plot was generated with the UpSetR package in R 48 . Metabolomic features of interest that remained unknown after library searching were further annotated by first searching through the literature for molecules known to be produced by Burkholderia spp. and developing an in-house database. Our database was curated by inserting structures manipulated using MarvinSketch (v.20.9.0, ChemAxon Ltd.) into a spreadsheet installed with JChem for Excel (v.20.8.0.62, ChemAxon Ltd.). This database was used to identify candidate annotations for features, and these annotations were confirmed by manually inspecting MS2 spectra and either comparing against published MS2 spectra (when available) or against the MS2 spectra of commercial analytical standards. We then applied the MS2LDA workflow to recognize patterns of common MS2 fragments and neutral losses ("Mass2Motifs") corresponding to molecular substructures in our dataset 121 . This MS2LDA analysis is available at the following link: https://gnps.ucsd.edu/ProteoSAFe/status.jsp?task=628622833d0046048dba64e65ced542c. In addition, we performed batch analysis on the dataset using SIRIUS 4 integrated with CSI:FingerID and CANOPUS 91,92,[122][123][124][125] . SIRIUS 4 (v.4.0.1) was employed (using the default settings for a qTOF instrument) to predict molecular formulas for unknown features and develop fragmentation trees for manual annotation of MS2 spectra 122-124 . CSI:FingerID was then utilized to predict molecular properties of unknown features, which were then queried against molecular properties predicted for compounds in all available molecular databases 125 . This in silico tool led to a ranked list of predicted structures, even when published MS2 spectra were not available for these structures. Next, we deployed CANOPUS to classify features into molecular families using ClassyFire, providing biological insight in the absence of structural annotations 91,92 . MASST searching was used to find metabolomic datasets collected by other researchers containing a match to the metabolic feature annotated as enantiothiazostatin (1) 67 . This MASST job and the parameters used are available at: https://gnps.ucsd.edu/ProteoSAFe/result.jsp?task=c5d7b5cc7459443286fed543e5140ca4&view=view_all_datasets_matched.
To determine differences in trimethoprim metabolism between B. cenocepacia strains grown in SCFM2 and LB media, metabolic features containing a trimethoprim substructure (appearing as Mass2Motif 543 in the MS2LDA analysis) were extracted from the feature quantification table (.csv format) generated by MZmine2. These features were then subjected to further statistical analysis on the MetaboAnalyst web server 126 . After uploading the peak intensity table for trimethoprim metabolites, the data were log-transformed and Pareto scaling was applied. The volcano plot comparing trimethoprim-treated against non-trimethoprim cultures was then created using a fold-change threshold of 2.0 and an FDR-adjusted p value threshold of 0.05, as calculated by a t-test.
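The statistics described here (log transformation, a per-feature two-group t-test with Benjamini-Hochberg FDR correction, and a fold-change cut-off of 2) can be reproduced outside MetaboAnalyst with a short script. The sketch below is only an illustration of that calculation, not the MetaboAnalyst implementation itself; the column groupings are hypothetical, and Pareto scaling is omitted because per-feature scaling does not change the t statistic.

```python
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.multitest import multipletests

def volcano_stats(peak_table, group_a_cols, group_b_cols,
                  fc_threshold=2.0, fdr_threshold=0.05):
    """Per-feature log2 fold change and FDR-adjusted Welch t-test p values."""
    # Log-transform intensities (pseudocount avoids log of zero).
    a = np.log10(peak_table[group_a_cols] + 1)
    b = np.log10(peak_table[group_b_cols] + 1)
    # Welch t-test for each feature (row) across the two column groups.
    _, p = stats.ttest_ind(a, b, axis=1, equal_var=False)
    _, p_fdr, _, _ = multipletests(p, method="fdr_bh")
    # Fold change on the raw intensities.
    log2_fc = np.log2((peak_table[group_a_cols].mean(axis=1) + 1) /
                      (peak_table[group_b_cols].mean(axis=1) + 1))
    out = pd.DataFrame({"log2_fc": log2_fc, "p_fdr": p_fdr}, index=peak_table.index)
    out["significant"] = (out["p_fdr"] < fdr_threshold) & \
                         (out["log2_fc"].abs() >= np.log2(fc_threshold))
    return out
```

A feature flagged as significant here would appear in the upper-left or upper-right region of the volcano plot described in the text.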
ICP-MS analysis of LB and SCFM2 media. ICP-MS was performed on aliquots of LB and SCFM2 media on a Perkin Elmer NexION 2000 with an S10 Autosampler at the Emory University Mass Spectrometry Center. Samples were diluted 1:10 and 1:100 with Type I deionized water before aspirating into the plasma using the NexION 2000 peristaltic pump. A kinetic energy discrimination method with helium flow set to 5 arbitrary units was used to detect the 57Fe isotope using a pulse counting detection method. Calibration was done from 0.1 to 1000 ppb. Samples were analyzed in technical triplicate for each media type. To determine the amount of iron in each media sample, the mean and standard deviation of all concentrations measured across both dilutions were calculated and converted from ppb to micromolar units.
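The final unit conversion (ppb of iron to micromolar) is simple enough to show as a worked example. The sketch below treats ppb as µg/L and uses the standard molar mass of iron; the example reading and dilution factor are made up for illustration.

```python
# Convert an ICP-MS iron reading from ppb (µg/L) to micromolar.
FE_MOLAR_MASS = 55.845  # g/mol

def ppb_to_micromolar(ppb_reading, dilution_factor=1):
    """ppb (µg/L) -> µM, correcting for the dilution applied before analysis."""
    ug_per_l = ppb_reading * dilution_factor   # back-calculate to the undiluted medium
    return ug_per_l / FE_MOLAR_MASS            # (µg/L) / (g/mol) == µmol/L

# Example: a 1:10 diluted aliquot reading 50 ppb corresponds to ~8.95 µM iron.
print(ppb_to_micromolar(50, dilution_factor=10))
```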
Data availability
The data obtained in this study has been deposited with the Mass Spectrometry Interactive Virtual Environment (MassIVE) with the identifier MSV000087793 and is accessible at ftp://massive.ucsd.edu/MSV000087793/. | 2021-09-27T18:35:43.909Z | 2021-08-18T00:00:00.000 | {
"year": 2021,
"sha1": "14e8d9596ca771bc0dee84c5c400b0a0230e6235",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-021-00421-4.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d97bb4aaa6842eda63caeb806381ca3b5068e425",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
18773034 | pes2o/s2orc | v3-fos-license | May the Best Meme Win!: New Exploration of Competitive Epidemic Spreading over Arbitrary Multi-Layer Networks
This study extends the SIS epidemic model for single-virus propagation over an arbitrary graph to an SI1SI2S epidemic model of two exclusive, competitive viruses over a two-layer network with generic structure, where the network layers represent the distinct transmission routes of the viruses. We find analytical results determining extinction, mutual exclusion, and coexistence of the viruses by introducing the concepts of survival threshold and winning threshold. Furthermore, we show the possibility of coexistence in SIS-type competitive spreading over multilayer networks. Not only do we rigorously prove a region of coexistence, we quantitate it via the interrelation of central nodes across the network layers. Little to no overlap of the layers' central nodes is the key determinant of coexistence. Specifically, we show coexistence is impossible if the network layers are identical, yet possible if the network layers have distinct dominant eigenvectors and node-degree vectors. For example, we show both analytically and numerically that positive correlation of network layers makes it difficult for a virus to survive, while in a network with negatively correlated layers survival is easier but total removal of the other virus is more difficult. We believe our methodology has great potential for application to broader classes of multi-pathogen spreading over multi-layer and interconnected networks.
I. INTRODUCTION
Multiple viral spreading within a single population involves very rich dynamics [1], attracting substantial attention [2][3][4]. Applications of these types of models extend beyond physiological viruses, as 'virus' may refer to products [5], memes [6], pathogens [7], etc. Multiple virus propagation is a mathematically challenging problem. This problem becomes considerably more complicated if the networks through which the viruses propagate are distinct. Current knowledge of how the hybrid nature of the underlying topology influences the fate of the pathogens is limited. These systems are usually mathematically intractable, hindering conclusive results on the spreading of multiple viruses on multi-layer networks.
Another source of complexity for this problem is the variety of possible interactions among viruses. For example, viruses may be reinforcing [8], weakening [9], exclusive [10], or asymmetric [3,11]. Newman [10] employed bond percolation to study the spread of two SIR viruses in a host population through a single contact network, where one virus takes over the network and then a second virus spreads through the resulting residual network. The paper proved a coexistence threshold above the classical epidemic threshold, indicating the possibility of coexistence in the SIR model. Karrer and Newman [1] extended the work to the more general case where both viruses spread simultaneously. For SIS epidemic spreading, Wang et al. [12] studied competitive viruses and proved that exclusive, competitive SIS viruses cannot coexist in scale-free networks.
Multilayer networks generate interesting results for competitive viral spreading. This type of model has implications in several applications, such as product adoption (e.g., Apple vs. Android smartphones), virus-antidote propagation, meme propagation, and the propagation of opposing opinions. In the competitive spreading scenario, if infected by one virus, a node (individual) cannot be infected by the other virus. Funk and Jansen [2] extended the bond percolation analysis of two competitive viruses to the case of a two-layer network, investigating the effects of layer overlapping. Granell et al. [9] studied the interplay between disease and information co-propagation in a two-layer network consisting of one physical contact network spreading the disease and a virtual overlay network propagating information to stop the disease. They found a meta-critical point for the epidemic onset leading to disease suppression. Importantly, this critical point depends on the awareness dynamics and the overlay network structure. Wei et al. [13] studied SIS spreading of two competitive viruses on an arbitrary two-layer network, deriving sufficient conditions for exponential die-out of both viruses. They introduced a statistical tool, Eigen-Predict, to predict the viral dominance of one competitive virus over the other [4].
In this paper, we address the problem of two competitive viruses propagating in a host population where each virus has a distinct contact network for propagation. In particular, we study an SI 1 SI 2 S model as the simplest extension of the SIS model for single-virus propagation to competitive spreading of two viruses on a two-layer network. From a topological point of view, our study is comprehensive because our multilayer network is allowed to have an arbitrary structure.
Our paper is most relevant to [13] and [4]. Wei et al. conjectured in [13] and numerically observed in [4] that "the meme whose first eigenvalue 1 is larger tends to prevail eventually in the composite networks." We challenge this argument from two aspects: First, the definition of viral dominance in [4] is related to comparison of fractions of nodes infected by each virus. However, when comparing two viruses with two different contact networks, having a larger eigenvalue is not a direct indicator of a higher final fraction of infected nodes. In fact, it is possible to create two distinct network layers where a meme spreading in the population with smaller eigenvalue takes over a much larger fraction of the population. We find the definition of viral dominance presented in [4] cannot be corroborated with eigenvalues without severe restriction to a specific family of networks.
Second, and of paramount interest in this paper, the largest eigenvalue is a graph property 2 of the layers in isolation and thus cannot capture the joint influence of the network topology, unless some sort of symmetry or homogeneity is assumed. In fact, the generation of one layer in their synthetic multi-layer network via the Erdős-Rényi model [4] dictated a homogeneity in their multilayer networks, creating a biased platform for further observations of layer interrelations. Our work addresses network interrelation more accurately than Wei et al. [4] by moving beyond viral aggressiveness in isolation. We derive formulae that more accurately and fully describe the effect of individual network layers and their interrelatedness.
We quantitate interrelations of contact layers in terms of spectral properties of a set of matrices. Therefore, our results are not limited by any homogeneity assumption, degree distribution, or network model arguments. We find analytical results determining extinction, mutual exclusion, and coexistence of the viruses by introducing the concepts of survival threshold and winning threshold. Furthermore, we show the possibility of coexistence in SIS-type competitive spreading over multilayer networks. Not only do we prove a coexistence region rigorously, we quantitate it via the interrelation of central nodes across the network layers. Little or no overlap of the central nodes of each layer is the key determinant of coexistence. We employ a novel multilayer network generation framework to obtain a set of networks in which the individual layers have identical graph properties while the interrelation of the network layers varies. Therefore, any difference in outputs is purely the result of interrelation. This makes ours a paradigmatic contribution shedding light on topological hybridity in multilayer networks. 1 Wei et al. [4] defined the first eigenvalue of a meme as βλ 1 − δ, where β is the infection probability, δ is the curing probability, and λ 1 is the spectral radius of the underlying graph layer. 2 A graph property is any property of a graph which is invariant under relabeling of nodes. Eigenvalues, degree moments, graph diameter, etc. are examples of graph properties.
II. COMPETITIVE EPIDEMICS IN MULTI-LAYER NETWORKS
In this paper, we study a continuous-time SI 1 SI 2 S model of two competitive viruses propagating on a two-layer network, initially proposed in discrete time 3 [13].
A. Multilayer Network Topology
Consider a population of size N among which two viruses propagate, each acquiring a distinct transmission route. Represented mathematically, the network topology is a multi-layer network because two link types are present: one type allows transmission of virus 1 and the other type allows transmission of virus 2. We represent this multilayer network as G(V, E_A, E_B), where V is the set of vertices (nodes) and E_A and E_B are the sets of edges (links). Labeling the vertices from 1 to N, the adjacency matrices A = [a_ij]_{N×N} and B = [b_ij]_{N×N} correspond to the edge sets E_A and E_B, respectively, where a_ij = 1 if node j can transmit virus 1 to node i and a_ij = 0 otherwise, and similarly b_ij = 1 if node j can transmit virus 2 to node i and b_ij = 0 otherwise. We assume the network layers are symmetric, i.e., a_ij = a_ji and b_ij = b_ji. Corresponding to adjacency matrix A, we define d_A as the node-degree vector, i.e., d_{A,i} = Σ_{j=1}^{N} a_ij, λ_1(A) as the largest eigenvalue (or spectral radius) of A, and v_A as the normalized dominant eigenvector, i.e., A v_A = λ_1(A) v_A. We similarly define d_B, λ_1(B), and v_B for adjacency matrix B.
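As an illustration of the layer-level quantities just defined (node-degree vector, spectral radius, and normalized dominant eigenvector), a minimal NumPy sketch is given below; the toy adjacency matrix is arbitrary and not taken from the paper.

```python
import numpy as np

def layer_spectral_data(adj):
    """Degree vector, spectral radius, and normalized dominant eigenvector
    of a symmetric adjacency matrix."""
    adj = np.asarray(adj, dtype=float)
    degrees = adj.sum(axis=1)
    eigvals, eigvecs = np.linalg.eigh(adj)   # symmetric matrix -> real spectrum, ascending order
    lam1 = eigvals[-1]                       # largest eigenvalue == spectral radius for a graph
    v1 = eigvecs[:, -1]
    v1 = np.abs(v1) / np.linalg.norm(v1)     # for a connected graph the dominant eigenvector is one-signed
    return degrees, lam1, v1

# Toy 4-node layer
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
d_A, lam1_A, v_A = layer_spectral_data(A)
```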
Unlike simple, single-layer graphs, multilayer networks have not been studied much in network science. We define simple graphs G_A(V, E_A) and G_B(V, E_B) to refer to each isolated layer of the multilayer network G(V, E_A, E_B). This allows us to reason about the multilayer network G in terms of the properties of the simple graphs G_A and G_B and their interrelation. FIG. 1 shows a schematic of the two-layer network.
B. SI1SI2S Model
The SI 1 SI 2 S model is an extension of continuous-time SIS spreading of a single virus on a simple graph [14,15] to the modeling of competitive viruses on a two-layer network. In this model, each node is either 'Susceptible,' 'I 1 -Infected,' or 'I 2 -Infected' (i.e., infected by virus 1 or 2, respectively), while virus 1 spreads through E_A edges and virus 2 spreads through E_B edges.
In this competitive scenario the two viruses are exclusive: a node cannot be infected by virus 1 and virus 2 simultaneously. Consistent with SIS propagation on a single graph (cf. [14,15]), the infection and curing processes for virus 1 and 2 are characterized by (β 1 , δ 1 ) and (β 2 , δ 2 ), respectively. To illustrate, the curing process for I 1 −infected node i is a Poisson process with curing rate δ 1 > 0. The infection process for susceptible node i effectively occurs at rate β 1 Y i (t), where Y i (t) is the number of I 1 −infected neighbors of node i at time t in layer G A . Effective infection rate of a virus, defined as the ratio of the infection rate over the curing rate, measures the expected number of attempts of an infected node to infect its neighbor before recovering, thus quantifying aggressiveness of a virus per contact. Curing and infection processes for virus 2 are similarly described. FIG. 2 depicts a schematic of the SI 1 SI 2 S competitive epidemic spreading model over a two-layer network.
The SI 1 SI 2 S model is essentially a coupled Markov process. For a network with arbitrary structure, this model becomes mathematically intractable due to the exponential explosion of its Markov state-space size [16]. To overcome this issue with coupled Markov processes, closure techniques can be applied, resulting in approximate models with much smaller state-space size, albeit at the expense of accuracy. Specifically, a first-order mean-field-type approximation [16] yields the system of differential equations (1)-(2) for the evolution of the infection probabilities of virus 1 and 2, denoted by p_{1,i} and p_{2,i} for node i, for i ∈ {1, ..., N}, with a state-space size of 2N. This model is an extension of the NIMFA model [14] for SIS spreading on simple graphs.
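A minimal numerical sketch of this kind of mean-field model is given below. The right-hand side assumes the standard NIMFA-type form for exclusive competition, in which the susceptible probability (1 − p_{1,i} − p_{2,i}) multiplies each layer's infection pressure; this functional form, the toy random layers, and the parameter values are illustrative assumptions rather than a verbatim transcription of equations (1)-(2).

```python
import numpy as np
from scipy.integrate import solve_ivp

def si1si2s_rhs(t, state, A, B, beta1, delta1, beta2, delta2):
    """Mean-field SI1SI2S dynamics; state = [p1_1..p1_N, p2_1..p2_N]."""
    n = A.shape[0]
    p1, p2 = state[:n], state[n:]
    susceptible = 1.0 - p1 - p2
    dp1 = beta1 * susceptible * (A @ p1) - delta1 * p1
    dp2 = beta2 * susceptible * (B @ p2) - delta2 * p2
    return np.concatenate([dp1, dp2])

# Example usage with small random symmetric layers and made-up rates.
rng = np.random.default_rng(0)
n = 50
A = np.triu((rng.random((n, n)) < 0.1).astype(float), 1); A = A + A.T
B = np.triu((rng.random((n, n)) < 0.1).astype(float), 1); B = B + B.T
p0 = np.full(2 * n, 0.01)                       # small initial infection of both viruses
sol = solve_ivp(si1si2s_rhs, (0, 200), p0, args=(A, B, 0.2, 1.0, 0.25, 1.0))
p1_ss = sol.y[:n, -1].mean()                    # steady-state infection fraction of virus 1
```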
Our competitive virus propagation model (1-2) exhibits rich dynamical behavior dependent on epidemic parameters and contact network multi-layer structure.
The values of the effective infection rates τ_1 = β_1/δ_1 and τ_2 = β_2/δ_2 of virus 1 and 2 yield several possible outcomes for the SI 1 SI 2 S model (1-2). In particular, both viruses may ultimately go extinct, one may completely remove the other, or both may coexist.
C. Problem Statement
Linearization of our SI 1 SI 2 S model (1-2) at the healthy equilibrium (i.e., p_{1,i} = p_{2,i} = 0, i ∈ {1, ..., N}) yields the condition for exponential extinction of both viruses. When τ_1 < 1/λ_1(A) and τ_2 < 1/λ_1(B), any initial infections exponentially die out. In this paper, we refer to this critical value as the no-spreading threshold, because a virus with an effective infection rate below it is too weak to spread in the population even in the absence of any viral competition.
2. Which characteristics of multi-layer network structure allow for coexistence?
These questions pertain to the long-term behavior of the competitive spreading dynamics. To address them, we perform a steady-state analysis of the SI 1 SI 2 S model. Specifically, bifurcation techniques are used to find two critical values, the survival threshold and the winning threshold, which determine whether a virus will survive and whether it can completely remove the other virus. Significantly, we go beyond these threshold conditions and examine the interrelation of network layers. Using eigenvalue perturbation, we find that the interrelations of the dominant eigenvectors and node-degree vectors of the network layers are critical determinants of the ultimate behavior of the competitive viral dynamics.
III. MAIN RESULTS
Given our stated objective to study the long-term behavior of the SI 1 SI 2 S model for competitive viruses, we use bifurcation analysis to study its steady-state behavior. Application of bifurcation analysis to the SIS model of a single virus on a simple graph determines the critical value at which a non-healthy equilibrium emerges [14], yielding a survival threshold for the virus. Interestingly, the no-spreading threshold and the survival threshold coincide for this SIS model. However, we expect these two critical values to be distinct for SI 1 SI 2 S, because a virus may initially spread in an almost entirely susceptible population but then die out from competition with a simultaneous virus having a sufficiently stronger infection rate.
In fact, the survival threshold is larger than the no-spreading threshold, monotonically increasing with the aggressiveness of the other competitive virus. Furthermore, a surviving virus can even be so aggressive as to completely remove the other virus. Consequently, competitive spreading induces an additional threshold concept, the winning threshold, which determines the critical value of the effective infection rate above which a virus prevails as the sole survivor.
The determination of the two thresholds for each virus involves four quantities. We are able to deduce the winning thresholds from the survival thresholds, which then become our sole focus. Furthermore, with no loss of generality, we only derive the survival threshold of virus 1, because the expressions for virus 2 follow by duality.
Unfortunately, any conclusive understanding of the system is hindered by the complex dependence of the survival threshold of one virus on the multilayer network topology and on the aggressiveness of the competing virus. While a complete analytical solution for the survival threshold appears impossible, we characterize possible solutions with explicit analytical expressions. This step is a unique contribution to the current understanding of competitive spreading over multi-layer networks, with solid and quantitative implications for the role of multilayer network topology.
A. Threshold Equations
Bifurcation analysis of the SI 1 SI 2 S model equilibria finds the survival threshold. Our competitive virus propagation model (1-2) yields the equilibrium equations (3)-(4) for i ∈ {1, ..., N}. The healthy equilibrium (i.e., p*_{1,i} = p*_{2,i} = 0, ∀i) is always a solution to the equilibrium equations (3)-(4). Long-term persistence of infection in the population is associated with a non-zero solution of the equilibrium equations [14]. We use bifurcation theory to identify critical values of the effective infection rates τ_1 and τ_2 such that a second equilibrium, aside from the healthy equilibrium, emerges. The critical value for one virus is a function of the effective infection rate of the other virus. Without loss of generality, we determine the survival threshold for virus 1 by finding the critical effective infection rate τ_1c as a function of τ_2.
The above definition of the survival threshold indicates that exactly at the threshold value, p*_{1,i}|_{τ1=τ1c} = 0 and dp*_{1,i}/dτ1|_{τ1=τ1c} > 0 for all i ∈ {1, ..., N}. Taking the derivative of equilibrium equation (3) with respect to τ_1, and defining w_i as dp*_{1,i}/dτ1 at the threshold, we find the survival threshold τ_1c is the value for which a nontrivial solution with w_i > 0 exists in (6), where y_i is the solution of (7), according to equilibrium equation (4). Equation (6) is an eigenvalue problem. Among all possible solutions, only the dominant one is acceptable; according to the Perron-Frobenius theorem, only the dominant eigenvector of the matrix diag{1 − y_i}A has all positive entries, yielding w_i = dp*_{1,i}/dτ1 > 0. The eigenvalue problem (6) gives a mathematical way to find the survival threshold τ_1c, depending on the value of τ_2. Unfortunately, this implicit dependence hinders a clear understanding of the propagation interplay between virus 1 and virus 2.
Finding y_i for all possible values of τ_2, and then finding the threshold value τ_1c from (8), we obtain the survival threshold curve Φ_1(τ_2) for virus 1. This curve divides the (τ_1, τ_2) plane into two regions: one where virus 1 survives and one where virus 1 goes extinct. We can use analogous equations to find the survival threshold curve Φ_2(τ_1) for virus 2. Given τ_2, we can find τ_1c such that for τ_1 > τ_1c, virus 1 can survive.
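The threshold recipe described above can be illustrated numerically: for a given τ_2, first compute the equilibrium y of virus 2 spreading alone on layer B, then take τ_1c as the value at which a positive solution for w emerges, i.e., the reciprocal of the spectral radius of diag(1 − y)A. The sketch below follows that reading of (6)-(8); the fixed-point iteration and the particular matrices used are assumptions of this sketch rather than quotations of the paper's formulas.

```python
import numpy as np

def virus2_equilibrium(B, tau2, iters=2000, tol=1e-10):
    """Fixed-point iteration for the single-virus mean-field equilibrium y on layer B."""
    y = np.full(B.shape[0], 0.5)
    for _ in range(iters):
        s = tau2 * (B @ y)
        y_new = s / (1.0 + s)          # equilibrium condition y_i = tau2*(By)_i / (1 + tau2*(By)_i)
        if np.max(np.abs(y_new - y)) < tol:
            break
        y = y_new
    return y

def survival_threshold_virus1(A, B, tau2):
    """Approximate tau_1c for virus 1 when virus 2 spreads at effective rate tau2."""
    y = virus2_equilibrium(B, tau2)
    M = np.diag(1.0 - y) @ A           # nodes occupied by virus 2 are unavailable to virus 1
    lam = np.max(np.real(np.linalg.eigvals(M)))
    return 1.0 / lam
```

Sweeping tau2 and plotting survival_threshold_virus1(A, B, tau2) traces out a numerical approximation of the curve Φ_1(τ_2).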
B. Characterization of Threshold Curves
A complete analytical solution of the survival threshold curves is not feasible. Instead, we quantitate the interrelations of the contact layers to formulate our analytical assertions, obtaining explicit analytical quantities that give conditions for mutual exclusion and coexistence of the viruses. Our approach finds explicit solutions to (6) and (7) for values of τ_2 close to 1/λ_1(B) and for very large values of τ_2 in order to quantitate the survival threshold curves. Since we know the solution to (7) and the survival threshold value τ_1c at both extremes, we can employ eigenvalue perturbation techniques to find explicit solutions for τ_2 close to 1/λ_1(B) and for τ_2 very large. Results for τ_2 close to 1/λ_1(B) apply where the competitive viruses are non-aggressive, whereas results for very large τ_2 correspond to aggressive competition. The behavior of the competitive spreading process for moderate aggressiveness interpolates between these extreme scenarios of non-aggressive and aggressive propagation.
First, we perform perturbation analysis to find τ_1c for values of τ_2 close to 1/λ_1(B). We know that at τ_2 = 1/λ_1(B), y_i = 0 solves (7), and thus τ_1c = 1/λ_1(A) is the survival threshold according to (7). For values of τ_2 close to 1/λ_1(B), we use an eigenvalue perturbation technique and study the sensitivity of threshold equation (6) with respect to the deviation of τ_2 from 1/λ_1(B). As detailed in the Appendix, we find expression (10), expressing the dependency of the virus 1 survival threshold (τ_1c) on the effective infection rate of virus 2 (τ_2) for values of τ_2 close to 1/λ_1(B). Expression (10) consists of two components: λ_1(B)/λ_1(A), the spectral radius ratio of the network layers in isolation, and a correction term capturing the interrelation of the layers. From (10), the die-out threshold curve Φ_1(τ_2) can be approximated close to (τ_2, τ_1) = (1/λ_1(B), 1/λ_1(A)). Studying threshold equations (6)-(7) for τ_2 → ∞, we find that τ_1c/τ_2|_{τ2→∞} is the inverse of the spectral radius of D_B^{-1}A (see the Appendix for a detailed derivation), expressing the dependency of the virus 1 survival threshold (τ_1c) on the effective infection rate of virus 2 (τ_2) for large values of τ_2. This expression (12) directly highlights the influence of the interrelation of the two layers. Significantly, if λ_1(D_B^{-1}A) is large, this result suggests that the virus 1 survival threshold does not increase significantly with the virus 2 infection rate. Similar arguments about the interpretation of (10) apply to aggressive competitive viruses where τ_1 and τ_2 are relatively large. The main difference in the case of aggressive competitive spreading is that node degree is the determinant of centrality. From (12), the die-out threshold curve Φ_1(τ_2) asymptotically becomes a straight line with slope 1/λ_1(D_B^{-1}A). We prove conditions for coexistence by showing that there is overlap between the regions where the viruses survive.
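The asymptotic quantity λ_1(D_B^{-1}A) is easy to evaluate numerically. In the sketch below, D_B is taken to be the diagonal matrix of layer-B degrees (an assumption about the notation), and both layers are assumed to have no isolated nodes.

```python
import numpy as np

def aggressive_regime_slope(A, B):
    """tau_1c / tau_2 in the large-tau2 limit, read as 1 / lambda_1(D_B^{-1} A)."""
    d_B = B.sum(axis=1)
    M = A / d_B[:, None]               # row-wise division implements D_B^{-1} A (no isolated nodes in B)
    lam = np.max(np.real(np.linalg.eigvals(M)))
    return 1.0 / lam
```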
Theorem 1 In the SI 1 SI 2 S model (1-2) for competitive epidemics over multi-layer networks, if the two network layers G_A and G_B are identical, coexistence is impossible, i.e., a virus with even a slightly larger effective infection rate dominates and completely removes the other virus. Otherwise, if the node-degree vectors of G_A and G_B are not parallel, i.e., d_A ≠ c·d_B, or the dominant eigenvectors of G_A and G_B do not completely overlap, i.e., v_A ≠ v_B, the multi-layer structure of the underlying topology allows a nontrivial coexistence region.
Proof. If G A = G B , then equation (7) suggests τ c1 = τ 2 solves threshold equation (6). Similarly τ 2c = τ 1 , suggesting τ † 1 = τ 2 according to (9), i.e., survival and winning thresholds coincide. Therefore, the virus with even a slightly larger effective infection rate dominates and completely removes the other virus if the two network layers are identical.
In order to show the possibility of coexistence for non-aggressive competitive viruses, we show that the survival regions overlap by proving condition (14). Using expression (10) and its counterpart for dτ_2c/dτ_1 (see the Appendix), we find that condition (14) is always true except for the special case where the dominant eigenvectors of G_A and G_B completely overlap, i.e., v_A = v_B.
In order to show the possibility of coexistence for aggressive competitive viruses, we show that the survival regions overlap by proving condition (15). Using expression (12) and its counterpart for τ_2c/τ_1|_{τ2→∞} (see the Appendix), we find that condition (15) is always true except for the special case where the node-degree vectors of G_A and G_B are parallel, i.e., d_A = c·d_B.
When dominant eigenvectors of G A and G B are not identical, condition (14) indicates non-aggressive viruses can coexist. When propagation of competitive viruses is aggressive, condition (15) indicates viruses can coexist if node-degree vectors of G A and G B are not parallel. However, the rare scenario where G A and G B are not identical and d A = cd B and v A = v B hold simultaneously demands further exploration.
The above theorem and equations (10) and (12) prove the importance of the interrelation of network layers. As will be discussed in the simulation section, one approach capturing only the effect of interrelation is to generate multilayer networks from two graphs G_A and G_B through simply relabeling the vertices of G_B. We thus obtain a set of multilayer networks whose layers have identical graph properties but whose correspondence of nodes in one layer to nodes in the other varies.
In the context of competitive spreading, whether of memes, opinions, or products, the population under study serves as the 'resource' for the competitive entities, relating nicely to the concept of 'competing species' in ecology. The long-term study of competing species in ecology centers on the 'competitive exclusion principle' [17]: two species competing for the same resources cannot coexist indefinitely under identical ecological factors. The species with even the slightest advantage or edge over the other will eventually dominate. Our SI 1 SI 2 S model also predicts that when the network layers are identical, coexistence is not possible. Significantly, different propagation routes break this 'ecological symmetry,' allowing coexistence. Not only have we rigorously proved a coexistence region, we have quantitated this ecological asymmetry via the interrelation of central nodes across the network layers. Little or no overlap of the central nodes of each layer is the key determinant of coexistence. Excitingly, this conclusion nicely relates to 'niche differentiation' in ecology and yet is built upon network-science rigor.
D. Multi-layer Network Metric for Competitive Spreading
Proving coexistence is one of the key contributions of this paper. We go further to define a topological index Γ_s(G) quantifying the possibility of coexistence in a multi-layer network G = (V, E_A, E_B) for the case of non-aggressive spreading, defined in terms of the dominant eigenvectors of the two layers.
Values of Γ_s(G) vary from 0 (corresponding to the case where v_A = v_B) to 1. Values of Γ_s(G) close to zero imply coexistence is rare and any surviving virus is indeed the absolute winner. Γ_s(G) closer to 1 indicates coexistence is very possible on G. Therefore, Γ_s(G) can be used to discuss coexistence of non-aggressive competitive viruses. Similarly, we can define a topological index Γ_l(G) to quantify the possibility of coexistence of aggressive competitive viruses in a multi-layer network, defined in terms of the node-degree vectors of the two layers.
Values of Γ_l(G) vary from 0 (corresponding to the case where d_A = c·d_B) to 1. Values of Γ_l(G) close to zero imply coexistence is rare and any surviving virus is indeed the absolute winner. Γ_l(G) closer to 1 indicates coexistence is very possible on G. Therefore, Γ_l(G) can be used to discuss coexistence of aggressive competitive viruses.
E. Numerical Simulations
Multi-layer network generation: Our objective for the numerical simulations is not only to test our analytical formulae, but also to investigate the predicted effect of cross-layer interrelation on competitive epidemics. This demands a set of two-layer networks whose isolated layers have identical graph properties but whose interrelation differs, hence capturing the pure effect of interrelation. Specifically, in the following numerical simulations, the contact network G_A through which virus 1 propagates is a random geometric graph with N = 1000 nodes, where pairs of nodes less than r_c = √(3 log(N)/(πN)) apart are connected, to ensure connectivity. For the contact graph of virus 2 (G_B), we first generated a scale-free network according to the Barabási-Albert model. We then used a randomized greedy algorithm to associate the nodes of this graph with the nodes of G_A, approaching a certain degree correlation coefficient ρ with G_A; i.e., each iteration step permutes node labels only when the resulting degree correlation coefficient is closer to the desired value. Specifically, we obtained three different permutations for which the generated graphs are negatively (ρ = −0.47), neutrally (ρ = 0), and positively (ρ = 0.48) correlated with G_A. These three graphs have identical graph properties, yet they differ in their relation to G_A. FIG. 4 depicts a graph G_A and three graphs G_B with N = 100 nodes to aid conceptualization.
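The layer-generation procedure can be sketched with NetworkX as below. The greedy swap rule (accept a random label swap only if it moves the degree correlation toward the target) and the sweep budget are illustrative assumptions about the unspecified details of the randomized greedy algorithm; the connectivity radius mirrors the expression given above.

```python
import numpy as np
import networkx as nx

def generate_correlated_layers(n=1000, target_rho=0.48, m_ba=3, sweeps=20000, seed=0):
    rng = np.random.default_rng(seed)
    r_c = np.sqrt(3 * np.log(n) / (np.pi * n))        # connectivity radius for the random geometric layer
    G_A = nx.random_geometric_graph(n, r_c, seed=seed)
    G_B = nx.barabasi_albert_graph(n, m_ba, seed=seed)
    d_A = np.array([G_A.degree(i) for i in range(n)], dtype=float)
    d_B = np.array([G_B.degree(i) for i in range(n)], dtype=float)
    perm = np.arange(n)                               # position i of G_A <- node perm[i] of G_B

    def rho(p):
        return np.corrcoef(d_A, d_B[p])[0, 1]

    current = rho(perm)
    for _ in range(sweeps):
        i, j = rng.integers(0, n, size=2)
        perm[i], perm[j] = perm[j], perm[i]           # propose a label swap
        new = rho(perm)
        if abs(new - target_rho) < abs(current - target_rho):
            current = new                             # keep the swap
        else:
            perm[i], perm[j] = perm[j], perm[i]       # revert
    G_B_relabelled = nx.relabel_nodes(G_B, {int(perm[i]): i for i in range(n)})
    return G_A, G_B_relabelled, current
```

Running this with target_rho set to −0.47, 0, and 0.48 produces three G_B layers that are graph-isomorphic to one another yet differently interrelated with G_A.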
Steady-state infection fraction: When the spreading of a single virus is modeled as SIS, the steady-state infection fraction p̄^ss = (1/N) Σ_i p_i exhibits a threshold phenomenon with respect to the effective infection rate: the steady-state infection fraction p̄^ss is zero for effective infection rates less than a critical value but becomes positive for larger values. When two viruses compete to spread, the steady-state infection fraction p̄^ss_1 = (1/N) Σ_i p_{1,i} of virus 1 in the SI 1 SI 2 S model exhibits a threshold behavior at τ_1 = τ_1c, for a given τ_2. FIG. 5 depicts the steady-state infection fraction curve of virus 1 in the SI 1 SI 2 S competitive spreading model. In this simulation, the effective infection rate of virus 2 is fixed at τ_2 = 6/λ_1(B) and G_B is positively correlated with G_A (ρ = 0.48). In order to obtain a unified form, we normalized the horizontal axis to τ_1 λ_1(A). The steady-state infection fraction of virus 1, p̄^ss_1, is zero for τ_1 ≤ τ_1c ≃ 3/λ_1(A), identifying this range as an extinction region for virus 1, while p̄^ss_1 is positive for τ_1 > τ_1c, indicating survival of virus 1. Interestingly, aside from the survival threshold τ_1c, the winning threshold τ†_1 is also visible: p̄^ss_1 for the competitive scenario (red curve) is exactly similar to the case of single-virus propagation (black curve) for τ_1 > τ†_1 ≃ 6.6/λ_1(A). Hence, this region is identified as the absolute winning range for virus 1. For τ_1 ∈ (τ_1c, τ†_1), virus 1 and virus 2 each persist in the population, marking this range as the coexistence region.
FIG. 4. The contact network G_A through which virus 1 propagates is a random geometric graph where pairs of nodes with a distance less than r_c are connected to each other. For visualization convenience, the number of nodes is N = 100, which is different from the actual N = 1000 used for the numerical simulation results. For the contact graph of virus 2 (G_B), we first generated a scale-free network according to the B-A model, associating the nodes of this graph with the nodes of G_A to achieve a certain degree correlation coefficient with G_A. Specifically, we obtained three different permutations such that the generated graphs are negatively, neutrally, and positively correlated with G_A. These three graphs are identical in isolation and distinct in their interrelation with G_A. The high-degree nodes in the positively correlated G_B (lower right) also have high degree in G_A (upper left), while the high-degree nodes in the negatively correlated G_B (upper right) have low degree in G_A. The uncorrelated G_B (lower left) shows no clear association.
FIG. 6 illustrates the dependence of the steady-state infection fraction curve on network layer interrelation. When the contact network of virus 2 (G_B) is positively correlated with that of virus 1 (G_A), it is more difficult for virus 1 to survive, making the survival threshold τ_1c relatively larger for positively correlated G_B. Negatively correlated contact network layers impede virus 1 from completely suppressing virus 2, making the winning threshold τ†_1 larger for negatively correlated G_B.
FIG. 5. In this simulation, the steady-state infection fraction of virus 1 (p̄^ss_1) is zero for τ_1 ≤ τ_1c ≃ 3/λ_1(A), an extinction region for virus 1. Interestingly, for τ_1 > τ†_1 ≃ 6.6/λ_1(A), p̄^ss_1 for the competitive scenario (red curve) is identical to the case of single-virus propagation (black curve), suggesting extinction of virus 2, hence marking this region as the winning range for virus 1. For τ_1 ∈ (τ_1c, τ†_1), virus 1 and virus 2 both persist in the population, marking this range as the coexistence region.
IV. DISCUSSION AND CONCLUSION
Competitive multi-virus propagation shows very rich behaviors, beyond those of single-virus propagation. This type of modeling is suitable for co-propagation of exclusive entities, for example, opposing opinions about a subject, where people are for, against, or neutral; spreading of a disease through physical contact and viral propagation of an antidote providing absolute immunity to the disease; or market penetration of competitive products like Android versus Apple smartphones. Aside from its potential applications, the problem of competitive spreading over multilayer networks is technically challenging. In particular, compared to single-layer networks, the science of multilayer networks is still in its infancy.
There are yet numerous unknowns about this complex problem.
In this paper, we study the SI 1 SI 2 S model, the simplest extension of the SIS model to competitive spreading over a two-layer network, focusing on long-term behaviors in relation to the multilayer network topology. In brief, the major contributions of this paper are: (a) identification and quantification of extinction, coexistence, and mutual exclusion via the definition of survival thresholds and winning thresholds, (b) proving a region of coexistence and quantitating it through the overlap of the layers' central nodes, (c) developing an explicit approximation formula to globally find threshold values, and (d) proposing a novel multilayer network generation scheme to capture the influence of layer interrelation. We believe our methodology has great potential for application to broader classes of multi-pathogen spreading over multi-layer and interconnected networks.
Coexistence Proofs
Coexistence region for non-aggressive competitive viruses: To investigate the coexistence region for non-aggressive viruses, we show that (14) is true. From (10), we obtain expressions (A.10) and (A.11). Multiplying the sides of (A.10) and (A.11) yields (A.12), proving that (A.9) is true. | 2013-08-30T02:35:07.000Z | 2013-08-22T00:00:00.000 | {
"year": 2013,
"sha1": "b0ab6807b3f3b31cc95c053afa59eee4faa0f673",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "b0ab6807b3f3b31cc95c053afa59eee4faa0f673",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science",
"Physics"
]
} |
226984254 | pes2o/s2orc | v3-fos-license | Clinical characteristics of dry eye with ocular neuropathic pain features: comparison according to the types of sensitization based on the Ocular Pain Assessment Survey
Background To compare the clinical characteristics of dry eye patients with ocular neuropathic pain features according to the types of sensitization based on the Ocular Pain Assessment Survey (OPAS). Methods Cross-sectional study of 33 patients with dry eye and ocular neuropathic pain features. All patients had a comprehensive ophthalmic assessment including detailed history, the intensity and duration of ocular pain, the tear film, ocular surface, and Meibomian gland examination, and OPAS. Patients with < 50% improvement in pain intensity after proparacaine challenge test were assigned to the central-dominant sensitization group (central group) and those with ≥50% improvement were assigned to the peripheral-dominant sensitization group (peripheral group). All variables were compared between the two groups. Results No significant differences were observed in age, sex, underlying diseases, history of ocular surgery, duration of ocular pain, tear film, ocular surface and Meibomian gland parameters (all p > 0.05). Ocular pain and non-ocular pain severity and the percentage of time spent thinking about non-ocular pain were significantly higher in the central group than in the peripheral group (all p < 0.05). Central group complained more commonly of a burning sensation than did the peripheral group (p = 0.01). Conclusions Patients with central-dominant sensitization may experience more intense ocular and non-ocular pain than the others and burning sensation may be a key symptom in those patients.
inflammation can cause peripheral neuronal sensitization, and repeated peripheral nerve injury can lead to central neuronal sensitization [5][6][7][8][9]. Topical anesthetic may be insufficient to alleviate pain in patients with centralized ocular neuropathic pain. Crane et al. [10] have introduced the proparacaine challenge test which can discriminate if there is a centralized component in ocular pain.
There is no gold standard for diagnosing ocular neuropathic pain. Owing to the scarcity of available signs, the diagnosis mainly depends on clinical history, symptoms, and ophthalmologic examination results [11,12]. Many studies have attempted to evaluate patients with ocular neuropathic pain using dry eye-related questionnaires, such as the Ocular Surface Disease Index, Dry Eye Questionnaire, and Dry Eye-Related Quality-of-Life Score [13][14][15]. The Ocular Pain Assessment Survey (OPAS) is a validated questionnaire for ocular pain and also assesses non-ocular pain, quality of life (QoL), aggravating factors, and associated factors [16].
In this study, we collected clinical data and the OPAS questionnaires from three eye centers to investigate the characteristics of patients with DED and ocular neuropathic pain features, the association of ocular neuropathic pain features with pre-existing medical conditions, and the clinical differences between groups classified according to the dominant types of sensitization.
Methods
A multicenter, cross-sectional study was performed between January 2, 2018 and June 30, 2019, at the outpatient departments of three eye centers in Korea, namely, Chonnam National University Hospital, Konyang University Hospital, and Chonbuk National University Hospital. Informed consent was obtained from each patient. Ethical approval was obtained from the ethical committees of all participating hospitals, and the study protocol followed the guidelines of the Declaration of Helsinki.
Patients and groups
Patients complaining of continuous severe ocular pain or burning sensation, with a pain score of 7 or more on the Wong-Baker FACES® Pain Rating Scale and little or no corneal staining, were included. The diagnosis of DED was made based on the DEWS II criteria [2]. Patients who complained of ocular discomfort and had a tear film break-up time (TBUT) of less than 10 s were included. Patients with active inflammation of the ocular surface or eyelid, orbital diseases that could induce pain, glaucoma, or migraine were excluded. Patients with deficient tear secretion, defined as Schirmer test scores less than 5 mm/5 min without anesthesia, were also excluded.
Patients were divided into two subgroups according to their response to the proparacaine challenge test [6,10]. As part of the test, 10 μL of 0.5% proparacaine hydrochloride (Alcaine, Alcon, Fort Worth, TX, USA) was instilled in the inferior fornix of each eye. The Wong-Baker FACES® Pain Rating Scale scores (range, 0-10) were recorded before and 15 s after proparacaine administration. Patients with more than a 50% decrease in pain scores after 15 s were assigned to the peripheral-dominant sensitization group (peripheral group), and those with a decrease in pain scores equal to or less than 50% were assigned to the central-dominant sensitization group (central group). Information on demographics and a thorough history of systemic diseases and ocular surgery was collected for each patient.
Tear film, ocular surface, and Meibomian gland assessment
TBUT, Schirmer test score, and corneal staining score (CSS) were evaluated by three cornea specialists (K.C.Y, I.C.Y, and B.Y.K) at the first visit. TBUT was assessed three times after the instillation of fluorescein dye, and the mean TBUT recorded in seconds was used for analysis. CSS was evaluated subsequently by employing a white light and cobalt blue filter, using the area-density index: scoring the area (0-3) and density (0-3) of the superficial punctate corneal lesions and multiplying the area and density scores (0-9) [17]. The Schirmer test was performed using a calibrated sterile strip (Color Bar Schirmer Tear Test, Eagle Vision Inc., Memphis, TN, USA) under topical anesthesia (0.5% proparacaine hydrochloride). The sterile strips were placed in the lateral canthus, away from the cornea, for 5 min with the eyes closed. Schirmer test scores were recorded in millimeters of wetting after 5 min.
The Meibomian gland (MG) expressibility was assessed by applying digital pressure onto the lower central eyelid and counting the number of expressed gland orifices within the central eighth of the lower eyelid and was scored on a 0-3 grading scale [18]. MG secretion quality score was also assessed using a 0-3 grading scale [18]. The eye with worse pain was chosen for statistical analysis for each patient because the OPAS questions are conducted on the eye with more pain [16]. When both eyes had the same pain intensity, the values from the right eye were included in the analysis.
Ocular pain assessment
All patients completed the OPAS, which is a validated questionnaire for neuropathic pain that combines patient responses regarding ocular and non-ocular pain intensity, impact on QoL, aggravating factors, associated factors, and symptomatic relief [16]. The questions were divided into sections for analysis: questions 4-9 pertained to the intensity of ocular pain; questions 10-12, non-ocular pain; questions 13-19, the QoL; questions 20-21, aggravating factors; and questions 22-25, associated factors. After excluding the section on symptomatic relief, only questions 4-25 were analyzed in this study.
Statistical analysis
SPSS Statistics for Windows, Version 23.0 (IBM Corp., Armonk, NY, USA) was used for statistical analysis. All data are shown as the mean ± standard deviation. The Kolmogorov-Smirnov test was performed for continuous variables, and the normally distributed variables were age, OPAS ocular pain intensity score, impairment in walking score, pain-associated redness, and burning sensation score. The independent t-test was used to identify between-group differences in the mean values of these variables. For other variables that were not normally distributed, the Mann-Whitney U-test was performed. Pearson's correlation analysis between the Wong-Baker FACES® Pain Rating Scale and the ocular pain severity score of the OPAS was conducted. A p value < 0.05 was considered statistically significant. The sample size of 17 subjects in the central group and 16 subjects in the peripheral group provided approximately 88% power to show a significant difference with the independent t-test.
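For illustration, the between-group comparisons described here map onto standard SciPy calls as in the sketch below; the data frame layout and column names are hypothetical and not part of the study, and the original analyses were performed in SPSS.

```python
import pandas as pd
from scipy import stats

def compare_groups(df, variable, group_col="group", normal=True):
    """Independent t-test for normally distributed variables, Mann-Whitney U otherwise."""
    central = df.loc[df[group_col] == "central", variable].dropna()
    peripheral = df.loc[df[group_col] == "peripheral", variable].dropna()
    if normal:
        stat, p = stats.ttest_ind(central, peripheral)
    else:
        stat, p = stats.mannwhitneyu(central, peripheral, alternative="two-sided")
    return stat, p

# Pearson correlation between the Wong-Baker scale and the OPAS ocular pain severity score
# (hypothetical column names):
# r, p = stats.pearsonr(df["wong_baker"], df["opas_ocular_pain"])
```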
Results
A total of 33 patients were analyzed. On the basis of the proparacaine challenge test results, 17 patients were assigned to the central group and 16 patients to the peripheral group; their mean ages were 59.12 ± 11.58 and 58.13 ± 12.86 years, respectively. There were more women than men in both groups. Table 1 shows the demographic features and personal history of the patients. Two patients in the central group and one in the peripheral group had been diagnosed with chronic pain syndrome (CPS). In the central group, two patients had psychological disorders and two others had neurological disorders, whereas in the peripheral group, none had psychological or neurological disorders. Six patients in the central group and three in the peripheral group had previous cataract surgery. No significant differences were observed in the demographic and personal history data between the two groups (all p > 0.05).
The mean values for TBUT, Schirmer test score, CSS, and MG quality and expressibility in patients with ocular neuropathic pain were 4.67 ± 2.01 s, 7.22 ± 5.09 mm, 0.42 ± 0.75, 1.21 ± 0.82, and 0.61 ± 0.79, respectively. No significant differences were found in tear film, ocular surface, and MG parameters between the central and peripheral groups (all P > 0.05) ( Table 2). The duration of ocular pain was 38.35 ± 31.37 months in the central group and 36.00 ± 30.54 months in the peripheral group but the difference between groups was not significant (p = 0.69). Table 3 summarizes the OPAS scores in the participants. The ocular pain severity score was significantly higher in the central group (36.71 ± 11.48) than in the peripheral group (25.06 ± 11.21) (p < 0.01). The pain scale assessed by the Wong-Baker FACES® Pain Rating Scale was also significantly higher in the central group (7.06 ± 2.33 vs. 5.19 ± 2.56, p = 0.04) and the Pearson's correlation coefficient between the Wong-Baker FACES® Pain Rating Scale and the OPAS score was 0.78 (p < 0.001). Non-ocular pains, such as headache, backache, and arthralgia, scored higher in the central group (6.12 ± 3.12) than in the peripheral group (3.81 ± 2.90,
Discussion
In the present study, dry eye patients with ocular neuropathic pain features were divided into two groups on the basis of proparacaine challenge test results. Although clinical parameters associated with the tear film and ocular surface, such as TBUT, basal tear secretion, CSS, and MG parameters, were not significantly different between the two groups, the central group complained of more severe ocular pain and non-ocular pain than did the peripheral group. In addition, we noticed that patients in the central group complained of a burning sensation more commonly than did those in the peripheral group.
The underlying mechanisms of DED are ocular surface inflammation and damage induced by tear hyperosmolarity [1]. Tear hyperosmolarity can be a direct cause of ocular discomfort, and it can also lead to death of epithelial cells and loss of goblet cells and thereby induce ocular discomfort indirectly [1,19]. However, some patients may suffer from allodynia, hyperalgesia, hypesthesia, and hyperesthesia without any obvious abnormal findings and are thought to have ocular neuropathic pain features [20,21]. Ocular neuropathic pain is known to be associated with other systemic diseases such as depression, anxiety, fibromyalgia, and headache [6]. It is more frequently reported in females than in males [22].
It is well established that structural and functional changes occur in ocular surface sensory nerves in DED. Reduced tear secretion causes stress on the ocular mucosal epithelium, leading to local inflammation and peripheral nerve damage [23]. Long-term inflammation and nerve injury alter trigeminal ganglion and brainstem neurons, changing their excitability, connectivity, and impulse firing and causing dysesthesias and neuropathic pain referred to the ocular surface [23][24][25]. Subcategorizing DED patients based on peripheral and/or central dysfunction has important implications for the treatment of the disease [26]. Patients with peripheral abnormalities may benefit from treatments targeting ocular surface inflammation and hyperosmolarity, whereas patients with neuropathic pain features may need more centrally acting neuromodulators [27].
In vivo confocal microscopy (IVCM) visualizes microstructures, including corneal nerve plexus [28][29][30]. Some studies have demonstrated the usefulness of the IVCM in evaluating corneal neuropathies by visualizing the decrease in sub-basal corneal nerve density, increase in nerve tortuosity, activation of keratocytes and spindle in corneal stroma, and the presence of microneuromas in the stroma [11,29,30]. In many general clinical settings, the IVCM is not available for evaluation.
The 0.5% proparacaine challenge test is a useful method to assess the central sensitization of ocular pain in a general clinical setting that does not provide research equipment such as esthesiometry or confocal microscopy [10]. In previous studies, patients with complete relief of pain after proparacaine administration were classified into the peripheral sensitization group, those with consistent pain without relief were classified into the central sensitization group, and those with partial relief were classified into the mixed sensitization group [10]. Dieckmann et al. [6] reported that most patients showed partial improvement in pain, suggesting that central sensitization and peripheral sensitization were mixed, and that the relative contribution to pain depends on the etiology or duration of the disease. Similarly, in our study, 27 out of 33 (81.82%) patients showed mixed sensitization. Therefore, for the analysis, we divided the patients into two groups: a central-dominant sensitization group with less than 50% improvement in pain, and a peripheral-dominant sensitization group with 50% or more improvement in pain. No significant differences were observed in pain duration between the two groups. Comorbidities such as CPS, neurologic disorders, and psychological disorders were more common in the central group (70.6%) than in the peripheral group (37.5%).
Previous studies have attempted to evaluate patients with ocular pain by using dry eye-related questionnaires, such as the Ocular Surface Disease Index, Dry Eye Questionnaire, and Dry Eye-Related Quality-of-Life Score [13][14][15]. In many of these questionnaires, ocular pain and dry eye symptoms are queried simultaneously. The Neurobehavioral Rating Scale, which is used as a primary outcome measure for chronic pain, and the Neuropathic Pain Symptom Inventory (NPSI), which is used to evaluate neuropathic pain, have also been used for the evaluation of ocular pain [31,32]. Recently, the NPSI was appropriately adapted for evaluating ocular pain, and a modified NPSI-Eye was later developed during our study period [33]. Among the available questionnaires, we used the OPAS to evaluate patients with ocular neuropathic pain features, because it provides multidimensional information not only on the severity of ocular pain but also on associated and aggravating factors and the QoL; moreover, it is a validated questionnaire for ocular pain [16]. The Pearson's correlation analysis between the Wong-Baker FACES® Pain Rating Scale and the OPAS score showed good correlation between two measures.
Kalangara et al. [5] used the term "burning eye syndrome" for a subset of dry eye representing a neuropathic pain of the eye. Burning sensation is often diagnosed as neuropathic pain when primary painful conditions are excluded. Burning mouth syndrome (BMS) shares many features with ocular neuropathic pain [34][35][36]. BMS is characterized by abnormal burning sensation in the oral cavity, but in the absence of clinical lesions. Patients diagnosed with BMS have been reported to have psychiatric disorders, such as depression and anxiety, or are mentally vulnerable to stress. In addition, loss of small-diameter nerve fibers in the oral mucosa and decreased brain activation to heat stimuli on functional magnetic resonance imaging have been demonstrated in these patients [36]. Our finding that patients with central-dominant sensitization complained more commonly of a burning sensation may support the association between BMS and ocular neuropathic pain. Further investigations focusing on both the oral mucosal and corneal nervous systems could help identify the underlying mechanism of ocular neuropathic pain.
Our study has several limitations. First, the sample size may not be large enough to establish the clinical significance of the results. Further studies with larger sample sizes that provide greater power could advance the results of this study. Second, because this study is a multicenter study, subtle bias may be present in the conduct of clinical examinations. Third, corneal esthesiometry and IVCM imaging were not obtained in this study. Fourth, we used an arbitrary 50% cut-off value in the proparacaine challenge test for the analysis, and it might not be a validated value for determining the types of sensitization. Further studies with more precise diagnostic equipment are warranted in the future.
Conclusion
In conclusion, dry eye patients with central-dominant sensitization may experience more intense ocular and non-ocular pain than the others. Burning sensation may be a key symptom in ocular neuropathic pain. The OPAS questionnaire can be a good option for evaluating whether patients have ocular neuropathic features. Further investigations on corneal nervous system and peripheral and central sensitization associated with dry eye may give clue to the management of patients with ocular neuropathic pain features. | 2020-07-30T02:02:34.978Z | 2020-07-29T00:00:00.000 | {
"year": 2020,
"sha1": "051049bf0e6ad9a969bd7cd9cce1bbdc0066f5c9",
"oa_license": "CCBY",
"oa_url": "https://bmcophthalmol.biomedcentral.com/track/pdf/10.1186/s12886-020-01733-1",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "24169f1b3b2cc715a7ebb2316d697f4252870923",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
211048319 | pes2o/s2orc | v3-fos-license | Comprehensive survey among statistical members of medical ethics committees in Germany on their personal impression of completeness and correctness of biostatistical aspects of submitted study protocols
Objectives To assess biostatistical quality of study protocols submitted to German medical ethics committees according to personal appraisal of their statistical members. Design We conducted a web-based survey among biostatisticians who have been active as members in German medical ethics committees during the past 3 years. Setting The study population was identified by a comprehensive web search on websites of German medical ethics committees. Participants The final list comprised 86 eligible persons. In total, 57 (66%) completed the survey. Questionnaire The first item checked whether the inclusion criterion was met. The last item assessed satisfaction with the survey. Four items aimed to characterise the medical ethics committee in terms of type and location, one item asked for the urgency of biostatistical training addressed to the medical investigators. The main 2×12 items reported an individual assessment of the quality of biostatistical aspects in the submitted study protocols, while distinguishing studies according to the German Medicines Act (AMG)/German Act on Medical Devices (MPG) and studies non-regulated by these laws. Primary and secondary outcome measures The individual assessment of the quality of biostatistical aspects corresponds to the primary objective. Thus, participants were asked to complete the sentence ‘In x% of the submitted study protocols, the following problem occurs’, where 12 different statistical problems were formulated. All other items assess secondary endpoints. Results For all biostatistical aspects, 45 of 49 (91.8%) participants judged the quality of AMG/MPG study protocols much better than that of ‘non-regulated’ studies. The latter are in median affected 20%–60% more often by statistical problems. The highest need for training was reported for sample size calculation, missing values and multiple comparison procedures. Conclusions Biostatisticians being active in German medical ethics committees classify the biostatistical quality of study protocols as low for ‘non-regulated’ studies, whereas quality is much better for AMG/MPG studies.
Strengths and limitations of this study
► This is the first survey among biostatisticians active in German medical ethics committees to assess their individual impression of the quality of, and the completeness of information on, biostatistical aspects in the submitted study protocols.
► Although much effort was put into searching for all biostatisticians active in German medical ethics committees, the target population was not completely identified.
► Confidentiality issues did not allow a direct and objective assessment of the content of individual study protocols.
► This survey classified study protocols as regulated by the German Medicines Act/German Act on Medical Devices or as non-regulated by these laws, where the latter covers a very heterogeneous group of studies for which the statistical requirements are not all the same.
► This survey was conducted too early to study the impact of recent revisions concerning the statistical concept of estimands.
Introduction
Problem formulation
Medical ethics committees (institutional review boards) aim to judge the quality and validity of medical studies in order to ensure an ethically justifiable, positive benefit-risk profile. [1][2][3] Their members not only assess the submitted material but also act as consultants. Besides the medical content, the board verifies legal and scientific validity, including biostatistical aspects of the study design and analysis strategy. [4][5][6] Although general guidance for good biostatistical practice in medical research projects exists, there is no consensus and only limited guidance on the extent to which medical ethics committees should assess these statistical aspects. [5][6][7][8] According to the last revision of the German Drug Regulation Law in 2016 (Bundesgesetzblatt, § 41a), a biostatistician is a mandatory member of a medical ethics committee, next to medical as well as legal experts, and lay persons. 9 However, not all medical ethics committees appraise legally regulated studies, in which case a biostatistician is not mandatory. [10][11][12][13] Moreover, medical research is faced with new challenges related to the digitalisation of the health system and the focus on personalised medicine, which also bring along new tasks and perspectives for medical ethics committees. 14
Purpose or research question
To increase the biostatistical quality of study protocols, standards for biostatistical reporting and for biostatistical reviewer comments have to be implemented in Germany that account for the fact that the organisation and composition of German medical ethics committees are quite heterogeneous. In the long run, international standards will, of course, have to be agreed on. To achieve this global aim, the first step is to assess the current level of statistical quality of submitted study proposals, so that gaps and challenges can be identified. For this purpose, we conducted a comprehensive survey among biostatisticians who were active in German medical ethics committees between 2016 and 2018. The aim was to evaluate and quantify the participants' personal assessment of the quality and completeness of statistical aspects in clinical study protocols submitted to German medical ethics committees.
A direct judgement of the statistical quality of study protocols would have required the assessment of relevant protocol extracts or even entire study protocols by the experts. This was, however, not possible due to enforced data protection mechanisms. Many medical ethics committees argued that original protocols (even partly and anonymised) could not be made available for the planned assessment without impairing the trust which is an essential part of the medical ethics committee's standing.
To overcome this problem, we decided to ask biostatisticians in medical ethics committees to give a global, personal assessment of specific issues of the statistical quality and completeness of study protocols. On the one hand, the individual impression does not objectively reflect the 'true' quality. On the other hand, objective quality criteria are very hard to define and would certainly provoke a controversial discussion. Therefore, the individual global quality assessment of biostatisticians in medical ethics committees provides an informative marker to at least roughly assess current standards and problems. Biostatisticians in medical ethics committees review many study protocols and can well reflect the statistical problems currently encountered. From these findings, we can identify statistical topics which need increased focus, for example, within the framework of Good Clinical Practice courses, specific training addressed to the statistical reviewers to improve the clarity of statistical reviewer comments, and other training addressed to the medical investigators to improve their statistical knowledge. [15][16][17]
Methods
Qualitative approach and research paradigm
This study is a comprehensive systematic survey among biostatisticians who were members of German medical ethics committees between 2016 and 2018.
Researcher characteristics and reflexivity
The questionnaire was developed by a senior biostatistician, reviewed and extended by five independent biostatisticians including two professors of biostatistics, two senior biostatisticians and one bachelor student with a limited background in biostatistics. The latter person was consulted in particular to assess the comprehensibility of the wording.
Context
All authors, who developed the survey, analysed the data and wrote this article, are members of the joint project group 'Biometry in ethics committees' of the German Association for Medical Informatics, Biometry and Epidemiology (GMDS) e. V. and the German Region of the International Biometric Society (https:// gmds. de/ aktivitaeten/ medizinische-biometrie/ arbeitsgruppenseiten/ projektgruppen/ biometrie-in-der-ethikkommission/, accessed September 2019). This group was founded in 2017 and aims to strengthen the work of biostatisticians in medical ethics committees by offering specific training (in methods as well as communication of statistical issues to non-statisticians), establishing a communication network for mutual support and developing specific guidelines, which allow a standardised, high-quality statistical review of study protocols.
Data collection instruments and technologies
The questionnaire was implemented as an online survey ( www. umfrageonline. com, accessed September 2019). The survey consisted of 31 items, which were grouped in 11 steps/pages in the online survey. Questions were formulated in German. The original survey can be found here (https://www. umfrageonline. com/ s/ 6b2e8f4& preview= 1& DO-NOT-SEND-THIS-LINK-ITS-ONLY-PREVIEW, accessed September 2019). English translations are provided in online supplementary appendix 1.
The first item checked the key inclusion criterion if the respondent served as statistical expert in a medical ethics committee within the last 3 years. Only persons who answered this question positively were included in the final analysis. The last item evaluated if the respondent enjoyed the survey. Of the remaining 29 main items, 4 items characterised specific features of the medical ethics committee and the review process within the medical ethics committee. We asked (1) for the type of the medical ethics committee (ethics committee of a medical faculty, of a State Chamber of Physicians (Landesärztekammer) or other), (2) for the federal state in Germany, where the medical ethics committee is located, (3) how many studies the respondent reviews on average per year (in steps of 50) and (4) if the respondent is exclusively responsible for study proposals according to the German Medicines Act (AMG)/German Act on Medical Devices (MPG) or also for studies that are non-regulated by these laws which will be referred to in the following briefly as 'non-regulated studies'. In case the respondent's medical ethics committee is responsible for regulated as well as for non-regulated studies, another (conditional) item asked whether the statistical quality of study protocols is better in the regulated compared with the non-regulated setting.
Additionally, 2×12 items asked for an assessment of the completeness and correctness of different biostatistical aspects (12 for the regulated setting and 12 for the non-regulated setting, conditional on the responsibilities of the specific medical ethics committee as marked in the previous item). Participants were asked to complete the sentence 'In x% of the submitted study protocols, the following problem occurs', where 12 statistical problems were addressed (eg, 'specification of the significance level is missing'). Participants could provide percentages in steps of 10% (0%-100%), with a higher percentage indicating a worse result. In principle, items formulated in the way 'In x% of the submitted study protocols, the following problem occurs' could also have been assessed on an interval scale by allowing for continuous specifications of percentages. This would, however, have suggested a quantitative and objective assessment, whereas the subjective impression is surely neither quantitative nor completely objective.
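For illustration only, the response format just described maps naturally onto an ordered categorical variable. The short R sketch below shows one way such an item could be encoded and tabulated; the variable names and example answers are hypothetical and do not come from the survey data.

```r
# Minimal sketch (not the survey's processing script): encode one 'In x% ...' item
# as an ordered factor with the 11 possible response levels; NA marks abstentions.
levels_pct <- paste0(seq(0, 100, by = 10), "%")          # "0%", "10%", ..., "100%"

raw_answers <- c("20%", "60%", NA, "80%", "40%", "20%")  # hypothetical answers

item_sig_level <- factor(raw_answers, levels = levels_pct, ordered = TRUE)

table(item_sig_level, useNA = "ifany")   # absolute frequencies per category
prop.table(table(item_sig_level))        # relative frequencies among valid answers
```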
An additional item asked for the need to refresh the statistical knowledge of the medical or epidemiological investigator on a certain topic, to be selected out of a list of 9 statistical topics, with the option to add additional ones. In addition, this need had to be assessed as low, medium or high.
Data processing
The online survey system saved the answers of the participants in a central database from which the data can be downloaded in various formats. An extended group of experts including the authors validated the online survey by testing and commenting on it. Final corrections were integrated after the validation phase.
Units of study
The study population is defined as all biostatisticians being members of a German medical ethics committee between 2016 and 2018.
Sampling strategy
To identify the study population, a web search on the homepages of all German medical ethics committees was performed which resulted in a preliminary email list. All these email addresses were freely available on the web. Moreover, several persons known to be active in ethics committees were asked to complete this list in agreement with the specific biostatistician. The final list of eligible candidates consisted of 86 biostatisticians.
Data collection methods
The call to participate in the survey was sent out by email on 28 November 2018 with a reminder on 13 December 2018. The survey was opened until 15 December 2018. Some participants actively asked for the possibility to slightly extend the deadline on which we agreed. The original survey was, however, open only between 1 December 2018 and 31 January 2019, so that after that time period data collection was completed.
Techniques to enhance trustworthiness
The webpage for the survey was not publicly published to avoid participation of persons not belonging to the study population. Still, in principle, anyone who was aware of the link could have participated in the survey. There is no way to check for fulfilment of the inclusion criterion or correctness of the provided answers. However, as the link was not easily available and the study population was likely to be highly compliant, this risk seems to be minor. The survey could be completed only once from a single IP address. In principle, participants could have completed the survey several times using different IP addresses, which seems very unlikely. As the survey was anonymous due to data protection reasons, it is impossible to verify such fraud. However, as there was no benefit in completing the survey more than once, this risk seems to be minor. We did not advertise the survey by means of global mailing lists within biostatistical societies, although this approach was discussed. A limited and focused mailing list is preferable as otherwise no responders' proportion could have been evaluated, which is a crucial quality indicator of a survey. Moreover, a global mailing list would have included a large proportion of recipients who did not fulfil the eligibility criteria.
ethical issues pertaining to human subjects The online survey was anonymised and most items included an option for providing no answer. The question asking for the medical ethics committee's federal state allowed for a potential reidentification of the respondent in case of a small federal state like Bremen. Respondents were therefore free to leave this box blank. No personal data were collected from the participants. Participation in the survey was voluntary and not rewarded. To enable reproduction of the results presented in this paper, the final dataset is freely available from the Dryad repository (https:// datadryad. org/ stash/ share/ Vjof DJko UtqI jjaJ Q84O 7FB1 W5cJ u78g 04cN ey044no), without any information on the medical ethics committee's federal state to avoid any risk of potential reidentification. No ethical approval was necessary for this voluntary survey in healthy participants without any risk of putting harm to the respondents and without any direct medical research focus.
Data analysis
This is an exploratory study, which is analysed using descriptive statistical methods. Items 1-6 as well as items 30 and 31 are simple categorical items assessed on a multiple-choice basis. The items were evaluated by means of absolute and relative frequencies. The 2×12 items asking for the assessment of the completeness and correctness of different biostatistical aspects are Likertscaled ordinal variables with 11 possible outcomes (0%, 10%, …, 100%). For these items, we reported absolute and relative frequencies and graphically displayed them as stacked bar charts. Moreover, we provided medians, quartiles and grouped boxplots, where two groups of studies are considered (regulated vs non-regulated studies). All analyses were performed using the statistical software R, V.3.5.1. The original dataset (excluding information on the specific German federal state) is freely available from the Dryad repository (https:// datadryad. org/ stash/ share/ Vjof DJko UtqI jjaJ Q84O 7FB1 W5cJ u78g 04cN ey044no) to allow for reproducibility.
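As a rough, hedged sketch of this descriptive workflow (the paper names R V.3.5.1 but does not publish its analysis script), the medians, quartiles, grouped boxplots and stacked bar charts could be produced along the following lines. The data frame d and its column names are assumptions for illustration, not the published dataset structure.

```r
# Illustrative sketch only. 'd' is assumed to be a long-format data frame with one
# row per participant x item x setting:
#   d$setting : "regulated (AMG/MPG)" vs "non-regulated"
#   d$item    : one of the 12 statistical problems
#   d$pct     : reported percentage (0, 10, ..., 100), NA if "not assessable"

# Medians and quartiles per item and setting
stats_by_group <- aggregate(pct ~ item + setting, data = d,
                            FUN = quantile, probs = c(0.25, 0.5, 0.75))

# Grouped boxplots (one pair of boxes per item), in the spirit of Figure 1
boxplot(pct ~ setting + item, data = d, las = 2,
        col = c("grey80", "grey40"),
        ylab = "Protocols affected (%)")

# Stacked bar chart of response categories per setting, in the spirit of Figure 2
one_item <- subset(d, item == "statistical methods not sufficiently specified")
tab <- prop.table(table(one_item$pct, one_item$setting), margin = 2)
barplot(tab, legend.text = TRUE, ylab = "Proportion of responses")
```

Keeping the responses in long format (one row per participant, item and setting) makes both the per-item frequency tables and the grouped plots straightforward to produce.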
Patient and public involvement
This survey does not include patients or the general public. The design and the development of the survey was intensively discussed by the members of the joint project group 'Biometry in ethics committees' of the GMDS e. V. and the German Region of the International Biometric Society.
Results
Table 1 shows the characteristics of the medical ethics committees to which the 57 participants of the survey are appointed. Note that the number of answers differs per item, as the online survey offered the option to abstain from answering a specific question. A majority of 46 participants (80.7%) were members of an ethics committee of a medical faculty of a university, whereas 15 (26.3%) were members of an ethics committee of a State Chamber of Physicians (Landesärztekammer). Some participants were members of more than one medical ethics committee at the same time.
A total of 47 participants answered the question on the location of the medical ethics committee to which they were appointed as member. The medical ethics committees are located in 12 out of 16 German federal states, where 14 (28.6%) participants were members of medical ethics committees in Northrhine-Westphalia and 7 (14.3%) in Baden-Württemberg.
A total of 18 (32.1%) participants reviewed up to 50 study proposals per year, 14 participants (25.0%) reviewed between 51 and 100 study proposals per year and 24 participants (42.0%) reviewed more than 100 study proposals on average per year.
The vast majority of 50 participants (89%) reviewed both study proposals according to AMG/MPG and study proposals in a non-regulated setting.
With respect to the general biostatistical quality of study protocols, table 2 displays the results of items 6 and 11. Only 47 participants answered Item 6, assessing if the statistical quality of ethical proposals generally differs between regulated and the non-regulated studies. As this item was placed in the survey before the specific biostatistical aspects were named, this general formulation seemed to be difficult to understand for a large part of the participants. Out of 47 participants in total who responded to item 6, a majority of 45 (95.7%) stated that study protocols under regulatory requirements (AMG/MPG) are on average of higher statistical quality compared with studies without such requirements. The remaining 2 (4.3%) participants stated that there is no difference on average. Item 11 asked how the participants considered the need for additional training in different statistical areas addressed to the investigators submitting protocols (see table 2). A high need for a training was especially identified for 'handling of missing values' as indicated by 34 participants (75.6%), for 'sample size calculation' as indicated by 27 participants (60.0%), for 'multiple comparison procedures' 26 (57.8%) and for 'adjustment for covariables' 26 (57.8%). For all topics, at least 70% of participants considered the need for a refreshment of statistical knowledge as middle or high.
Items 7-10 asked to assess completeness and correctness of biostatistical aspects while distinguishing studies according to the regulatory setting (AMG, MPG) (items 7 and 8) and studies without regulatory requirements (items 9 and 10). Participants were only able to judge those study types (regulated and/or non-regulated) that were specified in item 5. Participants were asked to complete the sentence 'In x% of the submitted study protocols, the following problem occurs', where 12 statistical problems were formulated (eg, 'specification of the significance level is missing'). Participants could give the percentage in steps of 10% (0%-100%) with a higher percentage indicating a worse result. Note that, study protocols submitted to German ethics committees can cover various types of research including observational studies, retrospective analyses and surveys. Not all of the 12 aspects formulated below are applicable to all types of studies. The requested percentages refer to the average of all studies, which have been reviewed by the participant. However, there also was the option to classify a specific aspect as 'not assessable'. As a consequence, the number of valid responses varies per item, where lower values might indicate items that are more difficult to judge. Moreover, some participants interrupted the survey after the assessment of some items, probably because the statistical problems are repeated for regulated and non-regulated studies, which might have decreased the motivation. The results referring to the valid responses are presented in table 3 and displayed in figure 1 as grouped boxplots. Additionally, figure 2 displays the percentages of the item categories for both study types as stacked bar plots.
It turns out that protocols of non-regulated studies tend to be of much lower statistical quality and show a lower level of completeness, regardless of the specific topic. Differences in medians of the ratings between regulated and non-regulated studies range between 20% and 60%. The statistical aspects 'missing values', 'multiple comparison problems' as well as 'adjustment for covariables' show the highest discrepancies between both study types. For instance, the statistical methods are not sufficiently specified in 80% on average (median) for non-regulated study proposals whereas this is the case in only 20% on average (median) for studies with regulatory requirements. Similarly, for non-regulated studies only general statements on statistical analysis methods are provided not fitting and addressing the specific study aim in 70% on average (median), whereas this is only stated in 10% on average (median) for regulated studies. For nonregulated studies, all 12 statistical aspects show high deficiencies, while only in 10% on average (median) of all proposals of regulated studies aspects are mentioned which have not been completely or correctly addressed in the proposals.
Discussion
This systematic survey among biostatisticians serving in German medical ethics committees aimed to assess the individual impression of completeness and correctness of biostatistical aspects of submitted study protocols. As an overall result, the completeness and correctness of handling statistical issues in the submitted study protocols is heterogeneous. There is a notably difference in quality between study protocols with and without regulatory requirements, where the latter show major deficits. A specifically high need for refreshment was identified Open access for 'handling of missing values', 'sample size calculation', 'multiple comparison procedures' and 'adjustment for covariables'. However, there also exist quite general deficiencies for non-regulated studies, as the description of the statistical methods is not sufficiently specified. It should be mentioned that for regulatory studies the International Conference on Harmonization (ICH) E9 guideline offers guidance on how to analyse a clinical study. 6 This guideline is also helpful for the nonregulated settings but may be unknown to members of this community. 13 To the best of our knowledge, this is the first survey among biostatisticians who were members of German medical ethics committees and the first attempt to assess the quality and completeness of biostatistical issues in medical study protocols. Wang et al conducted a survey among biostatistical consultants with respect to the quality of reporting the statistical analysis strategy after data was already analysed. 18 The four most frequently reported statistical problems were 'removing or altering some data records to better support the research hypothesis', 'interpreting the statistical findings on the basis of expectation, not actual results', 'not reporting the presence of key missing data that might bias the results' and 'ignoring violations of assumptions that would change results from positive to negative'. Clark et al screened original study protocols submitted to UK ethics committees for the completeness and correctness of sample size derivation. 19 They found that only 42% of the study protocols reported all information, which is required to accurately reproduce the sample size. Kilkenny et al conducted a survey of the quality of experimental design, statistical analysis and reporting of research in animal studies. 20 They found that only 59% of the studies stated the hypothesis or objective of the study and the number and characteristics of the animals used und only 70% of the publications described their methods and presented the results with a measure of error or variability. 20 These findings are in line with the results of our survey. In addition, Hall et al looked at the methodological quality of surgical clinical trials. 21 They reported that less than 50% of the studies commented on potential bias in the assessment of the outcome, adequately described the randomisation technique, or commented on sample size calculation. 21 Peng et al conducted a review on published epidemiological papers to assess the reproducibility of epidemiological research. 22 They found that 30% of the publications did not report the implementation of the statistical analysis. Begley et al commented that there is a general problem of reproducibility of study results, in particular in preclinical studies. 23 This goes in line with the problems of study protocols and designs reported by Ioannidis et al. 
17 A total of 57 (66%) of the contacted 86 persons participated and fulfilled the inclusion criteria. This corresponds to a high participation proportion. However, it remains unknown whether all potential participants were truly identified and contacted.
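As a purely illustrative aside (no interval estimate is reported in the survey itself), the participation proportion quoted above can be reproduced and accompanied by an exact confidence interval in a single call:

```r
# 57 of 86 contacted biostatisticians completed the survey (about 66%).
binom.test(x = 57, n = 86)   # exact (Clopper-Pearson) 95% CI, roughly 0.55 to 0.76
```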
A further limitation of our survey is that it does not provide an overview of the objectively measured quality and completeness of biostatistical aspects of study protocols; it refers only to the subjective, individual impression of the statistical members of the ethics committees. An objective rating of the study protocols, however, was not possible due to data and privacy protection issues, as this would have required screening of the submitted study documents. Moreover, objective quality measures are difficult to define, because the meaning of 'adequate' quality might differ considerably. The subjective ratings of completeness and correctness are subject to interrater and intrarater variability. Therefore, the results should rather be interpreted as a rough indication and not as definite numbers.
As a third limitation, we consider the fact that the survey could not assess recent issues added in an addendum to the ICH E9 guideline. 24 It presents a structured framework to link trial objectives to a suitable trial design and tools for estimation and hypothesis testing. This framework introduces the concept of an estimand, translating the trial objective into a precise definition of the treatment effect that is to be estimated. It also aims to facilitate the dialogue between disciplines involved in a clinical trial.
Even in view of these limitations, the survey clearly indicates the need for basic and advanced statistical training and guidance for medical researchers. All medical faculties in Germany have established biostatistical units providing consulting services. However, this does not seem to be sufficient to enable medical researchers to develop protocols which cover statistical issues adequately. Reasons could be that medical researchers undervalue the impact of appropriate biometrical planning on the quality and validity of medical studies. In personal discussions with participants of this survey, several of them reported frequent examples where the statistical analysis strategy in study protocols is addressed with a single general sentence like 'The data are analysed with valid statistical methods.' This indicates not only a lack of statistical knowledge but also a lack of awareness that statistical methods have an important impact on the validity of medical research.
The survey also gauges the range of methodological challenges encountered by biostatisticians being member of a medical ethics committee. Often a biostatistician is the last methodological sentinel before a study is implemented in a clinical setting. In order to involve more biostatisticians in medical ethics committees, there is a need to provide support to enable them to adequately discharge their responsibilities. Unfortunately, the survey did not check how the remaining members react on revision requests by biostatisticians and if these requests are adequately addressed before the final vote of the medical ethics committee on the criticised study protocol. Moreover, we did not formally assess if biostatisticians, who are members of medical ethics committees, formulate their requests in comparable detail and persistence. Due to own experiences and based on narrative reports, we suspect that biostatistical concerns cannot be easily communicated to and understood by the non-statistical members of the medical ethics committee. Thus, there is a need to establish a better communication, which allows expressing biostatistical concerns in a convincing easily understandable language.
It is, therefore, time to communicate the general importance of statistics for medical research. This includes the establishment of guidelines for protocol writing and templates like the SPIRIT Statement, which also handles statistical input to a protocol. 25 26 Furthermore, the implementation of reporting guidelines like STROBE should be made more popular. 27 Moreover, the development of specific trainings and guidance on how to address specific statistical challenges is required. Finally, national standards for the tasks of a biostatistician as a member of a medical ethics committee must be formulated. 28 29
Contributors
GR developed the research idea, designed the survey and wrote the manuscript. LH implemented the online survey, performed the analyses and reviewed the manuscript. UM helped to develop the research idea, reviewed the survey, reviewed the manuscript. IP helped to develop the research idea, reviewed the survey, reviewed the manuscript.
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Patient consent for publication Not required.
Provenance and peer review Not commissioned; externally peer reviewed.
Data availability statement Data are available in a public, open access repository. The original dataset of the survey is available from the Dryad repository, https:// datadryad. org/ stash/ share/ XbZQ 8pXE bP8P XuuA gSPm 4sc2jt_ HGWRSQRV-PoGcNqs.
Open access This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http:// creativecommons. org/ licenses/ by-nc/ 4. 0/. | 2020-02-06T09:08:50.766Z | 2020-02-01T00:00:00.000 | {
"year": 2020,
"sha1": "6ed6d4f04a9d6a91e668ae9bcebec1f75107de8b",
"oa_license": "CCBYNC",
"oa_url": "https://bmjopen.bmj.com/content/bmjopen/10/2/e032864.full.pdf",
"oa_status": "GOLD",
"pdf_src": "Highwire",
"pdf_hash": "7eb180edd73472aff3d67842c1bcdd52f6371e23",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
257242166 | pes2o/s2orc | v3-fos-license | Personality and Nomophobia: The Role of Dysfunctional Obsessive Beliefs
Background: The development of new technologies (ICTs), and specifically the invention of smartphones, has offered users enormous benefits. However, the use of this technology is sometimes problematic and can negatively affect people’s lives. Nomophobia has been defined as the fear of being unreachable by means of a smartphone and is considered a disorder of the modern world. The present study aims to provide additional evidence of the relationship between personality traits and nomophobia. Moreover, this research explores dysfunctional obsessive beliefs as another possible antecedent. Finally, this study also examines the effect of the combination of these antecedents on nomophobia. Method: The study sample was comprised of Spanish workers (males: 44.54%; females: 55.46%) in the city of Tarragona and its surroundings. Results: Our results showed that nomophobia is directly related to personality traits such as extraversion, and that dysfunctional obsessive beliefs play a role in the development of nomophobia. Moreover, our study confirms that the combination of personality traits and dysfunctional obsessive beliefs can affect the degree of nomophobia experienced. Discussion and Conclusions: Our study contributes to the body of literature that examines how psychological variables of personality can be predictors of nomophobia. Additional research is needed to better understand the determinants of nomophobia.
Personality and Nomophobia: The Role of Dysfunctional Obsessive Beliefs
Lifestyles have changed drastically in recent years, among other reasons, because of the rapid development of information and communication technologies (ICTs). ICTs, such as smartphones, have become an indispensable part of our lives [1,2] because they involve numerous benefits for users; for example, to perform a variety of daily tasks with a single device, [3], and meet some of their needs. In contrast, smartphones also present some risks for health. The indiscriminate use of smartphones can provoke psychological disorders [4]. In fact, a specific phobia related to smartphones has been included within the phobic group of anxiety disorders: nomophobia. The term is a portmanteau of 'no mobile phone phobia' [5,6]; it is defined as the 'irrational' fear of not being able to use one's smartphone, and it is characterized by four dimensions [3]. (1) Fear of not being able to communicate refers to feelings of losing instant communication with people (2) Losing connectedness is related to feelings of losing the ubiquitous connectivity that smartphones provide and being disconnected from one's online identity (3) Not being able to access information reflects the discomfort of losing pervasive access to information and to search for things on smartphones. Lastly, (4) giving up convenience reflects the desire to take advantage of the convenience of having a smartphone.
Given that nomophobia is a relatively new phenomenon, much remains to be known. Yildirim and Correia (2015) [3] called for further research to clarify the predicting factors of nomophobia.
Personality: The Big Five Model
Personality reflects the ways of thinking, feeling and behaving that define an individual [19,20]. One of the main theoretical models used to represent personality is the Big Five model [21][22][23], which describes personality as a hierarchical model with five general traits [24,25]. (1) Extraversion (vs. introversion) refers to the traits of being sociable, gregarious, assertive, talkative, and active. (2) Neuroticism (vs. emotional stability) is characterized by being anxious, depressed, emotional, worried, insecure, frustrated, irritable and by having difficulty controlling impulses and desires. (3) Agreeableness (vs. hostile non-compliance) refers to being courteous, flexible, trusting, cooperative, relaxed, having emotional stability, and being open to criticism. (4) Conscientiousness (vs. lack of direction) is defined as "a spectrum of constructs that describe individual differences in the propensity to be self-controlled, responsible to others, hardworking, orderly, and rule abiding" [26] (p. 1). Finally, (5) openness to experience (vs. closedness) includes the traits of being broad-minded, intelligent, and artistically sensitive [27].
The link between personality and psychological disorders has been extensively studied and demonstrated over the years (see, for example, the meta-analysis by Kotov et al., 2010) [28]. Regarding nomophobia, this connection is not so clear. Some studies have found positive and significant relationships between extraversion [7,13] and/or neuroticism [7,13,29] and nomophobia. These studies suggest that technologies may affect people's behaviors, mood and emotions. For example, smartphones promote enjoyable feelings [30]. Hence, extraversion, as a trait related to excitement, stimulation, action and thrills, was positively associated with nomophobia. For extrovert people, anything that alters this state of enjoyment could cause anxiety. Similarly, neuroticism pertains to fluctuations in emotions and is associated with dispositions as anxiety, impulsiveness and self-consciousness [31]. Neurotic individuals tend to demonstrate sensitivity and vulnerability to their social environment [20]. Hence, a lack of sources of enjoyment, such as a smartphone, may affect them more than other people with higher emotional stability.
Other studies have also found a significant but negative relationship between conscientiousness [7], agreeableness [15] and/or openness to experience and nomophobia [13,15]. These studies suggest that conscientious people may develop and employ strong self-control mechanisms that determine their smartphone use, whereas unconscientious individuals tend to be disorganized people who act impulsively. Hence, conscientiousness was determined to have a negative relationship with nomophobia. In addition, disagreeable individuals are more likely to develop personality disorders, such as antisocial disorders [32]; therefore, agreeableness also presented a negative association with nomophobia. Finally, smartphones allow people to easily convey their feelings, ideas and thoughts. This is especially useful for individuals with low openness to experience, who have greater difficulty expressing themselves. The relationship between openness to experience and nomophobia was also found to be negative.
Given these incongruent results, this research aims to provide additional empirical evidence to clarify the relationship between personality traits and nomophobia. We therefore posed the following hypotheses:
Hypothesis 1. Extraversion is positively related to nomophobia.
Hypothesis 2. Emotional stability is negatively related to nomophobia.
Hypothesis 3. Conscientiousness is negatively related to nomophobia.
Hypothesis 4. Agreeableness is negatively related to nomophobia.
Hypothesis 5. Openness to experience is negatively related to nomophobia.
Obsessive Beliefs
Dysfunctional beliefs are based on the cognitive theory [33], which suggests that people suffer due to their interpretation of events more than the events themselves. So, dysfunctional beliefs reflect an association between specific cues and catastrophic consequences or states [34]. In fact, psychopathology may underpin certain cognitive variables, such as the type of beliefs and/or specific interpretations that each person holds about their intrusive thoughts [35].
A relevant amount of literature has supported this assumption, especially for phobias. For example, Gellatly (2016) [36] showed how catastrophic thinking may play a critical role in a variety of disorders by being a predictor of psychopathological disorders, including phobias. Thorpe et al. (1995) [37] suggested that idiosyncratic cognitions may be primary to the experience of phobic anxiety. Harm cognitions were strongly related to some phobias. Ollendick et al. (2017) [38] stated that catastrophic beliefs and low coping expectancies are often present in people with specific phobias. Stopa and Clark (1993) [39] concluded that socially phobic individuals are characterized by specific dysfunctional beliefs (negative self-evaluative thoughts).
Another possible dysfunctional belief associated with phobias may be obsessive beliefs. Several authors (e.g., [40,41]) have proposed that dysfunctional obsessive beliefs are not exclusive to obsessive-compulsive disorder but may also be determining factors in other psychological disorders. Dysfunctional obsessive beliefs reflect intrusive, recurrent thoughts that come spontaneously to the mind and are unpleasant (scary, distressing or disturbing). Based on the study by Belloch et al. (2010) [40], we focused on these two dysfunctional obsessive beliefs: perfectionism, that refers to the belief that there is a perfect solution for every problem, and that making it perfect is possible and necessary, so that any failure will have serious consequences; and excessive responsibility, that reflects the belief that one can cause, and therefore should prevent, major negative events, which leads the person to feel very overly responsible for everything that happens.
Despite the growing research on dysfunctional obsessive beliefs and phobias, we are not aware of any study that has specifically examined the relationship between dysfunctional obsessive beliefs and nomophobia. To our knowledge, only two studies have linked obsessiveness to nomophobia. Lee et al. (2018) [42] found that high levels of obsessiveness were associated with high levels of nomophobia. Adawi et al. (2019) [43] reported that obsession-compulsion was positively related to nomophobia, understood in terms of not being able to access information, giving up convenience/losing connectedness, and not being able to communicate. According to the research on dysfunctional beliefs and phobias, it seems plausible to suggest that dysfunctional obsessive beliefs may play a role in degrees of nomophobia. We therefore hypothesized the following.
Hypothesis 6. Dysfunctional obsessive beliefs are positively related to nomophobia.
McDermut et al. (2019) [17] found that dysfunctional obsessive beliefs play a critical role in emotional distress in individuals with personality dysfunction. Dysfunctional obsessive beliefs mediated the relationship between the Big Five personality dimensions and the psychopathology dimensions (positive emotionality, psychoticism, aggressiveness, and disconstraint). The mediation analyses indicated that personality variables operated through dysfunctional obsessive beliefs to exert their effect on other outcomes. Similarly, Zahura (2020) [18] also demonstrated that personality dysfunction, dysfunctional obsessive beliefs, and negative emotional outcomes are closely related. Unlike McDermut et al. (2019) [17], Zahura (2020) [18] found that negative emotional outcomes were predicted by the interaction of personality and dysfunctional obsessive beliefs, and not by a mediation effect. In fact, Zahura (2020, p. 4) [18] explained that the "moderation for the dimensions of negative emotions, depression, social anxiety and anger since it's more appropriate to support that connection, because with mediation too many assumptions are made that can't be easily justified". So, these authors concluded that moderation makes more sense to explain these relationships because it can identify subgroups who are the most at risk of negative emotional outcomes (depression, anxiety, anger, etc.).
Taking into account the relevant literature on the relationship between personality and nomophobia, the research on dysfunctional obsessive beliefs and phobias, and the study by Zahura (2020) [18], which reported that the combination of personality and dysfunctional obsessive beliefs can explain cognitive and emotional outcomes, it seems plausible to propose that personality and dysfunctional obsessive beliefs may interact to explain nomophobia, despite the apparent lack of studies examining the possible combination of personality traits and dysfunctional obsessive beliefs to explain the disorder. Hence, the present work aimed to examine the moderating role of obsessive beliefs in the relationship between personality traits and nomophobia. We therefore posed the following hypotheses.
Hypothesis 7. Obsessive beliefs moderate the relationship between extraversion and nomophobia.
Hypothesis 8. Obsessive beliefs moderate the relationship between emotional stability and nomophobia.
Hypothesis 9. Obsessive beliefs moderate the relationship between conscientiousness and nomophobia.
Hypothesis 10. Obsessive beliefs moderate the relationship between agreeableness and nomophobia.
Hypothesis 11. Obsessive beliefs moderate the relationship between openness to experience and nomophobia.
Procedure
Non-probabilistic sampling [44] was used to obtain the samples. Assistant researchers collected data through their personal contacts. They described the research project and asked for participation. Those who volunteered to participate completed the questionnaire. The volunteers were assured of data confidentiality and anonymity. Accordingly, a wholly random sampling method was not possible, given the reliance on voluntary participation.
The study was conducted in accordance with the Declaration of Helsinki, and the protocol followed the guidelines of the Ethics Committee of our university.
Measures
Nomophobia was measured using the Nomophobia Questionnaire (NMP-Q) [3], which consists of twenty 7-point Likert items ranging from 1 (strongly disagree) to 7 (strongly agree). The scale contained four dimensions: (1) Not being able to communicate (six items; e.g., "I would feel nervous because I would not be able to receive text messages and calls"; α = 0.94); (2) Losing connectedness (five items; e.g., "I would be uncomfortable because I could not stay up-to-date with social media and online networks", α = 0.94); (3) Not being able to access information (four items; e.g., "I would be annoyed if I could not use my smartphone and/or its capabilities when I wanted to do so", α = 0.94); (4) Giving up convenience (four items; e.g., "Running out of battery in my smartphone would scare me", α = 0.94).
Dysfunctional obsessive beliefs were assessed using a global measure of its two main dimensions: perfectionism and intolerance of uncertainty and excessive responsibility and importance of controlling thoughts. It was measured through the Inventory of Obsessive Beliefs (ICO) [35], with a scale of 24 items, with response options that ranged from 1 (strongly disagree) to 7 (strongly agree). The perfectionism and intolerance of uncertainty dimension consisted of 14 items. An example item was "I must be the best in what is important to me". The excessive responsibility and importance of controlling thoughts dimension consisted of 10 items; for example, "I should be able to rid my mind of inappropriate thoughts". The Cronbach's alpha was 0.92.
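As a hedged illustration of how internal-consistency coefficients such as those reported above are typically obtained (the authors do not publish their scripts), Cronbach's alpha can be computed in R with the psych package. The data frame and item column names below are hypothetical.

```r
# Minimal sketch; 'survey_df', 'ico_1'...'ico_24' and 'nmpq_com_1'...'nmpq_com_6'
# are assumed names, not the study's actual variables.
library(psych)

ico_items <- survey_df[, paste0("ico_", 1:24)]      # 24 ICO items scored 1-7
psych::alpha(ico_items)                             # raw and standardized alpha for the ICO

com_items <- survey_df[, paste0("nmpq_com_", 1:6)]  # six "not able to communicate" items
psych::alpha(com_items)                             # alpha for one NMP-Q subscale
```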
Analysis
After the preliminary analyses (descriptive analysis and correlations) were conducted, three hierarchical multiple regression analyses were performed to test our hypotheses. In line with that reported in Cohen and Cohen (1983) [46], the lower-order variables were introduced first and the higher-order terms later. The control variables (sex and age) were entered in step 1. In step 2, the predictor variables (personality traits and dysfunctional obsessive beliefs) were introduced, and in step 3, interaction terms among variables were entered. We used centered scores to solve the possible problem of multicollinearity and to maximize interpretability. Finally, a graphic representation was generated to better understand the nature of the interactions [47]. Table 1 presents the means, standard deviations and correlations among the variables. Most of variables were significantly related. The correlation values ranged from −0.12 to 0.51.
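A hedged sketch of this three-step procedure is given below for a single trait and outcome (extraversion and the fear of not being able to access information). It assumes a hypothetical data frame df and is written in R, although the article does not state which software was used; the remaining traits and nomophobia dimensions would be handled analogously.

```r
# Illustrative only; df and its columns (sex, age, extraversion, obsessive_beliefs,
# fear_no_info) are assumptions, not the study's dataset.
center <- function(x) as.numeric(scale(x, center = TRUE, scale = FALSE))

df$extra_c <- center(df$extraversion)        # mean-centered predictor
df$obs_c   <- center(df$obsessive_beliefs)   # mean-centered moderator

m1 <- lm(fear_no_info ~ sex + age, data = df)     # step 1: control variables
m2 <- update(m1, . ~ . + extra_c + obs_c)         # step 2: predictors
m3 <- update(m2, . ~ . + extra_c:obs_c)           # step 3: interaction term

anova(m1, m2, m3)          # incremental F-test for each step
summary(m3)$coefficients   # the extra_c:obs_c row tests the moderation effect
```

Centering the predictors before forming the product term does not change the interaction coefficient, but it keeps the lower-order coefficients interpretable at average levels of the other variable, which is the usual rationale for the centering step mentioned above.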
Results
The results of the regressions are presented in Table 2. They show a positive and significant relationship between extraversion and nomophobia. Hypothesis 1 was therefore supported. Hypothesis 2 was partially supported. Our results show negative and significant relationships between emotional stability and the fear of not being able to communicate by means of a smartphone, and not being able to access information. However, the relationships between emotional stability and losing connectedness and giving up convenience were nonsignificant. Hypothesis 3 was also partially supported. Conscientiousness was only negatively related to fear of losing connectedness. Hypothesis 4 was rejected. Nonsignificant relationships were found between agreeableness and nomophobia. Finally, hypothesis 5 was supported. Openness to experience was negatively related to the fear of not being able to communicate by means of a smartphone, losing connectedness, and not being able to access information and giving up convenience.
The relationship between dysfunctional obsessive beliefs and nomophobia was supported as stated in hypothesis 6. Our results show a positive and significant relationship between dysfunctional obsessive beliefs and the four dimensions of nomophobia. Table 2 also shows the significant interaction effect on nomophobia. Our results partially supported hypothesis 7, as we found a significant interaction between extraversion and obsessive beliefs to predict the fear of not being able to access information. Figure 1 is the plot of interaction between extraversion and dysfunctional obsessive beliefs in predicting the fear of not being able to access information. When obsessive beliefs are high, fear of not being able to access to information do not seem to vary, regardless of the level of extraversion. Extroverted and introverted people reported similar levels of fear of not being able to access information when they presented high levels of obsessive beliefs. However, among subjects with a low degree of obsessive beliefs, differences between extroverts and introverts were detected in the level of fear of not being able to access information. People with higher extraversion are more fearful of not being able to access information by means of a smartphone than those with low extraversion (introverts). The relationship between dysfunctional obsessive beliefs and nomophobia was supported as stated in hypothesis 6. Our results show a positive and significant relationship between dysfunctional obsessive beliefs and the four dimensions of nomophobia. Table 2 also shows the significant interaction effect on nomophobia. Our results partially supported hypothesis 7, as we found a significant interaction between extraversion and obsessive beliefs to predict the fear of not being able to access information. Figure 1 is the plot of interaction between extraversion and dysfunctional obsessive beliefs in predicting the fear of not being able to access information. When obsessive beliefs are high, fear of not being able to access to information do not seem to vary, regardless of the level of extraversion. Extroverted and introverted people reported similar levels of fear of not being able to access information when they presented high levels of obsessive beliefs. However, among subjects with a low degree of obsessive beliefs, differences between extroverts and introverts were detected in the level of fear of not being able to access information. People with higher extraversion are more fearful of not being able to access information by means of a smartphone than those with low extraversion (introverts). Figure 2 presents the graphic representation of the interaction between emotional stability and dysfunctional obsessive beliefs in predicting the fear of giving up convenience, therefore partially supporting hypothesis 8. People with high emotional stability exhibited similar levels of discomfort at giving up convenience regardless of whether or not they had obsessive beliefs. However, when they reported lower levels of emotional stability, they experienced differences in the fear of giving up convenience depending on the degree of their obsessive beliefs. More specifically, people with low emotional stability and high dysfunctional obsessive beliefs showed a greater fear of giving up convenience than people with lower degrees of obsessive beliefs.
Our results partially supported hypothesis 11 by showing that dysfunctional obsessive beliefs moderate the relationships between openness to experience and the fear of not being able to communicate, losing connectedness and not being able to access information. Figures 3-5 present the plot of the interaction between openness to experience and dysfunctional obsessive beliefs in predicting the fear of not being able to communicate by means of a smartphone, losing connectedness and not being able to access information, respectively. These three figures present similar results. People with high degrees of obsessive beliefs along with a high openness to experience exhibited a similar degree of fear of not being able to communicate, losing connectedness and access to information to those with low openness to experience. However, people with low degrees of obsessive beliefs along with low openness to experience exhibited higher levels of fear of not being able to communicate, losing connectedness and access to information compared to people with high openness to experience. not they had obsessive beliefs. However, when they reported lower levels of emotional stability, they experienced differences in the fear of giving up convenience depending on the degree of their obsessive beliefs. More specifically, people with low emotional stability and high dysfunctional obsessive beliefs showed a greater fear of giving up convenience than people with lower degrees of obsessive beliefs. Our results partially supported hypothesis 11 by showing that dysfunctional obsessive beliefs moderate the relationships between openness to experience and the fear of not being able to communicate, losing connectedness and not being able to access information. Figures 3-5 present the plot of the interaction between openness to experience and dysfunctional obsessive beliefs in predicting the fear of not being able to communicate by means of a smartphone, losing connectedness and not being able to access information, respectively. These three figures present similar results. People with high degrees of obsessive beliefs along with a high openness to experience exhibited a similar degree of fear of not being able to communicate, losing connectedness and access to information to those with low openness to experience. However, people with low degrees of obsessive beliefs along with low openness to experience exhibited higher levels of fear of not being able to communicate, losing connectedness and access to information compared to people with high openness to experience.
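The interaction patterns reported above (Figures 1-5) are conventionally visualised by plotting model predictions at low and high (for example, plus and minus one standard deviation) values of the moderator. The sketch below continues the hypothetical model m3 from the analysis sketch and is an assumption-laden illustration of how such a plot could be drawn, not a reproduction of the published figures.

```r
# Predicted fear of not being able to access information at +/- 1 SD of the
# (hypothetical) centered predictors; sex and age are held constant for plotting.
newdata <- expand.grid(
  sex     = "female",                      # assumed reference level
  age     = mean(df$age),
  extra_c = c(-1, 1) * sd(df$extra_c),     # low vs high extraversion
  obs_c   = c(-1, 1) * sd(df$obs_c)        # low vs high obsessive beliefs
)
newdata$pred <- predict(m3, newdata = newdata)

interaction.plot(
  x.factor     = factor(newdata$extra_c, labels = c("Low extraversion", "High extraversion")),
  trace.factor = factor(newdata$obs_c, labels = c("Low obsessive beliefs", "High obsessive beliefs")),
  response     = newdata$pred,
  xlab = "", ylab = "Predicted fear of not being able to access information"
)
```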
Discussion
The smartphone has become an indispensable resource in people's lives. However, it has also brought about negative consequences for some people. Among them, a new phobia has emerged in our society: nomophobia. Nomophobia reflects the fear of a lack of access to technology for communication or access to information [48][49][50][51]. This phenomenon is relatively new, and it warrants further research to better understand it. Hence, the present study aimed to examine how personality traits, dysfunctional obsessive beliefs and the combination of the two may predict nomophobia.
The first contribution of this study is that it provides additional evidence regarding the association between personality traits and nomophobia. Our results show that extraversion is positively related to all four dimensions of nomophobia. The more extraverted a person was, the greater his or her fear of not being able to communicate, losing connectedness, not being able to access information and giving up convenience. Emotional stability was negatively related to the fear of not being able to communicate by means of a smartphone and not being able to access information. More emotionally stable people presented lower levels of nomophobia, in terms of the dimensions of fear of not being able to communicate by means of a smartphone and not being able to access information. Conscientiousness was only negatively related to the fear of losing connectedness; thus, more conscientious people exhibited less fear of losing connectedness. Finally, openness to experience was negatively related to all four dimensions of nomophobia. People who were more open to experiences presented lower degrees of nomophobia than those with a lower openness to experience. Consequently, it seems plausible to conclude that extraversion and openness to experience are the most critical personality traits in the development of nomophobia, whereas agreeableness is not associated with nomophobia. These results are consistent with those of previous studies on the association between personality and nomophobia [33,52,53], but also with those that found nonsignificant associations between agreeableness and nomophobia (e.g., [7,11,15]).
The second contribution of this study is that it sheds light on the potential association between dysfunctional obsessive beliefs and nomophobia. Our results show that dysfunctional obsessive beliefs are positively related to all dimensions of nomophobia. People with dysfunctional obsessive beliefs presented higher levels of nomophobia. This finding is consistent with the cognitive theory [33], which suggests that people with dysfunctional beliefs associate specific events with detrimental consequences, which may underpin psychological disorders. These results are also congruent with the empirical research that has demonstrated the relationship between dysfunctional beliefs and phobias (e.g., [36][37][38][39]), and the growing research that has shown a link between obsessiveness and nomophobia (e.g., [42,43]).
The third contribution of this study is that it shows how obsessive beliefs might explain the variability in the research on the association between personality traits and nomophobia. Dysfunctional obsessive beliefs might sustain levels of nomophobia, regardless of personality traits. Subjects who presented high levels of obsessive beliefs experienced higher levels of nomophobia irrespective of their personality traits. In other words, extroverts and introverts exhibited a similar fear of not being able to access information if they had high dysfunctional obsessive beliefs. Likewise, both people open to experiences and those who are not open to experiences presented similar levels of fear of not being able to communicate, losing connectedness and not being able to access information by means of a smartphone. An exception was the relationship between emotional stability and giving up convenience. People with low emotional stability and high levels of obsessive beliefs experienced a greater fear of giving up convenience compared to people with higher emotional stability. In this vein, the relationship between personality traits and nomophobia emerged mainly at low levels of obsessive beliefs. So, people who are extroverted, emotionally unstable, and not very open to experience exhibited higher levels of nomophobia than people who are introverted, emotionally stable and open to experience when they also had low obsessive beliefs. These results are congruent with incipient research suggesting that dysfunctional obsessive beliefs may intervene in the relationship between personality traits and psychological disorders (e.g., [17,18]).
Although these results are very interesting, they should also be interpreted with caution, bearing in mind the potential limitations of this study. First, the sample was collected in a specific region of Spain. Thus, results must be extrapolated to other populations with caution. Further research is warranted to provide additional empirical evidence on these research questions. Second, causal relationships between variables cannot be inferred because a cross-sectional design was used. The literature on nomophobia includes few longitudinal studies that test causal relationships over time. More research is needed in this area. Another possible limitation is that all our variables were assessed by means of self-reported questionnaires. Hence, the results may be influenced by common-method variance. Other methods could be applied to collect data in future research. Finally, we controlled for the most critical external factors that might affect our results (i.e., sex and age), but other unmeasured variables could also influence the relationship between personality, dysfunctional obsessive beliefs and nomophobia. Additional research is warranted.
These results also carry several theoretical and practical implications. Our study contributes to the body of literature that examines how personality traits and dysfunctional obsessive beliefs may be related to nomophobia. Specifically, it highlights the moderating role of obsessive beliefs in the personality-nomophobia link to explain the variability among studies. The practical implications of this study include the proposal that interventions for nomophobia may require personalization and individualization focused on dysfunctional obsessive beliefs.
Finally, our study examined a series of psychological factors that might explain nomophobia. However, additional research is needed to better understand the determinants of nomophobia. For example, social factors (e.g., social and family relationships) could also influence nomophobia. In addition, longitudinal studies are warranted to establish causal relationships between these determinants and nomophobia.
Conclusions
This study showed how personality traits and dysfunctional obsessive beliefs, and their interaction, are associated with nomophobia, taking into account its different dimensions. Specifically, it highlights the moderating role of obsessive beliefs in the relationship between personality traits and nomophobia. The study thus advances knowledge about nomophobia and carries important practical implications.
Institutional Review Board Statement: All procedures performed in studies involving human participants were in accordance with the ethical standards of the ethical commission of the University Rovira I Virgili (CEIPSA-2023-TDO-0002). In addition, the study was conducted in accordance with the Declaration of Helsinki.
Informed Consent Statement: Informed consent to participate in this research was obtained from all participants.
Data Availability Statement:
The datasets generated during and/or analysed during the current study are not publicly available due to privacy or ethical restrictions but are available from the corresponding author on reasonable request.
Conflicts of Interest:
The authors declare that they have no conflict of interest. | 2023-03-01T16:12:32.892Z | 2023-02-25T00:00:00.000 | {
"year": 2023,
"sha1": "7bfcb099ea1d21c25f5e4bb38f2da1b0a9513fa7",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/20/5/4128/pdf?version=1677457519",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d92c3f38f7d1804367802566561b8d18410492f2",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
30305728 | pes2o/s2orc | v3-fos-license | Upregulated ATM Gene Expression and Activated DNA Crosslink–Induced Damage Response Checkpoint in Fanconi Anemia: Implications for Carcinogenesis
INTRODUCTION
Fanconi anemia (FA) is a genetic disorder that predisposes to bone marrow failure, birth defects, and cancer (1,2). FA is characterized by chromosomal and cellular hypersensitivity to DNA cross-linking agents such as mitomycin C (MMC) and diepoxybutane (DEB). Thirteen FA genes have been identified: FANCA, FANCB, FANCC, FANCD1/BRCA2, FANCD2, FANCE, FANCF, FANCG, FANCI, FANCJ, FANCL, FANCM, and FANCN (reviewed in 3). Eight of the proteins (FANCA, FANCB, FANCC, FANCE, FANCF, FANCG, FANCL, FANCM) are thought to assemble into a core nuclear complex, which activates the downstream proteins FANCD2 and FANCI by monoubiquitination (4,5). FANCD1, FANCJ, and FANCN are identical to the breast cancer susceptibility genes BRCA2, BACH1/BRIP1, and PALB2, closely linking the FA and BRCA pathways (3). Presumably, all these gene products participate in a common pathway of repair or processing of interstrand cross-links (ICLs) (2) and DNA double-stranded breaks (DSBs) (6,7) that may arise as intermediates in ICL repair (8).
Analysis of an international database of FA patients has revealed a remarkable predisposition to squamous cell carcinoma (SCC) of the cervix and head and neck (HNSCC) (9,10). The cumulative incidence of SCC in FA patients is 19% by the age of 40 years, with an incidence ratio of 500, making FA the single best genetic model of HNSCC. Human papillomavirus (HPV) may be an initiating factor in FA-HNSCC carcinogenesis, with one group of investigators describing HPV DNA in 84% of FA tumors and absence of TP53 mutations (11). However, a second study was unable to find HPV in four tumor cell lines established from a different set of FA patients (12). Instead, these FA tumors demonstrated loss of heterozygosity patterns and TP53 mutations and polymorphisms similar to the non-FA-associated sporadic HNSCC controls. Although these conflicting results have not been reconciled and the HPV association remains unclear, all available information points to the importance of p53 tumor suppressor mechanisms in the genesis of cancer in FA patients.
The DNA damage response (DDR) is a complex network that regulates cell proliferation and death and coordinates repair in response to DNA damage (13). Checkpoints prevent cells that have sustained DNA damage from transiting the cell cycle before repair, whereas irreparable DNA damage triggers activation of pathways, resulting in removal of affected cells from the replicating pool. The p53 tumor suppressor accumulates in cells treated with ultraviolet or ionizing radiation (IR) and is required for the G1/S cell cycle checkpoint. Similarly, the ataxia telangiectasia mutated (ATM) kinase is stimulated immediately after IR-induced DSB damage (reviewed in 14) and rapidly phosphorylates substrates involved in the DSB response, including p53 and BRCA1. The major control of ATM activity is thought to be by autophosphorylation of preexisting protein (15). Loss of checkpoint function is associated with genomic instability and tumor predisposition, a concept recently validated in studies demonstrating activation of a DDR network early in tumorigenesis (16,17), which may serve as a barrier to cancer progression.
Two recent studies suggested that whereas FA cells could repair IR-generated DSBs, they were deficient in ICL repair (8,18). By contrast, cells deficient in the noncore FANCJ/BACH1/BRIP1 gene product were defective in the resolution of IR-induced DSBs and demonstrated radiosensitivity (19). These findings suggest that there may be differences in the precise roles of the individual FANC gene products, which may contribute to clinical heterogeneity. To clarify the function of the core FA protein, FANCA, we studied the DDR in isogenic pairs of FANCA-mutant and gene-corrected primary fibroblast cell lines. Our results suggest a molecular mechanism that may underlie the remarkable cancer predisposition in FA patients.
Retroviral Transduction
Retroviral vectors expressing human FANCA were produced in Phoenix packaging cells with amphotropic envelope (originally from Dr. G.P. Nolan, Stanford University) by transfecting pBabe-Puro-FANCA. GM1309C and GM16631 cells were infected with retroviral vectors and selected in medium containing puromycin 1 μg/mL or blasticidin S 2 μg/mL. Cells were checked for MMC sensitivity by the crystal violet assay and cell cycle analyses to confirm phenotypic correction following transduction of wild-type FANCA.
Control vectors expressed either shRNA against GFP or no shRNA. The different constructs were transfected into HCT116 cells using Fugene reagent (Roche). For transient studies, cells were harvested 48 h after transfection, and RNA was analyzed by quantitative PCR (Q-PCR). For studies with stable clones, transfected cells were cultured under puromycin (Sigma) selection at 2 µg/mL. Total RNA from either transient or stable clones was extracted with Trizol reagent (Invitrogen, Carlsbad, CA, USA) and analyzed for FANCA and ATM expression by Q-PCR.
Q-PCR was performed on the 7900HT Fast Real-Time PCR System (Applied Biosystems, Foster City, CA, USA) to measure the levels of FANCA (NM_000135.2) or ATM (NM_000051) using TaqMan and Universal Probe Library technology and CYPB as a control.
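The Q-PCR fold changes reported later are not accompanied by an explicit quantification formula; a common convention is the comparative 2^-ddCt method, sketched below in Python with invented Ct values (the function and numbers are illustrative assumptions, not data from this study).

def relative_expression(ct_target, ct_reference, ct_target_cal, ct_reference_cal):
    # ddCt = (Ct_target - Ct_reference)_sample - (Ct_target - Ct_reference)_calibrator
    d_ct_sample = ct_target - ct_reference
    d_ct_calibrator = ct_target_cal - ct_reference_cal
    return 2 ** (-(d_ct_sample - d_ct_calibrator))

# Example: ATM normalized to CYPB in a FANCA-mutant line vs. its gene-corrected control.
fold_change = relative_expression(
    ct_target=24.1, ct_reference=20.0,         # mutant line: ATM, CYPB (made-up values)
    ct_target_cal=25.3, ct_reference_cal=20.1  # corrected line: ATM, CYPB (made-up values)
)
print(f"ATM fold change, mutant vs. corrected: {fold_change:.2f}")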
Immunofluorescence and Confocal Microscopy
Primary fibroblasts were grown on chamber slides (Nunc). After the indicated cultivation times following 5 or 0.5 Gy irradiation, cells were fixed with 4% paraformaldehyde for 10 min. For MMC treatment, cells were incubated with 3 μM MMC for 1 h. After two washes with PBS, cells were cultivated for the indicated times and fixed. After fixation, cells were permeabilized for 10 min in 0.3% Triton-X/PBS, washed twice with PBS, and blocked with 3% bovine serum albumin/PBS for 30 min. The cells were then incubated with rabbit or mouse anti-γ-H2AX phospho-Ser139 (Upstate, Charlottesville, VA, USA) for 1 h at room temperature, washed three times in PBS, and incubated with Alexa Fluor 488 or Alexa Fluor 594 conjugated antirabbit or antimouse IgG (Molecular Probes). Cells were washed three times in PBS and mounted by using Slowfade Antifade solution (Molecular Probes-Invitrogen, Eugene, OR, USA). Fluorescence images were captured using an Olympus IX70 confocal microscope. γ-H2AX foci were enumerated manually after capturing images, and cells containing more than five discrete bright dots were scored positive. More than 150 morphologically intact cells were examined for each experiment and time point. Standard error of the mean (SEM) was calculated.
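As a minimal illustration of the scoring rule just described (a cell with more than five discrete foci is counted as positive, and replicate percentages are summarized as mean and SEM), the short Python snippet below uses invented foci counts; it is not the analysis pipeline used in the study.

import statistics

def percent_positive(foci_per_cell, threshold=5):
    positive = sum(1 for n in foci_per_cell if n > threshold)
    return 100.0 * positive / len(foci_per_cell)

# Hypothetical foci counts per cell for three replicate experiments.
replicates = [
    [0, 2, 7, 12, 1, 9, 0, 6],
    [1, 8, 0, 11, 3, 7, 5, 10],
    [0, 6, 9, 2, 14, 0, 7, 8],
]
values = [percent_positive(r) for r in replicates]
mean = statistics.mean(values)
sem = statistics.stdev(values) / len(values) ** 0.5
print(f"{mean:.1f}% foci-positive cells (SEM {sem:.1f})")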
Flow Cytometric Assessment of Cell Cycle Arrest and Apoptosis
FA cell lines were exposed to varying concentrations of nitrogen mustard (NM) and studied for onset of cell cycle arrest and apoptosis using flow cytometry. Apoptotic cells were identified by fluorescent end labeling of DNA fragments using terminal deoxynucleotidyl transferase nick-end labeling (TUNEL), with a DAPI or propidium iodide DNA counterstain. G2/M arrest was assessed by ploidy analysis after DNA staining with propidium iodide.
TP53 Mutation Testing
Genomic DNA was isolated from cells using a kit from Gentra Systems (Minneapolis, MN, USA). Mutation detection for the TP53 gene was done using a commercial chip and resequencing (Asper Biotech, Tartu, Estonia).
Upregulated ATM Gene Expression and Activation of the ATM-p53-Checkpoint Pathway in Primary Fibroblasts Derived from FA Patients
To study the functional relationship between the p53-ATM axis and the FA gene pathway, we used the following patient-derived primary fibroblast cell lines: FANCA-mutant GM1309C and GM16631 and corresponding gene-corrected cell lines created by transduction with wild-type FANCA cDNA. For each transduced line, complementation of the FA phenotype was confirmed by correction of MMC sensitivity and relief of G2/M cell cycle delay at 48 h after treatment with MMC (data not shown). We first analyzed the expression levels of components of the p53-ATM axis and also activation of this pathway using antiphosphoprotein antibodies. In GM1309C cells, basal expression of ATM and p53 and phosphorylation of ATM (Ser1981) and p53 (Ser15) induced by IR or MMC were upregulated, compared with GM1309C cells transduced with wild-type FANCA (Figure 1A). Similar results were observed in a second FANCA-mutant line, GM16631 (Figure 1B). We reasoned that these results might be explained by upregulation of the basal levels of ATM protein in the mutant fibroblasts. We also confirmed the upregulation of ATM protein (Figure 1C) in a FANCA-mutant EBV-immortalized lymphoblastoid cell line, HSC72, in comparison to its gene-corrected control. Using real-time Q-PCR assays, we confirmed that ATM gene expression was increased in GM1309C cells (Q-PCR fold change: GM1309C mutant 1.00 vs. gene-corrected control 0.45) and in HSC72 cells (Q-PCR fold change: HSC72 mutant 0.42 vs. gene-corrected control 0.19).
To determine if these changes in ATM gene expression were directly caused by FANCA depletion, we transfected HCT116 cells (known to be p53 wildtype) with two different siRNAs against FANCA. For these experiments, we used a plasmid expression vector (pRS; Ori-Gene Technologies) that drives 29-mer short hairpin RNAs targeting FANCA. In both transient and stable transfection experiments with either shRNA (Table 1), we found that knockdown of FANCA expression was associated with moderate downregulation of ATM expression, as assessed by real-time PCR, compared with cells transfected with an empty vector control. These data suggested that depletion of FANCA does not directly lead to upregulated ATM gene expression.
γ-H2AX Foci in Primary FANCA-Mutant Fibroblasts
γ-H2AX foci are regarded as sensitive markers of DSB formation (20). Thus, we next asked whether FA mutant cells accumulate larger numbers of DSBs, because FA cells are presumed to be defective in the processing or repair of DSBs that arise during ICL repair (2). Using semiquantitative immunofluorescent microscopy, we analyzed γ-H2AX focus formation in the primary FANCA-mutant GM1309C cell line and its gene-corrected control after exposure to IR or MMC. (For both cell lines, it was possible to detect baseline γ-H2AX staining of approximately 3% to 7% of cells, with no significant difference between the two.) γ-H2AX foci could easily be seen after IR (Figure 2A) or MMC ( Figure 2B) treatment, but the kinetics and time course of formation were markedly different with each type of DNA damage ( Figure 3A and B). Following IR exposure, γ-H2AX foci were rapidly formed (within minutes) in FA-mutant and gene-corrected cells (Figure 2A). Comparing the FA mutant and gene-corrected cell lines, however, there was no significant difference in the numbers of cells positive for γ-H2AX foci, and there was rapid clearance of γ-H2AX foci in both mutant and control cells ( Figure 3A). Even after lowering the IR dose from 5 to 0.5 Gy, there was no difference in the percentage of foci-positive cells between GM1309C and its gene-corrected control (data not shown). With the lower 0.5 Gy dose, the mean number of foci per cell decreased as expected, but there was no significant difference in the means between GM1309C and control cells (data not shown).
For MMC, a concentration of 3 µM was chosen because this had previously been shown to induce γ-H2AX foci in primary MEF cells (21) and is thought to induce approximately 10^3 interstrand crosslinks per genome. After MMC treatment (Figure 2B and Figure 3B), the number of cells exhibiting γ-H2AX foci increased gradually over 24 to 48 h, and the peak of positively scored cells (~60%) was less than following IR (90%-100%). In contrast to the results following IR, we observed a significant increment and persistence in the number of cells positive for γ-H2AX foci in FANCA-mutant primary fibroblasts compared with gene-corrected controls.
Acquired Relative Resistance to DNA Crosslinker-Induced Cell Cycle Arrest and Apoptosis in a TP53-Mutant, Patient-Derived HNSCC Cell Line
To this point, our experiments had focused on primary fibroblast cell lines derived from FA patients. To determine the significance of our findings with respect to FA carcinogenesis, we next turned to a patient-derived HNSCC cell line. The HPV-negative squamous cell carcinoma cell line OHSU-974 (12) is derived from a FA-A patient, and VU974L is a lymphoblastoid cell line derived from this same individual. Both cell lines were exposed to varying concentrations of the DNA cross-linker NM and studied for onset of cell cycle arrest and apoptosis as described in "Materials and Methods." At each NM concentration, the percentage of cells arrested in G2/M and the percentage of apoptotic cells were quantified. Threshold NM concentrations for increased G2/M arrest and apoptosis were then correlated. Accumulation of apoptotic cells did not occur until these thresholds were met or exceeded. Figure 4A displays the percentage of cells arrested in G2/M as a function of NM concentration for the respective cell lines. VU974L retained heightened sensitivity to NM, whereas OHSU-974 exhibited resistance to NM-induced cell cycle arrest. As shown in Figure 4B, VU974L underwent NM-induced apoptosis with concentrations of NM that exceeded the threshold for G2/M arrest. In contrast, OHSU-974 displayed a marked resistance to NM-induced apoptosis at the concentrations of NM tested. In considering these results, it is important to recognize that the resistance to cell cycle arrest and apoptosis is only relative to VU974L, because OHSU-974 is more sensitive to DNA crosslinker cytotoxicity than are HNSCC cell lines derived from non-FA patients (12).
OHSU-974 was previously described as having a 1-bp deletion at codon 41 (exon 4) of TP53, which results in a very short p53 protein by introduction of a stop codon downstream of this frameshift mutation (12). In addition, OHSU-974 was also determined to have a polymorphism encoding arginine or proline at position 72 of p53 (12). When we analyzed the TP53 gene from genomic DNA of VU974L, we confirmed the presence of the heterozygous polymorphism at exon 4 (codon 72) but found no other TP53 mutations among 1319 mutations or SNPs tested (data not shown).
DISCUSSION
We focused our experiments on the DDR in primary FA cells and its putative role as a barrier to cancer through cell cycle and apoptosis regulation. In FA mutant cells, we found increased IR-or MMC-induced phosphorylation of p53 and ATM, suggesting a generally heightened activation responding to DNA damage. This heightened response could be explained by upregulation of ATM gene expression in the FANCA-mutant cell lines. Functional analysis of this response using γ-H2AX foci as sensitive markers of DSBs suggested that FA-A fibroblasts suffer from a defect that causes aberrant persistence of MMC-induced foci. The consequences of this chronically activated DDR may include selective pressure for cancer cells that can escape this checkpoint barrier.
Our immunoblot analysis suggests increased basal levels of ATM in the FA mutant fibroblast cell lines compared with those of gene-corrected controls, thereby explaining heightened activation of ATM and p53 following either IR or MMC treatment. There may also be slight upregulation of phosphorylated ATM even before IR or MMC, suggesting a response to endogenous DNA damage. In human fibroblasts, ATM levels are thought to remain relatively constant throughout the cell cycle (22), and it is well established that the major control of ATM function is by activation of preexisting protein by a mechanism involving autophosphorylation (15). However, recent experiments indicate that ATM expression can also be regulated at the transcriptional level (23). The E2F-1 transcription factor increases ATM promoter activity (24), and a deficiency in the catalytic subunit of DNA-dependent protein kinase leads to downregulation of ATM (25). To our knowledge, however, increased ATM gene expression in FANCA-mutant cells has not been previously described and represents a potentially important mechanism. Our data also indicate that knockdown of FANCA does not directly lead to upregulation of ATM expression, perhaps because chronic genotoxic stress induced by loss of FANCA is not seen in cells partially depleted of FANCA.
A second important question is, why should ATM be hyperactivated after IR when our γ-H2AX data indicate that FA cells are proficient at DSB repair, the main lesion induced by IR? IR-induced ATM activation in FA cells may result from damage to biomolecules including lipids and DNA (single-strand breaks, DSBs, and nucleotide damage), as well as the formation of reactive oxygen species (ROS) (26). Previously, we demonstrated a possible role for FA proteins in protection against oxidative DNA damage (27). Viewed in this context, ATM activation may reflect a response to increased levels of endogenous and induced oxidative DNA damage that includes but is not necessarily limited to DSBs.
Phosphorylated γ-H2AX accumulates at DSBs, forming foci that can be detected by immunostaining (20). Using confocal microscopy to enumerate γ-H2AX foci, our data indicate marked differences in the formation and persistence of DSBs in FA mutant cells only following MMC treatment. If we consider our data in the context of a recently proposed model of the FANC proteins in DNA ICL repair (2), which proposes DSB intermediates, several conclusions can be drawn. First, our data suggest that the processing and clearance of IR-induced DSBs are not grossly impaired in FA mutant primary cells. Following IR, there was a very rapid formation followed by a rapid clearance of DSBs, with no differences between FA mutant and control cells. Following MMC, however, DSBs were gradually increased until reaching a plateau, with markedly higher levels in FA mutant cells before a correspondingly higher plateau phase. Our findings are similar to those reported by Rothfuss and Grompe (8) and Atanassov et al. (18), although neither study employed isogenic pairs of patient-derived primary fibroblasts as we have done. In particular, the study by Rothfuss and Grompe (8) demonstrated in normal cells rapid activation of (ubiquitinated) FANCD2 after IR and slow activation in S phase after ICL treatment, data that closely mirror our γ-H2AX foci results. ICLs are believed to be repaired predominantly by homologous recombination (HR) pathways, whereas DSBs are repaired by both HR and nonhomologous end joining (NHEJ) processes (for review see 28). Thus, persistence of γ-H2AX foci in FA cells following MMC may reflect inefficient HR, while clearance of IR-induced DSBs may indicate intact NHEJ (18).
FANCJ was identified as the BRIP1 (BRCA1-interacting protein 1) or BACH1 (BRCA1-associated C-terminal helicase 1) helicase (29,30). In recently published work, MCF-7 breast cancer cells made deficient for BACH1 by RNA interference methods were delayed in DNA double-strand break repair, exhibiting persistence of γ-H2AX foci following 0.5 Gy IR treatment (19). Although these experiments differ from ours in that shRNA methods were used to knock down BACH1 expression, they suggest that FANCJ deficiency has different effects on DSB repair and IR sensitivity than do mutations in FANCA. In this regard, FANCJ is known to act downstream from the core complex FA pathway, as ubiquitination of FANCD2 is normal in FA-J cells (31). Possibly, these differences in DSB processing between FANCJ and FANCA may have clinical significance. Turning to the implications of our work for carcinogenesis, mutant FA primary fibroblasts are known to be 3-to 50-fold more sensitive than normal fibroblasts to transformation in culture by the SV40 virus (32). In previous work, we confirmed this marked susceptibility to transformation of a FA-C mutant primary fibroblast cell line, GM449 (33). We then introduced a copy of the wild-type FANCC cDNA into GM449 cells using a recombinant adeno-associated virus (rAAV) vector. We found that GM449 cells transduced with FANCC were at least 10-fold less prone to form transformed foci. Diminished transformation potential of transduced cells was a specific effect of FANCC because GM449 cells transduced with a rAAV vector not containing FANCC retained marked susceptibility to SV40 transformation. At the time of these studies, we speculated that FA gene products such as FANCC might have a tumor suppressor function cooperative with p53 or RB. Recent genetic evidence for this comes from the demonstration of accelerated tumor formation in both Fancc (34) and Fancd2 (35) knockout mice bred to heterozygosity at Trp53. Paradoxically, our present data, which indicate DDR activation in mutant cells, suggest intact or even hyperactive tumor suppressor pathways in mutant cells responding to MMC-induced damage. In zebrafish, knockdown of Fancd2 caused developmental abnormalities that were attributed to p53-dependent apoptosis (36). Consequently, we hypothesize that chronic DDR activation, perhaps resulting from defective DNA processing and repair, may drive antiproliferative signaling at various points during development and lead to somatic abnormalities and bone marrow failure. Conversely, this same state may exert significant selective pressure for the inactivation of p53 or other DDR components, accelerating genetic instability and evolution of leukemia and cancer.
A recent publication has suggested that FANCG- and FANCC-deficient pancreatic tumor cell lines are hypersensitive to inhibition of ATM by the KU-55933 inhibitor (37). The two cell lines, Hs766T and PL11, are not derived from FA patients but have mutation of one allele and loss of heterozygosity of the other allele of FANCG or FANCC, respectively (38). Both cell lines were previously shown to be hypersensitive to DNA crosslinkers as assessed by cell survival assays and G2/M cell cycle arrest (39). By comparison, our data from OHSU-974, an HNSCC cell line established from a known patient with biallelic mutations in FANCA, indicate loss of DNA crosslinker-induced checkpoint control, the mechanism of which is currently the focus of our ongoing research. Although based on this single cell line, our observations suggest that a subset of FA patient-derived HNSCC may acquire secondary mutations that enable escape from dependence on the ATM surveillance pathway. | 2018-04-03T01:38:41.621Z | 2008-03-01T00:00:00.000 | {
"year": 2008,
"sha1": "b3f454c83bbdec0ad634948f92f456d1327fbc80",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.2119/2007-00122.yamamoto",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "64b54e0937c72b6fa7cbfdf7ed681ebe3d61efc9",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
54532233 | pes2o/s2orc | v3-fos-license | Co-implanting orthotopic tissue creates stroma microenvironment enhancing growth and angiogenesis of multiple tumors [version 1; peer review: 1 approved, 1 approved with reservations]
Abstract
Tumor models are needed to study cancer. Noninvasive imaging of tumors under native conditions in vivo is critical but challenging. Intravital microscopy (IVM) of subcutaneous tumors provides dynamic, continuous, long-term imaging at high resolution. Although popular, subcutaneous tumor models are often criticized for being ectopic and lacking orthotopic tissue microenvironments critical for proper development. Similar IVM of orthotopic and especially spontaneous tumors is seldom possible. Here, we generate and characterize tumor models in mice for breast, lung, prostate and ovarian cancer by co-engrafting tumor spheroids with orthotopic tissue in dorsal skin window chambers for IVM. We use tumor cells and tissue, both genetically engineered to express distinct fluorescent proteins, in order to distinguish neoplastic cells from engrafted tissue. IVM of this new, two-colored model reveals classic tumor morphology with red tumor cell nests surrounded by green stromal elements. The co-implanted tissue forms the supportive stroma and vasculature of these tumors. Tumor growth and angiogenesis are more robust when tumor cells are co-implanted with orthotopic tissue versus other tissues, or in the skin alone. The orthotopic tissue promotes tumor cell mitosis over apoptosis. With time, tumor cells can adapt to new environments and ultimately even grow better in the non-orthotopic tissue than in the original orthotopic tissue. These models offer a significant advance by recreating an orthotopic microenvironment in an ectopic location that is still easy to image by IVM. These "ectopic-orthotopic" models provide an exceptional way to study tumor and stroma cells in cancer, and directly show the critical importance of the tissue microenvironment.
Introduction
Relevant animal models are vital to understand the processes involved in tumor progression and to develop new therapies [1][2][3][4] . Solid tumors can be followed in their native tissue when tumor cells are administered properly into their orthotopic tissue, or when tumors are chemically induced or arise spontaneously, for instance in genetically engineered mouse models. Though in vivo imaging systems are advancing rapidly, imaging orthotopic tumors within the animal, especially at the cellular level for extended periods of time dynamically and continuously, remains a significant challenge. Subcutaneous models that implant rodent or human tumor cells directly into the skin of animals have quickly become the most commonly used tumor models 5-7 , in part because these tumors are convenient, easy to implant, and are readily imaged outside the body 4,8 .
Recent advances in intravital microscopy (IVM) have made subcutaneous tumors even easier to image dynamically at higher resolutions in live rodents through dorsal skinfold window chambers 7,9,10 . Standard light and fluorescence microscopy in this system can distinguish individual cells in the tumor so that many cellular events, such as cell migration, mitosis, pyknosis, apoptosis, and the growth of blood vessels, can be readily quantified. Intravital microscopy can also be particularly powerful for evaluating tumor imaging probes and therapeutic agents by visualizing at high resolution and quantifying tumor targeting, delivery, processing and efficacy in vivo, dynamically and continuously.
Tumor cell interactions with the surrounding tissue stromal environment, including extracellular matrix, local enzymes and proteases, vasculature, inflammatory cells, growth factors and hormones, can significantly affect tumor development [11][12][13] and are, to a large extent, extensively altered or even missing when tumors are grown in ectopic environments such as skin [14][15][16][17][18] . Most orthotopic tumor models and especially spontaneous tumors are not readily amenable to IVM except possibly acutely for very short periods after surgical exposure, which frequently can be quite invasive. Moreover, injecting tumor cells properly to maintain an orthotopic tissue microenvironment can be quite difficult, in part because the orthotopic organ to be injected can be so very tiny in the mouse. Making sure that all of the injected cells enter and stay inside tiny organs can be quite challenging. Microsurgical techniques with stereomicroscopic imaging can help but greatly increase the labor per mouse.
Recently, we have successfully engrafted donor tissue from healthy rat organs and mouse prostate tissue with hormonally sensitive prostate tumor cells into the dorsal skinfold of mice carrying a window chamber for dynamic and continuous IVM imaging in vivo 19,20 . The implanted tissue maintained both tissue and species-specificity, even expressing key organ-specific biomarkers 19 . Here, we expand this tissue transplantation and revascularization model to multiple cancers by engrafting different donor tissues with various tumor spheroids to create novel ectopic-orthotopic (EO) tumor models that permit dynamic imaging by IVM while attempting to provide and maintain an orthotopic stroma microenvironment for the tumor cells. Comparative IVM analysis of these tumors directly shows the critical incorporation of the co-engrafted tissue into the stroma of the growing tumor and ultimately the pronounced importance of this stroma and unique microenvironment for tumor growth and angiogenesis.
Materials
All materials were obtained from Sigma-Aldrich (St. Louis, MO) unless otherwise noted.
Tumor growth
Tumors were imaged using IVM, as described above. Tumor growth was analyzed off-line from the recorded, digital, grayscale 0-to-256 images using Image-Pro Plus (Media Cybernetics, Bethesda, MD). Tumor growth was determined in two ways: by measuring the area with fluorescence signal from the GFP-or CFP-expressing tumor cells or by quantifying the cumulative fluorescence signal for the tumor over time. Tumor area was measured by counting the number of pixels with a grayscale intensity above 75, thereby making it easier to reliably follow irregularly shaped tumors. The cumulative tumor fluorescence signal was measured by signal summation of all pixels over 75. All growth curves are normalized to the tumor on day 0. In all cases, growth measured by area or aggregate fluorescence signal was found to be very similar so only one of the results is usually shown.
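A minimal Python/NumPy sketch of these two read-outs is given below, assuming 8-bit grayscale IVM frames saved as TIFF files; the reader, file names and threshold handling are assumptions, since the original analysis was carried out in Image-Pro Plus.

import numpy as np
import imageio.v3 as iio

THRESHOLD = 75  # grayscale cut-off used in the text

def growth_metrics(path):
    img = iio.imread(path).astype(np.float64)  # assumes a single-channel image
    mask = img > THRESHOLD
    area = int(mask.sum())            # tumor area: number of above-threshold pixels
    signal = float(img[mask].sum())   # cumulative fluorescence signal
    return area, signal

area0, signal0 = growth_metrics("tumor_day00.tif")
for day in (3, 7, 14):
    area, signal = growth_metrics(f"tumor_day{day:02d}.tif")
    print(day, area / area0, signal / signal0)  # both normalized to day 0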
Mitotic and apoptotic indices
To determine mitotic and apoptotic indices, two peripheral and two central fields (20× objective) from the growing tumor in the dorsal skinfold chamber were analyzed from three different animals (six random fields) for each tumor/tissue combination. Only mitotic figures in metaphase-telophase are included in the mitotic index (MI) to exclude potential artifact of nuclear membrane distortion. Apoptotic/pyknotic nuclei are defined as H2B-GFP labeled nuclei with a cross-sectional area < 30 µm². Nuclear karyorrhexis, easily distinguishable by the vesicular nuclear condensation and brightness of H2B-GFP, is included within this apoptotic index (AI).
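For illustration only, the toy snippet below computes per-field mitotic and apoptotic indices and their ratio from hypothetical counts; the actual study relied on manual scoring of nuclear morphology as described above.

fields = [
    {"total_nuclei": 420, "mitotic": 9, "apoptotic": 4},   # made-up counts, field 1
    {"total_nuclei": 385, "mitotic": 7, "apoptotic": 6},   # made-up counts, field 2
]

for i, f in enumerate(fields, start=1):
    mi = f["mitotic"] / f["total_nuclei"]
    ai = f["apoptotic"] / f["total_nuclei"]
    ratio = mi / ai if ai else float("inf")
    print(f"field {i}: MI={mi:.2%}  AI={ai:.2%}  MI/AI={ratio:.2f}")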
Vascular parameters
The vascular density of tumors was measured from digital images (obtained using 10× objective) that were "flattened" to reduce the intensity variations in the background pixels and cropped to eliminate distorted areas. The thresholding feature was used to segment the picture into objects and background. The picture was also skeletonized to measure vascular length. Vascular density was calculated as vascular length per tumor area.
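A rough scikit-image sketch of this read-out is shown below (background flattening, thresholding, skeletonization, and skeleton length per area); the file name, filter parameters and the use of the full image as the area denominator are assumptions rather than the Image-Pro Plus workflow actually used.

import numpy as np
import imageio.v3 as iio
from skimage import filters, morphology

img = iio.imread("vessels_day10.tif").astype(np.float64)  # assumes a single-channel image

flat = img - filters.gaussian(img, sigma=50)     # crude background flattening
vessels = flat > filters.threshold_otsu(flat)    # segment vessels from background
skeleton = morphology.skeletonize(vessels)       # one-pixel-wide vessel centerlines

tumor_area = vessels.size                        # ideally a tumor mask, if available
vascular_density = skeleton.sum() / tumor_area   # skeleton pixels approximate vessel length
print(f"vascular density: {vascular_density:.4f} skeleton pixels per image pixel")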
Statistics
SigmaStat (Systat Software, San Jose, CA) was used to determine statistical significance. Ranked ANOVAs with the Tukey post hoc test were used, and a statistically significant difference was defined as p<0.05.
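As a hedged illustration of this approach, the Python snippet below runs a one-way ANOVA on rank-transformed values followed by a Tukey post hoc comparison, using made-up group data; it approximates, rather than reproduces, the SigmaStat procedure.

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

groups = {                       # hypothetical normalized tumor sizes per group
    "subcutaneous": [1.0, 1.3, 1.1, 1.4],
    "lung":         [1.8, 2.1, 1.9, 2.3],
    "mammary":      [3.0, 3.4, 2.9, 3.6],
}
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])

ranks = stats.rankdata(values)   # rank-transform the pooled data
per_group = np.split(ranks, np.cumsum([len(v) for v in groups.values()])[:-1])
print(stats.f_oneway(*per_group))                    # ANOVA on the ranks
print(pairwise_tukeyhsd(ranks, labels, alpha=0.05))  # Tukey HSD on the ranks, p<0.05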
Results and discussion
To develop new breast and lung tumor models that are amenable to continuous long-term, dynamic monitoring by IVM, yet maintain an "orthotopic" tumor microenvironment, we engrafted orthotopic tissue into the ectopic subcutaneous location, the dorsal skin with a window chamber already surgically attached, and then implanted tumor spheroids onto this donor tissue (see Methods). The tumor spheroids used in these EO tumors were formed from murine mammary adenocarcinoma (N202) and Lewis Lung Carcinoma (LLC) cells (see Methods).

Mice were anesthetized (7.3 mg ketamine hydrochloride and 2.3 mg xylazine per 100 g body weight, intraperitoneal injection) and placed on a heating pad. As per the standard IVM tumor model 20,22 , a titanium frame was placed onto the dorsal skinfold of the mice to sandwich the extended double layer of skin. A 15 mm diameter full-thickness circular layer of skin was then excised. The superficial fascia on top of the remaining skin was carefully removed to expose the underlying muscle and subcutaneous tissue, which was then covered with another titanium frame with a glass coverslip to form the window chamber. After a recovery period of 1-2 days, tumor spheroids were implanted.
Tumor spheroids were formed by plating 50,000 cells (N202, LLC, TrampC2 and MOVCAR-16) onto 1% agar-coated 96-well nontissue culture treated flat bottom dishes (Becton Dickinson, Franklin Lakes, NJ) (20 µl cells in 100 µl medium) and centrifuging 4 times at 1200g for 15 min, rotating the dish after every centrifugation. The cells were incubated an additional 3-7 days (depending on cell type) at 37°C in 5% CO 2 in air to form tight 3-dimensional spheroids. BT474 cells required 500,000 cells in the presence of Matrigel (BD Bioscience, San Diego) (2:1 cell volume dilutioncells to matrigel) to form spheroids in culture.
The tumor spheroids were implanted in the window chamber directly onto the exposed dorsal skin either alone to created standard, classic, subcutaneous model or with lung, liver, mammary (lactating female mammary fat pad) or prostate tissue which was excised from a donor mouse and minced into small pieces in penicillin (10,000 U/ml) -streptomycin (10,000 µg/ml) solution. One animal was usually enough to supply donor tissues for an experimental set of 15 animals except for the EO model in the case of prostate tissue when 3 animals were needed. Typically, the tumor spheroid was placed in the center of a bed of 1-2 mm of flattened minced tissue onto the subcutaneous tissue of each mouse. Tumors were allowed to re-vascularize over 7-14 days depending on model. For the BT474 cells, in some cases 10 µl of 5 mg/ml human 17β-estradiol (University of California, San Diego pharmacy) was injected subcutaneously twice weekly.
For adaptation to a new microenvironment, the tumors were allowed to re-vascularize as above. The tumor was removed and the fluorescent tumor cells were separated from non-tumor cells. New tumor spheroids were formed and re-implanted with donor mouse tissue as above. This was repeated two more times to reprogram the tumor to its new microenvironment.
Intravital microscopy (IVM) and fluorescence confocal microscopy of tumors

After implantation, tumor spheroids were allowed to revascularize (12-14 days) and tumors were imaged with intravital fluorescence video microscopy, as described 20 . The tumors were imaged with a FITC or Texas Red filter using an integrated frame grabber. Confocal microscopy was used to acquire dual fluorescence images via a Nikon E2000 microscope (20× and 60× objective lens) equipped with a Perkin Elmer UltraView 5ERS confocal system with a Hamamatsu Orca ER camera (Hamamatsu Corporation, Bridgewater, NJ). To construct movies, dual color images were taken.
In the last decade or so, it has become clear that the stroma and tissue microenvironment can affect tumor development. Our ability to co-implant other tissues from normal organs with different tumor cell types in the dorsal skin window chamber provides a unique way to study the direct effects of different tissue stromas on tumor development. Moreover, this system facilitates direct imaging of the tumors over many days using IVM. To assess the effects of different tissues on the tumor growth in vivo, tumor spheroids were implanted directly onto the dorsal skin alone in the window chamber (as per the classic IVM subcutaneous tumor model 22 ) or with different donor tissue. IVM enabled detailed visualization and quantification of tumor cell fluorescence signal as well as tumor area to assess tumor growth (see Methods). The N202 spheroids grew well on skin alone, better with each co-implanted donor tissue and most robustly with the orthotopic mammary fat pad tissue (Figure 1a and b).
After 15 days, tumors grown in mammary tissue were >3 times the size of tumors grown subcutaneously. Thus, the orthotopic tissue provided the heartiest environment for tumor growth.
To follow tumor cell growth and chromosome dynamics independent from changes in the tumor stroma and surrounding host tissue, these tumor cell lines were transduced to stably express histone H2B linked to green (GFP) or monovalent cherry (CFP) fluorescent proteins. We observed very similar growth for the parental and stably transfected fluorescent tumor cells and when tumor spheroids were implanted simultaneously with engrafted tissue or onto engrafted tissue that had already revascularized days earlier (Supplementary Figure 1a). We also assessed if the tumors implanted in syngeneic mice had a growth advantage over tumors implanted in nude mice. When we implanted syngeneic tumor cell spheroids in non-immunocompromised versus nude mice, we did not observe any noticeable differences in EO tumor growth (Supplementary Figure 1b). Both the N202 and LLC cells grew very similarly in nude mice as in FVB and C57BL/6J mice, respectively (Supplementary Figure 1b and data not shown). Consistent with this result, classic subcutaneous IVM tumor models routinely use nude mice in part because of several key advantages: i) they enable implantation of a wide variety of tumor spheroids and tissues of different strains and species and ii) their abundant and hairless skin makes it easier to implant the titanium window chambers and observe the progressing tumors. Therefore, for the remaining experiments, we used fluorescent tumor spheroids implanted concurrently with other tissues in nude mice.

The EO tumor model described here uses three sources of tissue: the engrafted orthotopic tissue from the donor mice, the tumor spheroids implanted onto the engrafted tissue, and the entire living host mouse. To visualize the implanted tissue cells distinctly from the neoplastic tumor cells and to determine which tissue (engrafted tissue or host tissue) gives rise to the stroma and vasculature inside the EO tumors, donor mammary tissue from GFP transgenic mice 23 was implanted simultaneously with N202 tumor spheroids expressing H2B-mCherry. Confocal fluorescence microscopy showed very typical solid tumor architecture with well-separated islands of "red" tumor cells surrounded by "green" stromal cells. When the images were projected in 3D, green vasculature with other cells derived from the orthotopic stroma could also be seen weaving amid the red tumor cells (Figure 2a-d; Supplementary Movies 1 and 2). Vascular tubes with blood flow were also readily apparent in phase images as dark vessels against the lighter stroma (Figure 2e). Under fluorescent microscopy, the blood vessels within tumors were uniformly lined with cells expressing GFP (Figure 2a, b, d and f) and were clearly distinct from tumor cells expressing mCherry (Figure 2a, b, c and g). Thus, the engrafted tissue persists to become the supportive stroma for the tumor cells in this EO model. Furthermore, these images show not only that a thriving tumor has been created with very typical, quite classic morphology but that two key components of the tumor can be marked a priori to be visualized distinctly in a long-term, dynamic, continuous imaging system.
To examine the vascular endothelium more specifically, we also implanted donor tissue excised from mice expressing GFP under the endothelial cell-specific promoter TEK 24 . Here again, the tumor vasculature was clearly lined with GFP-expressing endothelial cells (Figure 2h) that were clearly distinct from tumor cells (Figure 2i). The green vessels attached to host vessels lacking GFP and blood cells circulated seamlessly between the contiguous vessels. The tumor stroma and neovasculature, therefore, arose from the engrafted donor tissue and successfully revascularized by attaching to the unlabeled vessels present in the host animal.

Having shown previously that prostate tumor cells grow better when implanted with prostate tissue that expresses key hormones required for tumor cell growth 20 , we were concerned that robust growth could similarly emanate from hormones expressed in the mammary fat pad tissue. To avoid typical hormonal effects and to show that the enhanced tumor growth with co-engrafted orthotopic tissue was not restricted to one cell type, we created a new lung tumor model by implanting LLC tumor spheroids onto skin alone or co-engrafted with lung, liver, and mammary tissue (Figure 1c). Fluorescent IVM again showed the tumors growing sooner and more rapidly with orthotopic tissue. At 14 days after implantation, tumors with orthotopic tissue were again at least three times larger than subcutaneous tumors (Figure 1d). However, it should be noted for both LLC and N202 cells that after about a 10-day lag period, the growth rate of the subcutaneous tumors increased dramatically to become more similar to that of the EO tumors.

We also tested other tumor cell lines in this system to create prostate and ovarian EO tumor models, and they showed quite extreme behavior with an extraordinary dependence on co-implantation of the correct orthotopic tissue. Both Tramp-C2 prostate tumor cells (Supplementary Figure 2a) and MOVCAR-16 ovarian tumor cells (Supplementary Figure 2b) did not grow at all when implanted alone subcutaneously in the dorsal skin window chambers. They actually disappeared over a 10-day period. However, when co-implanted with their proper orthotopic tissue in the EO model, they both grew very well, with vascular development clearly evident by 6 days after implantation. Co-implantation with other ectopic tissues did not prevent tumor disappearance (data not shown). Thus, it is not just the presence of any co-implanted stroma tissue to envelop the tumor spheroid that is necessary for growth. These tumor cell lines appear actually to require the co-engraftment specifically of the orthotopic tissue to grow and to develop new blood vessels in the tumor. The orthotopic tissue co-implantation can essentially rescue in vivo growth and enable tumor cell lines to create a more robust and potentially useful tumor model in vivo.
Effect of tissue microenvironment on tumor growth and development
We also observed that human tumor cells can exhibit a strong preference for orthotopic tissue co-implantation. We implanted the well-known human breast cancer cell line BT474 as tumor spheroids in the dorsal skin window chamber with and without mouse mammary tissue. First, we did so without supplementing the mice with human estrogen, which is customary for these tumor cells. Figure 4a and b show that the tumors did not grow well and substantially regressed from the original tumor spheroid, especially in the subcutaneous-only implants. However, with mammary tissue, the tumor regression was reversed after 2 weeks with modest growth thereafter. When we performed the implantations this time with estrogen supplementation, tumor growth was much more robust. Figure 4c and d show that, again, the fluorescent tumors grew more quickly in the EO model than in the subcutaneous model. The tumor spheroids decreased in size initially in the subcutaneous model for about 1 week and then grew modestly thereafter. The EO tumors did not regress and required about 5-7 days to begin robust growth. Angiogenesis was readily evident by 8-10 days after implantation. Vascular development lagged along with little tumor growth in the subcutaneous tumors alone until after 2 weeks. Thus, it appears that human tumor cells can also benefit from orthotopic mouse tissue implantation quite similarly to the mouse tumor cell lines. Even without human estrogen supplementation, these tumor cells did better in the orthotopic stroma milieu (Figure 4a and b). Then, also with human estrogen, the tumors grew much better when exposed to an orthotopic tissue environment.
Tumor growth ultimately requires vascular development to fulfill the metabolic demands of the cancer cells 25 . To compare the rates of revascularization, tumors growing in different tissue microenvironments were transilluminated so that dark blood vessels were readily visible against the bright tumor background (Figure 3a). The vascular development of N202 tumors grown either subcutaneously or on implanted lung tissue lagged for days behind the N202 tumors grown on orthotopic mammary tissue. Eventually, the vascular density became nearly equivalent by about 2 weeks in both models (Figure 3a and b). Blood vessels developed similarly for the LLC tumors (Figure 3c). Vascularization occurred sooner and initially was more rapid and extensive in EO tumors, likely supporting more rapid tumor growth.

Our new IVM study of multiple tumor types subjected to tissue co-implantation clearly shows that the tissue stroma can have a very significant and even dramatic effect on tumor growth and vascular development. Ultimately, every tumor type tested grew best when co-implanted with its respective orthotopic tissue. Using IVM with the H2B-GFP-labeled tumor cells allowed us to visualize directly the growing tumor cells and their fluorescent nuclei in real time. To begin to examine the cellular mechanisms mediating growth differences in the distinct tissue microenvironments, chromatin dynamics were imaged to quantify both mitotic and apoptotic cells in the LLC spheroids implanted with lung tissue, ectopically with other tissues, or subcutaneously, directly on skin (Figure 5a). The ratio of mitotic to apoptotic tumor cells in each tumor revealed that the LLC tumors growing on orthotopic tissue had a strong bias towards mitosis (Figure 5b). LLC tumors growing in mammary tissue, liver or skin had a more balanced ratio of mitosis to apoptosis. The N202 tumors showed very similar results whereas the TRAMP-C2 and MOVCAR-16 tumors also exhibited ample mitosis in the EO model, but no mitosis and ample apoptosis and cell death, as they disappeared when implanted alone subcutaneously (Supplementary Figure 2). Thus, the orthotopic tissue could create for multiple tumor cell lines a local tissue microenvironment that favored tumor growth by promoting tumor cell mitosis over apoptosis.
In humans, tumors are not restricted to one organ, but instead eventually reprogram to alter their phenotype often in order to metastasize to other organs. This well-known characteristic of cancer suggests tumor cells have the inherent ability to genetically adapt and maybe even to grow optimally in other non-orthotopic tissue environments.
Effect of tissue co-engraftment on mitotic/apoptotic indices of tumor cells
To determine the ability of tumor spheroids to adapt to different tissue microenvironments, we passaged N202 mammary tumor spheroids on donor lung tissue in the dorsal window chamber model (as described in the methods). Initially, mammary tumors grew poorly (Figure 6a and b) and revascularized more slowly (Figure 6c) when grown on lung tissue than orthotopic mammary tissue. However, after three passages of growing in lung tissue implanted in the IVM chamber followed by isolating and re-culturing the tumor cells for spheroid formation and then re-implantation, mammary tumor cells eventually grew much more robustly and revascularized faster (Figure 6a-c) on donor lung tissue. Remarkably, when lung-adapted mammary tumor cells were implanted onto mammary tissue, the tumors grew quite poorly and revascularized rather slowly (Figure 6a-c). In fact, their growth and revascularization was similar to the growth on the lung tissue prior to being trained via lung tissue passaging. Thus, interactions between tumor cells and stroma become evident, including tumor cells adapting to a new tissue microenvironment and eventually reaching a new phenotype optimized for the new stroma, but no longer flourishing in the original orthotopic tissue. IVM offers an unparalleled view into tumor development, allowing dynamic, high resolution, in vivo imaging of molecular and cellular events. Here, we greatly expand the relevancy of the classic IVM tumor model by introducing orthotopic tissue into the dorsal skinfold chambers, thereby creating EO tumor models allowing easy and direct manipulation of the tissue microenvironment that can now be viewed with a long-term, dynamic, continuous imaging system. We show that tumors in an orthotopic tissue microenvironment grow more robustly and develop vasculature more rapidly than subcutaneous and other ectopic tissue models. The orthotopic environment facilitates tumor cell mitosis over apoptosis. As new blood vessels are needed to support tumor growth, the faster growing blood supply observed in the EO models likely supports the greater rates of mitosis in the tumor cells growing with orthotopic tissue versus just subcutaneously. Recreating the orthotopic tumor microenvironment in the dorsal skinfold window chamber is a significant advancement that maintains the power of the IVM imaging system. This approach incorporates the more relevant orthotopic tissue microenvironment, while still being amenable to dynamic imaging by IVM. The IVM experiments show improvements over subcutaneous tumor models and provide key direct evidence that the tumor stroma and microenvironment can dramatically influence growth and angiogenesis.
Though tumor models abound, one of the many strengths of this novel EO model is its ease of use. True orthotopic tumor models, in which tumors are implanted onto orthotopic tissue, can be technically difficult to create. For example, it is quite challenging to inject mammary tumor cells into the very tiny mammary tissue of the mouse, especially when the injected cell number or volume is comparable to the size of the mouse tissue itself. Genetic tumor models are complicated and costly to create and are specialized for a very specific set of genetic defects. Dynamic in vivo imaging, especially at the cellular level, is also very limited in many of these models. The EO model overcomes each of these difficulties. Engrafting tissue in the dorsal skinfold chamber is fairly straightforward. Numerous types of tumors and donor orthotopic tissue can be readily implanted. This model is widely applicable to many tumor types and is amenable to dynamic imaging by IVM.
Importantly, recreating the orthotopic tumor microenvironment in the dorsal skinfold window chamber allows researchers to focus on tumor-stroma interactions in a more controlled environment.
Growing tumor spheroids in different microenvironments revealed that tumors in an orthotopic tissue microenvironment grow more robustly than in subcutaneous and other ectopic tissue models. The orthotopic environment favors tumor cell mitosis over apoptosis.
As new blood vessels are needed to support tumor growth, this faster-growing blood supply likely supports the greater rates of mitosis seen in tumor cells growing orthotopically versus subcutaneously. However, growth in a single microenvironment is not hardwired into the tumor cell. Tumor cells clearly have the ability to adapt to new microenvironments. Thus, these new tumor models may allow the ongoing interaction between tumor and stroma to be examined in greater detail and with more precise control than previously possible. Further experimentation comparing EO versus subcutaneous tumors may be warranted. Such studies may find additional functional and molecular distinctions that not only uncover stromal effects, but also provide contrasts to actual tumors in humans.
The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. We would like to thank the reviewers for their time and comments.
We agree that tissue attenuation of the fluorescence signal is likely and could contribute to an underestimation of tumor growth. That is why we performed two different measurements, including one based on tumor area, which is defined by the pixels reaching a minimum threshold fluorescence signal from the glowing tumor cells and thus will be much less sensitive to such attenuation. The results were very similar with both measurements. Consistent with this, these tumors tend to grow more in two dimensions and to be somewhat flattened rather than spherical in shape, in part because of the glass coverslip. We are not sure that this issue is critical to our findings and conclusions; for the purposes of the comparisons made in the paper, we used the same methods throughout, and they quantified growth differences that were not subtle but rather quite obvious from the captured images. Performing ex vivo measurement of tumor size and weight somewhat defeats the overall purpose behind using IVM and this model system. We wish to obtain dynamic and continuous intravital data on the tumors at multiple scales and, ultimately, to avoid using huge numbers of animals to get data that may be only slightly more accurate but at considerable cost in many different ways. There are some advantages here: because we are using tumor cells that provide the fluorescent signal specifically, we can be sure that our measurement reflects actual tumor cells and not other events that can appear to contribute to tumor size and apparent growth, such as dead cells, infiltration of other cells, hemorrhage, edema, etc. Here we wish to examine the effects of stroma/tissue implantation on the tumor cells themselves in vivo and on their proliferation. So our approach is more direct and perhaps even better in many respects than simple tumor excision and weighing.
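To make the area-based measurement concrete, below is a minimal sketch (not the authors' actual analysis code) of how a tumor area can be estimated by counting pixels whose fluorescence reaches a minimum threshold. The threshold value, pixel size, and the synthetic image are illustrative assumptions only.

```python
import numpy as np

def tumor_area_from_fluorescence(image, threshold, pixel_size_um=1.0):
    """Estimate projected tumor area from a fluorescence image.

    The area is defined, as in the response above, as the set of pixels whose
    fluorescence signal reaches a minimum threshold; the threshold value and
    the pixel size used here are illustrative, not the authors' calibration.
    """
    mask = image >= threshold               # pixels bright enough to count as tumor
    n_pixels = int(mask.sum())              # number of supra-threshold pixels
    area_um2 = n_pixels * (pixel_size_um ** 2)
    return n_pixels, area_um2

# Toy example: a synthetic 512x512 image with a bright central "tumor" region
rng = np.random.default_rng(0)
img = rng.normal(10, 2, size=(512, 512))
img[200:300, 200:300] += 50                 # bright patch standing in for labeled nuclei
print(tumor_area_from_fluorescence(img, threshold=30, pixel_size_um=1.3))
```

Because the measure is a count of supra-threshold pixels rather than an integrated intensity, moderate attenuation of the signal changes it far less than it would a brightness-based measure, which is the point the response makes.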
1. We have compared our definition of apoptotic cells with TUNEL assays, and they were very similar in their assessment of apoptosis. This was reported in reference 20. We have added a sentence in this regard to the methods, citing this paper.
2. We are able to observe the blood, including cells, circulating through unmistakable blood vessels. The static images shown were taken from our movies. When we set up these measurements, we picked darkness thresholds that highlighted unambiguous vessels with clear blood flow. We have frequently used various fluorescent tracers, which, as expected, provide a signal that coincides with the blood flow seen through the vessels. However, this extra procedure on the mice was ultimately unwarranted for this singular purpose because it did not really augment or refine our measurement of vascularity. Again, ex vivo evaluations seem contrary to the noninvasive, dynamic, continuous, in vivo imaging attained here and would likely add more effort and animals but little beyond the results and conclusions provided more efficiently through IVM. We appreciate these comments and have added further description of the vascularity measurement in the methods to provide more clarity to the reader.
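As an illustration of the kind of darkness-threshold measurement described above, the following sketch computes a simple vessel-coverage fraction from a transilluminated image in which vessels appear dark against the bright tumor. The threshold, region-of-interest handling, and toy image are assumptions made for demonstration and are not the authors' actual procedure.

```python
import numpy as np

def vascular_density(image, dark_threshold, roi_mask=None):
    """Fraction of pixels within the region of interest that fall below a
    darkness threshold, as a simple proxy for vessel coverage in a
    transilluminated image (vessels are dark against the bright tumor)."""
    if roi_mask is None:
        roi_mask = np.ones(image.shape, dtype=bool)
    vessel_pixels = (image <= dark_threshold) & roi_mask
    return vessel_pixels.sum() / roi_mask.sum()

# Toy example: bright background with dark "vessel" stripes
img = np.full((200, 200), 200.0)
img[::20, :] = 40.0                         # dark horizontal lines standing in for vessels
print(round(vascular_density(img, dark_threshold=80), 3))
```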
3. […] "expressing key organ-specific biomarkers". This donor tissue can also do so with the tumor spheroid, where the two appear to work quite well together to create a functioning, robust neoplastic tissue. When comparing tumors with and without the donor orthotopic tissue, it appears clear that the co-implanted stroma helps the tumor take root more quickly, with faster development of functioning blood vessels, leading to a significant growth advantage at least initially. It will be interesting to see how similar or not the EO and subcutaneous tumors are over time; once the subcutaneous tumors have overcome their longer lag period and achieved similar vascular densities and growth, does the incorporated orthotopic stroma contribute to sustained, long-term, meaningful differences between the two models?
We can readily differentiate the blood vessels from the non-fluorescent stroma because the blood vessels not only appear as morphologically distinct and obvious dark channels between the fluorescent tumor cells but are also identified by the presence of blood cells and even actual circulating blood flow, which could easily be visualized in the movies from which the static images were made. For our measurement, we picked darkness thresholds that emphasized vessels with clear blood flow. As expected, when we have used various fluorescent tracers, they provided a signal that readily coincided with the blood flow seen through the blood vessels. We appreciate these comments and have attempted to be clearer by adding more details on this to the methods section.
2. The tumor size is based on the size of the tumor on Day 1 after implantation, which is normalized to 1 for each tumor. Therefore, the relative tumor growth graphs do not have any units on the y-axis. You can see from the pictures provided that the tumors were similar but not identical in size (in part because the tumor spheroids, at the time of implantation, cannot be matched perfectly in size). Because we were ultimately interested in relative tumor growth over time between the different groups and experiments, we chose to simplify the growth curves, akin to many other past published studies, by this standard normalization. How we measure tumor size and growth, including this normalization, is described in the methods section, with a brief sentence in some of the figure legends. Please note that we measured size in two ways and both gave very similar results. We appreciate these comments and have adjusted the legends and methods to be clearer. | 2018-04-03T04:47:13.923Z | 2013-01-01T00:00:00.000 | {
"year": 2013,
"sha1": "df9a4ce6d823d8be760c7101d19e1bb15481c237",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.12688/f1000research.2-129.v2",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "ffea0651fdd6deca7f80aa31eb796fc081a999b8",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
265311030 | pes2o/s2orc | v3-fos-license | Diagnosing Fabry nephropathy: the challenge of multiple kidney disease
Fabry disease (FD) is an X-linked inherited lysosomal disorder due to a deficiency of the enzyme alpha-galactosidase A (α-gla) due to mutations in the GLA gene. These mutations result in plasma and lysosome accumulation of glycosphingolipids, leading to progressive organ damage and reduced life expectancy. Due to the availability of specific disease-modifying treatments, proper and timely diagnosis and therapy are essential to prevent irreversible complications. However, diagnosis of FD is often delayed because of the wide clinical heterogeneity of the disease and multiple organ involvement developing in variable temporal sequences. This observation is also valid for renal involvement, which may manifest with non-specific signs, such as proteinuria and chronic kidney disease, which are also common in many other nephropathies. Moreover, an additional confounding factor is the possibility of the coexistence of FD with other kidney disorders. Thus, suspecting and diagnosing FD nephropathy in patients with signs of kidney disease may be challenging for the clinical nephrologist. Herein, also through the presentation of a unique case of co-occurrence of autosomal dominant polycystic kidney disease and FD, we review the available literature on cases of coexistence of FD and other renal diseases and discuss the implications of these conditions. Moreover, we highlight the clinical, laboratory, and histological elements that may suggest clinical suspicion and address a proper diagnosis of Fabry nephropathy.
Introduction
Fabry disease (FD) is an X-linked inherited lysosomal disorder with an incidence of 1 in 40,000-60,000 new births in the male population. The disease is due to deficiency of the enzyme alpha-galactosidase A (α-gla), which results from mutations in the GLA gene. To date, more than 1000 mutations have been described, which may result in different clinical phenotypes and disease courses (including the classical phenotype, late-onset FD, attenuated disease, and variants of uncertain significance) [1]. The common final effect of all these mutations is the accumulation in plasma and lysosomes of various glycosphingolipid substrates, particularly globotriaosylceramide (Gb3) and its deacylated form globotriaosylsphingosine (LysoGb3) and, to a lesser extent, galabiosylceramide (Gb2) and blood group B substances, which may lead to progressive organ damage and reduced life expectancy [2].
Fortunately, in view of the severity of the disease, specific therapies are now available.Indeed, FD can be treated with enzyme replacement therapy (ERT) using IV infusions of agalsidase alfa or beta or, in selected cases, with migalastat, an oral chaperone that increases the enzymatic activity of α-gla in patients carrying amenable mutations [3].These treatments, the choice of which is influenced by the patient's characteristics (gender, type of mutation, residual enzyme activity, and type of genetic variant) and clinical manifestations (symptoms and organ involvement), may improve the quality of life by reducing subjective symptoms and disease burden, even in patients with cardiac or renal involvement [4,5].However, the benefits of ERT seem to be dependent on the timing of diagnosis, since early initiation of therapy is associated with its long-term success, while treatments are less effective in patients with advanced organ damage [6,7].Thus, correct, and timely diagnosis of FD is crucial for proper management of these patients.Nevertheless, diagnostic latency is one of the most relevant culprits in the management of FD [8].The diagnosis of FD is often delayed due to the high heterogeneity of the disease, which may present during childhood or later (because of its genetic background) with different signs and multiple organ involvement developing in variable temporal sequences [9].This is also valid for renal involvement, which represents one of the main causes of disability and death in patients with FD.The most common renal manifestations include proteinuria, hypertension, and progressive chronic kidney disease (CKD).Although the severity and clinical impact of renal dysfunction in FD is clear, it should be recognized that these manifestations are not specific, because proteinuria, CKD, and hypertension, are hallmarks of many nephropathies, such as diabetes kidney disease and glomerulonephritis.Furthermore, beyond the low specificity of the signs of Fabry nephropathy, a further pitfall to early suspicion and diagnosis of FD in patients with kidney alterations is the eventuality of the coexistence of FD with other nephropathies, which has been occasionally reported.
All these factors may make a diagnosis of FD challenging in a patient presenting with signs of kidney disease [10].
Herein, through the description of a unique case of concurrent autosomal dominant polycystic kidney disease (ADPKD) and FD, we review and discuss renal involvement in FD, highlighting the challenge posed by often overlooked conditions when multiple kidney diseases coexist.
Moreover, we provide suggestions on the clinical, laboratory, and histological elements that may help in improving diagnostic skills, emphasizing how awareness of the disease, a complete physical examination, and the combination of the different findings are essential to achieve proper and timely diagnosis.
Renal involvement in FD
As for other organ manifestations of FD, renal damage may appear early or late in the history of the disease (Table 1). Considering glomerular function, in the early phases of the disease mild albuminuria and hyperfiltration have been reported [11]. However, glomerular filtration rate (GFR) calculation should be interpreted with caution, since the creatinine-based equations commonly used for estimating GFR (eGFR) may be inaccurate. This is the reason why, while eGFR may represent a suitable initial test, expert panels recommend performing direct GFR measurements at least annually (e.g., iohexol GFR) to accurately assess kidney function in patients with FD [5].
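For illustration, the widely used 2021 race-free CKD-EPI creatinine equation can be written as a short function. This is a generic example of a creatinine-based estimate (the paper does not endorse a specific equation), and it shows why such estimates depend entirely on serum creatinine, age, and sex rather than on any measured clearance.

```python
def ckd_epi_2021_egfr(creatinine_mg_dl, age_years, is_female):
    """Estimated GFR (ml/min/1.73 m^2) from the 2021 race-free CKD-EPI
    creatinine equation. Shown only to illustrate why creatinine-based
    estimates may misrepresent true GFR in Fabry disease; measured GFR
    (e.g., iohexol clearance) remains the reference."""
    kappa = 0.7 if is_female else 0.9
    alpha = -0.241 if is_female else -0.302
    ratio = creatinine_mg_dl / kappa
    egfr = 142.0 * min(ratio, 1.0) ** alpha * max(ratio, 1.0) ** -1.200 * 0.9938 ** age_years
    if is_female:
        egfr *= 1.012
    return egfr

# Example: a 35-year-old man with serum creatinine 1.0 mg/dl (~101 ml/min/1.73 m^2)
print(round(ckd_epi_2021_egfr(1.0, 35, is_female=False), 1))
```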
Later, evaluating the natural history of the disease in a cohort of 105 untreated male patients, Branton et al. found that 82% of patients develop clinically manifest proteinuria at a mean age of 34 ± 10 years, while nephrotic proteinuria may be seen in 18% of patients, starting at the age of 40 ± 7 years [12].High-grade proteinuria was often accompanied by CKD (defined as GFR < 60 ml/min/1.73m2) that in 23% of the patients led to the development of end-stage kidney disease (ESKD) at a median age of 47 years (range, 21-56 years).However, these manifestations cannot be generalized because clinical presentation, and outcomes, may greatly vary according to age of onset, gender, genetic background, and residual enzyme activity [13].Thus, for example, while blood pressure, because of autonomic dysfunction, may be low during the first phases of the disease [14], it may increase thereafter.The actual prevalence of hypertension in FD is unknown, but related factors include age, level of proteinuria, underlying proinflammatory environment, genetic factors, and kidney dysfunction [15].In particular, a prevalence of hypertension of 80% has been found in untreated patients with GFR < 60 ml/min/1.73m2 [16,17].
A characteristic aspect reported in FD patients is the high incidence of renal cysts, mainly parapelvic, whose pathogenesis and clinical significance are unknown [18].Urinalysis and urine microscopy, apart from albuminuria and proteinuria of different degrees, may show hematuria and peculiar features, such as "Maltese cross" particles seen at the polarized microscope and urinary "mulberry" cells, which are both expressions of the accumulation of Gb3 in epithelial cells [19,20].Moreover, in the presence of glomerular injury, podocyturia may also be detected [21].However, these tests are of limited value because they are not pathognomonic for FD and are problematic to evaluate in routine laboratory examinations.Finally, although seen less frequently, tubular manifestations, such as distal renal tubular acidosis, isosthenuria, nephrogenic diabetes insipidus, and Fanconi syndrome, have also been documented [22][23][24].Overall, it must be underlined that signs of FD nephropathy are not specific to FD and are commonly found in other pathological conditions, such as diabetic kidney disease and primary glomerulonephritis, which may coexist with FD [25].
Clinical case
This unique case involves a 52-year-old man who underwent kidney transplantation at 35 years of age because of end-stage renal disease caused by autosomal dominant polycystic kidney disease (ADPKD). The patient's father had renal cysts and died of myocardial infarction at the age of 60, while one of the patient's two younger sisters presented with cortical and parapelvic renal cysts and normal kidney function.
In addition, he had paternal cousins with renal cysts. In his childhood, the patient reported suffering from burning in his feet and hands. At the age of 16 years, he was hospitalized for macrohematuria, proteinuria, and fever. At that time, immunological tests were negative, and serum creatinine was normal. Ultrasound imaging showed multiple liver cysts and increased kidney volume with multiple bilateral parapelvic and cortical cysts. Hence, a clinical diagnosis of ADPKD was made. Subsequently, the patient developed hypertension and progressive CKD. In 2002, at the age of 33 years, maintenance hemodialysis was started, and in 2004 the patient underwent kidney transplantation. There were no complications of the transplant surgery. Immunosuppressive treatment included tacrolimus, mycophenolate mofetil, and steroids. At discharge, serum creatinine was 167 µmol/L with associated proteinuria of about 200 mg/24 h. These values remained stable throughout the follow-up period, and no kidney biopsy was performed after the transplantation.
At the age of 48 years, i.e., 15 years after the transplantation, the patient presented with repeated episodes of atypical chest pain. ECG showed negative T waves in the inferior and lateral leads, ST elevation in V1-V3, supraventricular premature beats, and a short PR interval (Fig. 1).
The echocardiogram revealed a non-dilated left ventricle with marked concentric hypertrophy (left ventricular mass index 219.3 g/m²; relative wall thickness 0.837) and a ground-glass appearance, preserved kinesis and systolic function, advanced diastolic dysfunction with signs of elevated left ventricular filling pressures, moderate mitral regurgitation (in the presence of systolic anterior motion of the mitral valve anterior leaflet), a normal right ventricle, and trivial pericardial effusion (Fig. 2).
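The report does not state how the left ventricular mass index and relative wall thickness were derived. As a hedged illustration of the arithmetic typically involved, the sketch below uses the common linear-method (Devereux) formula for LV mass, the standard relative wall thickness ratio, and the Du Bois body surface area formula, with purely hypothetical measurements chosen only to show how values of this magnitude can arise.

```python
def lv_mass_devereux(ivsd_cm, lvidd_cm, pwtd_cm):
    """Left ventricular mass (g) by the ASE-corrected linear (Devereux) method."""
    return 0.8 * 1.04 * ((ivsd_cm + lvidd_cm + pwtd_cm) ** 3 - lvidd_cm ** 3) + 0.6

def relative_wall_thickness(pwtd_cm, lvidd_cm):
    """Relative wall thickness: 2 x posterior wall thickness / LV end-diastolic diameter."""
    return 2.0 * pwtd_cm / lvidd_cm

def bsa_du_bois(height_cm, weight_kg):
    """Body surface area (m^2), Du Bois formula."""
    return 0.007184 * height_cm ** 0.725 * weight_kg ** 0.425

# Hypothetical measurements (cm) chosen only to show the arithmetic
ivsd, lvidd, pwtd = 1.9, 4.6, 1.9
mass = lv_mass_devereux(ivsd, lvidd, pwtd)
bsa = bsa_du_bois(175, 80)
print(round(mass / bsa, 1), "g/m^2 |", round(relative_wall_thickness(pwtd, lvidd), 3))
```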
Twenty-four-hour ECG monitoring showed rare ventricular premature beats and frequent single supraventricular premature beats, with a few couplets and short runs. Laboratory examinations showed that troponin and BNP values were slightly raised, to 70 ng/dL (normal value < 19 ng/dL) and 250 pg/dL (normal value < 100 pg/dL), respectively. Immunofixation of serum and urine to rule out amyloidosis was negative. At that time, cardiac magnetic resonance was not performed due to concerns regarding the presence of metal plates inserted in his skull after a road accident 20 years earlier. Based on these findings, a diagnosis of hypertrophic cardiomyopathy was made. At the age of 50 years, the patient presented with a recurrence of cardiac symptoms characterized by frequent palpitations and shortness of breath. Twenty-four-hour ECG monitoring and echocardiographic findings were unchanged. In addition, for the first time, angiokeratomas were observed on the patient's abdomen. Therefore, considering these new findings and based on the patient's history, enzymatic and genetic studies for FD were performed.
The activity of α-gla in whole blood measured by dried blood spot (DBS) was found to be extremely low (0.2 nmol/ml/h; normal range > 3) and was associated with significant accumulation of LysoGb3 in plasma (117.19 nmol/L; normal range < 2.3). Sanger sequencing of the GLA gene showed the c.1072 G > A nucleotide substitution in exon 7, which determines the amino acid substitution p.Glu358Lys. This variant is associated with the classical phenotype of FD [9]. Therefore, 16 years after kidney transplantation, the patient received a diagnosis of FD (Fig. 3).
In light of this diagnosis, the cardiac and skin alterations and the symptomatology were retrospectively attributed to FD. It was also possible that the development of ESKD was due in part to FD.
To complete the clinical framework, we performed an ophthalmological examination, which highlighted the presence of cornea verticillata. A neurological visit did not reveal symptoms attributable to peripheral neuropathy, while otolaryngological evaluation found bilateral hearing loss at high frequencies. Finally, once the compatibility of the metal plates was established, brain magnetic resonance imaging did not show any abnormalities.
After completion of the diagnostic workup, the patient started ERT with agalsidase beta at a dose of 1 mg/kg body weight every two weeks. Screening for FD was offered and performed on the patient's relatives within a genetic counselling framework. Both of the patient's sisters tested negative for the familial FD variant. However, as expected from the X-linked inheritance of the disease, both daughters (9 and 14 years old) were carriers and presented normal enzyme activity with low LysoGb3 accumulation (Table 2). Interestingly, the patient's mother was negative for the c.1072 G > A substitution, thus suggesting a de novo origin of the variant.
The index patient also underwent genetic testing for confirmation of ADPKD, which was considered the cause of renal dysfunction for the first 50 years of his life. Genetic testing by next-generation sequencing identified a pathogenic variant of the polycystin 2 (PKD2) gene. Specifically, the patient was heterozygous for the c.709 + 1G > C substitution in intron 2, resulting in loss of the splicing site [27]. The patient's father had already died at the time of the family screening, while the mother was not tested for ADPKD because she had normal kidney function and no renal cysts on ultrasound. Moreover, genetic testing was not performed on the patient's daughters because they were very young and asymptomatic. Our final diagnosis was the co-occurrence of two distinct genetic diseases, namely ADPKD and FD. The former was likely inherited from the paternal line, while FD was likely due to a de novo mutation in the patient himself.
The challenge of multiple kidney diseases
Although the present case represents one of the first descriptions of the extremely rare association of ADPKD with FD, beyond the unicity of the specific case, it allows us to make some considerations that can be generalized.First, our patient's clinical history demonstrates that multiple unrelated inherited kidney diseases (IKDs) may coexist in the same individual with different patterns of inheritance.Apart from patients affected by a single IKD, rare cases of multiple IKDs in the same individual have been described.The concomitance of ADPKD and hereditary renal hypouricemia type 2 [28] or Alport syndrome (AS) [29] have been documented, as well as the coexistence of FD with AS [30,31].
In the literature, there have been reports of FD coexisting with other genetic or acquired nephropathies (Table 3). Although each single case represents a rare event, together these reports emphasize that FD should not be ruled out in the diagnostic workup of patients with renal diseases. Johar et al. reported on a patient with concomitant polycystic kidney disease [32]. In addition to Fabry symptoms caused by the c.730G > A (p.Asp244Asn) mutant of the GLA gene, which was diagnosed at the age of 34 years, this patient presented with polycystic kidney disease with multiple simple and complex cysts at 60 years of age. Molecular testing revealed a variant of unknown significance in the PKD1 gene.
Fig. 3 Timeline of main clinical manifestations and events of the index case. Abbreviations: ADPKD, autosomal dominant polycystic kidney disease; FD, Fabry disease; CKD, chronic kidney disease; HD, hemodialysis; ERT, enzyme replacement therapy
Table 2 Enzymatic and genetic testing for Fabry disease in the index case and family screening. Abbreviations: LysoGb3, globotriaosylsphingosine; GLA, alpha-galactosidase A; WT, wild type. Methodology: the enzyme activity of alpha-galactosidase was measured in whole blood using dried blood filter paper; the determination of LysoGb3 in plasma was performed by tandem mass spectrometry (MS/MS) [26]; genetic analysis of the GLA gene was performed by Sanger sequencing
In addition to genetic disorders, FD has been reported to coexist with several types of nephropathies including crescentic glomerulonephritis, membranous nephropathy, minimal change disease (MCD), and IgA and IgM nephropathy.Singh et al. reported on two cases of necrotizing and crescentic glomerulonephritis with coexisting FD [41].Both patients presented with fever of unknown origin and progressive renal impairment, although other pathognomic signs of FD such as dyshidrosis, acroparesthesias, and cutaneous angiokeratomas were absent.A case of crescentic glomerulonephritis in a 58-year-old woman with FD was also reported who developed progressive renal insufficiency [42].Another case of crescentic glomerulonephritis was described in a 26-year-old woman with fever of unknown origin and renal failure [43].Of note, the patient's brother was also found to have FD associated with tubulointerstitial nephritis.
A few cases of superimposed FD and membranous nephropathy have been reported.Liu et al. published the case of a 21-year-old man who presented with proteinuria and stage 1 membranous nephropathy [38].FD was diagnosed by low α-gla activity in plasma and genetic testing which revealed a hemizygous mutation in the GLA gene.In another rare case, FD was reported to coexist with membranous nephropathy in a 30-year-old male presenting with nephrotic proteinuria [39].The diagnosis was aided by electron microscopy which showed zebra bodies in podocytes, as well as low α-gla activity and genetic testing showing a single base deletion in exon 7 of the GLA gene.More recently, another rare case of FD and membranous nephropathy was reported in a 22-year-old man with FD presenting with proteinuria during ERT [40].Membranous nephropathy was confirmed by renal biopsy.Moreover, some cases of FD superimposed with MCD have been described.Even in these cases, as for membranous nephropathy, patients presented with nephrotic syndrome, suggesting that in patients with FD this condition, which is an uncommon presentation of Fabry nephropathy, deserves special attention and proper investigations [33,34].
In addition, several cases of FD and coexisting IgA nephropathy have also been published [35].Chao et al. presented the case of a 49-year-old man with foamy urine lasting for years [36].Of note, the patient also reported intermittent severe burning pain in both hands during childhood.Diminished sweating and exertion were further reported along with pigmented papules in the groin area after puberty.Kidney biopsy revealed focal segmental endocapillary and mesangial proliferation with focal segmental glomerulosclerosis.Thus, the patient had many tell-tale signs including zebra bodies in podocytes under electron microscopy.Yin et al. reported on two cases of FD and IgA nephropathy [37].Both patients presented with proteinuria and were diagnosed with IgA nephropathy upon admission with no suspicion of FD.Histology of renal biopsy showed vacuolation of podocytes with mild mesangial expansion, which raised suspicion of FD that was later confirmed by lack of α-galactosidase A activity in both patients.
Lastly, a single case of FD with coexisting IgM nephropathy has been documented [44].The case was that of a 54-year-old woman who presented with proteinuria, but without clinical signs or family history of FD.Diagnosis of FD was obtained through the use of light and electron microscopy, immunostaining for IgM of renal biopsy, and genetic testing.
Taken together, these cases highlight that a complex clinical presentation can hide the coexistence of different diseases, even in patients with an established diagnosis.Such coexisting conditions can make diagnosis of FD even more challenging, such as in our index case who did not present with a family history of FD and had another rare disease underlying renal failure.
An open question remains the potential mutual contribution of concurrent kidney diseases to the pathogenesis and evolution of kidney dysfunction. Indeed, one can speculate that the chronic inflammatory environment present in FD may augment the risk of development and progression of other kidney disorders [45]. For example, in our index case it is not possible to exclude that the coexistence of FD accelerated the progression of the underlying PKD2-related ADPKD, which typically presents with mild kidney disease [46]. Unfortunately, the limited clinical experience and the absence of mechanistic studies do not allow sound conclusions on this issue.
Table 3 (rows): Immuno-mediated diseases: IgAN, 4 cases [35-37]; membranous nephropathy, 3 cases [38-40]; crescentic glomerulonephritis, 4 cases [41-43]; IgMN, 1 case [44]
When to suspect FD in patients with kidney disease
Given the nonspecific signs of renal involvement in FD and the possibility of coexisting kidney disorders, it seems essential for the clinical nephrologist to find elements to guide the differential diagnosis and to suspect FD in patients with altered kidney function and/or urinalysis (mainly albuminuria). Indeed, paradoxically, given the availability of feasible tests evaluating enzyme activity, LysoGb3 accumulation and, eventually, the presence of GLA gene mutations, the main limitation to early diagnosis is clinical suspicion of the disease [47]. Suspicion of Fabry-related nephropathy can be guided by several aspects. Firstly, at least in males, FD should be considered in patients with (even mild) proteinuria and a more rapid loss of kidney function than in the normal adult population (eGFR decline greater than 1 ml/min/1.73 m² per year) [48]. Clinical history must be taken with special attention to family history, especially on the maternal branch, for nephropathy, kidney failure, or other alterations that could be linked to FD [49].
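As a minimal sketch of the "more rapid loss of kidney function" criterion, the code below fits an annual eGFR slope to serial measurements by least squares and flags declines faster than about 1 ml/min/1.73 m² per year. The example data and the flagging logic are illustrative assumptions, not a validated rule from the paper.

```python
import numpy as np

def egfr_annual_slope(years, egfr_values):
    """Least-squares slope of eGFR over time (ml/min/1.73 m^2 per year)."""
    slope, _intercept = np.polyfit(np.asarray(years, float), np.asarray(egfr_values, float), 1)
    return slope

# Serial measurements over 4 years (hypothetical values)
years = [0, 1, 2, 3, 4]
egfr = [92, 88, 85, 81, 78]
slope = egfr_annual_slope(years, egfr)
print(round(slope, 2), "ml/min/1.73 m^2 per year")
if slope < -1.0:   # faster decline than expected for a healthy adult
    print("Decline exceeds ~1 ml/min/1.73 m^2 per year: consider FD in the differential")
```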
In addition, extrarenal signs and symptoms must be evaluated, starting from subjective signs, such as acroparesthesias, anhidrosis [49], history of burning or hot pain in hands and feet, and exercise, heat, or cold intolerance.Systemic organ involvement may manifest as ECG abnormalities, such as short PR interval, left ventricular hypertrophy, arrhythmia, history of early cerebrovascular disease, skin lesions (angiokeratomas), cornea verticillata, and peripheral neuropathy (Fig. 4) [50].
All these signs have been considered by some authors as "red flags" in addressing the suspicion of FD and should be taken into consideration when evaluating a patient with kidney alterations [51].However, it is not a general rule since these signs are not so specific (for example, cornea verticillata also represents a manifestation of amiodarone-related keratopathy, or drug-induced phospholipidosis [52]).Moreover, it should be underlined that not all FD patients, including those with the classic phenotype, have complete expression of these features, and thus their absence should not exclude FD as a potential diagnosis.In addition, the extent of the disease manifestations and their temporal evolution (including kidney involvement) may vary among patients and be affected by sex, age, and classical/late-onset/attenuated disease phenotype [53].Notably, together with the well-characterized above-reported signs, currently, it seems conceivable to suspect FD even in patients presenting with reduced kidney function and proteinuria associated with the finding of parapelvic cysts on ultrasonography [54].Although the central role of clinical evaluation in suspecting FD, it should be recognized that, as in other nephropathies, the most reliable exam to render a diagnosis is kidney biopsy.By light microscopy, FD may present with a quite normal picture or different degrees of glomerulosclerosis, both segmental and global, interstitial fibrosis, tubular atrophy, and thickening of the vascular walls.These findings are common to most glomerular diseases, so they may be not of help in addressing clinical suspicion of FD.Instead, a peculiar aspect that can be found in the biopsy of a patient with FD is the presence of intracytoplasmic accumulation of lipids mainly in podocytes (foamy podocytes) and vacuolation in different cells [55].These lesions are the consequence of GL3 accumulation, although they may also be present in other conditions, such as drug-related nephrotoxicity or other lysosome storage disorders [56,57].Notably, at light microscopy, GL3 deposits are well observable with toluidine blue staining.At immunofluorescence microscopy, nonspecific IgM or C3 deposits may be detected in areas of sclerosis.Moreover, immunofluorescence can be positive in the presence of superimposed glomerulonephritis.
Finally, electron microscopy is a fundamental tool to make a proper diagnosis of renal involvement in FD, even when other nephropathies are suspected or superimposed.Indeed, electron microscopy allows the direct identification of GL3 deposits, which may be seen in all cells, mainly podocytes and endothelial cells, as electrondense lamellar bodies that have been described as "myelin bodies, " "onion skin, " or "zebra bodies" [58].Lamellar bodies are cellular inclusions within lysosomes frequently appearing as intracellular concentric structures containing deposits of undegraded lipids.
However, these lesions are not specific for FD; indeed, these concentric bodies are also typical for lysosomal storage disorders (mucolipidosis type 2, GM1 gangliosidosis, Hurler's disease, or Niemann-Pick), as well as drug-induced phospholipidosis [59].In particular, the last condition deserves special attention in making a differential diagnosis of FD nephropathy.Drug-induced phospholipidosis is a form of acquired lysosomal storage disease characterized by intracellular accumulation of phospholipids with lamellar bodies because of the use of drugs that impair phospholipid metabolism of the lysosome [60].These drugs include antibiotics, antidepressants, antipsychotics, antimalarials (such as chloroquine), and antiarrhythmics (such as amiodarone) [61].
This condition may represent a phenocopy of FD, especially in patients without a family history of FD, since, beyond similar histological findings, patients with drug-induced phospholipidosis may present analogous clinical (including cornea verticillata) and biochemical features (such as low alpha-galactosidase activity and elevated circulating LysoGb3 levels). In these cases, only proper genetic analysis may allow a specific diagnosis of FD nephropathy [62].
Finally, it should be noted that even if electron microscopy reveals diagnostic features of FD nephropathy, the limited use of this technique in clinical practice reduces its impact on the diagnosis of FD.Consequently, wider use of electron microscopy may constitute one of the factors that could facilitate the diagnostic approach to FD nephropathy.
Considering all the aspects briefly discussed here, it seems clear that a suspicion of FD nephropathy in a patient with kidney disease, rather from the evidence of a single pathognomonic sign, may emerge by the combination of multiple clinical, laboratory, and histological findings (Fig. 5).Interestingly, as also demonstrated by the clinical case reported herein, such an approach could also be applied to the study of patients with ESKD.
Obviously, in this setting urinalysis and kidney biopsy are unreliable, but an accurate history (evaluating personal and familial history), as well as the assessment of multiorgan dysfunction, may equally support the diagnosis of FD. Once FD is suspected, screening procedures include the analysis of α-gla enzyme activity in males and the search for GLA mutations in women [47]. Enzymatic activity may be tested on plasma, isolated leukocytes, or whole blood using DBS [63]. However, gene mutation analysis is mandatory for the diagnosis. Initial tests also include an evaluation of plasma or urinary levels of LysoGb3, which may be useful for monitoring patients as well [64]. As a second step, systemic evaluation of organ damage should be warranted for all patients, through a complete workup, aiming to define the indications, timing, modality, and monitoring of the specific treatment. The description of diagnosis and treatment strategies, such as long-term management of FD, is outside the scope of the present paper and may be found in focused expert opinions and guidelines [65].
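The screening sequence described above can be summarized as a simple triage sketch. The function below is only an illustration of that logic, and the numeric cut-offs mirror the laboratory reference ranges quoted for the index case (activity > 3 nmol/ml/h, LysoGb3 < 2.3 nmol/L), which are laboratory-dependent assumptions rather than universal thresholds.

```python
def fabry_screening_step(sex, dbs_activity_nmol_ml_h=None, lyso_gb3_nmol_l=None):
    """Suggest the next screening step for suspected Fabry disease, following
    the sequence described in the text: enzyme activity first in males, GLA
    gene analysis in females, LysoGb3 as a supportive marker. Cut-offs are
    illustrative and laboratory-dependent."""
    if sex == "male":
        if dbs_activity_nmol_ml_h is None:
            return "Measure alpha-galactosidase A activity (plasma, leukocytes, or DBS)"
        if dbs_activity_nmol_ml_h < 3.0:
            return "Low enzyme activity: confirm with GLA gene analysis and plasma LysoGb3"
        return "Normal activity: FD unlikely, reassess clinically"
    # In females enzyme activity can be normal, so genetic analysis is decisive
    if lyso_gb3_nmol_l is not None and lyso_gb3_nmol_l > 2.3:
        return "Elevated LysoGb3: proceed to GLA gene analysis"
    return "Perform GLA gene analysis (enzyme activity alone is unreliable in females)"

print(fabry_screening_step("male", dbs_activity_nmol_ml_h=0.2))
print(fabry_screening_step("female", lyso_gb3_nmol_l=5.1))
```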
Conclusions
The awareness that renal signs in FD are nonspecific and of the possible coexistence of FD with other kidney diseases is extremely important since correct and timely diagnosis of FD is crucial for the proper management of these patients, also considering the availability of specific therapies.Indeed, early treatments may result in better biochemical response and slower progression of coexisting cardiac and renal disease [66][67][68].An additional benefit of FD recognition is the possibility of performing family screening to allow early diagnosis of FD, thus permitting appropriate monitoring and treatment of the disease before the appearance of irreversible organ damage [69,70].To achieve these objectives, we need to improve diagnostic skills, first increasing understanding of the pathogenesis of FD and its presentation and considering FD in the differential diagnosis of kidney disease, even if a diagnosis is already available.Such an approach can aid in earlier diagnosis, genetic counseling, and administration of treatments that can improve long-term outcomes [71].
Fig. 5 Summary of elements that may suggest a diagnosis of Fabry nephropathy in patients with kidney dysfunction and albuminuria. EM is the most important single examination leading to a diagnosis of FD nephropathy, beyond specific genetic and functional tests. Age of onset, severity, and disease course may vary depending on gender (milder in heterozygous females than in males), residual α-gla activity, and the specific mutation in the GLA gene. Abbreviations: LM, light microscopy; EM, electron microscopy; IF, immunofluorescence. *Many clinical signs and laboratory and histological features may be common to other lysosome storage disorders or drug-induced phospholipidosis
Table 1
Main clinical manifestations of Fabry nephropathy in males with the classical phenotype. Abbreviations: CKD, chronic kidney disease; eGFR, estimated glomerular filtration rate; ESKD, end-stage kidney disease. (a) In patients with the late-onset form, attenuated disease, or in females, clinical manifestations may be delayed or mild. (b) Hyperfiltration should be confirmed by measured GFR rather than by creatinine-based estimation methods.
Table 3
Cases of kidney diseases superimposed with Fabry disease as reported in the literature. Abbreviations: ADPKD, autosomal dominant polycystic kidney disease; MCD, minimal change disease; IgAN, IgA nephropathy; IgMN, IgM nephropathy. (a) To these, add the case reported in this paper | 2023-11-22T15:28:34.698Z | 2023-11-21T00:00:00.000 | {
"year": 2023,
"sha1": "4df7030d429c2a8881de11daee88d1d76a959404",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "4df7030d429c2a8881de11daee88d1d76a959404",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
214318205 | pes2o/s2orc | v3-fos-license | The Mining of Craft: An exploration of Minecraft as a Community of Inquiry
Reading and Writing, for James Gee, is how we engage with any knowledge community; what he calls a semiotic domain (2007). Our level of literacy in a given domain amounts to how well we can read (understand) and write (convey) the knowledge that the domain values. Education is all about teaching the student how to read and write. However, we read and write with much more than just pens and paper. Our literacy covers how we engage and interact within our domain of knowledge. In the following paper I will explore these ideas of reading and writing within the domain of Minecraft. I will use the educational theory of John Dewey to show how the community of Minecraft creates its own knowledge community, and how its members learn to read and write.
Minecraft started as the side project of a Swedish game developer in early 2009. Taking inspiration from various other games for its distinctive look and gaming style, Notch (Markus Persson) first posted a very early build of the game on a forum for independent (indie) game developers called The Independent Gaming Source (TIGSource 2010).
The practice of taking inspiration from other games is not considered plagiarism in the gaming community. For example, the distinctive 'blocky' look of Minecraft was inspired by a game called Infiniminer, which had been created by fellow TIGSource member Zach Barth. Notch talks about Minecraft as an 'Infiniminer clone' in its early development. This is what makes TIGSource such an interesting place. Ideas are shared freely. If one person has a good idea and shares it with the community, other people are going to take that idea and go in different directions with it. There is an acknowledgement that many of these ideas would not exist if not for the community, and so it is the community, not the individual, that owns the knowledge that is shared and discussed. What individuals do with this knowledge is theirs, but it is only possible because of the community. This highlights the stark contrast between the practices of independent creative communities and those of more mainstream companies in the games, music, and movie industries.
It was May of 2009 that the first playable version of Minecraft was released to the wider public. Again, Notch handled this differently. In his mind, the game wasn't finished. He wanted to add more. He initially charged a reduced fee for the game and promised that any future updates would be made freely available to anyone who had bought the game. This style of release has, for better or worse, become a common practice for indie developers. It's called early access. Games are released as soon as a playable version is made, and content such as extra levels, gameplay modes, and items is added later. This can be a great way of crowd-funding the testing phase of development and generating income from the game straight away. While Minecraft wasn't the first, it certainly was an early success story for indie developers choosing to release their content in this way.
In keeping to his promise, Notch kept an online blog about his development of Minecraft and asked for feedback from those who had bought the early version of the game. Players would find bugs in the game or have suggestions for updates they would like to see. Much of the additional content that was added to the main game early on was the result of these discussions. This was the beginning of the online community of Minecraft. It was a perfect mix of a developer who was willing to listen to suggestions, and a community of active players who wanted to help grow this new world.
In many ways, while this was always Notch's project, he gave an amount of control to the community in what the game would become and how it would be played. For example, Notch didn't ever put an introduction to Minecraft in the game. It is common practice in most video games to include a short introduction to help the player learn the controls and understand the mechanics of the game. This is also where the narrative of the game begins. However, Minecraft has no such introduction. For example, when playing survival mode for the first time you are dropped into the world without so much as a hint of what the controls are or what you are supposed to do. My first time playing I distinctly remember I had just figured out that if I 'punched' a tree several times (This is how the game mechanic is described in many forums), the block would break and I would get a block of wood. I looked up at where the tree I 'punched' down used to be and noticed the sun was going down. 'That's pretty' I thought, as the sky grew orange, then red, then blue. I was playing around with the wood I'd picked up (you can refine it into wooden planks, and then build a workbench with those. With the work bench you can make much more complex things) and thought I'd gotten a handle of this game when I heard a guttural growling from behind me. I stopped looking at the wood, and realised it was pitch black in the world and I could make out very little of the terrain. What I did end up seeing, for only a moment, was a tall, green monster-looking thing I would later find out was called a creeper. As it ran towards me it flashed white, and then exploded. I was dead. I had failed to survive the first night.
Confused and a little annoyed I went online to see if this weird thing had happened to anyone else. What I found were several YouTube videos titled 'how to survive your first night in Minecraft'. From these I very quickly found a series of online forums, discussion groups, and websites dedicated to the Minecraft community. From its humble beginnings on Notch's development blog, the community had turned into a living, breathing knowledge community with the virtual world of Minecraft as its hub.
Deweyan Crafting
The community groups that naturally arose around Minecraft in the months and years after its release is a great model of what John Dewey (2005) called a Community of Inquiry. Society, for Dewey, is an ongoing act of communication between individuals. He says, "Society not only continues to exist by transmission, by communication, but it may fairly be said to exist in transmission, in communication" (Dewey 2005, p. 5). What is communicated is the experiences of an individual; a set of values, habits, practices, beliefs, and ideals. Our social structure is made up of our continual transmission of these experiences, and our attempt to understand the transmissions of those around us. A community of inquiry arises when two or more people communicate in a similar way. Human communication, after all, is imperfect. What we wish to convey to the other is complex, and we have no control over how they interpret our communications. But when we share some values, practices, or beliefs with the other, we have some common ground with which to have our conversation. The more common ground we share, the more complex our conversation can be. The job of the community of inquiry is to facilitate complex conversations (sharing of experiences) between two or more people.
For example, the field of philosophy values (among other things) a love of wisdom, scholarly work such as reading and writing, and lengthy discussions on abstract and theoretical topics. The community decides which works are counted as philosophy and which are not (Plato's Republic is, but the manual for my car is not), how these works are engaged with (read and discussed, not thrown at each other from various distances), and who has the authority to judge them (the opinions of philosophy professors and authors tend to have more weight than a drunk friend or co-worker at the pub). By adhering to these practices, the member of the community of philosophy is able to engage in a far more complex discussion on, for example Plato's Republic, without having to first explain why The Republic or Plato are important. These practices outline what reading and writing is, in the sense that Gee (2007) describes, in the context of the community of philosophy. Other communities will read and write in different ways.
What is especially interesting about communities of inquiry is that these values, beliefs, and practices are not static, but always changing. By asserting these values, beliefs, and practices, we are saying that they are valuable and worth holding. In writing this paper I am asserting two things: first, that I am a philosopher, and so can add to the discourse in the community of inquiry of philosophers. I prove this membership by adhering to the values, practices, and beliefs common to philosophy. My second assertion is that Minecraft is a topic that can be talked about in the field of philosophy. In order to prove this I need to show some connection between the community of inquiry of philosophy and the community of inquiry of Minecraft. If my first assertion is accepted, my second one holds more weight, because if I am a philosopher, then my assertion that something (namely the game Minecraft) has philosophical value is easier to accept. If I am not a philosopher, I have less ability to say what is or isn't valued in the community of philosophy.
Therefore, membership in a community of inquiry gives the individual the power to add to and change the value system of that community. It also allows individuals to connect one community of another. However, it also comes with the responsibility to uphold and communicate those values, beliefs, and practices as worthwhile.
Dewey claimed that members of a community were not only brought together from physical proximity to each other such as neighbours, family, church groups, and schools, but from proximity to ideas and values. It is not the case that our only communities of inquiry are those in physical proximity to ourselves. Even in the early 20th century, when Dewey was writing, subscribing to a particular publication or newsletter, or being a member of a political party, was enough to hold membership in a community. However, what he did not anticipate was the effect the internet would have on the scope and possibility of knowledge communities. The ability to connect to like-minded individuals from vastly different social and temporal locations has had an enormous impact on our ability to create communities of inquiry. The network of online communities that exist today is a prime example of these kinds of social groups.
Dewey places education in roughly two different characterisations: the informal education of a community, and the formal education of a school. We are taught social practices and habits informally, he says. In this way we take on the values and beliefs of our community. The young or inexperienced are praised for adhering to the community's values and beliefs and are punished or shunned by going against them.
Once we begin to use symbols to convey vast amounts of information, education needs to become more formal, such as a school. The problem here is that formal education becomes abstracted from the world to the extent that information is no longer situated in the world. Knowledge is not understood by doing, it is understood by memorising. The danger in this kind of formal education is what Paulo Freire (1996) describes as the banking concept of education.
The banking concept of education '…involves a narrating Subject (the teacher) and patient, listening Objects (The students)' (Freire 1996, p. 53). In this method, it is the role of the teacher to convey information, and it is the role of the student to memorise this information. Freire describes this as a tool of oppression. The teacher is set up as the knowing subject; someone who has all the knowledge. This is contrasted by the student, who is set up as ignorant. By maintaining this dichotomy it asserts the students ongoing ignorance, and dependence on the knowing teacher. Knowledge is therefore a gift bestowed upon the ignorant student by the benevolent and knowledgeable teacher.
Narration (with the teacher as narrator) leads the students to memorize mechanically the narrated content. Worse yet, it turns them into "containers", "receptacles" to be "filled" by the teacher. The more completely she fills the receptacles, the better the teacher she is. The more meekly the receptacles permit themselves to be filled, the better students they are. (Freire 1996, pp. 52-53)
This method of education isolates knowledge from the world, turning it into mere facts. It is also an attempt to control what is counted as knowledge and what is not. Without any connection to the information being taught, or any control as to what is considered important, the student is left out of the process of education. Dewey discusses this in his explanation of the difference between training and educative teaching. Training is about securing habits of practice. Being able to dodge a punch in boxing, recite the times tables, or drive a car is all about being able to act without having to think. Teaching, however, is concerned with the process of thinking. Dewey describes this in the following way:
A clew may be found in the fact that the horse does not really share in the social use to which his action is put. Some one else uses the horse to secure a result which is advantageous by making it advantageous to the horse to perform the act - he gets food, etc. But the horse, presumably, does not get any new interest. He remains interested in the food, not in the service he is rendering. He is not a partner in a shared activity. Were he to become a co-partner, he would, in engaging in the conjoint activity, have the same interest in its accomplishment which others have. He would share in their ideas and emotions. (Dewey 2005, p. 11)
Dewey's great challenge was to create a formal education around the informal structure of a community. In a genuine community of inquiry, information is situated and contextualised so that it is practical, but it is also capable of the level of complexity that is required to understand, and live in, our contemporary society. To do this, Dewey framed subjects like English and Mathematics as communities. The rules of addition and subtraction, for example, became part of the practices and values of the community of mathematics.
Communities of inquiry are like bubbles of knowledge. Each holds knowledge that is useful to that community. Truth in this context is relative. How we explain and describe the world is going to be different depending on which community we are in at the time. Each subject in school is a different community of inquiry for Dewey, and each explains and defines the world differently. But outside of the formal structure of a school, we see communities of inquiry in the wild all the time. Football supporters, unions, interest groups, and gaming communities are just a few examples. Different communities will value different things in different ways. The truths adhered to in the practice of philosophy, for example, are different to those in the practice of mathematics. However, there are similarities; some values are shared by both groups. It is these similarities that allow for communication between different communities. Our society is made up of overlapping and connected communities of inquiry, where different sets of knowledge are used to describe the world. Dewey argues that, in order for us to navigate this subjective conception of knowledge, we ought to be members of multiple communities where we can describe the world in different but accurate ways, as well as understand other people from vastly disparate communities.
Communities of Minecraft
In order to highlight some key features of the community of inquiry, I'm going to look at a case study from the early days of Minecraft.
In 2010 an artist going by the online handle Halkun posted a link to a project he had just completed (see Figure 1). The project was a 'build', or creation in Minecraft, of a 1:1 scale replica of the USS Enterprise D from Star Trek: The Next Generation (Captain Picard, not Kirk). The project was quickly shared amongst several Minecraft communities and Star Trek fan sites for its accuracy and attention to detail. It was a massive project for one person to have completed. Here's how Halkun did it: He found the original blueprints of the Enterprise online, and layered them in Gimp, which is an open source graphics editor. He resized the plans so that 1 metre = 1 pixel. He then reduced each of the ship's decks to a two-colour bitmap image that was exported as a layer. Each of these layers was uploaded into the Minecraft level editor, and the frame for the Enterprise was created. The result of this can be seen in Figure 1.
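The conversion step Halkun describes can be illustrated programmatically. The sketch below is a hypothetical reconstruction in Python (using the Pillow and NumPy libraries), not Halkun's actual workflow, which relied on Gimp and the Minecraft level editor; the file name, threshold value, and build height are invented for illustration.

```python
from PIL import Image
import numpy as np

# Hypothetical input: one deck plan, already rescaled so that 1 metre = 1 pixel.
deck = Image.open("deck_07_plan.png").convert("L")   # greyscale blueprint
plan = np.array(deck)

# Reduce the plan to a two-colour bitmap: True = place a block, False = leave air.
hull = plan < 128                                    # assume dark pixels mark the hull

# Each deck becomes one horizontal layer of blocks at an assumed build height.
deck_height = 70
blocks = [(int(x), deck_height, int(z)) for z, x in zip(*np.nonzero(hull))]
print(f"{len(blocks)} blocks in this deck layer")
# A level editor, or a script driving one, would then write these coordinates
# into the world file as solid blocks to form the wireframe.
```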
There was a significant discussion, over several forums, as to whether or not Halkun's wireframe was legitimate - whether or not the community of inquiry would accept this as appropriate practice. He hadn't laid down each block by hand (he hadn't placed each block in the game himself), so some in the community thought this might constitute cheating. At the time, Minecraft had no end game state. It wouldn't have an adventure mode, with the end goal being to find and kill the Ender Dragon, until 2011. In what way, then, could Halkun have been considered cheating? Some actions in a community are illegitimate. These actions go against the values and beliefs, or act contrary to the habits and practices, of a community. For example, hypotheses in science have to be falsifiable, otherwise they are not scientific hypotheses. I might have a hypothesis that I was Socrates in a past life, citing my love of philosophy, beards, and white robes as evidence. But this hypothesis is untestable and unfalsifiable, and so not scientific. If I attempt to write and publish a paper about my past life in a scientific journal, it will get rejected. It is in this way that Halkun's project was being judged as potentially cheating; was the action he engaged in legitimate practice in the community of inquiry of Minecraft?
What was decided was that Halkun had done something new, and something valuable. He had used several programs to create an amazing structure, and while the skill involved wasn't in Minecraft per se (it was in creating the images to upload), it became a valuable practice in the Minecraft community nonetheless. New knowledge was created. Several new ways of building emerged from this, and other instances like it, in the early days of the community. It was also around this time that modifications to the game (a practice known as modding) became common practice. The potential for action in the Minecraft community expanded greatly to include actions that engaged with the game in unforeseen ways.
Halkun then made a public server and asked for help in building all the decks. On a forum on one of the Minecraft sites, he posted a detailed key as to how to properly build the decks: what blocks to use, how to colour-code everything, and what the layout of the inside of the ship should look like. A year later, Halkun posted a video online of an almost complete Enterprise. There have since been many projects like this one. Wireframes are shared freely, so that people can engage in different kinds of builds. Once again, this blurs the lines drawn by ideas of conventional intellectual property.
What's significant about this story is the way in which the community of inquiry around Minecraft works. Halkun posted his initial project on a Penny Arcade forum. Penny Arcade is a web comic about two guys who like to play video games. It has nothing directly to do with Minecraft. Halkun was also a member of some of the Minecraft discussion groups, as well as some of the Star Trek forums, where he got the blueprints for the Enterprise. Halkun's project showed the connection between these different communities. By posting this project, he made a value judgement about a particular activity; that making a model like this was valuable, and that interacting with Minecraft in this way was legitimate. Various communities responded positively to this. At various times Halkun, as well as others in this informal community of inquiry, acted as both students and teachers. Unlike Freire's banking concept, the form of education that is seen here aligns much more closely with what Dewey has described. Halkun's post was the act of a student bringing a project to a teacher for evaluation. This subsequent evaluation allowed new knowledge to be created in the Minecraft community, which situated Halkun in the role of teacher. The fluidity of the roles of teacher and student is a key aspect of Dewey's model of education. For Dewey, the student is always capable of adding something new to the discourse, making them teachers. The teacher, on the other hand, is always capable of learning something new, making them students (Dewey 2005). Both teacher and student are therefore engaged in a conjoint activity of exploration where both parties are invested in the outcome; much like Notch and his development of Minecraft, and Halkun and his wireframe Enterprise.
The network of forums that Halkun communicated on highlights how communities of inquiry don't exist in specific locations, even online ones. The community of inquiry only exists because each member holds to that community; their values, their ideals, and their knowledge. In this way, Halkun was operating as a member of these communities by affirming a new set of values. However, we can say a lot more than merely that he was making a 1:1 scale version of the Enterprise in Minecraft. The underlying values that were invoked by Halkun and affirmed by the community were about how ideas and intellectual property are shared, how to engage with the world of Minecraft, and what constituted a valuable enterprise in this world.
For Dewey, this is how members of a community of inquiry create and affirm the values and therefore knowledge of that community, as well as how members pass on that knowledge to initiates.
Crafting calculations
Let's look at one more case study. Redstone is a particular mineral that can be found in Minecraft. With it, you can generate a form of energy or power. This allows players to add simple movements to their Minecraft builds. With Redstone you can power minecarts, open and close doors, or turn lights on and off. But this material is far more versatile than that. Redstone functions on a simple binary system. Things can be either getting power or not; they can be either on or off. From these very humble beginnings, Redstone can be used to build objects in the world that function like binary computers.
For example, a logic gate is a device (either physical or digital) that takes one or more binary inputs and produces a single binary output. It functions on the rules of Boolean algebra and is used extensively in programming. A single logic gate can produce one output, sometimes called a bit, which can be 1 or 0, yes or no, on or off. Logic gates can be used to create what is called an adder (Properinglish19 2012), which is a more complex build that is used in simple arithmetic. String several of these adders together and you can represent far more than 1 or 0. Each adder you add to this structure doubles the number of values you can represent. A single adder can produce an output of 0 or 1. This means we can represent the numbers 0 and 1. Having a second adder means our possible outputs double to four: 11, 10, 01, or 00. The first bit gives us a value of either 0 or 1, the second bit a value of 0 or 2, and the third bit a value of 0 or 4. By making a string of these values (as in Table 1) we can very quickly start to represent larger numbers.
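A software analogue makes the doubling concrete. The following Python sketch, a loose analogy rather than a description of any particular Redstone build, wires a full adder out of elementary gates and chains several of them, just as adders are chained in-game; the bit patterns are only an example.

```python
# Software analogues of Redstone logic gates: binary inputs in, one binary output out.
def xor_gate(a, b): return a ^ b
def and_gate(a, b): return a & b
def or_gate(a, b):  return a | b

def full_adder(a, b, carry_in):
    """One adder unit: two input bits plus a carry in, one sum bit plus a carry out."""
    partial = xor_gate(a, b)
    sum_bit = xor_gate(partial, carry_in)
    carry_out = or_gate(and_gate(a, b), and_gate(partial, carry_in))
    return sum_bit, carry_out

def ripple_add(bits_a, bits_b):
    """Chain adders; every extra adder doubles the range of representable values."""
    carry, out = 0, []
    for a, b in zip(bits_a, bits_b):          # least-significant bit first
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]

# Three adders give place values 1, 2 and 4 (with a carry worth 8): 3 + 5 = 8.
print(ripple_add([1, 1, 0], [1, 0, 1]))       # -> [0, 0, 0, 1], i.e. 8
```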
From here, it wasn't such a big step for people to start creating a sort of physical binary programming in the Minecraft world. From simple switches and logic gates, players experimented with Redstone to create more and more complex binary machines; physical computers inside a digital world. In short, players built calculators in a game made entirely of blocks.
The computing machines that are being built now in Minecraft have come a very long way. One example (Figure 2) shows an 8-bit quad core computer that functions in much the same way that a desktop computer does. The architect, legomasta99, has several YouTube videos where he goes through the technical specifications of his machine. These videos serve as a starting point for other members of the community to discuss programming, hardware architecture, processing power and efficiency, and much more. New members of this community often post questions such as 'how do I build something like this?' or 'Where do I begin to learn this stuff?'. Other, more experienced members of the community will often respond to these posts with lists of sites, videos, and tutorials for programming languages and other programs that the new members should learn (u/superbloxw 2019).
Dewey framed the practice of education in exactly this manner. New members of a community of inquiry are guided by older members. While the newer members follow their own interests, they are given the tools they need through interaction with other members. Anything new that is learnt is shared, and anything innovative or new is judged by all members (Dewey 2005). The community of inquiry of Minecraft is able to use this system to teach its members many things. One of the most complex skills that this community can teach is the ability to create 'physical' computers that have a high level of complexity within a virtual world. It is important to mention that in order to present these case studies I had to learn a great deal about programming and physical computing, two things I previously knew very little about. Instead of learning what I needed to know through an institution or a textbook, I have gathered all my knowledge using the Minecraft community. Because of this, my knowledge is probably very patchy, but what I do know is situated by the functionality of this knowledge. I have an idea of binary machines, for example, because I have seen one being built and function. From my experiences, I can confidently say that the community of inquiry of Minecraft is more than capable of sharing complex values, practices, and habits on par with any formal educative community.
Deweyan Communities in the Wild
Dewey's project in Democracy and Education is to develop a system of education based on the principles of Pragmatism. His focus is on the development of formal education. However, the case of Minecraft highlights the level of complexity an informal community of inquiry is capable of producing. The community of Minecraft has no formal hierarchy, no official homepage, and no manifest of members. It is not run like an organisation, despite the game itself being owned by an organisation. After all, the community of Minecraft is not the game, but the players.
What we can learn from the community of inquiry of Minecraft is the scope and possibility of informal communities, and the vague line between the informal and the formal. Dewey begins his discussion on informal education and moves to the more complex and abstract formal education of the school. In many ways, this is the trajectory of the Minecraft community as well. Not long after Minecraft was released, a series of Mods were built by a group of game developers and teachers called TeacherGaming. These mods made several changes to the core game so that teachers could create different scenarios for students to engage with. By 2016 Minecraft EDU, as it was known by then, had grown into a rich collection of pedagogical tools that teachers could use in complex ways to teach a wide variety of subjects like history, geography, environmental science, chemistry, biology, and programming. Minecraft EDU was subsequently bought by Microsoft and rereleased as Minecraft: Education Edition. The formal community of education that Minecraft: Education Edition now exists in would not have been possible without the informal community.
Concluding Craft
The informal community of Minecraft can teach us many things about how knowledge can be formed, shared, and valued. It is not the case that formal education is the only way in which we learn and teach. Cultural artefacts like video games show us in a multitude of ways how communities of inquiry arise out of a shared interest and willingness to engage in an act of learning. While these naturally occurring communities can inform and even overlap into our institutions of formal education, there is an inherent value in keeping them untethered to a rigid set of standards and practices. Informal communities of education play an important part in our overall literacy.
In researching this paper, I have come across a multitude of examples where the community of Minecraft has surprised me and challenged my preconceptions of what a gaming community can do. There are too many to discuss here, but I wanted to share two very recent builds that highlight this. The first build is of a flying bee (u/TheCreatorofTNT 2019). The amazing thing about this bee is that it actually moves across the sky. This is possible because in the latest update to the game a new block was added: the honey block. The honey block is sticky and pulls other blocks along with it when launched with a piston.
The second build is incredible for a different reason. It is a build of Notre-Dame de Paris (u/vesko_8 2019): a to-scale version of one of the most famous churches in the world. Aside from the amazing detail found in this build, what is important here is the motivation to build such a monument and the ability Minecraft has as a tool of historical preservation and inquiry. | 2019-12-12T10:37:18.390Z | 2019-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "1b1b61f3081e3e30eeeefd024eae2f0732f42c59",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.5334/csci.129",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "54809a61a50efd772202dac3280b7ea209fff4b0",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Sociology"
]
} |
55260204 | pes2o/s2orc | v3-fos-license | ACCOUNTING PRODUCTIVITY IN THE SECTORS OF ECONOMY : METHODOLOGICAL ASPECTS
The level of productivity is vital for each country striving to develop sustainably. It appears that the accounting of productivity requires additional attention from researchers. This paper is focused on methods that allow evaluating productivity in economy sectors. The scrutinized scientific literature proposes the following possible perceptions of productivity increase: as labour moves from low- to high-productivity sectors, the process contributes to a country's aggregate productivity growth and causes further productivity increase in more productive sectors. After a critical scientific literature review, conclusions about contemporary productivity accounting methods are provided.
Introduction
Sustainable development of countries and their economies' sectors is driven by an array of factors (Mačiulis, Tvaronavičienė 2013; Baublys et al. 2014; Balitskiy et al. 2014; Caurkubule, Rubanovskis 2014; Dzemyda, Raudeliūnienė 2014; Fuschi, Tvaronavičienė 2014; Kaminskienė et al. 2014; Raudeliūnienė et al. 2014; Scaringelli 2014; Tarabkova 2014; Tvaronavičienė 2014; Tvaronavičienė et al. 2014; Vasiliūnaitė 2014). Productivity is vitally important as a driving force of sustainable development, which in its own turn is affected by various factors (Mathur et al. 2013; Bonetto et al. 2014; Demir et al. 2014; Figurska 2014; Garškaitė-Milvydienė 2014; Ruza et al. 2014; Tvaronavičienė 2014; Wahl 2014). Research related to economy structure and economic growth, generally named structural economics, is rather common in the foreign scientific literature of development economics. This research area in Lithuania is relatively young and is currently developing intensively (e.g. Lankauskienė, Tvaronavičienė 2013; Lankauskienė, Tvaronavičienė 2014). In this paper, attention is focused on productivity measurement and accounting methods. This paper aims to systematize and group the methods for evaluating productivity, which can reveal a more precise picture of productivity in separately taken economy sectors.
Economy sectors' performance in structure of economy
Determinants of economy sectors' performance
Based on the reform agenda agreed upon in Lisbon, enterprise and industrial policies require a detailed understanding of the competitive process at the level of individual industries and sectors (Peneder 2009; Figurska 2014; Tvaronavičienė 2014). Within this context, the current study on sectorial growth drivers aims to identify the major determinants, patterns and trends in European competitiveness from a distinctly sectorial perspective. The first part of this study investigated European sectorial competitiveness, assessing the relative strengths and weaknesses of European industries with respect to the various dimensions of performance, such as the growth of value added, employment, labour and multifactor productivity, profitability, international trade, and foreign direct investments (Peneder 2009; Tvaronavičienė 2014). Hereinafter, an investigation of the major determinants or 'drivers' of sectorial growth will be provided.
Sectorial performance is driven by a myriad of distinct sources. At present, no single, comprehensive theory exists which can explain the role of these elements within a jointly integrated economic model. However, many of them are the subject matter of different strands of economic research. Accordingly, this model is organized according to six groups of related factors: macroeconomic conditions, demand side factors, inputs to production, R & D and innovation, market structure, and finally openness and barriers to trade (Peneder 2009). Figure 1 illustrates the six major dimensions of sectorial performance. First, macroeconomic conditions affect sectorial performance by defining the environment within which companies and industries operate.
Possible economy sectors' performance variations
Economic growth cannot be perceived without the role of economic sectors, as economies are composed of them.
The following economic sectors' performance peculiarities in the structure of economy could be distinguished: structural change, structural transformation, structural growth, and structural development. It is important to mention that structural change and transformation are quite similar expressions, as are structural growth and development (Lankauskienė, Tvaronavičienė 2013). Economic sectors' performance in the structure of economy is most commonly defined as structural change by foreign scientists (Lankauskienė, Tvaronavičienė 2013; Figurska 2014). Structural change is the central insight of development economics. Economic growth is reflected in economic sectors' performance and entails structural change. Structural change, narrowly defined as the reallocation of labour across economy sectors, featured in the early literature on economic development by Kuznets (1966). As labour and other resources move from traditional into modern economic activities, overall productivity rises and incomes expand. The nature and speed with which structural transformation takes place is considered one of the key factors that differentiate successful countries from unsuccessful ones. Therefore, the new structural economists argue that economy structures should be the starting point for comparative economic analysis and the design of appropriate policies. For the process of sustainable development elaboration, it is especially important for economy sectors to perform in a sustainable manner (Lankauskienė, Tvaronavičienė 2013; Litvaj, Poniščiaková 2014; Tvaronavičienė 2014). A sustainable manner of economic sectors' performance is associated with targeting the development of knowledge-based and innovation-susceptible sectors, rather than with exploiting non-renewable natural resources (Tvaronavičienė, Lankauskienė 2013). Economic growth encompasses the growth of value added created by economic sectors and their branches' performance. Moreover, the economy structure has to operate through all the possible capabilities of sustainability.
Productivity phenomenon evaluation methods in the structure of economy
Productivity is most generally perceived as a measure of output or value added per labour input (hour worked). However, due to economic sectors' performance in the structure of economy, this phenomenon takes more forms.
Hereinafter, different methods for accounting productivity will be presented.
Aggregate productivity growth accounting method
What is the impact of structural change on productivity growth? In response to this question many authors use an empirical methodology designed to analyse such issues, often called 'shift-share analysis'. It has been used frequently by, among others, economic geographers, economic historians, industrial economists and trade analysts. Essentially, it is a purely descriptive technique that attempts to decompose the change of an aggregate into a structural component, reflecting changes in the composition of the aggregate, and changes within the individual units that make up the aggregate. As such it is closely related to analysis of variance. There are many versions of this methodology, the main difference being the choice of base year or 'weights': initial year, final year, some kind of 'average', linked, etc., and each version usually has its critics as well as defenders (Hurber, Mayerhofer 2006; Maroto-Sanchez, Cuadrado-Roura 2009; Jalava 2006; Van Ark, Hann 1997; Vries et al. 2011). The reason for this state of affairs is the well-known result in index number theory that if, say, initial or final year weights are applied throughout the decomposition, a residual will necessarily occur. So what many versions of this methodology do is to try to reduce this residual as much as possible (Tanuwidjaja, Thangavelu 2007). The authors examine the effects of recent structural changes on the growth of labour productivity. The traditional assumption of the growth accounting literature is that structural change is an important source of growth and overall productivity improvements. The standard hypothesis assumes a surplus of labour in some (less productive) parts of the economy (such as agriculture), thus shifts towards higher-productivity sectors (industry) are beneficial for aggregate productivity growth. Even within industry, shifts towards more productive branches should boost aggregate productivity. On the other hand, structural change may have a negative impact on aggregate productivity growth if labour shifts to industries with slower productivity growth. The 'structural bonus and burden' hypotheses were examined for Asian economies by Timmer and Szirmai (2000), for a large sample of OECD and developing countries (Fagerberg 2000), and more recently by Peneder and DG Employment for the USA, Japan and EU Member States (Peneder 2009). The overall developments regarding output, employment and productivity described above mask substantial structural changes within economies and their individual sectors. Structural changes reflect inter alia different speeds of restructuring and resulting efficiency gains or losses at branch level. The impact of structural change on aggregate productivity growth is evaluated by the frequently applied shift-share analysis in analogy with Timmer and Szirmai (2000), Fagerberg (2000), Peneder (2003) and others. The shift-share analysis provides a convenient tool for investigating how aggregate growth is linked to differential growth of labour productivity at the sectorial level and to the reallocation of labour between industries. It is particularly useful for the analysis of productivity developments in countries where data limitations prevent us from using more sophisticated econometric approaches (Havlik 2005).
Using the same notation as presented in Peneder (2003), the authors decompose the aggregate growth of labour productivity into three separate effects:
\frac{LP_T - LP_0}{LP_0} = \frac{\sum_i LP_{i,0}\,(S_{i,T} - S_{i,0})}{LP_0} + \frac{\sum_i (LP_{i,T} - LP_{i,0})(S_{i,T} - S_{i,0})}{LP_0} + \frac{\sum_i (LP_{i,T} - LP_{i,0})\,S_{i,0}}{LP_0}   (1)
where LP denotes aggregate labour productivity, LP_i the labour productivity of sector i, S_i the sector's share in total employment, 0 the base year and T the final year. First, the structural component is calculated as the sum of relative changes in the allocation of labour across industries between the final year and the base year, weighted by the value of the sector's labour productivity in the base year. This component is called the static shift effect. It is positive/negative if industries with high levels of productivity (and usually also high capital intensity) attract more/less labour resources and hence increase/decrease their share of total employment. The standard structural bonus hypothesis of industrial growth postulates a positive relationship between structural change and economic growth as economies are upgrading from low- to higher-productivity industries. The structural bonus hypothesis thus corresponds to an expected positive contribution of the static shift effect to aggregate growth of labour productivity (Havlik 2005). The structural bonus hypothesis:
\sum_i (S_{i,T} - S_{i,0})\, LP_{i,0} > 0   (2)
Second, dynamic shift effects are captured by the sum of interactions of changes in employment shares and changes in labour productivity of individual sectors/industries. If industries increase both labour productivity and their share of total employment, the combined effect is a positive contribution to overall productivity growth. In other words, the interaction term becomes larger the more labour resources move toward industries with fast productivity growth. The interaction effect is, however, negative if industries with fast-growing labour productivity cannot maintain their shares in total employment. Thus, the interaction term can be used to evaluate Baumol's hypothesis of a structural burden of labour reallocation, which predicts that employment shares shift away from progressive industries towards those with lower growth of labour productivity (Baumol 1967; Havlik 2005).
We would expect to confirm the validity of the structural burden hypothesis in the NMS due to the above-sketched shifts from industry to services (with lower productivity levels) at the macro level, and due to shifts from heavy (and capital-intensive) to light industries within manufacturing, respectively (Havlik 2005).
The structural burden hypothesis:
\sum_i (S_{i,T} - S_{i,0})\,(LP_{i,T} - LP_{i,0}) < 0   (3)
Third, the 'within-growth' effect corresponds to growth in aggregate labour productivity under the assumption that no structural shifts in labour have ever taken place and each industry (sector) has maintained the same share in total employment as in the base year. The authors, however, recall that the frequently observed near equivalence of the within-growth effect and aggregate productivity growth cannot be used as evidence against differential growth between industries. Even in case all positive and negative structural effects net out, much variation in productivity growth can be present at the more detailed level of activities (Havlik 2005).
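To make the decomposition concrete, the following Python sketch computes the three effects for an invented three-sector example; the productivity levels and employment shares are illustrative numbers only, not data from any of the studies cited above.

```python
import numpy as np

def shift_share(lp0, lpT, s0, sT):
    """Decompose aggregate labour-productivity growth into static shift,
    dynamic shift (interaction) and within-growth components."""
    agg0 = np.sum(lp0 * s0)                            # aggregate productivity, base year
    static = np.sum(lp0 * (sT - s0)) / agg0            # structural bonus term
    dynamic = np.sum((lpT - lp0) * (sT - s0)) / agg0   # structural burden / interaction term
    within = np.sum((lpT - lp0) * s0) / agg0           # within-growth effect
    return static, dynamic, within

# Illustrative three-sector economy (agriculture, industry, services).
lp0 = np.array([10.0, 40.0, 30.0])    # sectoral labour productivity, base year
lpT = np.array([14.0, 50.0, 33.0])    # sectoral labour productivity, final year
s0 = np.array([0.40, 0.30, 0.30])     # employment shares, base year
sT = np.array([0.25, 0.35, 0.40])     # employment shares, final year

static, dynamic, within = shift_share(lp0, lpT, s0, sT)
aggregate_growth = (np.sum(lpT * sT) - np.sum(lp0 * s0)) / np.sum(lp0 * s0)
print(static + dynamic + within, aggregate_growth)     # both 0.368: the identity holds
```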
Accelerations and decelerations in aggregate productivity growth evaluation method
Recent studies of economic growth have moved from explaining average trends in long-term growth to studying growth accelerations and decelerations, because of the great instability in growth rates within countries. The authors argue that the standard shift-share analysis is inadequate to measure the contribution of sectors to accelerations in productivity. Very few countries have experienced consistently high growth rates over long periods. Rather, the more typical pattern is that countries experience phases of growth, stagnation, or decline of varying length. A study of these separate periods seems more revealing for a study of the determinants of growth than a long-period average (Pritchett 2000). This raises the natural question of which sectors in the economy contribute most to accelerations and decelerations in growth. For example, Jones and Olken (2008) suggest that employment reallocation to more productive sectors lies behind accelerations and decelerations of growth in many developing countries. Because of missing sectorial data, they are unable to test this hypothesis. The authors provide empirical evidence on the significance of various sectors in generating aggregate productivity growth by introducing a novel shift-share analysis and by applying this method to a new sectorial database for 19 countries in Asia and Latin America, spanning the period from 1950 to 2005. Each sector can contribute to aggregate growth in two ways: by productivity growth within the sector (the within-effect) and by expanding its share in aggregate inputs (the between- or shift-effect). To measure these contributions, the authors modify a standard tool in the economic historian's tool-box: the shift-share analysis introduced by Fabricant (1942). The shift-share analysis is used in many studies to measure the contribution of structural change to aggregate growth. For example, it features prominently in the discussion about the extent of Britain's decline relative to Germany and the US since the end of the nineteenth century (Broadberry 1998). Unfortunately, the interpretation of results from the traditional shift-share method is not straightforward (Timmer, Vries 2008; Timmer, Vries 2007).
The authors propose two modifications to the traditional shift-share analysis, which make its results more useful. First, the standard method does not allow for disequilibria in factor markets in which average productivity differs from marginal productivity. Especially in early stages of development, the agricultural sector is characterized by wide-spread disguised unemployment (Broadberry 1998). The authors use estimates of the shadow price of labour to measure this wedge and adjust the shift-share method accordingly. This adjustment increases the measured importance of structural change to growth. Second, the traditional method does not properly account for differences in productivity levels between sectors. For example, the expansion of a low-productive sector such as government services would show up as being positive for aggregate growth. The authors account for differences in productivity levels between sectors and derive more meaningful measures of the contribution of particular sectors to aggregate productivity growth. The authors find that resource reallocation is not the main driver of accelerations and decelerations in aggregate economic growth. Productivity improvements within sectors, in particular within manufacturing and market services, appear to be much more important for growth in Asia and Latin America since the 1950s (Timmer, Vries 2008; Timmer, Vries 2007).
The importance of sectorial development patterns for economic growth has long been recognized. Changes in the sectorial composition of production and employment and their interaction with the pattern of productivity growth feature prominently. Technological change typically takes place at the level of industries and induces differential patterns of sectorial productivity growth. At the same time, changes in domestic demand and international trade patterns drive a process of structural transformation in which labour, capital and intermediate inputs are continuously relocated between firms, sectors and countries (Kuznets 1966; Chenery et al. 1986; Harberger 1998). One of the best documented patterns of structural change is the shift of labour and capital from production of primary goods to manufacturing and services. Another finding is that the level and growth rate of labour productivity in agriculture is considerably lower than in the rest of the economy (at least at low levels of income), reflecting differences in the nature of the production function, in investment opportunities and in the rate of technical change (Syrquin 1984; Crafts 1984). Together these findings suggest a potentially important, albeit temporary, role for resource reallocation from lower- to higher-productivity activities to boost aggregate productivity growth. This potential growth bonus was already identified in classical dual economy models such as Lewis (1954) and Fei and Ranis (1964). These models presumed that in early stages of development, agricultural labourers shift to the industrial sector without any reduction in total agricultural output. The existence of this source of inefficiency can be explained by the immobility of agricultural labour vis-a-vis the industrial sector, caused by the discrepancy between private costs, approximated by the average product in agriculture, and social costs. Differences in the potential for structural change have featured prominently in explanations of differential growth within European countries in the post-World War II period (Temin 2002).
However, the quantification of its importance has been hampered by the lack of a clear methodology to measure the effect of structural change on aggregate productivity growth. The standard method to measure this is the shift-share decomposition originating from Fabricant (1942). This method is part of the standard tool kit of economic historians and is used in many studies. One major problem of the traditional shift-share method is the assumption that productivity growth within each sector is not affected by structural change. Clearly productivity growth rates are affected since, for example, productivity growth in agriculture is largely possible due to the employment reallocation to manufacturing and services. For example, labour productivity in South Korean agriculture increased 5% annually during the period 1963-2005. It is not likely that this high growth rate could have been sustained if, in 2005, 63% of the population had still been working in agriculture, as in 1963. Broadberry (1998) argued that the shift-share analysis should be modified by assuming that the marginal productivity of workers leaving shrinking sectors is equal to zero. Although this adjustment overestimated the effect of sectorial expansions (Booth 2003), the authors propose an extension and improvement of the traditional shift-share analysis in a similar direction without overstating sectorial employment reallocation.
The authors suggest the following modified shift-share analysis:
P_T - P_0 = \sum_i (P_{i,T} - P_{i,0})\,\bar{S}_i + \sum_i (S_{i,T} - S_{i,0})\,\bar{P}_i
where P is labour productivity, S_i the sectorial employment share of the i-th sector (i = 1, …, 10), T indicates the end of a period, 0 the beginning of a period, and a bar indicates the period average. The first term on the right hand side measures the contribution of within-sector productivity growth (intra effect). The second term on the right hand side measures the contribution of sectorial reallocation of employment to aggregate productivity growth (shift effect). The modified shift-share analysis decomposes growth in GDP per worker into improvements within industries and improvements due to the reallocation of labour across industries. In the decomposition, the authors account for surplus labour. Furthermore, expanding sectors only contribute to productivity growth if their productivity level is higher than the economy's average (Timmer, Vries 2007, 2008).
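A minimal sketch of this modified decomposition, again on invented numbers, shows how the period-average weights split aggregate productivity growth exactly into the intra and shift effects; note that it omits the surplus-labour and above-average-productivity adjustments described in the text.

```python
import numpy as np

def modified_shift_share(p0, pT, s0, sT):
    """Within (intra) effect and reallocation (shift) effect with period-average weights."""
    s_bar = (s0 + sT) / 2
    p_bar = (p0 + pT) / 2
    intra = np.sum((pT - p0) * s_bar)     # within-sector productivity growth
    shift = np.sum((sT - s0) * p_bar)     # reallocation of employment between sectors
    return intra, shift                   # intra + shift equals P_T - P_0 exactly

p0 = np.array([10.0, 40.0, 30.0]); pT = np.array([14.0, 50.0, 33.0])
s0 = np.array([0.40, 0.30, 0.30]); sT = np.array([0.25, 0.35, 0.40])
intra, shift = modified_shift_share(p0, pT, s0, sT)
print(intra + shift, np.sum(pT * sT) - np.sum(p0 * s0))   # both 9.2
```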
Conclusions
Productivity usually is perceived as a measure of output or value added per labour input (hour worked).
Analysis of the relevant scientific literature in this research area revealed many more productivity measurement options. From one point of view, productivity increase can be related to labour movement from low-productivity to high-productivity sectors, which in such a manner contributes to a country's aggregate productivity growth. From another point of view, productivity increase within sectors can be associated with capital accumulation, technological change, innovation, etc.
In the structure of economy, due to economic sectors' performance, the productivity phenomenon can be accounted for by different shift-share (decomposition) methods: the aggregate productivity accounting method, encompassing the structural bonus, structural burden and within-growth hypotheses; and the method evaluating accelerations and decelerations in aggregate productivity growth. Each of the listed methods could be used ad hoc depending on the purpose of the research carried out. Hence, proper evaluation of productivity could provide possibilities for economy restructuring, which, in its turn, would facilitate sustainable development and a long-term competitiveness increase.
| 2018-12-10T23:20:01.617Z | 2014-12-29T00:00:00.000 | {
"year": 2014,
"sha1": "bcde036842c0bb41740e1804260c76185b0c4ae5",
"oa_license": "CCBY",
"oa_url": "http://jssidoi.org/jesi/article/download/40",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "bcde036842c0bb41740e1804260c76185b0c4ae5",
"s2fieldsofstudy": [
"Economics",
"Business"
],
"extfieldsofstudy": [
"Economics"
]
} |
9618068 | pes2o/s2orc | v3-fos-license | Human pluripotent stem cells: Towards therapeutic development for the treatment of lifestyle diseases
Abstract
There are two types of human pluripotent stem cells: Embryonic stem cells (ESCs) and induced pluripotent stem cells (iPSCs), both of which have entered clinical trials after measures were taken to overcome their respective problems: Blocking rejection with immunosuppressants in the case of ESCs, and minimizing the risk of tumorigenicity by depleting exogenous gene components in the case of iPSCs. It is generally assumed that clinical applications of human pluripotent stem cells should be limited to those cases where there are no alternative measures for treatment, because of the risk in transplanting those cells into living bodies. Regarding lifestyle diseases, we already have several therapeutic options, and thus, development of human pluripotent stem cell-based therapeutics tends to be avoided. Nevertheless, human pluripotent stem cells can contribute to the development of new therapeutics in this field. As we will show, there is a case where only a short-term presence of human pluripotent stem cell-derived cells can exert long-term therapeutic effects even after they are rejected. In those cases, immunological rejection of ESC- or allogenic iPSC-derived cells may produce beneficial outcomes by nullifying the risk of tumorigenesis without deterioration of therapeutic effects. Another utility of human pluripotent stem cells is the provision of an innovative tool for drug discovery that is otherwise unavailable. For example, clinical specimens of human classical brown adipocytes (BAs), which have been attracting a great deal of attention as a new target of drug discovery for the treatment of metabolic disorders, are unobtainable from living individuals due to scarcity, fragility and ethical problems. However, BAs can easily be produced from human pluripotent stem cells. In this review, we will contemplate the potential contribution of human pluripotent stem cells to therapeutic development for lifestyle diseases.
Clinical application of human embryonic stem cells (ESCs)/induced pluripotent stem cells (iPSCs) is currently limited to remediless diseases due to risk of tumorigenesis. However, application of these cells to therapeutic purposes and drug discovery for lifestyle diseases is promising. Because a short-term presence of human ESC/iPSC-derived vascular endothelial cells reportedly exerts long-term therapeutic effects on injured stenotic arteries, immunological rejection can nullify the risk of tumorigenesis without deteriorating therapeutic effects. Another utility is to produce high-scarcity-valued cells such as brown adipocytes, which are unobtainable from living bodies and commercially available sources, as a new tool for drug discovery for lifestyle diseases.
INTRODUCTION
Embryonic stem cells (ESCs) and induced pluripotent stem cells (iPSCs) are the only pluripotent stem cells that are applicable to therapeutic purposes (Table 1). The first clinical trial of human ESCs (hESCs) was launched in 2010 by Geron Corporation in the United States [1][2][3][4], aiming for the safety evaluation of transplanting hESC-derived oligodendrocyte progenitor cells for the treatment of spinal cord injury. Although the program was shut down due to a fund shortage in 2011, no severe side effects were reported from all four cases. Another clinical trial was started in 2010 by Ocata Therapeutics in the United States (named Advanced Cell Technology, Incorporated until 2014), aiming for the evaluation of safety and efficacy of hESC-derived retinal pigment cells for the treatment of macular degeneration [5]. Up till now, positive results were reported from two open-label phase 1/2 studies, although we have to wait for the final evaluation. Regarding human iPSCs (hiPSCs), a clinical trial was started in 2014 by RIKEN and the Foundation for Biomedical Research and Innovation in Japan, aiming for the safety evaluation of hiPSC-derived retinal pigment cells for the treatment of macular degeneration. No severe side effects have been reported so far.
Although it was expected that the invention of hiPSCs had completely resolved the issue of immunological hurdles, it has turned out that the situation is not so simple. Up till now, two concerns have been raised. One is regarding the differentiation propensity of hiPSCs. It is known that there are marked differences in differentiation propensity among human pluripotent stem cell lines [6], and thus, it is necessary to establish scores of hiPSC lines to obtain an appropriate line for the preparation of differentiated cells of the intended lineage. In some cases, however, an appropriate line may not be obtained and, as a result, transplantation materials would be unavailable from patients. In addition, genetic mutations may possibly occur during the process of hiPSC establishment. In such cases, usage of mutated hiPSC lines should be avoided. Thus, there may be cases where transplantation materials are unavailable from patients themselves. This actually occurred in the second patient during the clinical trial in Japan: The authorities announced that they had given up the utilization of autologous hiPSCs and decided to use allogenic hiPSCs instead in this case. It seems, however, that allogenic hiPSCs are less advantageous than hESCs from the viewpoint of safety, although they have merits from the perspective of ethics and labor. The second concern is regarding an issue of possible acquisition of immunogenicity of autologous iPSCs due to spontaneous mutations in the mitochondrial DNA, which is five to ten times more prone to be mutated than the chromosomal DNA. In addition, alterations of mitochondrial DNA reportedly occur upon an induction of pluripotency in hiPSCs [7]. Because cells that contain allogenic mitochondria are rejected by the innate immune system [8], autologous hiPSC-derived cells with mutated mitochondrial DNA may possibly be immunologically rejected, dissipating the effects of transplantation. These two concerns should be deeply reflected upon for the success of hiPSC-based cell therapies in the near future.
Currently, the application of human pluripotent stem-derived cells is limited to such diseases that have no other therapeutic options, because there is a certain level of risk, including tumorigenesis, in the transplantation of human pluripotent stem cells. Nevertheless, the application range will be extended if safety is secured. Regarding lifestyle diseases such as obesity-associated metabolic disorders and ischemic diseases, we already have various therapeutic options including medications and surgeries. In addition, a large number of candidate drugs are currently in the process of research and development. Thus, development of human pluripotent stem cell-based therapies for lifestyle diseases has not been eagerly sought thus far. Nevertheless, there is ample potential for hESCs/hiPSCs to be effectively utilized towards therapeutic development in this field. In this review, we suggest two cases as examples. One is a transplantation therapy for the treatment of ischemic diseases: hESC/hiPSC-derived vascular endothelial cells (VECs) having anti-stenotic capacities, which we termed type-Ⅱ VECs [9][10][11], can exert their full effects within a short time (< 1 wk) to produce long-term beneficial outcomes even after they are rejected [11]. In those cases, the risk of tumorigenesis may be nullified because hESC- or allogenic hiPSC-derived cells are promptly rejected by immune systems. The second one is the utilization of human pluripotent stem cells as a novel tool to provide cells that have high scarcity value but are unavailable from living individuals. Actually, anti-stenotic VECs are an example of such high-scarcity-valued cells [9]. As another example, we will describe human ESC/iPSC-derived classical brown adipocyte (BA), which has been much awaited as a new target of drug discovery for the treatment of obesity-associated metabolic disorders.
PROVISION OF A NOVEL TYPE OF VASCULAR ENDOTHELIAL CELLS WITH ANTI-STENOTIC PROPERTY: ISCHEMIC DISEASES
According to the report by the World Health Organization, the top two leading causes of death in the world in 2012 were ischemic heart disease and stroke, both of which are considered lifestyle diseases. Ischemia is caused by narrowing of arteries (i.e., arteriostenosis), whose pathological basis is hyperproliferation of vascular smooth muscle cells (VSMCs). Stent revascularization is one of the most effective therapies, where a meshed tube made of shape-memory alloy is inserted into the affected artery (i.e., the coronary artery for ischemic heart disease and the carotid artery for stroke) to mechanically expand the stenotic region. Nevertheless, a comparative study in India in 2010 reported that 23.1% of patients with drug-eluting stents and 48.8% of patients with bare metal stents developed restenosis [12]. Therefore, development of new therapeutics is required for the control of ischemic diseases.
Regarding the etiology of arteriostenosis, involvements of VSMCs and macrophages are well understood. By contrast, roles for VECs remained controversial for a long time. Recently, we have clarified that there are two types of human VECs: Pro-stenotic VECs (type-Ⅰ) and anti-stenotic VECs (type-Ⅱ) [9][10][11]. We also showed that the vast majority of human VECs that are obtainable from commercially available sources such as biopsy samples and bone marrow- or umbilical cord blood-derived endothelial progenitor cells (EPCs) belong to type-Ⅰ VECs, which promote VSMC proliferation and exacerbate the development of stenosis in injured arteries [9,11]. By contrast, type-Ⅱ VECs, which suppress VSMC proliferation and prevent arteriostenosis [9,11], are rarely obtained from commercially available sources. Because type-Ⅱ VECs are converted into type-Ⅰ VECs by oxidative stress and aging [9], it seems that type-Ⅰ VECs are in a degenerative state. Intriguingly, hESCs/hiPSCs easily produce type-Ⅱ VECs, although they convert to type-Ⅰ VECs after repetitive subcultures. Thus, hESCs/hiPSCs provide an excellent tool to produce high-scarcity-valued cells that are otherwise unavailable.
There is still another merit in utilizing hESC/hiPSC-derived type-Ⅱ VECs as a transplantation material: They can generate beneficial outcomes by their anti-stenotic effects although they are immunologically rejected shortly after the transplantation (< 1 wk) [11]. A transient existence of hESC/hiPSC-derived type-Ⅱ VECs on the luminal surface of the injured artery effectively blocks injury-associated VSMC hyperproliferation. After immunological rejection of hESC/hiPSC-derived type-Ⅱ VECs, host VECs take over the role of hESC/hiPSC-derived type-Ⅱ VECs [11]. If hESC/hiPSC-derived type-Ⅰ VECs cover the injured luminal surface, development of arteriostenosis is highly accelerated and, in most cases, injured arteries undergo total stenosis [11]. Thus, the critical point that determines the fate of injured arteries is which type of VECs, type-Ⅰ or type-Ⅱ, covers the luminal surface immediately after the arterial injury. Because hESCs/hiPSCs can steadily provide type-Ⅱ anti-stenotic VECs, which are virtually unobtainable from commercially available sources or clinical samples of patients, hESC/hiPSC-derived type-Ⅱ VECs will make a large contribution to the therapeutic development for ischemic diseases (Figure 1). It should be remembered that any surgical operation which mechanically dilates stenotic arteries would more or less injure endothelial layers, causing injury-mediated stenosis. In this sense, endothelial cell-transplanting therapies may become an indispensable means for the control of ischemic diseases.
PROVISION OF CLASSICAL BROWN ADIPOCYTES AS A NOVEL TOOL FOR DRUG DISCOVERY: OBESITY-ASSOCIATED METABOLIC DISORDERS
Brown adipose tissue (BAT) is a unique adipose tissue that has high calorigenic capacities, thus contributing to thermogenesis in cold environments. It is distributed in specific areas including interscapular spaces (mice and newborn humans) and deep neck regions (mice and humans). BA is derived from myf5-positive myoblasts [13], although the developmental process prior to the myoblast stage remains elusive. It is also known that BA-like cells called beige cells emerge in white adipose tissue under cold-acclimated conditions. To distinguish BA from beige cells, it is also called classical BA. In addition to heat production, BAT plays crucial roles in metabolic regulation, as demonstrated by murine studies: It contributes to prevention of obesity [14,15] and improvements of glucose [16][17][18] and lipid [16,19] metabolisms.
Table 1 Clinical application of human pluripotent stem cells
Advantages and disadvantages of each kind of human pluripotent stem cells are described. 1 Up till now, no severe side effects have been reported. ESCs: Embryonic stem cells; iPSCs: Induced pluripotent stem cells.
The existence of classical BAT in humans was first reported in 2009 [20][21][22][23]. After a minor dispute in 2012 [24,25], the presence of classical BAT in adult humans was reconfirmed in 2013 [26]. Clinical studies have supported that the findings obtained from murine studies are also the case with humans [27][28][29][30]. Thus, human classical BA is attracting great attention as a new therapeutic target for obesity-associated lifestyle diseases. However, it is hardly possible to obtain high-quality human BA samples because of economic, technical and ethical problems. First, visualization of BA-distributing sites requires an expensive medical apparatus, positron emission tomography combined with computed tomography (PET/CT). Secondly, PET/CT examinations impose gamma-ray irradiation on young individuals (approximately early twenties), whose BAT is visualized by PET/CT with high probability. Thirdly, biopsy-mediated removal of BAT, whose amount is assumed to be less than 150 g/body [31], may possibly increase the risk of obesity-associated lifestyle diseases. Fourthly, BAT is known as a very fragile tissue to handle. Indeed, the BioGPS database [32] shows that murine BAT expresses RNase1 and various chymotrypsin family peptidase genes at high levels. Therefore, it is extremely difficult to obtain high-quality BA samples even from mice, which have abundant BATs. Lastly, techniques for long-term cultures, expansions and frozen storage of BA do not currently exist. All those problems have been overcome by the establishment of a method for a directed differentiation of hESCs/hiPSCs into classical BA [33,34]. hESC/hiPSC-derived BAs possess high capacities to improve glucose/lipid metabolisms in vivo, as proven by transplantation experiments [33]. Moreover, this technique correctly reproduces the in vivo developmental process of BAT because hESCs/hiPSCs were differentiated into classical BAs via the myoblast stage [33]. This innovative method has opened an avenue to the implementation of BA-based drug discovery (Figure 1). Moreover, it provides a groundbreaking system for basic studies to strip BAT of its aura of mystery. Although the developmental process of BA prior to the myoblast stage is currently unknown, it will be elucidated by using the method for the differentiation of hESC/hiPSC into classical BA (Figure 2). The elucidation of an early BA process will even provide new molecular targets for the drug discovery of obesity-associated lifestyle diseases.
Figure 2 legend (fragment): [35] and the findings by Atit et al [36]. BA: Brown adipocytes; ESCs: Embryonic stem cells; iPSCs: Induced pluripotent stem cells. | 2018-04-03T00:15:29.816Z | 2016-02-26T00:00:00.000 | {
"year": 2016,
"sha1": "984dc768073541314bcfcee36802a8bab20f59d3",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.4252/wjsc.v8.i2.56",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "4b2d3be568116a2a868c0d0d8117869abe94bfcb",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
216421467 | pes2o/s2orc | v3-fos-license | Feed-forward back-propagation (FFBP) algorithm for property prediction in friction stir spot welding of aluminium alloy
The cost and efficiency of the experiments and tests needed to determine the properties of welded structures are a challenge in friction stir spot welding (FSSW) optimization. Employing the machine learning technique of artificial neural networks (ANNs) to develop a prediction model with fewer experiments and tests is a practical solution for forecasting the properties of spot weld structures. In this study, an extended full factorial design with respect to the tool speed, plunge depth, and dwell time was applied to FSSW specimens of 2 mm thick aluminium A5052-H122 through 27 experiments, evaluated via tensile shear load testing. A multilayer neural network with the feed-forward and back-propagation (FFBP) algorithm was employed to train the network iteratively, with a set of weights and biases over the 27 variations of inputs, to fit the predicted tensile shear loads of the spot weld structures. Based on the resulting regression plots, the correlation coefficient (R) is near-perfect for training with a value of 0.999, and for testing the correlation coefficient reaches 0.958. The correlation coefficient is relatively good for validation, with R equal to 0.921, and for all data sets the correlation coefficient is good, with R of 0.833. It can be seen that the ANN prediction model is relatively good, since the correlation coefficients are relatively close to 1.
Introduction
Manufacturing demands processes that are quick and accurate in order to improve the quality of manufactured products. Friction stir spot welding (FSSW), as a solid-state welding process, has been broadly applied not only in manufacturing but also in many other applications. FSSW is a welding technique used for aluminium alloys and is a variant of friction stir welding (FSW), developed by TWI in the UK in 1991 [1]. It offers advantages such as low distortion and low energy consumption, and is normally used to replace resistance spot welding (RSW), which suffers from issues of weld consistency, short electrode tip life, and welding defects, e.g. porosity or voids [2]. Basically, FSSW is a complex process, affected by process parameters such as spindle speed. The FSSW process is generally composed of three significant actions: plunging, stirring, and retracting, as depicted in Figure 1. In the plunging stage, the FSSW tool is moved towards the workpiece until the tool pin plunges into the workpiece to a predetermined depth. When the shoulder reaches the top surface of the workpiece, friction increases and heat is generated significantly. The heat is generated by friction between the rotating tool and the workpiece [2,3,4,5]. Furthermore, the stirring produces plastic deformation in the workpiece, with the tool rotating for a certain time to form the weldment [4]. Finally, the FSSW process is completed by retracting the tool to its original position.
Problem Identification
A recent issue in manufacturing is that such processes must be run rapidly and properly in order to further improve the performance of the welded structures; therefore, studies to optimize the FSW process have been carried out to define optimal parameter settings. However, this approach is constrained by experimental cost, such as the price of raw materials, and is time-consuming. One heuristic approach is to employ machine learning techniques to solve such issues, which occur widely in manufacturing [6,7,8,9] and in welding [8,10]. Applications of machine learning techniques such as artificial neural networks (ANNs) can be found in [11,12,13,14].
Implementation of Artificial Neural Network
As a computational model, ANNs have been widely applied in engineering and manufacturing for optimization and forecasting. The model is inspired by the human brain [15] and consists of causal relations between input factors and the resulting outputs. The neural network is designed in such a way that it is able to determine accurately the outputs resulting from any alteration of the inputs. The output in this study is a mechanical property of the welded structure, i.e. the tensile strength, influenced by the governing input parameters (rotational speed and travel speed, respectively). It is expected that this machine learning technique can be used instead of experimental work and testing, so that the cost spent on experiments and tests can be reduced and, consequently, time can be saved.
Basically, the network can be described mathematically as a function f mapping inputs x to outputs y (f: x → y). The network is composed of input, hidden, and output layers [16]. The network must first be trained, with the ANN architecture and training algorithm chosen by trial and error to find the best solution. Training is accomplished through multiplication and summation of weights and biases [17]. Figure 2 represents the basic structure of an FF-BP neural network [18,19]. The tangent sigmoid (tansig) and linear transfer functions are used in the back-propagation network. The performance of the neural network depends on the number of layers and neurons, and computation time becomes an issue for complex network structures [16,17].
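As a rough illustration of this mapping, the following sketch (in Python rather than the MATLAB environment used later in the paper) evaluates the forward pass of a small feed-forward network with tansig hidden layers and a linear output; the layer sizes and the example input values are placeholders, not values taken from this study.

```python
import numpy as np

def tansig(x):
    # Hyperbolic tangent sigmoid transfer function
    return np.tanh(x)

def forward(x, weights, biases):
    """Forward pass of a feed-forward network: tansig hidden layers, linear output layer."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = tansig(W @ a + b)                  # hidden layers
    return weights[-1] @ a + biases[-1]        # linear output layer

# Example: 3 inputs -> 15 -> 7 -> 1 output, with random (untrained) weights
rng = np.random.default_rng(0)
sizes = [3, 15, 7, 1]
weights = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [rng.standard_normal(m) for m in sizes[1:]]
print(forward(np.array([900.0, 2.7, 10.0]), weights, biases))  # hypothetical (speed, depth, dwell)
```

Training then consists of adjusting the weights and biases so that these forward-pass outputs match the measured responses.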
Experimental work
This study involved experimental work and mechanical testing. An extended full factorial design 3^3 [21] was employed in the experimental work, with respect to 3 factors at 3 levels. The 27 TSL tests were performed to obtain 27 sets of input-output patterns for the ANN prediction model, with 3 repetitions of each FSSW specimen to obtain the mean TSL, as shown in Table 1. The FSSW specimens were made using two 40×125 mm sheets with a 40×40 mm overlap area according to the JIS Z3136:1999 standard [22], as depicted in Figure 3a. The FSSW tool had a flat shoulder with a cylindrical pin, as depicted in Figure 3b, and was made of VCN-150 steel, which can withstand the high temperatures experienced during the process; the chemical compositions of AA5052-H112 and VCN-150 steel are given in [23] and [24], respectively.
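For illustration, a 3^3 full factorial design of this kind can be enumerated as in the sketch below; the factor levels shown are placeholders, since the actual levels used in the study are those listed in Table 1.

```python
from itertools import product

# Hypothetical factor levels (the actual levels are given in Table 1 of the paper)
tool_speed_rpm = [900, 1100, 1300]
plunge_depth_mm = [2.7, 2.9, 3.1]
dwell_time_s = [5, 10, 15]

# Full factorial design 3^3: all 27 combinations of the three factors
design = list(product(tool_speed_rpm, plunge_depth_mm, dwell_time_s))
print(len(design))  # 27
for run, (v, d, t) in enumerate(design, start=1):
    print(f"Run {run:2d}: speed={v} rpm, depth={d} mm, dwell={t} s")
```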
Model Development
The topology of the ANN model, as depicted in Figure 4, was developed with one input layer consisting of three neurons, corresponding to the three main input parameters, i.e. spindle speed (v), tool plunge depth (d), and tool dwell time (t) [25]. Two hidden layers with 15 and 7 neurons, respectively, were set. One neuron was set in the output layer, corresponding to the output response of the tensile shear load (TSL). The data set of 27 input-output patterns was then used for training, testing and validation in the MATLAB environment: 70% of the data was used for training, 15% for validation, and 15% for testing the model. The tensile shear load was considered as the output of the neural network. The conceptual structure of the proposed ANN is represented in Figure 4.
The back-propagation algorithm used in the network consists of two processes, i.e. a forward process and a backward process. The forward process propagates the input vector through the network to provide an output at the output layer, and the backward process propagates the error values back through the network to determine how the weights are to be changed during training. The BP algorithm is used with a double hidden layer and the Levenberg-Marquardt training function (trainlm), with 15 neurons in hidden layer 1 and 7 neurons in hidden layer 2.
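A minimal sketch of an analogous model in Python with Keras is given below. The original work used MATLAB with the trainlm function; Keras has no Levenberg-Marquardt optimizer, so Adam is used here instead, and the input data are random placeholders rather than the measured values of Table 1.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow import keras

# X: 27 rows of (speed, depth, dwell); y: 27 mean tensile shear loads (placeholder values)
X = np.random.rand(27, 3)
y = np.random.rand(27, 1)

# 70% training, 15% validation, 15% testing, mirroring the split described above
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.30, random_state=1)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.50, random_state=1)

# 3-15-7-1 topology with tansig-like (tanh) hidden layers and a linear output
model = keras.Sequential([
    keras.layers.Input(shape=(3,)),
    keras.layers.Dense(15, activation="tanh"),
    keras.layers.Dense(7, activation="tanh"),
    keras.layers.Dense(1, activation="linear"),
])
model.compile(optimizer="adam", loss="mse")  # Adam used here; not the trainlm algorithm of the paper
model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=500, verbose=0)
print("test MSE:", model.evaluate(X_test, y_test, verbose=0))
```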
Result and Discussion
In this section, the prediction model for the tensile shear load developed with the ANN is presented. The detailed results of the 27 variations of the TSL test, with 3 repetitions, are tabulated in Table 1. Figure 5 represents the regression plots for all patterns in the training, validation, and testing sets of the prediction model. Based on the resulting plots, the correlation coefficient (R) is near-perfect for training, with a value of 0.999, and for testing the correlation coefficient reaches 0.958. The correlation coefficient is relatively good for validation, with R equal to 0.921, and for all data sets the correlation coefficient is good, with R of 0.833. It can be seen that the ANN prediction model is relatively good, since the correlation coefficients are relatively close to 1.
Conclusion
A prediction model for the tensile shear load in friction stir spot welding has been developed in this work. The proposed model is developed to provide the desired information on the expected tensile shear load for a selected set of parameters, with reference to the tensile shear load measurements. An extended full factorial design 3^3 was employed in the experimental work, with respect to 3 factors at 3 levels. The 27 TSL tests were performed to obtain 27 sets of input-output patterns for the ANN prediction model, with 3 repetitions of each FSSW specimen to obtain the mean TSL. A multilayer neural network with the feed-forward and back-propagation algorithm, consisting of one input layer, two hidden layers, and one output layer, was used to train the network iteratively with a set of weights and biases over the 27 variations of inputs, to fit the predicted tensile shear loads of the friction stir spot welds. Based on the resulting regression plots, the correlation coefficient (R) is near-perfect for training, with a value of 0.999, and for testing the correlation coefficient reaches 0.958. The correlation coefficient is relatively good for validation, with R equal to 0.921, and for all data sets the correlation coefficient is good, with R of 0.833. It can be seen that the ANN prediction model is relatively good, since the correlation coefficients are relatively close to 1. | 2020-03-19T10:14:11.897Z | 2020-03-13T00:00:00.000 | {
"year": 2020,
"sha1": "33acd1b4ec99d8d13a06a3d27f0c8afa134c584a",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/426/1/012128",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "824b9983d74c5721f45c57039fac5e06e19f1687",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
231668924 | pes2o/s2orc | v3-fos-license | Trend of nutrition research in endocrine disorders, gaps, and future plans: a collection of experiences of an endocrinology research institute
Background Nutrition plays a pivotal role in the prevention and treatment of endocrine disorders. The aim of this study was to provide a window in order to display the 25-year activities of the Endocrinology & Metabolism Research Institute (EMRI), and the gaps and future plans in the field of nutrition and endocrine disorders. Methods To collect papers affiliated to the EMRI in the field of nutrition from the inception to December 1st 2019, the electronic databases PubMed/Medline, Web of Science, and Scopus were searched. Publications in English and Persian languages were included. Scientific Landscapes (VOS viewer) software version 1.6.13 was used to provide bibliometric maps. Results Of 4082 studies identified in the initial search, 319 relevant papers were included. They comprised systematic reviews and meta-analyses/reviews (n = 76), clinical trials (n = 58), cross-sectional studies (n = 171), case-control studies (n = 11), and animal studies (n = 3). Accordingly, most nutrition studies were dedicated to the level of evidence III (cross-sectional studies: 53.60%), followed by systematic review studies (23.82%) with the level of evidence I. There was also an increasing trend in the nutrition studies over the years, with a peak in 2019. Conclusion An increasing trend in the publications related to nutrition science is observed at EMRI. However, nutrition research and publications can grow further through expanding collaborations with other endocrine-related fields. Including nutritional assessments in national projects and focusing on the identification of preventive nutritional strategies suited to the situation of our society can help to make nutritional findings more practical.
Introduction
Based on the report published by the World Health Organization (WHO), non-communicable diseases (NCDs) represent the leading cause of mortality across the world [1]. In the past 20 years, NCD deaths including cardiovascular diseases, diabetes, and obesity have increased by 14.5% in Iran [2]. Therefore, policy makers need to pay specific attention to identifying preventive strategies in order to reduce economic, social, and psychological burden of such diseases [2,3]. Prevention efforts focus on the four main factors including physical activity, tobacco use, alcohol consumption, and diet [4]. Adherence to healthy dietary patterns and changing from unhealthy eating habits to recommended ones can be helpful in preventing and treating a wide range of diseases including endocrine disorders [5].
To find preventive strategies to manage endocrine-related diseases, the research centers and groups affiliated to the Endocrinology & Metabolism Research Institute (EMRI) consider nutrition assessments in their research activities. Furthermore, in both professional and public educational programs, nutrition recommendations and their associations with endocrine disorders are considered as well. Although there is no specific nutrition research center at EMRI, accumulating research has been published by researchers affiliated to this institute, particularly in recent years.
In the present report, we aimed to summarize the 25-year activities of EMRI by focusing on the trend, types of publications, and their remarkable findings in the field of nutrition. Secondary aims were to shed light on nutrition research gaps and, accordingly, to suggest a road map for future research.
Methods
To find papers affiliated to the EMRI from inception to 1 December 2019, the electronic databases PubMed/Medline, Web of Science, and Scopus were searched. In the present study, we included papers in English or Persian in the field of nutrition in which at least one author was affiliated to EMRI. Grey literature, including conference papers, theses, letters to editors, and interviews, was not included.
All findings were exported to an EndNote library. After removing the duplicates, screening was conducted based on the titles and abstracts to collect all the publications in the field of nutrition. As there are specific reports in this issue of the journal on the CASPIAN studies, probiotics, Islamic fasting, and osteoporosis, we did not consider these topics in this study, to avoid repetition.
Eligible articles were classified based on the type of study (systematic review, clinical trial, cross-sectional, case-control, and animal studies). Moreover, the level of evidence for the included studies was determined based on the evidence-based medicine pyramid. In order to clarify the main topics of the nutrition publications, papers were also allocated to either dietary pattern or dietary supplement categories where relevant.
The frequency of publications in each category was expressed as number and/or percent. The trend of publications through years was illustrated as a graph. To provide bibliometric maps, Scientific Landscapes (VOS viewer) software version 1.6.13 was used.
Literature search
In total, 8049 papers (duplicates, n = 3951) were identified from PubMed, Scopus, and Web of Science. Initial screening based on titles and abstracts was conducted, and 503 nutritional papers were considered possibly relevant. As studies conducted on probiotics, osteoporosis, and Islamic fasting will be explained in other reports, they were excluded from the study to avoid repetition. In the next step, selected papers were checked for affiliations, and papers with irrelevant affiliations (n = 184) were excluded from the study. Finally, we reached 319 papers in the field of nutrition published by researchers affiliated to EMRI.
Main characteristics of the included studies
The included papers were classified based on the type of study. They comprised systematic reviews and meta-analyses/reviews (n = 76), clinical trials (n = 58), cross-sectional studies (n = 171), case-control studies (n = 11), and animal studies (n = 3). Most studies were dedicated to secondary research. From the view of the evidence-based pyramid, 23.82% of nutritional studies were placed at the top of the pyramid with the level of evidence I, while most studies had the level of evidence III (53.60%). Figure 1 shows the trend of studies conducted in the field of nutrition through the years. The first papers with the scope of nutrition were published in 2004. Generally, there was an increasing trend from 2004 to 2019. As shown in Fig. 1, there was a fluctuation in the number of papers between 2004 and 2013; however, after 2013, an increasing trend was observed. Between 2018 and 2019, the number of papers increased sharply. Most publications appeared in the recent 3 years, particularly 2019 (n = 100) (Fig. 1). As shown in Fig. 2, most papers were conducted on obesity and metabolic syndrome. Other frequently used keywords in the titles of papers were inflammation, oxidative stress, meta-analysis, health, insulin resistance, overweight, and prevalence. Outstanding authors with the most publications in the field of nutrition were Prof. Bagher Larijani, Dr. Ahmad Esmaeilzadeh, and Dr. Leila Azadbakht (Fig. 3). As shown in Fig. 3, the mentioned professors, apart from joint papers with each other, have several national and international networks in their publications. In 2019, 32 nutrition papers were dedicated directly to endocrine disorders.
Findings of systematic reviews and meta-analyses
Based on the evidence-based medicine pyramid, the level of evidence of systematic reviews and meta-analyses is "I", and we only briefly focus on some results of this type of study.
Systematic reviews and meta-analyses can be classified into dietary supplements [15][16][17][18][19] and food groups/dietary patterns [20][21][22][23][24][25][26][27]. For instance, cinnamon may be helpful in reducing serum glucose levels, with no changes in other glycemic parameters and anthropometric indices, in patients with diabetes. It can also reduce both systolic and diastolic blood pressure [28], and it has positive effects on obesity measures [29]. Namazi et al. also concluded that conjugated linoleic acid had positive effects on anthropometric indices and body composition; however, from the clinical point of view, its effects were slight [15]. Regarding supplementation with calcium, it has been shown to be ineffective in reducing serum levels of total cholesterol and triglyceride in overweight and obese individuals, although it may modulate low-density lipoprotein and high-density lipoprotein cholesterol concentrations [30]. In addition, the consumption of whole grains did not show any effect on anthropometric indices and body composition [20].
Rezaei et al. conducted a national study on 18,624 adults and found that the mean salt intake among the Iranian population was 9.52 g/day. In 97.6% of participants, salt consumption was at least 5 g/day. Besides, in about 41% of participants, the level of salt intake was at least twice that recommended by the World Health Organization [44]. Gholami et al. (2016) performed the STEPS survey in Iran and demonstrated that salt intake could increase systolic blood pressure in both Iranian subjects with hypertension and normotensive individuals; however, the magnitude of this increase was greater in hypertensive individuals [39].
Obesity is a growing metabolic disorder which has been examined from various aspects. For instance, Djalalinia et al. (2011) found that excess BMI was responsible for 39.5% of total deaths in subjects (55% male, 45% female) aged 25 to 65 years old at the national level. The highest mortality was attributed to ischemic heart diseases (55.7%), followed by stroke (19.3%) and diabetes mellitus (12.0%) [35]. Apart from original papers, several systematic reviews and meta-analyses on Iranian studies have been conducted. Based on a systematic review and meta-analysis of 119 studies in Iran, it was revealed that increased age, being married, low level of education, residence in urban regions, as well as being female were positively associated with obesity [36].
Other activities
Apart from papers and conference abstracts, some specific activities in the field of nutrition, particularly in the diabetes research center affiliated to EMRI, have been conducted. In the clinical guideline for diabetes published in 2014, a section was dedicated to this field, which was updated in 2018 considering the main international guidelines in diabetes as well as national research. A level of evidence has been assigned to each recommendation in order to help clinicians with decision making. In addition, the road map of diabetes, including a nutritional section, was provided in 2015 [48,49], and in 2019 an update based on new evidence was started.
Furthermore, in several symposiums and conferences held by EMRI, including those on diabetes, osteoporosis, and probiotics, some panels have been dedicated to nutrition. Apart from workshops for physicians and other clinicians, 7 visual educational programs in the field of nutrition and diabetes have also been prepared so far, in cooperation with the visual faculty of Tehran University of Medical Sciences. Moreover, numerous booklets and pamphlets on various endocrine disorders, particularly different types of diabetes mellitus, osteoporosis, and elderly disorders, have been published, and they are updated after a certain period of time to provide recommendations based on new evidence. Several clinics affiliated to EMRI provide nutrition consultation and diet therapy for patients. Apart from providing nutrition services for people, they can provide suitable ground for research in various fields.
Activities in the COVID-19 pandemic
To adapt to the COVID-19 pandemic, public education in the field of nutrition has been shifted to virtual forms. Animations, motion graphics, e-books, and short films are examples of educational materials spread through social networks and media by EMRI amid COVID-19. In addition, a guideline on diabetes management in the COVID-19 pandemic has been prepared, in which a specific section is dedicated to nutritional recommendations for this challenging time.
Discussion
Trends of publications in the field of nutrition showed a considerable increase in recent years at EMRI. Based on the topics of the publications, it can be reported that most research centers and groups have considered nutrition assessments in their projects. However, as there is no nutrition research center at EMRI, the nutrition roadmap is not completely clear, although since 2019 a specific group has started to develop a roadmap in this field.
Many research projects affiliated to EMRI did not consider nutrition assessments as their main aims; therefore, the tools and questionnaires dedicated to this part of the projects were sometimes not ideal and did not cover the main dimensions of nutrition assessment. Several national research projects have been designed and run by EMRI in collaboration with other institutes, including STEPS [50], IMOS [51], Heavy Metal (unpublished protocol), BEHVARZ [52], CASPIAN [53], and the Bushehr elderly health program [54], in which nutritional assessments have been considered, and some topics on nutritional factors extracted from the aforementioned studies have been published [55][56][57][58][59]; however, nutrition assessments were only secondary outcomes. It is suggested that specific nutrition aims be considered in such national surveys in order to map the nutrition status of our society for different age groups and genders, and to clarify nutrient deficiencies and other requirements. Based on these findings, we can help policy makers and health providers to develop and implement effective strategies to overcome nutritional problems.
Although different levels of nutrition research, including international, national, and small studies with various study designs, have been conducted at EMRI, more high-quality studies are needed to determine nutritional requirements and suggest appropriate strategies to prevent and manage endocrine disorders and other NCDs. On the other hand, it seems that paying more attention to basic studies in the field of nutrition can be helpful in developing effective therapeutic and preventive methods based on nutrition knowledge. Developing studies with several phases, from in vitro work to clinical trials, for developing novel dietary supplements and clarifying pathways can be helpful in increasing the number of product-based projects.
Based on the publications and activities in the field of nutrition over 25 years, the main research gaps in this field related to endocrine disorders are as follows: (i) no specific strategic plan and action plan in the field of nutrition; (ii) less attention to basic studies such as animal and in vitro studies; (iii) undefined priorities in nutrition research; (iv) less specific attention to the nutrition status of study populations and to conducting specialized nutrition assessments in national studies run by EMRI; and (v) few product-based projects. Given the publications in the field of nutrition, more clinical trials on nutrition topics, including different types of diet and dietary supplements, would have been expected. An annual strategic plan and action plan based on the requirements of society, the literature, and the opinion of experts in nutrition sciences can improve the current status.
Developing multidisciplinary mega projects with practical aims and expanding national and international networks can be considered as a future plan for this research group. Along with increasing the number of nutrition projects with high-quality methodology, paying attention to hot topics and periodically checking the topics of nutrition research at leading international universities and centers can improve our position in the world. Other proposed future plans in the field of nutrition and endocrine disorders are as follows: (i) providing a strategic plan by a professional team in the field of nutrition sciences; (ii) identifying research gaps in the field of nutrition for each research center, particularly the diabetes and obesity research centers, to define research priorities; (iii) focusing on finding preventive strategies for endocrine diseases (national projects considering a collection of nutritional assessments can be helpful); (iv) expanding collaborations with experts in the field of basic sciences; and (v) increasing interdisciplinary projects. There were two major limitations in this study that should be addressed. First, the quality of studies was not examined. Second, grey literature such as theses, books, and conference abstracts in the field of nutrition was not considered. The main strengths of this study were as follows: summarizing publications over 25 years in this field, clarifying research gaps, and putting forward suggestions as future plans.
Conclusion
An increasing trend in the publications related to nutrition research is observed. However, nutrition activities and publications can grow further through expanding collaborations with other endocrine-related fields. Considering nutritional assessments in national projects and focusing on the identification of preventive nutritional strategies specific to our society can shed light on how to prevent NCDs and decrease the burden of such diseases.
Compliance with ethical standards
Conflict of interest All authors declared no conflict of interest. | 2021-01-22T15:34:10.160Z | 2021-01-22T00:00:00.000 | {
"year": 2021,
"sha1": "81d1790b77b8f44b03ee349ab8b9dad5d6227a26",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1007/s40200-020-00707-w.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "81d1790b77b8f44b03ee349ab8b9dad5d6227a26",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
252070856 | pes2o/s2orc | v3-fos-license | Transverse phase space tomography in the CLARA accelerator test facility using image compression and machine learning
We describe a novel technique, based on image compression and machine learning, for transverse phase space tomography in two degrees of freedom in an accelerator beamline. The technique has been used in the CLARA accelerator test facility at Daresbury Laboratory: results from the machine learning method are compared with those from a conventional tomography algorithm (algebraic reconstruction), applied to the same data. The use of machine learning allows reconstruction of the 4D phase space distribution of the beam to be carried out much more rapidly than using conventional tomography algorithms, and also enables the use of image compression to reduce significantly the size of the data sets involved in the analysis. Results from the machine learning technique are at least as good as those from the algebraic reconstruction tomography in characterising the beam behaviour, in terms of the variation of the beam size in response to variation of the quadrupole strengths.
I. INTRODUCTION
Phase space tomography provides a valuable technique for understanding the properties of a beam in a particle accelerator, and has been applied in a range of different machines, for example [1][2][3][4][5][6][7][8]. However, conventional tomography techniques present some challenges, including the presence of artefacts in the reconstruction (which can be especially prominent when the number of projections is limited), and the computational time and resources required to construct the phase space distribution with good resolution. Tomography in two transverse degrees of freedom allows characterisation of betatron coupling, but the sizes of the data structures required for the analysis increase rapidly with the dimensionality of the system. Storage of a 4D phase-space distribution in an array with dimension N along each axis requires a data structure of N^4 numerical values, and the memory resources needed while processing the input data to construct the phase space can be much larger. The demands on computing power increase rapidly with increasing dimensionality of the phase space, and this may limit the use of high-dimensional phase space tomography (with good resolution) in applications where it could make a valuable contribution to machine operation, for example in short-pulse, short-wavelength free electron lasers [9] or injectors for machines using novel acceleration technologies such as plasma cells or dielectric wakefield structures [10,11].
Recent work [11] has shown (in simulation) how phase space tomography can be performed in 2½ degrees of freedom, to provide transverse phase space properties as a function of longitudinal position along a bunch. Steps have been taken towards full 6D phase space tomography, but the methods that have been employed (which include the use of machine learning) have not so far allowed the full reconstruction of the 6D phase space [12]. Where betatron coupling or synchro-betatron coupling are present, tracking a beam from a given point in the accelerator to determine its properties as a function of position in the beamline requires the full phase space in the coupled degrees of freedom to be described, and in complex machines where multiple correlations can be present, full 6D phase space reconstruction would provide all the necessary information. Techniques allowing reduction of the processing time and data storage requirements for high-dimensional phase space tomography offer the prospect of enabling routine complete and detailed characterization of the charge distribution within bunches in an accelerator, including all cross-plane correlations, with significant benefits for advanced accelerator facilities.
In principle, image compression techniques can be used to reduce the size of the data structures needed to store and process tomography data while maintaining the potential for reconstructing the phase space with a given resolution. Reduction in the size of the data sets can also be accompanied by reduction in the time taken to process those data sets. However, it is not clear how existing tomography algorithms can be adapted so that they can be applied directly to compressed data. Machine learning techniques offer an alternative to conventional tomography methods, and have the potential to allow direct tomographic analysis of data in a compressed form. Machine learning is already extensively used for image analysis and tomography, particularly in medical contexts [13]. There is also increasing interest in the use of machine learning for a range of applications in accelerator design and operation, including design optimization [14][15][16], modelling [17], collection and analysis of diagnostic data [18][19][20][21], and operational optimization [22,23]. Recent work [12] has shown (in simulation) the use of a neural network for constructing two-dimensional projections of a six-dimensional phase space.
In the current paper, we report results of experimental studies aimed at demonstrating the use of machine learning for phase space tomography, working with beam images and phase-space distributions stored in compressed form. We describe the principles of the technique, compare the results with those using a conventional tomography algorithm on the same data sets, and discuss the potential advantages of the use of machine learning for this application.
The experimental work that we present has been carried out on CLARA, the Compact Linear Accelerator for Research and Applications at Daresbury Laboratory [24][25][26]. Relevant features of the facility are outlined in Section II, in which we also describe the experimental technique (Section II A), and present the results of an analysis of the experimental data using a conventional tomography algorithm, algebraic reconstruction (Section II B). In Section III we describe and present results from the tomography analysis based on machine learning. Some conclusions from the work are discussed in Section IV.
II. CHARACTERIZATION OF TRANSVERSE PHASE SPACE IN CLARA USING A CONVENTIONAL TOMOGRAPHY TECHNIQUE
Previous studies of phase-space tomography in two transverse degrees of freedom using CLARA were reported in [27]. At the time of the previous tomography studies, carried out in 2019, the facility (CLARA Front End) included an electron source and short linac designed to provide bunches at a repetition rate of 10 Hz with charge up to 250 pC, momentum up to 50 MeV/c, and transverse emittance below 1 µm. Because of technical limitations during the tomography data collection, measurements in 2019 were made with beam momentum 30 MeV/c, and bunch charge up to 50 pC. Since then, CLARA has undergone further development, with a number of changes to components and layout; however, the recent measurements reported here were made with parameters comparable to those used in the original study, specifically with beam momentum 35 MeV/c, and bunch charge up to 100 pC. Further development of CLARA is planned, both to extend the energy reach, and to test new RF gun technology, in particular a low-emittance high repetition-rate source (HRRG). Detailed characterisation of the HRRG performance will include studies of the transverse phase-space. Work to develop novel phase-space tomography techniques, in particular making use of image compression and machine learning, has been motivated by the need to facilitate beam characterisation in CLARA generally, and HRRG performance in particular. The results reported here are from recent measurements on CLARA in its current form, with the existing 10 Hz RF electron gun.
A. Experimental technique: design parameters and calibrated model
The tomography technique described in [27] was applied to CLARA, following upgrade work performed since the previous tomography studies. Some changes were made to details of the experimental procedure to take account of changes in the beam optics and machine layout; however, the overall procedure remained the same in its essential points. A beam momentum of 35 MeV/c was used. Measurements were made using a section of beamline between the exit of the linac (the "reconstruction point") and a fluorescent screen on which the transverse beam profile could be observed (the "observation point"). The beamline between the reconstruction point and the observation point contains five quadrupoles.
To prepare for the measurements, a machine model [28] was used to determine gradients for the five quadrupoles in the measurement section that would allow control of the betatron phase advances between the reconstruction and observation points, while keeping approximately constant the beta functions at the observation point (see the schematic layout of CLARA in Fig. 1). A sequence of 32 sets of quadrupole gradients was determined, providing a good range of variation in horizontal and vertical betatron phase advance over the sequence. Maintaining constant, and approximately equal beta functions at the observation point helps to provide good conditions for beam profile measurements: if the beam image has a large aspect ratio, or gets too large or too small, it can be difficult to determine accurately the beam sizes. Data collection consisted of recording the beam profile for each of the 32 steps in the sequence. The order of steps in the sequence was chosen to minimise the changes in strength of the magnets from each step to the next, and in particular to avoid as far as possible changes in polarity: this helps to reduce the frequency with which the magnets need to be degaussed (the quadrupoles were degaussed at the start of each scan, and midway through the scan). At each step, ten screen images were recorded, plus an extra image with the photoinjector laser turned off to allow for subtraction of background resulting from dark current. A complete quadrupole scan was carried out first with bunch charge 10 pC, and then with bunch charge 100 pC. Although space-charge effects in the injector are significant at 100 pC, in the measurements section at beam momentum 35 MeV/c space-charge has little impact.
The analysis presented here is carried out in normalised phase space: this helps to improve the accuracy of the phase space reconstruction [29]. Since the section of beamline in CLARA where the measurements were performed consists only of drift spaces and normal quadrupoles, coupling can be neglected in constructing the normalising transformation; however, it should be emphasised that the data analysis nevertheless still allows for full characterisation of any coupling in the beam. Normalised horizontal phase space co-ordinates (x_N, p_xN) at a particular location along the beamline are related to the physical co-ordinates (x, p_x) by:

x_N = x / √β_x,   p_xN = (α_x x + β_x p_x) / √β_x,   (1)

where α_x, β_x are the usual Courant-Snyder optics functions at the specified beamline location. If the phase space distribution is matched to the optics functions, then the distribution in normalised coordinates ρ_N(x_N, p_xN) will be invariant under rotations in phase space. Furthermore, the transport matrices in normalised phase space are simply rotation matrices (through angles corresponding to the phase advance), so a matched phase space distribution will be invariant under linear transport along the beamline. In practice, the phase space distribution is not known in advance: the goal of the measurement is to determine the distribution. Phase space normalisation cannot, therefore, be carried out using optics functions known to be exactly matched to the phase space distribution. Instead, a machine model is used to generate an expected distribution, and the optics functions describing this distribution are used to normalise the phase space. If the real beam distribution is reasonably close to that expected from the machine model, then in the normalised phase space the real beam distribution will have at least approximate rotational symmetry. Phase space tomography (in normalised phase space) can be used to determine the actual distribution, which can be transformed back to the physical co-ordinates using the inverse of the normalising transformation given in Eq. (1). For the measurements in CLARA, a design model of the machine was used to predict the phase space beam distribution at the reconstruction point (the exit of the linac: see Fig. 1). The values of the optics functions are shown in Table I. Preliminary analysis of the experimental data was carried out using the parameter values from the design model. The results indicated substantial differences between the design values and the real values, largely arising from differences between the operational settings actually used for the injector and linac, and the settings assumed in the machine model when preparing for the experiments. Furthermore, closer investigations found that the magnetic lengths of the quadrupoles in the beamline following the linac (the section used for the tomography studies) were somewhat larger than had been thought, resulting in changes in the transfer matrices between the reconstruction point and the observation point for the quadrupole gradients (calculated using the design model) used during the quadrupole scan. Differences between the design parameters and the calibrated model are evident in Fig. 2, which shows the beta functions at the observation point and the phase advances from reconstruction to observation point, at each step in the quadrupole scan using the design quadrupole gradients. Note that the steps were not followed in the order shown in Fig. 2.
Fig. 2 shows the steps in order of increasing horizontal phase advance, followed by increasing vertical phase advance; as mentioned above, the actual order of the steps during the measurements was designed to minimise the changes in quadrupole strengths between successive steps, to reduce the need for degaussing. The quadrupole gradients used in the scan were determined using the design model (top plots in Fig. 2); the same gradients, when used in the calibrated model with the revised quadrupole lengths and optics functions, lead to the observation point beta functions and phase advances shown in the bottom plots in Fig. 2. Following the initial analysis of the quadrupole scan data using the design parameters, the analysis was repeated using the parameters for the calibrated model (and the transfer matrices calculated using the design quadrupole gradients). The optics for the design model are shown in Fig. 2 only to illustrate the intended conditions for the tomography data collection, and for comparison with those for the calibrated model. In the remainder of this work, we refer only to the calibrated model.
B. Quadrupole scan analysis using the algebraic reconstruction tomography technique
Screen images collected during the quadrupole scans were used in an algebraic reconstruction tomography (ART) code, to determine the 4D transverse phase space charge distribution. The same tomography code was used for the recent data as was used in the studies on CLARA FE: the earlier work included validation of the code, using simulated data [27]. In principle, since the only changes in machine settings made during the course of a quadrupole scan are to the quadrupole gradients, the phase space distribution at the reconstruction point (in the current studies, at the exit of the linac, upstream of the quadrupoles) should vary little during a scan.
Beam images collected during a quadrupole scan are prepared for the tomography analysis by first subtracting a background image (to remove any artefacts from dark current), and then cropping and scaling the images. To crop the images, we remove the area outside a certain range of pixels from the point of peak intensity in the image. The same cropping range is used on each step in the quadrupole scan, so that the cropped images all have the same dimensions in pixels. The crop limits are chosen so that the beam occupies as much of the cropped images as possible, without clipping the beam in any of the images. To scale the images, we demagnify each image along each axis by the square root of the beta function corresponding to that axis (while maintaining the same number of pixels in each image). In effect, scaling means that given an initial calibration factor in mm/pixel, the calibration factor after scaling will be in mm/√m/pixel. The beta functions used for scaling are found from the optics in the calibrated model (propagating the values from the reconstruction point to the observation point, using the transfer matrix calculated from the corresponding quadrupole strengths). Scaling essentially transforms the images to normalised phase space: this means that if the phase space distribution at the reconstruction point was correctly matched to the optics in the calibrated model, then the scaled beam size (in pixels) would remain constant over the course of the quadrupole scan. Finally, the resolution of the normalised images is reduced (or increased, if necessary) to 39×39 pixels.
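The following is a minimal sketch of this preparation step, assuming the beam is well inside the frame and the beta functions are larger than 1 m (so that demagnifying by 1/√β shrinks the image content); the function and variable names are illustrative and are not taken from the actual analysis code.

```python
import numpy as np
from scipy.ndimage import zoom

def scale_to_frame(im, fy, fx):
    """Demagnify the image content by factors (fy, fx) <= 1 while keeping the frame size fixed."""
    small = zoom(im, (fy, fx), order=1)
    out = np.zeros_like(im)
    oy = (im.shape[0] - small.shape[0]) // 2
    ox = (im.shape[1] - small.shape[1]) // 2
    out[oy:oy + small.shape[0], ox:ox + small.shape[1]] = small
    return out

def prepare_image(img, background, half, beta_x, beta_y, out_px=39):
    im = np.clip(img.astype(float) - background, 0.0, None)       # subtract dark-current background
    iy, ix = np.unravel_index(np.argmax(im), im.shape)             # peak-intensity pixel
    im = im[iy - half:iy + half, ix - half:ix + half]              # crop (same range for all scan steps)
    im = scale_to_frame(im, 1.0 / np.sqrt(beta_y), 1.0 / np.sqrt(beta_x))  # scale to normalised phase space
    return zoom(im, (out_px / im.shape[0], out_px / im.shape[1]), order=1)  # resample to 39x39 pixels
```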
For the tomography analysis (using ART), we reconstruct the 4D phase space with a resolution equal (in pixels) to the image resolution, i.e. 39 pixels on each axis. The phase space resolution is not in principle constrained by the technique, but is a practical choice, decided by a balance between the desired level of detail in the reconstructed phase space distribution, and the computation time and resources needed for the analysis (which can increase rapidly with increasing phase space resolution). The results of the tomography can be validated by transporting, for each step in the quadrupole scan, the 4D phase space distribution from the reconstruction point to the observation point using the transfer matrix calculated from the known quadrupole strengths and drift lengths; and then comparing the projection onto co-ordinate space with the corresponding observed beam image.
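The ART code itself is not listed in the paper; as a rough illustration of the kind of algorithm involved, the sketch below implements generic Kaczmarz-style algebraic reconstruction iterations for a linear projection model A x ≈ b, where x is the (flattened) 4D phase space density, b collects the pixel values of all the normalised screen images, and A encodes the projection of each phase-space voxel onto each image pixel for the corresponding scan step. The relaxation parameter and the non-negativity constraint are common choices, not values quoted from the paper.

```python
import numpy as np

def art_reconstruct(A, b, n_iter=20, relax=0.1):
    """Algebraic reconstruction (Kaczmarz iterations): solve A x ~ b for the
    phase-space density x, given the projection matrix A and measured projections b."""
    x = np.zeros(A.shape[1])
    row_norms = np.einsum("ij,ij->i", A, A)      # squared norm of each projection row
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            if row_norms[i] == 0.0:
                continue
            residual = b[i] - A[i] @ x
            x += relax * residual / row_norms[i] * A[i]
        x = np.clip(x, 0.0, None)                 # enforce non-negative density
    return x
```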
Projections of the reconstructed 4D phase space distribution are shown in Fig. 3 for 10 pC and 100 pC bunch charges. Note that the scales on the axes for each image are given in normalised phase space (units of mm/√m). Validation images for 10 pC and 100 pC bunch charges are shown in Fig. 4 for three steps in the quadrupole scan. The screen images are generally reproduced from the co-ordinate space projection of the reconstructed phase space distribution with good accuracy, supporting the validity of the reconstructed 4D phase space distribution. The screen images with 100 pC bunch charge show significantly more structure than those with 10 pC bunch charge, though the additional structure is not immediately apparent from the projections of the 4D phase space distribution at the exit of the linac. The richer beam structure observed with 100 pC bunch charge is believed to be associated with the properties of the photoinjector laser.
Variations in the beam size at the observation point over the course of a quadrupole scan are shown in Fig. 5. The plots (upper plot for 10 pC bunch charge, and lower plot for 100 pC) compare the beam sizes calculated in four different ways: • The solid lines (labelled "linear optics") show the beam sizes (calculated at each point in the quadrupole scan) found by calculating the covariance matrix describing the reconstructed 4D phase space distribution at the reconstruction point, and then transporting the covariance matrix to the observation point. The shaded bands indicate the uncertainties on the beam sizes arising from the uncertainties on the elements of the covariance matrix.
• Crosses (labelled "observed beam size" in Fig. 5) show the rms beam sizes obtained from Gaussian fits to projections of the observed beam images onto the horizontal and vertical axes. The error bars indicate the standard deviations of the rms beam sizes over the ten images collected at each step (which dominate over uncertainties associated with the Gaussian fits).
• The circular markers (labelled "calibrated model") show the beam sizes at each point in the quadrupole scan expected from the lattice functions in the calibrated model, with emittances found from the reconstructed 4D phase space. The error bars show the uncertainty arising from the uncertainty on the emittance (increased by a factor of 10, to make the error bars more clearly visible).
• Points (dots, labelled "tomography") show the rms beam sizes obtained from Gaussian fits to projections (onto the horizontal and vertical axes) of the reconstructed 4D phase space transported from the reconstruction point to the observation point. The error bars in this case indicate the uncertainties in the fit.
Although there is qualitative agreement between the beam sizes in the calibrated model (using the optics functions shown in Table I) and the observed beam sizes, there is better agreement with the observed beam sizes in the case of linear transport of the covariance matrix calculated from the reconstructed phase space distribution, and in the case of linear transport of the phase space distribution. For completeness, and for comparison of the results from tomographic analysis using ART and analysis using machine learning, the emittances and optics functions at the reconstruction point are given in Table II. The values shown are calculated from the covariance matrices describing the reconstructed 4D phase space distributions, for 10 pC and 100 pC bunch charges. Note that the values given are for the normal mode emittances γε_I, γε_II and optics functions B_I, B_II [30]. In terms of these quantities, the covariance matrix is expressed:

Σ = ε_I B_I + ε_II B_II,   (2)

where the elements of the covariance matrix are the second-order moments of the beam distribution over all combinations of phase space variables:

Σ_ij = ⟨x_i x_j⟩,   (3)

with x_i = x, p_x, y, p_y, for i = 1, 2, 3, 4, respectively. The symmetric matrices B_k can be written in terms of 2 × 2 sub-matrices σ^k_uu (with u = x or y):

B_k = ( σ^k_xx   σ^k_xy ; (σ^k_xy)^T   σ^k_yy ).   (4)

In the absence of coupling:

σ^I_xy = σ^II_xy = 0,   σ^I_yy = σ^II_xx = 0,   (5)

and:

σ^I_xx = ( β_x   −α_x ; −α_x   γ_x ),   σ^II_yy = ( β_y   −α_y ; −α_y   γ_y ),   (6)

where γ_u = (1 + α_u²)/β_u.
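The normal-mode emittances reported in Table II can be obtained from the fitted covariance matrix; one standard way to do this (a sketch, not necessarily the exact fitting procedure used by the authors) is via the eigenvalues of Σ·S, where S is the 4D antisymmetric symplectic form:

```python
import numpy as np

def normal_mode_emittances(sigma):
    """Normal-mode emittances eps_I, eps_II of a 4x4 covariance matrix Sigma,
    from the eigenvalues of Sigma.S, where S is the antisymmetric symplectic form."""
    S = np.array([[ 0, 1, 0, 0],
                  [-1, 0, 0, 0],
                  [ 0, 0, 0, 1],
                  [ 0, 0,-1, 0]], dtype=float)
    eigvals = np.linalg.eigvals(sigma @ S)       # eigenvalues come in pairs +/- i*eps
    eps = np.sort(np.abs(eigvals.imag))          # four values: eps_II, eps_II, eps_I, eps_I
    return eps[3], eps[1]                        # larger and smaller mode emittances

# Example: uncoupled beam with eps_x = 2e-6 m, eps_y = 1e-6 m, beta = 5 m, alpha = 0 (placeholder values)
beta, ex, ey = 5.0, 2e-6, 1e-6
sigma = np.diag([ex * beta, ex / beta, ey * beta, ey / beta])
print(normal_mode_emittances(sigma))             # ~ (2e-6, 1e-6)
```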
III. PHASE SPACE TOMOGRAPHY USING MACHINE LEARNING
Although the results shown in Section II suggest that the algebraic reconstruction technique can be of value in constructing the 4D transverse phase space distribution of the beam in a machine such as CLARA, the method can have some limitations. First, the structures visible in the beam images at the observation point (especially at the higher bunch charge) are not clearly evident in any of the projections shown of the 4D phase space distribution at the reconstruction point. The reasons for this are not well understood: it may simply be a result of the relatively poor resolution with which the 4D phase space distribution is determined; or it may be that the orientation of the distribution in phase space is such as to obscure the structure for the chosen 2D projections - note that the structures seen at the observation point are only really evident for particular steps in the quadrupole scan, i.e. for some specific range of betatron phase advances.
A second limitation of the algebraic reconstruction technique is that it can take some time to process the data to obtain the phase space distribution. The demands in terms of processing time and computational resources increase rapidly with increasing resolution of the reconstruction, and with increasing dimensionality of the phase space. For the results presented here, a phase space resolution of 39 pixels in each dimension of the 4D phase space is used: this limits the detail visible in the phase space, but allows the reconstruction to be completed reasonably rapidly (within a few minutes) using a standard PC. Where a high resolution is required, or a rapid reconstruction would be of value (for example, for several iterations of machine tuning) then more powerful computing resources may be needed if algebraic reconstruction, or a similar tomography technique, is to be used. There is also interest in extending tomography from four to five or six dimensions [12,16]: this can be of particular value in short-wavelength free electron lasers, for example, where understanding the transverse beam profile and energy spread as a function of longitudinal position in the bunch can be of significant importance.
Approaches based on machine learning may offer ways to address some of the issues associated with conventional tomography techniques for reconstruction of the beam phase space in four (or more) dimensions. The method presented here, which we apply to the two transverse degrees of freedom, uses a pre-trained neural network, to which the beam images at the observation point are provided, in compressed form, as input; the output from the neural network consists of the 4D phase space distribution, again in compressed form. In principle, using a neural network in this way allows a rapid (almost immediate) reconstruction of the 4D phase space distribution once the beam images are provided. The computing resources needed for carrying out the reconstruction can also be much more modest than those needed for algebraic reconstruction tomography. If images in uncompressed form are used, the input and output data sets can still be of significant size, but use of machine learning enables image compression techniques to be applied, reducing the size of input and output data sets. In principle, a neural network can be trained on images and phase space distributions represented in some chosen compressed form, for example as discrete cosine transforms (DCTs) [31][32][33]. Image compression would be difficult to apply in the case of conventional tomography methods, which usually rely on a relationship between the sinogram and the object to be reconstructed that is intrinsically expressed in regular co-ordinate space. Neural networks offer much greater flexibility, and do not require a specific representation of the input or output data.
FIG. 5. Variation in horizontal (blue points and lines) and vertical (red points and lines) beam sizes at the observation point, for 10 pC bunch charge (top) and 100 pC bunch charge (bottom). Error bars on the observed beam sizes (marked as crosses) show the standard deviation of Gaussian fits to the ten beam images collected at the observation point for each step in the quadrupole scan. Error bars on the beam sizes from the tomographic reconstruction (solid points) show the uncertainty in a Gaussian fit to the phase space density projected onto the horizontal or vertical axis. Open circles show the beam sizes calculated by propagating the lattice functions for the calibrated model (Table I) from the reconstruction point to the observation point, and combining with the emittances calculated by a fit to the 4D phase space from ART tomography (Table II). The line shows the beam sizes obtained by propagating the covariance matrix fitted to the 4D phase space distribution reconstructed by ART (Table II), with shaded range showing the uncertainty arising from the uncertainties on the elements of the covariance matrix.
In using a neural network to perform tomographic reconstruction, an issue does arise with the need to train the network. Training must necessarily be based on simulated data, which would ideally include features characteristic of the beam; but at least in cases where the beam shows some detailed structure, the relevant features may not be known at the time of generating the training data. In the current study, we simply take the approach of generating random phase spaces consisting of a number of superposed 4D Gaussian distributions, with the component distributions in each generated phase space varying randomly in position, shape and intensity. Given the shape of the phase space distribution in CLARA suggested by tomography using ART, the phase space distributions constructed in this way may not provide ideal training data; however, it is interesting to consider the ability of a neural network to reconstruct phase space distributions presenting features significantly different from those present in the training data. If the techniques described here are to be of value in a reasonably wide range of situations, then they should be able to reproduce phase space distributions with features significantly different from those in the training data.
A. Implementation of machine learning method
Before presenting the results of tomography using machine learning, we discuss some further details of how the technique was implemented.
For preparation of training data, phase space distributions were generated as mentioned above, by superposing 4D Gaussian distributions with random variations in position, shape and intensity. The distributions are constructed in normalised phase space; the sinograms are then obtained by transforming the distribution using phase space rotations (corresponding to the steps in a quadrupole scan), and then projecting the distribution onto the (normalised) x-y plane at each step in the quadrupole scan. Note that we used phase advances corresponding to those in the calibrated model, shown in Fig. 2 (bottom right). For consistency, it is important that the phase advances should match those resulting from the quadrupole strengths applied in the quadrupole scan, given the lattice functions used for normalising the phase space. It should be emphasised, however, that the chosen lattice functions do not need to match those describing the actual beam distribution (which in general, is not known in advance).
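As a minimal sketch of this training-data generation (with placeholder parameter ranges for the positions, shapes and intensities of the component Gaussians, and a placeholder grid size and aperture), the following generates a random phase space distribution and its sinogram:

```python
import numpy as np

def rotation_4d(mu_x, mu_y):
    """Block-diagonal phase-space rotation through betatron phase advances (mu_x, mu_y)."""
    def r(mu):
        c, s = np.cos(mu), np.sin(mu)
        return np.array([[c, s], [-s, c]])
    R = np.zeros((4, 4))
    R[:2, :2], R[2:, 2:] = r(mu_x), r(mu_y)
    return R

def random_phase_space(n_gauss=4, n_part=200_000, seed=0):
    """Superpose n_gauss 4D Gaussians with random position, shape and intensity."""
    rng = np.random.default_rng(seed)
    parts = []
    for _ in range(n_gauss):
        centre = rng.uniform(-1.0, 1.0, 4)               # random position in normalised phase space
        A = rng.uniform(-0.5, 0.5, (4, 4))
        cov = A @ A.T + 0.05 * np.eye(4)                 # random positive-definite shape
        n = int(rng.integers(1, n_part // n_gauss))      # random intensity (number of particles)
        parts.append(rng.multivariate_normal(centre, cov, size=n))
    return np.vstack(parts)

def sinogram(particles, phase_advances, n_bins=39, lim=4.0):
    """Project the rotated distribution onto the (x, y) plane at each scan step."""
    edges = np.linspace(-lim, lim, n_bins + 1)
    images = []
    for mu_x, mu_y in phase_advances:
        z = particles @ rotation_4d(mu_x, mu_y).T
        img, _, _ = np.histogram2d(z[:, 0], z[:, 2], bins=[edges, edges])
        images.append(img)
    return np.array(images)
```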
Having obtained the sinograms for the simulated 4D phase space distributions, we compress both the phase space distributions and the sinograms using discrete cosine transforms (DCTs). There are several types of DCT: we use a Type II DCT, which is the default in many standard scientific computing packages. In the case of a 2D M × N array, a Type II DCT is defined by:

y_{jk} = 4 \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} x_{mn} \cos\left(\frac{\pi j (2m+1)}{2M}\right) \cos\left(\frac{\pi k (2n+1)}{2N}\right),   (7)

where the values x_{mn} are the components of the initial array, and y_{jk} (for j = 0 . . . M − 1, k = 0 . . . N − 1) are the components of the transformed array. Compression is achieved by truncating the transformed array at some point, either defined in terms of the magnitudes of the components (which should all be below some specified threshold beyond the truncation point) or simply in terms of a fixed limit on the size of the transformed array. The inverse of the Type II DCT of an M × N array is given by:

x_{mn} = \frac{1}{MN} \sum_{j=0}^{M-1} \sum_{k=0}^{N-1} w_j w_k \, y_{jk} \cos\left(\frac{\pi j (2m+1)}{2M}\right) \cos\left(\frac{\pi k (2n+1)}{2N}\right),   (8)

where:

w_0 = 1/2, and w_j = w_k = 1 for j, k > 0.

The expressions in (7) and (8) can be extended to higher-dimensional arrays by including an additional summation for each additional index, and making the appropriate modification to the numerical factors in (8). Truncating the transformed array corresponds to reducing the upper limits on the summations in the inverse transformation (8); in this case, the array x_{mn} is reconstructed with approximated values for its elements, but the number of elements in the array remains the same. In the case of an image, the effect of truncating the DCT is to lose some of the fine detail.

Figure 6 illustrates image compression using DCTs truncated to different sizes, using (as an example) a beam image collected during the quadrupole scan with 100 pC bunch charge. The original image has resolution (in pixels) M × N = 161 × 161. Truncating the DCT to 21 × 21 results in some loss of clarity, but the main features and some details can still be clearly seen. Truncation to 16 × 16 results in more significant loss of detail.
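A minimal sketch of this compression step, using SciPy's DCT routines, is given below (the 21 × 21 truncation follows the example in the text; the function and variable names are assumptions). Truncation keeps only the low-order modes; inverting the zero-padded truncated array gives the approximated image.

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress(image, keep=(21, 21)):
    """Type II DCT of a 2D image, truncated to the first keep[0] x keep[1] modes."""
    coeffs = dctn(image, type=2)
    return coeffs[:keep[0], :keep[1]]

def decompress(coeffs, shape):
    """Approximate reconstruction: zero-pad the truncated DCT and invert."""
    full = np.zeros(shape)
    full[:coeffs.shape[0], :coeffs.shape[1]] = coeffs
    return idctn(full, type=2)

image = np.random.rand(161, 161)                     # stand-in for a 161 x 161 beam image
approx = decompress(compress(image), image.shape)    # low-pass approximation of the image
```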
The training data for the neural network consists of some number of pairs of the DCTs of the sinograms (input) and corresponding phase space distributions (output). The neural network itself is implemented in Keras [34]. We use a rather straightforward architecture. Apart from the input and output layers, there are two hidden layers, defined as dense layers in Keras. To limit overtraining, each dense layer is followed by a dropout layer. We use a resolution of 19 points on each axis for the DCT of the 4D phase space (i.e. 19^4 voxels in total), and a resolution of 21 × 21 for the DCT of each 2D projection in the set of "images" forming the sinogram. In practice, these resolutions capture sufficient numbers of DCT modes to allow representation of the screen images and the 4D phase space with good resolution. Note that the size of the data for the 4D phase space using machine learning (19^4) is substantially smaller than the size used for the ART tomography reported in Section II (39^4). We have found that for the data collected in CLARA, increasing the numbers of DCT modes, either in the input sinograms or the reconstructed phase space, does not improve the quality of the results as judged by a comparison between the projections of the phase space at the observation point, and the original beam images (as shown, for example, in Fig. 11). In constructing the sinogram, we use phase space rotations corresponding to the phase advances in the calibrated model (see Fig. 2, bottom right plot), i.e. with 32 steps in the quadrupole scan. With these parameters, the neural network has an input layer with 32 × 21^2 nodes, and an output layer with 19^4 nodes. We use 1500 and 3000 nodes for the first and second hidden (dense) layers, respectively, with a dropout layer specified to set 20% of inputs (selected randomly) to zero for each dense layer during training. The tomography process using image compression and machine learning is illustrated schematically in Fig. 7.
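A minimal Keras sketch of a network with the stated layer sizes is shown below. The activation functions are not specified in the text, so the choices here (ReLU hidden layers, linear output) are assumptions, as are the variable names.

```python
from tensorflow import keras
from tensorflow.keras import layers

n_steps, img_modes, ps_modes = 32, 21, 19   # scan steps, DCT modes per image axis, modes per phase space axis

model = keras.Sequential([
    keras.Input(shape=(n_steps * img_modes**2,)),   # flattened DCTs of the sinogram (32 x 21^2 inputs)
    layers.Dense(1500, activation="relu"),          # first hidden (dense) layer
    layers.Dropout(0.2),                            # 20% dropout to limit overtraining
    layers.Dense(3000, activation="relu"),          # second hidden (dense) layer
    layers.Dropout(0.2),
    layers.Dense(ps_modes**4),                      # DCT coefficients of the 4D phase space (19^4 outputs)
])
model.summary()
```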
A total of 3000 sets of 4D phase space distributions and sinograms were generated as training data; 100 sets were reserved as validation sets for testing the performance of the trained network, and were not used in the training process itself. Training was carried out using the Adam optimization algorithm [35]. Training takes several minutes on a standard laptop PC. The training time is comparable to the time taken to process a single data set using ART; however, training only needs to be performed once, to produce a neural network that can (in principle) be applied to any data set collected in a quadrupole scan using given quadrupole strengths. The ART analysis would need to be performed separately for each data set.
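A corresponding training step might look like the following sketch; the loss function, number of epochs and batch size are not given in the text and are illustrative assumptions, and the random arrays stand in for the prepared training data.

```python
import numpy as np

# Illustrative stand-ins for the prepared training arrays:
# X: DCTs of the simulated sinograms, Y: DCTs of the corresponding 4D phase spaces.
X = np.random.rand(3000, 32 * 21**2).astype("float32")
Y = np.random.rand(3000, 19**4).astype("float32")

X_train, Y_train = X[:-100], Y[:-100]     # 100 sets reserved for validation
X_val, Y_val = X[-100:], Y[-100:]

model.compile(optimizer=keras.optimizers.Adam(), loss="mse")   # Adam as in the text; loss choice assumed
model.fit(X_train, Y_train, validation_data=(X_val, Y_val), epochs=50, batch_size=32)
```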
Two examples illustrating results from the trained network are shown in Fig. 8. The examples are selected at random from the validation data sets. Each row of images in the figure shows a different projection of a 4D phase space: in each example, the top row shows the projections from the original phase space, and the bottom row shows the projections from the phase space reconstructed by the neural network when provided with the (DCTs of the) corresponding sinograms. While there are clearly some differences between the original and the reconstructed phase spaces, the reconstruction is sufficiently similar to the original to provide a useful practical indication of the beam distribution in phase space.
To characterise further the reliability of the machine learning reconstruction of the phase space, we calculate the residuals between the original phase space density in the test data and the phase space density found from the sinograms using the trained neural network. The residuals are shown in Fig. 9, as histograms of ∆DCT/σ_DCT and ∆ρ/σ_ρ. Here, ∆DCT is the difference between a particular DCT coefficient predicted by the neural network, and the corresponding DCT coefficient in the phase space distribution used to generate the sinogram data provided as input to the network. σ_DCT is the standard deviation of the DCT coefficients. ∆ρ is the difference in the phase space density (at a particular element of 4D phase space) between the original distribution and the distribution found by the neural network, after performing an inverse DCT of the network output; and σ_ρ is the standard deviation of the phase space density. Figure 9 shows histograms of these quantities for 20 cases from the validation data sets. Typically, between 75% and 80% of phase space density values from the neural network are within 0.1σ_ρ of the true phase space density.
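The two residual metrics are straightforward to compute for a validation case; a short sketch (the stand-in arrays below are assumptions, standing in for the true and predicted coefficients of one test case):

```python
import numpy as np
from scipy.fft import idctn

# Stand-ins for one validation case (in practice these come from the test set and the network output)
dct_true = np.random.rand(19, 19, 19, 19)
dct_pred = dct_true + 0.01 * np.random.randn(19, 19, 19, 19)
rho_true = idctn(dct_true, type=2)
rho_pred = idctn(dct_pred, type=2)

delta_dct = (dct_pred - dct_true) / dct_true.std()     # corresponds to ΔDCT / σ_DCT
delta_rho = (rho_pred - rho_true) / rho_true.std()     # corresponds to Δρ / σ_ρ
frac_close = np.mean(np.abs(delta_rho) < 0.1)          # fraction of voxels within 0.1 σ_ρ
```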
B. Experimental results from tomography using machine learning
The trained neural network was applied to analysis of the quadrupole scan data collected on CLARA, described in Section II. The screen images from each step of the quadrupole scan were prepared in the same way as for the ART analysis, by cropping, and then scaling to transform to normalised phase space. The images were then compressed by constructing the DCTs, which were truncated to 21 modes on each axis. The DCTs were provided as input to the trained neural network, which provided the DCT of the 4D phase space distribution, with resolution 19 modes along each axis. Projections from the reconstructed 4D phase space distribution for 10 pC and 100 pC bunch charges are shown in Fig. 10.
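The inference step mirrors the preparation of the training data; a sketch is given below, reusing the compress function and model from the earlier snippets (the random array stands in for the cropped and normalised beam images, and array shapes are assumptions).

```python
import numpy as np
from scipy.fft import idctn

# Stand-in for the 32 cropped, normalised beam images (one per scan step)
images = np.random.rand(32, 161, 161)

dcts = np.stack([compress(img, keep=(21, 21)) for img in images])   # truncate to 21 modes per axis
prediction = model.predict(dcts.reshape(1, -1))                      # DCT coefficients of the 4D phase space
phase_space_dct = prediction.reshape(19, 19, 19, 19)
rho = idctn(phase_space_dct, type=2)                                 # approximate phase space density
```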
In Section II, we validated the ART reconstruction of the 4D phase space distribution by comparing the projection of the distribution onto x-y co-ordinate space at the observation point with the observed beam images at different steps of the quadrupole scan. We can make similar comparisons to validate the 4D phase space distribution reconstructed using the neural network: some examples (for the same steps as shown in Fig. 4) are shown in Fig. 11. Once again, we see generally good agreement between the projection of the 4D phase space distribution and the observed images, in both the 10 pC and the 100 pC cases. Comparing with projections from the phase space reconstructed using ART in Fig. 4, the machine learning projections do not all have the same clarity, in terms of the finer details in some of the images. It should be remembered, however, that the ART tomography uses beam images with resolution 39×39 pixels, to reconstruct the 4D phase space distribution with a resolution of 39 pixels on each axis. The machine learning technique uses beam images and 4D phase space in a compressed form: the beam images are represented by 21 DCT modes on each axis, and the phase space is represented by 19 DCT modes on each axis. Although this is sufficient to capture a significant amount of detail, the truncation of the DCTs means that the compression is not lossless. Given the compression ratio, the machine learning method retains a reasonable level of detail in the phase space distribution.
Comparisons between the observed and reconstructed beam sizes are shown in Fig. 12: the results here can be compared with those in Fig. 5, which shows the beam sizes reconstructed using ART. While there are some differences in detail in the quality of the match between the beam sizes expected from the reconstructed phase space and the beam sizes observed during the quadrupole scan, both the ART and the machine learning techniques show similar performance in describing the beam behaviour.
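The comparison of beam sizes can be reproduced from a reconstructed distribution by propagating its covariance matrix from the reconstruction point to the observation point for each scan step; a minimal sketch is given below, assuming the 4 × 4 transfer matrices for the scan steps are available from the lattice model.

```python
import numpy as np

def predicted_beam_sizes(sigma, transfer_matrices):
    """Propagate a 4x4 covariance matrix (x, x', y, y') through the transfer matrix
    for each scan step and return the rms beam sizes at the observation point."""
    sizes = []
    for M in transfer_matrices:
        s = M @ sigma @ M.T                                   # covariance matrix at the observation point
        sizes.append((np.sqrt(s[0, 0]), np.sqrt(s[2, 2])))    # sigma_x, sigma_y
    return np.array(sizes)
```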
IV. CONCLUSIONS AND POSSIBLE FURTHER DEVELOPMENTS
The machine learning technique we have described in this paper uses relatively simple methods for reconstructing the 4D phase space. Nevertheless, this approach appears capable of producing useful results, as shown by the comparison between projections onto x-y co-ordinate space at the observation point for different quadrupole strengths, and beam images collected over the course of a quadrupole scan. Values obtained for parameters describing the distribution (emittances and lattice functions) are consistent with those obtained using a conventional tomography technique. Data collection and analysis were planned using a design model of the machine; despite significant differences between the design model and the actual machine conditions during collection of experimental data, results from both the ART and the machine learning techniques provide useful information on the beam properties in CLARA.
Use of image compression (in the present case, using discrete cosine transforms) allows reduction of the size of the data sets that need to be processed, in particular, for representing the 4D phase space. Machine learning allows direct tomographic analysis of compressed beam images and phase space representations, without the additional complications or difficulties that would be encountered in attempting to apply conventional tomography techniques to compressed images.
Inspection of projections of the 4D phase space onto various planes (in particular, comparison of Fig. 3 with Fig. 10) suggests that the machine learning technique is capable of producing a representation of the 4D phase space distribution that appears clearer than that obtained by the conventional tomography algorithm. On the other hand, the beam images obtained by projecting the 4D phase space distribution onto co-ordinate space at the observation point have slightly higher fidelity in the case of the conventional tomography technique. Nevertheless, the consistency in the results of the two methods, and the fact that the neural network produces a 4D phase space distribution as soon as the quadrupole scan images are available (compared to a potentially lengthy computation time required by the conventional tomography technique), suggest that the machine learning approach could have some practical value.
There are a number of ways in which the machine learning approach could be further developed. With an improved understanding of the operational conditions of CLARA, some optimisation would be possible in terms of the quadrupole strengths (and number of steps) used in the quadrupole scan. More sophisticated neural network architectures, or use of more sophisticated machine learning tools generally, could lead to a better reconstruction of the 4D phase space distribution from a given set of sinograms. There may be some benefits in further increasing the number of sets of training data. An indication of the quality to be expected in the reconstruction can be obtained using simulated data, for example by calculating the residuals as shown in Fig. 9. Although the phase space distributions in the training data we used for the neural network had very different features from the phase space distribution in the real machine, the trained network was still capable of reconstructing a phase space distribution that provided a good description of beam behaviour. It is possible, however, that using training data more closely resembling the real beam (once some initial characterisation of the beam has been obtained) could lead to better results.
Discrete cosine transforms may not be the optimal way to represent images and phase space distributions in compressed form for the application described here. A DCT essentially represents a multidimensional array as a set of orthogonal modes, with each mode described by a cosine function. This provides a convenient general purpose approach, but alternative basis functions may allow more accurate representation of beam images and phase space distributions with fewer modes. It may be possible, for example, to take advantage of properties generally expected of the beam (such as approximate symmetries) to construct a more appropriate basis. The scope for further development is rather wide, and while the results shown here are encouraging and demonstrate the value of machine learning for tomographic reconstruction in principle, more extensive studies would be required to understand the full potential of the technique.
ACKNOWLEDGMENTS
We would like to thank our colleagues in STFC/ASTeC at Daresbury Laboratory for help and support with various aspects of the simulation and experimental studies of CLARA. In particular, we would like to thank Amy Pollard for useful discussions and advice on machine learning.
This work was supported by the Science and Technology Facilities Council, UK, through a grant to the Cockcroft Institute.

[Caption fragment (Fig. 12): as for Fig. 5, but with the open circles and line based on the emittances and covariance matrix from the 4D phase space reconstructed by machine learning (Table II).] | 2022-09-05T06:44:00.660Z | 2022-09-02T00:00:00.000 | {
"year": 2022,
"sha1": "7ec51c63a8e252eacf4a03409d09753f270cf2d2",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "7ec51c63a8e252eacf4a03409d09753f270cf2d2",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Physics"
]
} |
247777399 | pes2o/s2orc | v3-fos-license | R&D during public health emergencies: the value(s) of trust, governance and collaboration
In January 2021, Dr Tedros Adhanom Ghebreyesus, director–general of the WHO, warned that the world was ‘on the brink of a catastrophic moral failure [that] will be paid with lives and livelihoods in the world’s poorest countries’. We are now past the brink. Many high-income countries have vaccinated their populations (which, in some cases, includes third and even fourth doses) and are loosening public health and social measures, while low-income and middle-income countries are struggling to secure enough supply of vaccines to administer first doses. While injustices abound in the deployment and allocation of COVID-19 vaccines, therapies and diagnostics, an area that has hitherto received inadequate ethical scrutiny concerns the upstream structures and mechanisms that govern and facilitate the research and development (R&D) associated with these novel therapies, vaccines and diagnostics. Much can be learnt by looking to past experiences with the rapid deployment of R&D in the context of public health emergencies. Yet, much of the ‘learning’ from past epidemics and outbreaks has largely focused on technical or technological innovations and overlooked the essential role of important normative developments; namely, the importance of fostering multiple levels of trust, strong and fair governance, and broad research collaborations. In this paper, we argue that normative lessons pertaining to the conduct of R&D during the 2014–2016 Ebola epidemic in West Africa provide important insights for how R&D ought to proceed to combat the current COVID-19 pandemic and future infectious disease threats.
INTRODUCTION
In January 2021, Dr Tedros Adhanom Ghebreyesus, director-general of the WHO, began his address to WHO's Executive Board with the bold statement that the world was 'on the brink of a catastrophic moral failure [that] will be paid with lives and livelihoods in the world's poorest countries'. 1 We are now past the brink. Many high-income countries (HICs) have vaccinated their populations (which, in some cases, includes third and even fourth doses) and are loosening public health and social measures, while low-income and middle-income countries (LMICs) are struggling to secure enough supply of vaccines to administer first doses. While injustices abound in the deployment and allocation of COVID-19 vaccines, therapies and diagnostics, an area that has hitherto received comparatively inadequate ethical scrutiny concerns the upstream structures and mechanisms that govern and facilitate the research and development (R&D) leading to the development of such therapies, vaccines and diagnostics.
Summary box

⇒ The global response to the COVID-19 pandemic is characterised by global vaccine inequity. While the grossly inequitable global allocation of vaccines and other interventions like therapies and diagnostics deserves scrutiny, so too do the upstream structures and mechanisms of R&D leading to the development of such interventions.
⇒ We can improve the R&D response to COVID-19 and future infectious disease threats by learning from past experiences. Yet, much of the 'learning' from past epidemics and outbreaks has largely focused on technical innovations, overlooking the essential role of normative innovations, namely, the importance of fostering multiple levels of trust, building strong and fair governance, and cultivating broad research collaborations.
⇒ Cultivating these normative innovations to R&D from past epidemics and outbreaks, including the 2014-2016 Ebola epidemic in West Africa, is likely to play a key role in building trust in therapeutics, vaccines and diagnostics for COVID-19, particularly if (or when) high-income countries turn their attention from their domestic needs to supporting R&D efforts in low-income and middle-income countries.

R&D spans many diverse activities. The focus taken in our paper draws on the strategy outlined in WHO's R&D Blueprint for COVID-19 and the R&D activities addressed therein, which encompass a diverse but coordinated set of activities aiming to accelerate R&D for vaccines, therapies and diagnostics, undertaken by stakeholders such as scientists, research institutions, manufacturers, governments and regulatory bodies. 2 3 These include traditional R&D activities like preclinical research, clinical research and manufacturing, but also activities like global research platforms, research priority setting, community engagement, data sharing, funding, and associated regulatory and ethical pathways. 4 In addition, given our focus on the ethics of R&D, we also consider the broader social and political contexts within which R&D occurs, which includes the impacts that R&D activities may have for those contexts. Many of the pressing R&D challenges faced during the COVID-19 pandemic are not new. Some were present during the 2014-2016 Ebola virus disease outbreak in West Africa. [4][5][6] Controlling the Ebola epidemic required novel approaches to R&D-largely with respect to the speed and degree of communication required-to rapidly study and produce novel therapeutics and prophylactics to complement the public health measures deployed to curb the spread of disease. 4 7 From collaboration between countries to efforts to encourage trust in local and international leaders, many innovations in the role of human relationships in R&D were key in curbing the Ebola epidemic. 8 9 The R&D response during the West African Ebola epidemic demonstrated the speed with which therapeutics, vaccines, diagnostics and related R&D architecture can be developed to address outbreaks. Much can be learnt from these experiences and others for the world's response to COVID-19 and future public health emergencies. Indeed, those evaluating the global response to the Ebola epidemic have subsequently urged for financial investments to jumpstart research innovations, facilitate manufacturing capacity and enhance information systems. 9 However, as these examples illustrate, many of the 'lessons learnt' and associated recommendations have largely focused on technical or technological innovations informed by the West African Ebola epidemic and overlooked the essential role of normative (eg, ethical, relating to a value judgement) developments pertaining to R&D during public health emergencies, 5 6 namely, the importance of fostering multiple levels of trust, building strong and fair governance, and cultivating broad research collaborations. We argue that these normative lessons provide important insights for how R&D ought to proceed to combat the current COVID-19 pandemic and future infectious disease threats.
Given the relevance of social and political contexts for the normative evaluation of R&D conducted during public health emergencies, we begin by briefly addressing key elements of the social and political context for the response to the West African Ebola epidemic. We then highlight the normative relevance of trust, governance and collaboration in the R&D response to that epidemic. Finally, we discuss how successes and setbacks in R&D related to the West African Ebola epidemic should inform R&D responses to the COVID-19 pandemic and future infectious disease threats.
SOCIAL AND POLITICAL CONTEXT OF THE WEST AFRICAN EBOLA VIRUS DISEASE EPIDEMIC
The successes and setbacks in R&D during the Ebola epidemic must be situated within the social and political contexts of Liberia, Guinea and Sierra Leone to understand how they relate to the central themes of trust, governance and collaborative partnerships explored in this paper. First, as others have noted, many of the interactions and instances of initial hostility towards international healthcare workers and foreign aid experienced in some cases were due to a legacy of colonialism in West Africa. 2 For instance, in each of the countries that were primarily affected by the Ebola epidemic, aid and research (which are often difficult to disentangle) were largely directed through or governed by national institutions with direct ties to former colonial powers: France intervened in Guinea, the UK in Sierra Leone and American organisations in Liberia. 2 These colonial legacies shaped how aid was initially offered and distributed in West Africa, as well as how research was designed and implemented. 10 Underlying historical distrust of Western involvement led in some cases to local communities hesitating or refusing to comply with directions, thereby aiding the spread of the disease. 2 Initial Ebola response strategies-including those for designing and implementing R&D-were not readily accepted by communities across the three countries and were erroneously framed as 'resistance' by media in HICs. 3 While some Ebola-related initiatives were able to overcome this entrenched distrust, acknowledging the historical, colonial injustices visited on many LMICs by HICs is an important aspect of creating a strategy for R&D in response to a global public health emergency. This is particularly critical to keep in mind as much of the Global North vaccinates their populations while planning to 'aid' the Global South once the pandemic is 'over' in their home countries, and while new and ongoing clinical trials for novel COVID-19 vaccines and therapies are conducted in LMIC settings. In the following sections of this paper, we elaborate on why trust, governance and collaboration played vital roles in R&D during the West African Ebola epidemic, and how these normative developments for the conduct of R&D are crucial to COVID-19 and other disease threats.
TRUST

As a result of the colonial legacy in West Africa, fostering and building trust was difficult but paramount to the success of international involvement in R&D during the Ebola epidemic. While Guinea, Liberia and Sierra Leone each responded to the epidemic differently, trust-and a lack thereof-played a significant role in each case.
Trust was inhibited or otherwise eroded, for example, by approaches to data collection, storage and use during the West African Ebola epidemic. 11 The generation and sharing of data are crucial for R&D, but their use-and by whom-is complicated and imbued with ethical considerations, and so must be carefully navigated for data generation and sharing to be successful. 12 13 The Ebola epidemic created an avenue for data exploitation and hoarding, given the extensive exportation of biological samples and data from West Africa to Europe and North America. 4 To date, these remain largely inaccessible to researchers and governments of Liberia, Guinea and Sierra Leone. 5 Responsibility in the collection, storage, use and sharing of data was a key determinant of the successes or failures of the Ebola emergency response and related R&D. 14 In order to learn from-rather than repeat-the poor data sharing examples observed during the Ebola epidemic, policies for sharing high-quality data that preserve and promote trust must be defended to enhance the quality and integrity of global COVID-19 R&D. 6 For instance, leaders in Sierra Leone expressed confusion over healthcare workers' need to take a blood sample from a woman who was dying from Ebola. 7 The team that collected the sample was not treating the woman, nor was the village leader who expressed confusion made aware of why the blood sample was needed to confirm that the woman had Ebola. 7 The already low level of trust was exacerbated by the spare-no-expense approach to controlling the Ebola outbreak, which was in stark contrast to the hands-off approach the international community generally employs for other diseases endemic to Africa. 7 People living in communities affected by Ebola distrusted the large number of R&D initiatives implemented to control the Ebola epidemic, while no such action had been taken for other diseases, and some wondered if interventions such as taking blood from Ebola patients were part of a conspiracy to sell blood to international buyers. 7 Families were hesitant to report cases of Ebola in their households because they distrusted the healthcare system, available interventions and ongoing R&D projects. 7 To foster and build trust, the WHO has proposed global norms for public health emergencies that should be incorporated into R&D strategies for COVID-19 and future infectious disease threats, namely, timely and transparent sharing of data and results during public health emergencies as a global norm; timely publications of public disclosure information of relevance to public health emergencies; demonstrated responsibility by researchers for accuracy of shared data; data sharing as a default practice; and incentivising data sharing and enhancing data management and analysis expertise. 8 Each of these strategies can promote trust, which can facilitate more successful R&D initiatives, largely because trust promotes collaboration, an important factor discussed later. Trust is a reciprocal process; two or more parties must engage in good faith in order to forge a trustworthy relationship that can further these aims in the context of R&D. The strategies outlined by the WHO ought to be considered and employed by all stakeholders involved in COVID-19 R&D and particularly those working with LMICs. These stakeholders include (but are not limited to) researchers in both HICs and LMICs, multilateral organisations, non-governmental organisations (NGOs) and communities at large.
GOOD GOVERNANCE
Strong governance of R&D, especially during a public health emergency, often consists of the formation, coordination, and implementation of policies, guidelines and arrangements for participation, access to information and decision-making for the various R&D stakeholders operating within a given context. 15 Two instruments of normative governance were especially important during the Ebola epidemic: regulations and rapid ethics review. This was in addition to the involvement of relevant international bodies with normative functions, such as the WHO (though, following the conclusion of the Ebola epidemic in 2016, the WHO was criticised for being ill-prepared to effectively lead the response to an epidemic or pandemic). 9 A number of panels subsequently published reports critical of the WHO's handling of the Ebola epidemic, calling for widespread reforms 16; however, the WHO's own advisory group on the Organisation's emergency reform did not endorse some of these major changes. 16 Other international governments were criticised for their interventions having more to do with protecting international interests than helping those who were actually sick at the time. 17 This phenomenon has been referred to as the 'pharmaceuticalisation' of global health governance strategies; in the context of Ebola, this was critiqued as the interventions being approved at the time were seen as unlikely to be useful in curbing the epidemic. 17 This highlights how instruments of normative governance (eg, research oversight and ethics review) ought to be guided by a principle of subsidiarity, which is itself predicated on efforts to build local capacity. 18 As others have noted, this requires that research teams actively engage with affected communities while planning research to determine suitable trial designs that best reflect normative requirements. 11 In the context of the outbreak, the governance of ethics review involved input and oversight from a number of different organisations, including the WHO's advisory committee on ethics, along with local organisations on the ground in affected countries. Ultimately, many of the successes and failures in the response to the West African Ebola epidemic were a result of speed (or a lack thereof). Initially, trust levels were low in affected communities, which made it difficult to implement quarantine measures. 7 Once that trust was further developed, it became easier for local governance measures to be implemented, such as the introduction of community leaders, who had more personal rapport with people living in small towns away from centralised governments. 2 A similar situation unfolded on the ethics approval side of governance. Groups such as the Médecins sans Frontières (MSF) Ethics Review Board committed themselves to rapid project reviews even for complicated interventions, illustrating that rapid ethics response is possible (though as noted by Schopper and colleagues, future emergency ethics reviews must be completed faster than those completed during the Ebola epidemic). 16 19

COLLABORATIVE PARTNERSHIPS

Collaboration was a third normatively crucial factor in both international and more local or regional settings during the Ebola epidemic. As stated previously, different HICs were directly engaged with countries with whom they have a colonial history: France intervened in Guinea; the UK in Sierra Leone; and American organisations provided initial aid in Liberia. 2 However, such interactions between HICs and LMICs were not always 'true' collaborations as they tended to prioritise the interests of HICs rather than those of the affected countries. 11 The interactions embodied colonial legacies in which the balance of power tilted toward HICs and in which industrialised nations dictated the nature of engagement.
Several HICs framed their response in the context of their domestic agendas and prioritised effort in securing their national borders ahead of sending healthcare workers to West Africa, with their engagements underpinned by selective historical alliances. 11 Collaborative R&D partnerships were therefore being established or cultivated in a context where HICs were in some instances working for their own good. For instance, Nohrstedt and Baekkeskov identified five main political motivations that shaped HICs' decisions to deploy healthcare workers in Liberia, Guinea and Sierra Leone, including threats to a foreign country's own national security due to epidemics abroad, interdependence on medical and other resources, the presence/activity of international organisations and networks, domestic priority setting, and the influence of national institutions in intervening countries. 10 This pernicious pattern of HIC-LMIC interaction is unfortunately familiar, but not every collaboration of this sort was forged to protect the interests of HICs. Notable among these was a unique data-related collaboration between Sierra Leone and the US Centers for Disease Control and Prevention (CDC). The Sierra Leone Ministry of Health - the body that owned the data collected in the country - partnered with the CDC primarily to consolidate Ebola data in order to share the locations of loved ones' graves with surviving family members. 11 20 This counterexample illustrates an instance where the decision to trust other groups, form good governance practices, and collaborate effectively led to a positive experience with data sharing.
LESSONS FOR COVID-19 AND FUTURE PUBLIC HEALTH EMERGENCIES
Experiences with past outbreaks, like the 2014-2016 Ebola epidemic, have led to significant technical innovations in the way in which we approach R&D for vaccines, therapeutics, and diagnostics in the context of public health emergencies. It is critical that we also learn normative lessons from our experiences with R&D during past public health emergencies. While scientific advances contribute valuable lessons to how we can better combat future public health emergencies, normative lessons show us how to better manage the human elements that are intrinsic to emergency management, and which facilitate or otherwise pave the way for the success of technical innovations. These lessons can inform individuals, groups, organisations and countries about how to best act in the face of a crisis and how to interact with other stakeholders. Failure to consider these normative aspects of R&D can lead to a failure to enact lasting change, both within the R&D space and in communities affected by health emergencies. Decision-makers in HICs must heed the many lessons learnt during the West African Ebola epidemic as efforts to vaccinate the global population against COVID-19 succeed in HICs but flounder in LMICs. Studying the successes of certain relationships related to R&D during the Ebola epidemic and the conditions that led to their success is important, particularly where the inequities surrounding current vaccination efforts are concerned. Table 1 summarises these comparisons.
Considering the challenges faced during the West African Ebola epidemic response, the global approach to curbing the COVID-19 pandemic must involve the development of trust on micro, meso and macro levels. This would involve actors including, but not limited to, local and national politicians, organisations working in LMICs on COVID-19 R&D, and healthcare workers whose work may bridge both patient care and R&D projects. The development of a framework for R&D that addresses the importance of community engagement and transparency will play a key role in building trust in therapeutics, vaccines and diagnostics for COVID-19, particularly if (or when) HICs turn their attention to supporting R&D efforts in LMICs. This entails the involvement of local organisations and leadership by engaging health-related volunteer groups, such as those present in Sierra Leone during the Ebola epidemic. 13 Volunteers in the healthcare sector liaised between healthcare providers and the general public, helped set up clinic and testing sites, and dispelled myths the public held about healthcare providers. 12 21 Local and national groups have contextual knowledge about their communities' health that must be acknowledged, respected and funded. Large international foundations have come under scrutiny for their decisions regarding donations and overall involvement in foreign aid during epidemics such as Ebola. 22 During the Ebola epidemic, there was no clear or robust framework for ensuring accountability of independent agencies and NGOs such as the Bill and Melinda Gates Foundation (BMGF) and MSF for Ebola R&D initiatives. 23 24 An accountability framework for R&D during global health emergencies is thus crucial to ensure all major stakeholders can be held to account and that more equitable outcomes from R&D are produced. Upholding a global health system accountability strategy is crucial for the success of COVID-19 R&D initiatives. The implication for R&D during the COVID-19 pandemic is that large organisations ought to consider allotting their financial contributions to local-level and national-level groups already working on the ground in communities, whether in LMICs or HICs, as opposed to establishing parallel R&D initiatives that may end up competing for the limited local health resources. As was observed during the Ebola epidemic, organisations such as the BMGF sought to fund the distribution of supplies, namely, through donations to United Nations agencies, along with 'private and public sector partners to accelerate the development of therapies, vaccines, and diagnostics'. 25 Global philanthropic organisations should avoid allocating resources to large international groups and NGOs and instead channel them to domestic groups and institutions that have already built strong bonds with local communities in LMICs hard hit by the COVID-19 pandemic.
A global pandemic naturally requires a global response. The WHO's Access to COVID-19 Tools Accelerator (ACT-A) initiative has emerged as an international initiative that has significant potential to bring crucial vaccines, therapeutics and diagnostics to LMICs. 26 While there are reasons to be hopeful that agreements via the ACT-A give LMICs a seat at global health policy tables, there is still reason for concern. Inclusion does not necessarily entail meaningful involvement, and it is possible that LMICs who have signed on to ACT Accelerator mechanisms, like COVAX, may hold little power as compared with the HICs in the development and implementation of policies surrounding the development of COVID-19 vaccines, diagnostics and therapeutics. 27 The goal of COVAX, for instance, is to ensure equitable access to vaccines globally, so that self-financed and funded countries can access safe and effective vaccines. 21 However, this egalitarian, collaborative approach to the distribution of COVID-19 vaccines can be compromised by funding shortages or offers for additional support of COVAX at an additional cost for the programme. 28 This is pertinent as there is precedent in global health collaboration where LMICs have been largely included without being equally involved. For instance, global health initiatives (GHIs) in Africa were introduced to align and harmonise health interventions by governments and development partners. 29 Since their introduction, however, GHIs have largely operated independently of the governments and bypassed country systems. 14 Most importantly, GHIs often do not align with national strategic plans, and their specific earmarked funding has been used to impose restrictions on countries' health development priorities. 14 In the aftermath of the 2014-2016 Ebola epidemic, international efforts were made to strengthen global outbreak response systems, leading to the establishment of at least nine agencies, including Africa CDC and the Coalition for Epidemic Preparedness Innovations. 16 30 However, these initiatives were laden with significant disparities in the level of ownership granted to HICs as compared with those of LMICs; only three of nine initiatives reviewed significantly involved LMICs, while the others were largely sponsored and controlled by HICs. 16

Table 1

Trust
- Positive effect on R&D during or following the West African Ebola epidemic: More timely and open data sharing was suggested as a way to build trust between researchers and communities.
- Applicability to COVID-19 R&D: Researchers should engage with local leadership in order to build trust with affected communities, especially as COVID-19 is brought under control in HICs while the pandemic continues to rage in LMICs.

Governance
- Positive effect on R&D during or following the West African Ebola epidemic: Policies were implemented to streamline ethics review for interventions relevant to R&D that were beneficial to curbing the Ebola epidemic.
- Applicability to COVID-19 R&D: As HICs rein in their domestic COVID-19 case numbers, it is vital that international governments recognise that while the pandemic may be under control in HICs, the securitisation of their interests is insufficient to curb the pandemic.

Collaboration
- Collaboration is required in order to conduct research that equitably engages with the affected communities.
- Positive effect on R&D during or following the West African Ebola epidemic: Collaborative efforts between Sierra Leone and the USA facilitated the dissemination of data on deceased loved ones to surviving family members.
- Applicability to COVID-19 R&D: In order to amend and improve on the lack of collaboration between HICs and LMICs early in the pandemic, researchers must initiate collaborations that actively engage individuals who have local expertise regarding their own communities' needs.

CONCLUSIONS

Key normative lessons of the importance of fostering multiple levels of trust, building strong and fair governance, and cultivating broad research collaborations gleaned from R&D efforts during the West African Ebola epidemic should inform the R&D response to the COVID-19 pandemic, with particular emphasis on mitigating the growing disparities and inequities between HICs and LMICs. It is essential to build trust with local communities and researchers in affected countries. Legitimate collaborations between HICs and LMICs should emphasise justice and equity and should prioritise the needs of populations in LMICs. Crucially, it should be clear that local communities in LMICs have expertise and extant relationships that should be acknowledged, respected and included in R&D efforts related to the pandemic. Efforts to operationalise these normative lessons for R&D ought to be guided by a principle of subsidiarity, which is predicated on efforts to build local capacity for research collaboration and governance. The issues examined in this paper can help build the foundation for more efficient and equitable R&D approaches to the public health emergencies that will inevitably surface in the future. | 2022-03-30T06:17:47.475Z | 2022-03-01T00:00:00.000 | {
"year": 2022,
"sha1": "d791d7de18cffbfb7026b6b0698ba3808f0a3c7d",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "dc60b64ee2a24e6bc9cddc7fa776bb4893e2227f",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
27584068 | pes2o/s2orc | v3-fos-license | In This Issue
Olympic rowing teams have strokes. The rest of us have “ischemic events”. The brain's response over time to a brief blockage of circulation is the subject of interest to Costain et al. In particular, they are studying the proteomic response of synaptosomes to coronary occlusion. Using ICAT labeling and nanoLC‐MS/MS, they compare proteomes of synaptosomes at 0, 3, 6, and 20 h of reperfusion after 1 h of focal ischemia. Not too surprisingly, most of the differentially regulated proteins were found to come from mitochondria. Total RNA isolated from the synaptosomes and quantitated by semi‐qPCR demonstrated that the response was not transcriptionally regulated. Gene ontology was also examined. A significant regulatory system that was disrupted was the Psap conversion into four saposin activator proteins.
In This Issue
Editorial Comments by Nicole J LeBlanc 1 and Luana Marques 1,2

Despite ongoing research on the aetiology, phenomenology and treatment of anxiety disorders, these conditions continue to afflict myriads of individuals worldwide. An estimated 11.6% of the global population meets criteria for an anxiety disorder each year, 1 and individuals with anxiety disorders experience both functional impairment 2 and increased risk for physical health problems. 3 In addition to the personal suffering associated with anxiety disorders, these conditions lead to considerable costs for society, including medical expenditures and lost work productivity. 4 Moreover, although efficacious interventions for anxiety disorders are available, many individuals do not respond to these treatments 5 or struggle to access evidence-based care. 6 Thus, much work remains to be done to reduce the global burden of anxiety disorders.
In response to this challenge, this special issue highlights emerging directions in the field of anxiety disorders research. Dr Jerrold Rosenbaum, emeritus Psychiatrist-in-Chief at the Massachusetts General Hospital, provides an introduction to the issue with a commentary on current challenges and opportunities in the field of anxiety disorders research. 7 The articles that follow highlight a variety of approaches to reduce the suffering associated with anxiety disorders, including studies exploring the aetiology of anxiety and those examining novel methods for prevention, early intervention and treatment.
First, several authors present articles that explore biological and psychological factors associated with the development of anxiety disorders. For example, Clauss presents a review of studies suggesting that the bed nucleus of the stria terminalis (BNST) may serve as the neural substrate for behavioral inhibition, which is a risk factor for anxiety disorders. 8 In support of this hypothesis, she reviews research showing an association between the experience of uncertain threat and BNST activation in both animal models and humans.
Robinaugh, Ward and colleagues also explore the aetiology of anxiety disorders in their systematic review of studies examining response to biological challenge paradigms as a predictor of panic attacks and panic disorder. 9 Specifically, they note that cognitive-behavioural theories of panic disorder posit a causal link between catastrophic misinterpretations of bodily sensations and the experience of panic attacks. Thus, these theories predict that individuals who experience anxiety in response to physiological arousal will be at increased risk for panic attacks and the development of panic disorder. The authors tested this prediction by conducting a systematic review and meta-analysis of published studies examining participants' response to biological challenge paradigms (eg, CO2 inhalation) as a predictor of future panic attacks and panic disorder. They found a small but significant effect for the prediction of panic attacks and no effect for the prediction of panic disorder. However, they note a paucity of studies on this topic, indicating a need for more research to test causal models of anxiety disorders.
In addition to studies examining the aetiology of anxiety, several authors present research exploring novel prevention and early intervention strategies. For example, Bui and colleagues report the results of a study that evaluated intranasal oxytocin as a potential secondary prevention intervention for PTSD. 10 Specifically, they used a classical conditioning paradigm to investigate whether intranasal oxytocin administered following fear conditioning would lead to reduced fear consolidation in healthy individuals. Their results did not support the efficacy of intranasal oxytocin for reducing fear acquisition, which suggest that further innovation is needed to develop prevention methods for PTSD.
In another article, Hirshfeld-Becker and colleagues report on two case studies that tested the feasibility and potential efficacy of family-based cognitive-behavioral therapy (CBT) for anxiety in toddlers. 11 The authors note that anxiety in young children tends to be persistent and interferes with cognitive and social development. They therefore propose that early intervention with anxious toddlers could shift patients' mental health trajectories across childhood. As an initial test of this hypothesis, the authors adapted family-based CBT for use with 2-and 3-year-olds and administered the treatment to two patients. Their results suggest that the treatment was feasible, acceptable and shows promise for reducing patients' symptoms. The next step will be to test the treatment in a controlled trial.
Finally, several authors report on research aimed at improving existing treatments for anxiety disorders and increasing access to evidence-based care. For example, Robinaugh, Brown and colleagues explore the use of ecological momentary assessment (EMA) data to personalise CBT for panic disorder. 12 Specifically, they demonstrate the use of EMA data to derive indices of symptom variability at different time-points in treatment and discuss how these indices could inform the clinical picture of individual patients. In addition, they demonstrate the use of a vector autoregressive modelling approach to identify patient-specific relationships among panic disorder symptoms. These models could ultimately be used to guide the selection of specific CBT interventions in treatment.
Youn and colleagues address the topic of patient engagement in CBT for anxiety disorders in a study investigating patient-level predictors of engagement in cognitive processing therapy for PTSD. 13 Their results indicated that individuals receiving treatment in Spanish (relative to English) were more likely to require the repetition of treatment content and were more likely to be impacted by logistic and financial barriers to treatment. The authors discuss the importance of attending to these patient characteristics when delivering CBT for PTSD, as doing so could improve treatment engagement and potentially response.
Finally, Baker and Simon present the results of a study examining the psychometric properties of the Anxiety Symptom Questionnaire (ASQ), which is a potential new screening instrument for anxiety disorders. 14 The ASQ had good internal consistency and test-retest reliability in this study, and improved the detection of patients with anxiety disorders above and beyond a clinician-rated measure. Thus, the ASQ shows promise as a tool to identify individuals who could benefit from evidence-based treatment for anxiety.
As these articles demonstrate, research on anxiety disorders is thriving and experts are working continuously to develop and test novel research questions regarding the aetiology, prevention and treatment of anxiety. Collaboration between research groups will be essential for future progress in this area-a point illustrated by Barako and colleagues in their article describing the partnership between Massachusetts General Hospital and the Shanghai Mental Health Center. 15 Continued teamwork and innovation will be paramount to reduce the global burden of anxiety disorders. | 2018-02-06T14:03:03.054Z | 2019-11-01T00:00:00.000 | {
"year": 2019,
"sha1": "8af3c1f8c66b7d0fcf1795dc0d3a9f577d0859e6",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "45e149e7b3aa215f3476eb62e5fe185f7934f079",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Medicine"
]
} |
270174101 | pes2o/s2orc | v3-fos-license | Unveiling an Uncommon Scenario of Co-occurrence of Multiple Odontomes With Impacted Maxillary Lateral Incisor and Canine in a 17-Year-Old Girl: A Unique and Rare Case Report
This case report presents the enigma of multiple odontomes with overretained deciduous teeth leading to the impaction of permanent successors (22, 23) in an abnormal position in a 17-year-old female patient who reported the chief complaint of malaligned teeth. Permanent maxillary canines and lateral incisors are the most common teeth to face the brunt of impaction due to a wide range of etiological factors. It is imperative for a clinician to diagnose cases at an early stage to accelerate the rate of eruption of such teeth. This is especially important in cases where initially the etiology seems to be simple but on careful and judicious evaluation of the case, numerous other etiologies are found to act together in the underlying pathology. This case discusses how the presence of multiple odontomes with delayed exfoliation of deciduous teeth leads to the impaction of a permanent successor. Understanding the underlying pathology is seemingly important to devise intricate treatment mechanics for traction of impacted teeth without taxing anchorage from dental units and taking cognizance of the amount of alveolar bone loss post-removal of multiple odontomes. The appropriate thickness of alveolar bone scaffolding is required for the canine to extrude down, with an adequate band of marginal gingiva encircling the cement-enamel junction of the impacted canine, preventing any kind of fenestration and dehiscence. Hence, meticulous care was taken during surgical exposure and removal of odontomes to preserve an adequate labial cortical plate intact for traction. These excavated tooth-like structures were later subjected to histopathological evaluation, which confirmed the diagnosis of compound odontomes.
Introduction
Maxillary canines, being the cornerstone of the arch, play an important role in function and aesthetics. Interestingly, canine impaction is more than twice as common in the maxilla compared to the mandible [1]. Overretained deciduous teeth impede the path of eruption of permanent teeth, leading to various types of malocclusions. The autonomous eruption of a permanent successor after the removal of overretained deciduous teeth depends on its proximity to the root of fully erupted adjacent teeth in the arch. Horizontal overlap of the incisor root governs the path of eruption of the impacted canine following the removal of overretained deciduous teeth [1]. The maxillary canine has a tortuous path of eruption in the oral cavity, while the root of the maxillary lateral incisor serves as a guide for the eruption of the permanent canine [2]. Maxillary canine impaction is commonly encountered when overretained deciduous teeth block its path of eruption. In the aforementioned case, impaction of the maxillary lateral incisor further impeded the path of eruption of the permanent canine. Odontomas are hamartomas of disorganized tooth-forming tissues and account for 22% of the odontogenic tumors [3]. The overlying odontomes in the region of 22-23 further accentuated the complexity of the scenario.
The co-occurrence of multiple pathologies together often puts the operator in a diagnostic and therapeutic dilemma.The current case presents one such unique case, representing a clinico-therapeutic conundrum of the co-occurrence of multiple impacted teeth with supernumerary odontomes.On intraoral examination, there was the presence of overretained deciduous 62 and 63 and no signs of autonomous eruption of 22 and 23 to be expected (Figure 2).These findings provided crucial information for the treatment plan.The presence of multiple radiopaque tooth-like structures required further histopathological investigation to determine their nature and any associated risks.Overall, histopathological analysis provides valuable insights into the nature and potential risks of the radiopaque structures, enabling the dental team to make informed decisions regarding the patient's care.
Case Presentation
Taking cognizance of the complexity of the clinical scenario, a treatment plan was devised for traction of the impacted teeth without anchorage from adjacent permanent teeth, in an attempt to prevent inadvertent tooth movement.
FIGURE 5: Mini implant placement for traction of impacted canine
A full-thickness flap was raised and reflected with a periosteal elevator. The overretained deciduous lateral incisor and canine were extracted. Careful removal of the odontomes was planned to create a site for the bonding of attachments on the impacted teeth (Figures 6, 7). After the removal of the odontomes, a moisture-insensitive primer was placed to achieve adequate bond strength between the bonded attachment and the tooth surface in the blood-contaminated field. An eyelet-type attachment was placed, with an additional ligature-type J hook placed in the eyelet to aid in the traction of the impacted teeth (Figure 8).
Discussion
An odontoma is a benign tumor of odontogenic origin that is formed from both the epithelial and mesenchymal components of the odontogenic apparatus. The infiltration of extra-odontogenic epithelial cells from the dental lamina leads to the development of an odontoma. The odontoma progresses through the same stages as a developing tooth. Initially, there is resorption, and the lesion is radiolucent. In the intermediate stage, the odontogenic tissue appears mixed radiolucent and radiopaque due to partial calcification. The most radiopaque stage occurs when the calcification of the dental tissues is complete [4].
In the present case, there were multiple (10 individual) odontomes in the lateral incisor-canine region (Figures 3, 4), suggesting a compound odontome, which has a predilection for the maxillary anterior segment, with an occurrence rate of about 62% [5]. In the case of a missing permanent lateral incisor, the permanent maxillary canine finds it difficult to erupt, as the roots of the lateral incisors serve as guidance for its eruption. The occurrence of these multiple odontomes in the path of eruption of 22 and 23 could also be a possible reason for the impaction of these teeth in the present case.
Extraction of the overretained deciduous lateral incisor and canine was imperative to create a path of eruption for the succedaneous permanent lateral incisor and canine. The anchorage requirement is critical for the traction of impacted canines. If adjacent permanent teeth bonded with a preadjusted edgewise appliance are used for traction of impacted teeth with the piggyback technique, a full-thickness wire in the bracket slot is needed to prevent inadvertent tooth movements during extrusion of the impacted canine. It can take seven to eight months from the start of fixed orthodontic treatment to reach the full-thickness archwire. In an attempt to reduce this time frame and initiate extrusion of the impacted teeth at the outset, it was pivotal to incorporate strong anchorage control units into the treatment plan. Temporary anchorage devices (TADs) were therefore chosen for traction of the impacted permanent teeth, as this would not tax the anchorage. The use of TADs for traction of impacted permanent teeth during fixed orthodontic mechanotherapy helped reduce the taxation on dental units. In addition, the patient's burnout phase is eliminated or significantly reduced because traction is initiated at the outset. Immediate traction of impacted teeth helps to increase the pace of tooth movement, as extraction of the over-retained teeth followed by removal of the odontomes creates an environment similar to the regional acceleratory phenomenon (RAP) [6-8].
An odontoma, being the most common odontogenic tumor, is frequently associated with the development of a calcifying cystic odontogenic tumor (CCOT) in 24% of cases [9,10]. The elimination of these odontomes will help prevent any future risk of the development of a cyst. Post-operatively, 10 tooth-like structures were obtained (Figure 9).
FIGURE 9: Macroscopic aspects of 10 tooth-like structures that have been surgically removed
These structures were then subjected to decalcification using 5% nitric acid. The decalcified, H and E-stained tissue sections of multiple bits of the specimen revealed enamel space, a longitudinal section of dentinal tubules, and inner delicate connective tissue with few blood vessels and inflammatory cells resembling pulp tissue (Figure 10); the arrow shows enamel space. The connective tissue associated with another bit was moderately dense, with few blood vessels and focal areas of odontogenic epithelial islands and calcifications (Figure 11).
FIGURE 11: Photomicrograph of an H and E decalcified section at 40x view showing focal areas of odontogenic epithelial islands within moderately dense fibrous connective tissue with few blood vessels and calcifications
The black arrow shows odontogenic epithelial islands and the white arrow shows calcifications.
Histopathological features were suggestive of a compound-type odontome. Based on the histopathological findings, the final diagnosis was suggested to be a compound odontoma. There were no masses of ghost cells observed in the cystic lumen or in many areas of the fibrous wall, which excludes the possibility of a calcifying cystic odontogenic tumor (CCOT) associated with odontoma or a dentigerous cyst associated with odontoma. Therefore, the presence of typical histological features of a compound odontoma supports this conclusion. The post-operative healing was uneventful (Figures 12, 13).
Conclusions
In summary, this case sheds light on the intricate interplay between exfoliation-eruptive cycles and odontogenic pathology, underscoring the importance of a multidisciplinary approach for effective management. The innovative use of TADs provided reliable anchorage for orthodontic traction, minimizing the strain on dental units and facilitating successful tooth eruption. This unique case exemplifies the integration of pathology, radiology, and orthodontics in clinical practice, highlighting the collaborative efforts required to address complex dental issues. Moving forward, continued exploration of such interdisciplinary approaches is essential for advancing treatment outcomes and improving patient care in similar challenging cases.
FIGURE 2: Pretreatment intraoral view of dentition depicting bony hard bulge distal to 21 and mesial to 24
FIGURE 4: CBCT image of impacted canine with odontome incisal to cusp tip of 23 and thinning of the labial cortical plate CBCT, cone beam computed tomography
FIGURE 6: Surgical exposure of the site with an overlying odontome resembling a tooth-like structure
FIGURE 8: Attachment with eyelet bonded to the accessible facial surface of maxillary impacted canine
FIGURE 10: The photomicrograph shows enamel space and the longitudinal section displays dentinal tubules and inner delicate connective tissue with few blood vessels and inflammatory cells resembling pulp tissues
FIGURE 12: The crown surface of 13 being clinically visible after traction | 2024-06-02T15:06:58.194Z | 2024-05-01T00:00:00.000 | {
"year": 2024,
"sha1": "fa5c5b59ca99c0a48102854233d06e5d4fd0ee7d",
"oa_license": "CCBY",
"oa_url": "https://assets.cureus.com/uploads/case_report/pdf/249187/20240531-31971-1xlf1im.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "221957c096ee7216458e9f9d9006c62e92cbea05",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
219283487 | pes2o/s2orc | v3-fos-license | The Impact of Recessive Deleterious Variation on Signals of Adaptive Introgression in Human Populations
Admixture with archaic hominins has altered the landscape of genomic variation in modern human populations. Several gene regions have been identified previously as candidates of adaptive introgression (AI) that facilitated human adaptation to specific environments. However, simulation-based studies have suggested that population genetic processes other than adaptive mutations, such as heterosis from recessive deleterious variants private to populations before admixture, can also lead to patterns in genomic data that resemble AI. The extent to which the presence of deleterious variants affect the false-positive rate and the power of current methods to detect AI has not been fully assessed. Here, we used extensive simulations under parameters relevant for human evolution to show that recessive deleterious mutations can increase the false positive rates of tests for AI compared to models without deleterious variants, especially when the recombination rates are low. We next examined candidates of AI in modern humans identified from previous studies, and show that 24 out of 26 candidate regions remain significant, even when deleterious variants are included in the null model. However, two AI candidate genes, HYAL2 and HLA, are particularly susceptible to high false positive signals of AI due to recessive deleterious mutations. These genes are located in regions of the human genome with high exon density together with low recombination rate, factors that we show increase the rate of false-positives due to recessive deleterious mutations. Although the combination of such parameters is rare in the human genome, caution is warranted in such regions, as well as in other species with more compact genomes and/or lower recombination rates. In sum, our results suggest that recessive deleterious mutations cannot account for the signals of AI in most, but not all, of the top candidates for AI in humans, suggesting they may be genuine signals of adaptation.
The detection of AI relies mostly on independently looking for signatures of introgression (Plagnol and Wall 2006; Green et al. 2010; Durand et al. 2011; Martin et al. 2014; Browning et al. 2018) and signatures of positive selection (Tajima 1989; Fay and Wu 2000; Sabeti et al. 2002, 2007; Voight et al. 2006; Grossman et al. 2010). Additionally, a number of allele frequency-based summary statistics have been shown to be particularly powerful at directly inferring AI without needing to apply separate tests for introgression and selection at genomic regions. These statistics include: the number of uniquely shared alleles between donor and recipient populations (U statistic), the quantile distribution of derived alleles in the recipient population (Q statistic), and the sequence divergence ratio (RD) (Racimo et al. 2017). Racimo et al. (2017) further demonstrated the robustness of these statistics to several factors that may confound the detection of AI, including incomplete lineage sorting and ancestral population structure.
While there is tremendous interest in identifying candidate regions for AI, most mutations that occur in genomes are likely either neutral or deleterious (Lynch et al. 1999; Eyre-Walker and Keightley 2007; Lynch 2010; Lohmueller 2014). Deleterious mutations continue to accumulate in the distinct populations after they split from each other (Henn et al. 2016). These deleterious mutations can also affect the genomic landscape in the recipient population after introgression. The genetic load (i.e., reduction in population fitness due to deleterious variants) of archaic hominins is usually higher than that of modern humans due to the former's small effective population size (Prüfer et al. 2013). Thus, most introgressed archaic ancestry is ultimately purged from the modern human gene pool (Harris and Nielsen 2016; Juric et al. 2016). Conversely, a higher frequency of archaic variants and longer introgressed tracts are the typical signatures indicating AI. However, recent studies suggest that other population genetic processes can also generate long introgressed tracts at high frequencies in a recipient population. For example, if the recipient population harbors many recessive deleterious mutations that are not shared with the donor (Bierne et al. 2002; Agrawal and Whitlock 2011; Harris and Nielsen 2016; Kim et al. 2018), after introgression admixed individuals will have higher heterozygosity at those sites and the deleterious effect will be reduced (Figure 1). As such, an initial heterosis effect occurs, since admixed individuals have higher fitness compared to unadmixed individuals due to the masking of recessive deleterious variants. The neutral markers nearby the recessive deleterious variants would also increase in frequency (Bierne et al. 2002), leading to an overall increase of introgressed ancestry in the admixed population (Harris and Nielsen 2016), resembling what is expected from AI (Racimo et al. 2015, 2017). As an example of this, Harris and Nielsen (2016) simulated a modern human-Neanderthal admixture, and suggested that the heterosis effect from recessive deleterious variants can increase the Neanderthal ancestry in modern humans by up to 3%. Kim et al. (2018) showed that low recombination rate, high exon densities, and small recipient population size can all amplify the effect of deleterious variants leading to an increase in introgressed ancestry. However, both Harris and Nielsen (2016) and Kim et al. (2018) illustrated the confounding effect of deleterious variants on AI by directly tracking the introgressed ancestry from simulations. Although straightforward and convenient in simulation studies, introgressed ancestry is difficult to measure precisely in empirical data. Thus, it remains unclear whether other summary statistics aimed to detect AI are affected by the presence of deleterious variants.
Our present work aims to systematically explore the behavior of the summary statistics for detecting AI in the presence of recessive deleterious variants in realistic human demographic models. By performing extensive simulations under different evolutionary parameters (demography, recombination rate, and genic structure), we show that null models assuming neutrality, without accounting for the heterosis effect caused by recessive deleterious mutations, lead to increased false positive rates for most statistics.
By examining the currently known AI candidate regions in modern humans, we next show that most of the human AI candidate genes cannot be explained by deleterious variants, suggesting they may be genuine targets of AI. However, we also show that at least two candidate genes previously identified as being under AI [HYAL2 (Ding et al. 2013) and the HLA gene cluster (Abi-Rached et al. 2011)] may alternatively be false-positives due to the presence of deleterious variants. We further show that high exon density and low recombination rate are the main factors contributing to the high false positive rates in these two genes. High exon density generates a higher density of recessive deleterious mutations, leading to a higher probability of heterosis upon admixture (Kim et al. 2018). A low recombination rate keeps haplotypes in a given genomic region intact within a population. The combination of the two factors maximizes the heterosis effect due to deleterious mutations upon admixture. We discuss implications of these results for detecting AI in different regions of the genome and in different species.
Simulations and measurement of AI
We used the software SLiM (version 3.2.0) (Haller and Messer 2018) throughout this work for the simulations. We obtained 200 simulation replicates under each of the different demographic models of admixture. Each of the models consists of three populations: an ancestral population at equilibrium that splits into two subpopulations (pD for "donor population" and pO for "outgroup"), and one of the subpopulations subsequently splits again after a period of time (into pO and pR, the "recipient population"). After the pO-pR split, a pulse of admixture (lasting one generation) occurs from pD to pR and the admixture proportion is 10%. Figure 2 shows an illustration of the two demographic models used in this study: (1) Model_0 (Figure 2A) represents a demography where the recipient population size is 10 times smaller than the donor population size throughout the simulation, and the pulse of admixture occurs at 10,000 generations ago; and (2) Model_h (Figure 2B) represents a more realistic human demography with a single pulse of archaic admixture introduced to the non-African population (Reich et al. 2010; Gravel et al. 2011; Sankararaman et al. 2012; Prüfer et al. 2013, 2017) 1610 generations ago. Here the recipient population (pR) represents a non-African population, the outgroup population (pO) represents Africans, and the donor population (pD) represents an archaic group such as Neanderthals or Denisovans. Kim et al. (2018) reported that a long-term population contraction can greatly influence the dynamics of introgression, and that a prolonged bottleneck in the recipient population leads to a drastic increase of introgressed ancestry when the deleterious mutations are recessive. Thus, we use Model_0 as a general model to examine the robustness of the summary statistics when the heterosis effect from recessive deleterious variants is maximized. In contrast, Model_h serves as a comparison to evaluate the behavior of the summary statistics under a realistic demography for human populations.
We introduced mutations in the simulations that could have one of four different effects on fitness: (1) "Neutral": all mutations being neutral (s = 0); (2) "Deleterious": recessive deleterious mutations present in the populations, drawn from a gamma distribution of fitness effects (DFE) with a shape parameter of 0.186 and an average selection coefficient of -0.01315 (see Kim et al. 2017), as well as a 2.31:1 ratio of nonsynonymous to synonymous mutations; (3) "Mild-Pos": the Deleterious model with an adaptive mutation with a milder strength of positive selection (s = 0.01) introduced in pD (the donor population) after the initial pD-pO split; (4) "Strong-Pos": the Deleterious model with an adaptive mutation with a stronger strength of positive selection (s = 0.1) introduced in pD after the initial split.
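To make the "Deleterious" scenario concrete, the sketch below draws selection coefficients from a gamma DFE with the shape and mean stated above and assigns nonsynonymous status at the stated 2.31:1 ratio; the function and variable names are illustrative assumptions, not the authors' SLiM code.

```python
import numpy as np

# Illustrative sketch of the "Deleterious" DFE described above: gamma-distributed
# selection coefficients (shape 0.186, mean |s| = 0.01315) and a 2.31:1 ratio of
# nonsynonymous to synonymous exonic mutations. Names and structure are assumptions.
rng = np.random.default_rng(1)

SHAPE = 0.186
MEAN_ABS_S = 0.01315
SCALE = MEAN_ABS_S / SHAPE           # gamma mean = shape * scale
P_NONSYN = 2.31 / (2.31 + 1.0)       # fraction of exonic mutations that are nonsynonymous

def draw_selection_coefficient():
    """Return s for a new exonic mutation: 0 if synonymous, a negative gamma draw otherwise."""
    if rng.random() < P_NONSYN:
        return -rng.gamma(SHAPE, SCALE)
    return 0.0

print([round(draw_selection_coefficient(), 5) for _ in range(5)])
```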
All simulated genomic regions have a length of 5 Mb, with genic structure from the modern human genome build GRCh37/hg19. We used the exon ranges defined by the GENCODE v.14 annotations (Harrow et al. 2012) and the sex-averaged recombination map by Kong et al. (2010), averaged over a 10-kb scale. The per base pair mutation rate was fixed at 1.5 × 10^-8. For comparison purposes, we also applied uniform recombination rates of 10^-8 and 10^-9 per base pair per generation as specified below. We also scaled the simulation parameters by a scaling factor c (c = 5) to increase computational efficiency. The population size was thus rescaled to N/c, all generation times to t/c, the selection coefficient to s*c, the mutation rate to m*c, and the recombination rate to r*c {an approximation of 0.5 × [1 - (1 - 2r)^c] for small r and small c}. Other evolutionary parameters remain the same before and after rescaling. For each simulation, we sampled 100 chromosomes from the recipient, donor, and outgroup populations. Unless otherwise stated, deleterious mutations are recessive (dominance coefficient h = 0).
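The parameter rescaling can be written out explicitly as below; the function name and the example input values are illustrative rather than taken from the simulation scripts.

```python
# Sketch of the parameter rescaling described above for a scaling factor c = 5:
# population size and times are divided by c, selection and mutation rates are
# multiplied by c, and the recombination rate becomes 0.5*[1 - (1 - 2r)^c],
# which is approximately r*c for small r and small c.

def rescale_parameters(N, t_generations, s, mu, r, c=5):
    """Return the rescaled parameters for a scaled simulation run."""
    return {
        "N": int(N / c),
        "t_generations": int(t_generations / c),
        "s": s * c,
        "mu": mu * c,
        "r": 0.5 * (1.0 - (1.0 - 2.0 * r) ** c),
    }

# Example with the mutation rate from the text and an arbitrary population size/time.
print(rescale_parameters(N=10_000, t_generations=10_000, s=-0.01315, mu=1.5e-8, r=1e-8))
```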
To explore a potentially extreme case of how recessive deleterious mutations could influence the false positive rate, for each of the models described above with different fitness effects, we simulated a 5-Mb region with the genic structure of the window in the human genome (Harrow et al. 2012) that has the highest density of exons (chr11:62.3-67.3 Mb; referred to as "Chr11Max"; Figure S1; Figure 3, Figure 4, and Figure 6). To explore the effect of recessive deleterious mutations on putatively adaptively introgressed regions in humans, we identified the genomic coordinates using the original studies that identified the AI candidate genes (Table S1), and extracted the flanking regions upstream and downstream of each gene region to a total length of 5 Mb, with the gene region positioned in the center.
Computing the mean exon density, recombination rate, and B-statistic across the human genome
To tabulate exon density across the genome, we scanned the 22 autosomes of the human genome using a sliding window of 5 Mb with a step size of 100 kb, and counted the number of exons per 5-Mb window. For each window, we calculated the "exon density" (the total number of exons in the window), the mean recombination rate (Kong et al. 2010), the mean B-statistic (McVicker et al. 2009), which captures the strength of background selection, and the mean dN/dS ratio computed over a primate phylogeny for all genes within the 5-Mb window (Enard et al. 2016).
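A minimal sketch of this sliding-window tabulation is given below; the BED-like input format, column order, and function names are assumptions made for illustration, and the same loop can be extended to average the recombination map, B-statistic, and dN/dS values per window.

```python
import csv
from collections import defaultdict

WINDOW = 5_000_000   # 5-Mb window
STEP = 100_000       # 100-kb step

def load_exons(bed_path):
    """Read exon intervals from a tab-separated file with columns: chrom, start, end."""
    exons = defaultdict(list)
    with open(bed_path) as fh:
        for row in csv.reader(fh, delimiter="\t"):
            exons[row[0]].append((int(row[1]), int(row[2])))
    return exons

def exon_density_per_window(exons, chrom, chrom_length):
    """Count exons whose start falls in each sliding 5-Mb window along one chromosome."""
    densities = []
    for win_start in range(0, chrom_length - WINDOW + 1, STEP):
        win_end = win_start + WINDOW
        count = sum(1 for start, _ in exons[chrom] if win_start <= start < win_end)
        densities.append((chrom, win_start, win_end, count))
    return densities
```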
Summary statistics for detecting adaptive introgression
For each simulation replicate, we computed the summary statistics for detecting adaptive introgression in nonoverlapping 50-kb windows throughout the simulated segment. A full list of the AI summary statistics used in our study can be found in Table 1. We also directly tracked the introgressed ancestry in the recipient population that originated from the donor population, using the tree sequence file generated by SLiM and reconstructing the information with the pyslim (Kelleher et al. 2018) and msprime (Kelleher et al. 2016) modules in Python 3; this quantity is referred to as "introgressed ancestry" or pI (Kim et al. 2018). Therefore, the introgressed ancestry calculated in this study is the true proportion of ancestry.
For the other summary statistics that capture the signature of adaptive introgression, we used a custom Python script to extract the sampled haplotype matrices that are in MS style from the SLiM output (100 haplotype samples per population), and filled in the nonsegregating ancestral alleles to match the size of the haplotype matrices from the donor, recipient, and outgroup populations respectively. We calculated the summary statistics at nonoverlapping 50-kb windows using the same Python script pipeline for each simulation replicate.
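Based on the definitions of Racimo et al. (2017) summarized in Table 1, the sketch below computes U50 and Q95 for one window from per-site derived allele frequencies (which can be obtained from the sampled haplotype matrices); the array layout, the thresholds passed as arguments, and the treatment of windows with no qualifying sites are assumptions for illustration.

```python
import numpy as np

def u_and_q_stats(freq_outgroup, freq_donor, freq_recipient,
                  w=0.01, donor_fixed=1.0, u_cutoff=0.5, q_quantile=95):
    """Sketch of the U and Q statistics for a single window.

    freq_* are arrays of derived allele frequencies at the same sites in the
    outgroup, donor, and recipient samples. Sites are retained if the derived
    allele is rare (< w) in the outgroup and fixed (== donor_fixed) in the donor.
    U counts retained sites exceeding u_cutoff in the recipient; Q is the
    q_quantile of recipient frequencies at retained sites (0 if none qualify).
    """
    keep = (np.asarray(freq_outgroup) < w) & (np.asarray(freq_donor) == donor_fixed)
    rec = np.asarray(freq_recipient)[keep]
    u_stat = int(np.sum(rec > u_cutoff))
    q_stat = float(np.percentile(rec, q_quantile)) if rec.size else 0.0
    return u_stat, q_stat

# Toy example with three sites, purely illustrative:
u50, q95 = u_and_q_stats([0.0, 0.005, 0.2], [1.0, 1.0, 1.0], [0.6, 0.3, 0.9])
```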
For each statistic, we defined the critical value as the most extreme 5% quantile value in the distribution of Neutral simulations, grouping all windows and replicates together. For the Deleterious simulations, the false positive rate (FPR) is defined as the proportion of simulations per 50-kb window exceeding the critical values. Similarly, for the Mild- and Strong-Pos simulations, the true positive rate (TPR) is defined as the proportion of simulations per window exceeding the critical value. For the D statistic (Durand et al. 2011), since the critical value from the Neutral model can reach its highest possible value (D = 1), we calculate the FPR as the proportion of simulations per window that equal the critical value.
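The sketch below spells out these definitions; whether a statistic is judged on its upper or lower tail, and the names used, are simplifying assumptions.

```python
import numpy as np

def critical_value(neutral_values, alpha=0.05, upper_tail=True):
    """Most extreme 5% quantile of the pooled neutral distribution (all windows and replicates)."""
    q = 100 * (1 - alpha) if upper_tail else 100 * alpha
    return np.percentile(neutral_values, q)

def positive_rate(values_in_window, cutoff, upper_tail=True):
    """Per-window proportion of replicates at least as extreme as the neutral cutoff.

    Interpreted as the FPR when the replicates come from the Deleterious model
    and as the TPR when they come from the Mild-Pos or Strong-Pos models.
    """
    arr = np.asarray(values_in_window)
    hits = arr >= cutoff if upper_tail else arr <= cutoff
    return float(hits.mean())
```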
Summary statistics for non-African modern human populations
We calculated a variety of AI summary statistics using modern human genome variation data from Phase 3 of the 1000 Genomes Project (The 1000 Genomes Project Consortium et al. 2015). To illustrate the signals of AI captured by the summary statistics from previous studies, we used all individuals from seven representative populations from Eurasia and the Americas as recipient populations (for archaic introgression). Specifically, we used Western Europeans (CEU), British (GBR), Finnish (FIN), Italians (TSI), Han Chinese (CHB), Indians (GIH), and Peruvians (PEL). We also used Yorubans (YRI) as the unadmixed outgroup population. For the donor population, we used the unphased, high-quality whole genome sequences from the Altai Neanderthal (Prüfer et al. 2013) and/or the Altai Denisovan (Meyer et al. 2012), depending on which archaic group was identified as the AI source (Column 4 in Table S1). We referred to the coordinates of AI candidate genes listed in Table S1 to identify each 5-Mb region centered on the candidate gene, and extracted the corresponding genomic sequences from the modern populations and their respective donor populations. We additionally removed sites in the archaic genomes that have potential quality issues (quality score <40 and/or mapping quality <30). If a previously identified AI gene was found to be associated with more than one archaic group, we used only the Altai Neanderthal sequence for these cases. As with the simulations, the summary statistics were calculated in nonoverlapping 50-kb windows in the empirical data.
Figure 1 The heterosis effect from an increase in heterozygosity due to admixture. A red or yellow star represents a mutation that is deleterious and recessive (h = 0). Each individual in the pre-admixed populations is homozygous for recessive deleterious variants at two distinct sites. If the two populations admix in equal proportions, all mutations that were private to the original populations and were previously homozygous are now heterozygous in the F1 population.
To compute the FPR due to deleterious mutations, we use the neutral simulations (i.e., no deleterious mutations) to define the critical values for each test statistic. We then use the simulations with recessive deleterious mutations as the test datasets to examine the FPR (see Figure 5). These simulations used the recombination rate and exon structure in the 5-Mb region around each candidate AI gene and assumed the demography described by Model_h. Again, the FPR represents the proportion of simulations for a given statistic in a 50-kb window in a candidate gene that are as extreme as, or more extreme than, the 5% neutral critical value. Here, we also computed P-values for each of these empirical AI candidate regions under two null models. The first null model assumed all mutations are neutral, while the second included fully recessive deleterious mutations. We then defined the critical values for each test statistic using these simulations. We computed P-values for each 50-kb window within the candidate region by examining where the empirical summary statistics computed from the 1000 Genomes Project data fell within the simulated distributions (see Figure 7).
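A minimal sketch of such an empirical P-value is given below, assuming the simulated null values for the matching window are available as an array; the +1 correction that keeps P-values away from zero is a common convention and an assumption here, not necessarily the authors' exact choice.

```python
import numpy as np

def empirical_p_value(observed, simulated_null, upper_tail=True):
    """P-value of an observed window statistic against a simulated null distribution."""
    null = np.asarray(simulated_null)
    extreme = np.sum(null >= observed) if upper_tail else np.sum(null <= observed)
    # +1 correction so a P-value of exactly zero is never reported.
    return (extreme + 1) / (null.size + 1)
```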
Data availability
The authors state that all data necessary for confirming the conclusions presented in the article are represented fully within the article. All scripts necessary for reproducing the simulations presented in this work are available at: https://github.com/xzhang-popgen/HeterosisAIScripts/. Supplemental materials, including additional methods, Figures S1-S16 and Table S1, are available online through FigShare. Supplemental material available at figshare: https://doi.org/10.25386/genetics.12404324.
Recessive deleterious variants affect summary statistics used to detect AI
We first tested how the presence of recessive deleterious variants affects the distribution of the AI summary statistics listed in Table 1. To maximize the heterosis effect, here we simulated the genic structure of the "Chr11Max" genomic region with a uniformly low recombination rate (r = 10^-9) under the Model_0 demography. Figure 3 shows the distribution of one of the summary statistics, U50, in nonoverlapping 50-kb windows. U50 captures the number of high-frequency introgressed-derived alleles in the recipient population. Under the scenario where all mutations are neutral, we expect the dynamics of introgressed-derived alleles to be influenced simply by gene flow and other subsequent neutral processes. With a small pulse of admixture, only a small fraction of the introgressed alleles is expected to drift to high frequencies, which is reflected by the low to zero U50 allele count in the distribution of U50 under the Neutral simulations (Figure 3A). However, in the presence of recessive deleterious variants, the count of U50 alleles becomes elevated in all genomic windows (Figure 3B). This pattern is illustrated by the substantially increased mean and variance in the distribution, in contrast to the Neutral comparison (Figure 3B). In cases of AI where a beneficial mutation is introduced in the donor population prior to admixture (Figure 3, C and D), a notable increase of the mean and variance of U50 is also observed. Therefore, the signatures of AI and the heterosis effect due to deleterious mutations are similar, but AI leads to a more pronounced peak at the beneficial mutation. Additionally, an adaptive mutation elevates the range of summary statistics in the flanking region, and the length of the region under its influence positively correlates with the strength of selection. However, when the elevation in U50 is due to recessive deleterious mutations, there is a slight, but consistent, upward shift across the entire region.
We next examined the distributions of other summary statistics under the four fitness scenarios (Figure S2), and observed patterns similar to those for U50. These findings indicate that, consistent with what Kim et al. (2018) observed for introgressed ancestry, deleterious variation can generate patterns similar to AI in the absence of beneficial alleles and local adaptation.
To better understand the spatial patterns of variation across the simulated region, we visualized the haplotypes (Supplemental Methods; Marnetto and Huerta-Sánchez 2017) in a 100-kb window in the middle of the segment containing the adaptive mutation when applicable ( Figure S3). The haplotypes left by recessive deleterious mutations ( Figure S3A) and true adaptive mutations ( Figure S3B) differ in structure. Interestingly, both scenarios lead to higher haplotype homozygosity in the recipient population. However, in the AI scenario ( Figure S3B), the haplotypes from the donor and recipient populations are more like each other (i.e., the number of differences between the donor haplotype and the introgressed haplotype is smaller, shown in the right panels of Figure S3) than under the scenario with recessive deleterious mutations.
Deleterious mutations increase the FPR for AI detection
Figure 2 Simulated demographic models. Going forward in time, after a burn-in period of 10*N generations (100k generations for Model_0 and 73k for Model_h), the ancestral population diverged into two subpopulations, the donor population (pD) and the ancestral population of pO and the recipient population (pR). The second population split results in pR and pO. Some time after the split of pO and pR, a single pulse of admixture occurred such that 10% of the ancestry of pR came from pD. Beneficial mutations are denoted by the yellow star.
To quantify the extent to which deleterious mutations can give false evidence of AI, we used the neutral distribution of summary statistics in each 50-kb window across the large 5-Mb segment to define the critical values for a test of AI. We define the critical value as the most extreme 5% quantile value, grouping all windows from neutral simulations together.
For the recessive deleterious model, we obtain the proportion of simulations (200 replicates) per window that exceeds the critical value under the neutral model, and define this proportion as the FPR, as no true adaptive mutations are present. Similarly, we define the TPR for the mild- and strong-positive selection models as the per-window proportion of simulations exceeding the critical value, where the critical value is again defined from the neutral model. Figure 4 shows the neutral critical value and the true/false positive rates in U50 and RD statistics under the simulation setting described in the section above. The TPR/FPR distribution for other summary statistics can be found in Figure S4. The neutral model simulations have FPRs of approximately 5%, by definition. In contrast, the recessive deleterious simulations show elevated FPRs in most windows for both statistics (8.62-34.48% for RD; 3.45-22.41% for U50). The high FPRs are not negligible, as the identification of AI in empirical data relies on looking for outliers in summary statistics when the presence and location of the adaptive mutation are unknown. Deleterious variation is also more common in human genomes than adaptive variation (Lynch et al. 1999; Eyre-Walker and Keightley 2007; Lynch 2010; Lohmueller 2014), which may further compound this effect.
To further understand how demographic history and recombination rate influence the FPR/TPR of the tests for AI, we simulated the "Chr11Max" 5-Mb segment (see Simulations and measurement of AI in Materials and Methods) using the human demographic model (Model_h) and realistic estimates of the recombination rate in this region (referred to as r = hg19 in Table 2). We summarized the FPRs and TPRs of a subset of statistics (pI, RD, U50, Q95) under these scenarios in Table 2 (also see Figures S5-S7). We observed that simulations with low recombination rates showed higher mean FPRs for these statistics. Moreover, the standard deviation (SD) of the statistics increases when the realistic recombination rates are applied (average recombination rate higher than 10^-9).
On average, the TPRs are close to, or higher than, the FPRs in corresponding windows, and they are especially distinguishable from the neutral and deleterious models with a distinct peak in the focal windows containing the adaptive mutation (Figure 4). This shows that the summary statistics have high statistical power in general at detecting a true AI signal, as they reject the null hypothesis more often for true positives (density plots in Figure 4). It should be noted that the power varies across statistics, and correlates positively with the FPR. For example, the power of pI can be up to 100% in AI models, but its mean FPR in the deleterious models is also high (Table 2).
Altogether, recessive deleterious variants contribute to a higher FPR for AI detection in all summary statistics examined. Some statistics appear to be more vulnerable than others, with pI, RD, the U statistics, and the Q statistics being most affected (Figure S2 and Figure S4). Low recombination rates amplify the heterosis effect that mimics the AI signature, while the modern human demography (Model_h) generally results in fewer false positives than Model_0, which has a relatively long-term contraction in the recipient population (Figures S5 and S6).
Deleterious mutations have a limited effect on top candidates for AI in humans
Next, we sought to systematically assess whether the patterns of AI summary statistics caused by recessive deleterious variants could lead to false detection of AI when we simulate under the genic structure observed for previously identified AI candidate regions in humans. This is an important consideration because these regions were detected as unusual either in comparison to the rest of the genome or under demographic models that assumed all mutations were neutral. Thus, it remains unclear whether deleterious variation could provide an alternate mechanism for the observed patterns.
We extracted the recombination rates and genic structure of the 5 Mb sequences surrounding 26 previously identified AI regions (Table S1). For each candidate region, using its recombination rate and exon density, we ran 200 simulation replicates under the human demography described by Model_h. We simulated under two models (the Neutral and Deleterious models) to compute the FPRs in the AI candidate gene regions.
Overall, we find that most statistics do not have extremely elevated FPRs across most of the gene regions in the presence of deleterious mutations ( Figure S7). The D statistic, however, is a notable exception, showing a higher FPR across all candidates. This is rather unsurprising because, although the D statistic is powerful at detecting genome-wide excess of shared derived alleles between groups (a metric indicating admixture), studies have shown its limitations and reduced reliability for inferring local ancestry using small genomic regions (Martin et al. 2014). The fD statistic, on the other hand, is powerful at detecting introgression at localized loci, and does not show unusually high FPR for all candidate regions.
Notably, with the exception of two simulated regions (representing the regions of HLA and HYAL2, Figure 5), we find that the FPR is well-controlled in the other 24 simulated AI candidate regions (Figure S7). Here, we show the FPRs for the EPAS1 and BNC2-like regions (Figure 5), since these two regions have similar recombination rates, exon densities, and FPRs as the other AI regions considered here. Other than the D statistic discussed above, the rest of the summary statistics show an average FPR around or below 5%. In particular, the Q and U statistics appear to be the most robust against false positives from deleterious mutations. In contrast, the HLA-A, HLA-B, and HLA-C genes (referred to as "HLA" in this work) and a segment on chromosome 3 that contains the HYAL2 gene show elevated FPRs on nearly all statistics.
High exon density and low recombination rate can lead to deleterious mutations mimicking AI in humans
To understand why the HYAL2 and HLA genes exhibit higher FPRs in the presence of recessive deleterious variants, we evaluated several possible factors that could contribute to the false positives, including: (1) recent human population growth, (2) the mean recombination rate, (3) the density of exons where deleterious mutations occur, and (4) the strength of natural selection in these genes.
We first simulated genomic regions with the structure of the four genes shown in Figure 5 under four different scenarios of population size change (Figure S9). We find that outlier regions, such as HYAL2 and HLA, continue to have high FPRs across the different growth scenarios. Growth (e.g., "Growth 2" and "Growth 4" in Figure S9, where the population size at the final generation is >70-fold larger than the initial size) slightly intensifies the already high FPRs in these two genes (Figure S10), which can be explained by an increase in the efficacy of selection when the effective population size is large (Fisher 1923; Wright 1931). The other two simulated regions (representing the BNC2 and EPAS1 regions) do not exhibit increased FPRs in the presence of population growth.
We next explored how changes in recombination rate impact the FPRs for the summary statistics used to detect AI. By using a uniformly low or high recombination rate in the simulations under Model_h ( Figure S11), we observed that a high recombination rate can substantially reduce the FPRs to nominal levels (0.05) on all statistics in all genes. Conversely, a uniformly low recombination rate led to high FPRs in the two outlier regions (HYAL2 and HLA), while the FPRs do not necessarily increase in most statistics in other regions like BNC2 and EPAS1.
Motivated by this finding that the recombination rate can influence the FPR in the HYAL2 and HLA regions, as well as prior work suggesting that low recombination rate and high exon density can lead to deleterious mutations mimicking signals of AI (Kim et al. 2018), we performed a more detailed analysis of whether the combination of exon density and recombination rate can explain the elevated FPRs in the HYAL2 and HLA regions. We computed the mean recombination rates and exon densities for sliding 5-Mb windows across the human genome (see Materials and Methods), and found that the HYAL2 and HLA regions are indeed outliers. These two genes have both high exon density and low recombination rate compared to most of the other regions of the genome (Figure 6A).
Table 1 Summary statistics used to detect adaptive introgression
RD: Average ratio of sequence divergence between an individual from the recipient and an individual from the donor population, and the divergence between an individual from the outgroup and an individual from the donor population (Racimo et al. 2017)
D: Patterson's D statistic, which measures the excess allele sharing between the recipient and donor population relative to that between the recipient and an outgroup population that is unadmixed (Green et al. 2010)
fD: A statistic that measures the excess allele sharing while controlling for local variation in ancestry in the recipient population (Martin et al. 2015)
U20/U50/U80: Number of uniquely shared alleles between the recipient and donor population that are of frequency <1% in the outgroup, 100% in the donor, and more than 20/50/80% in the recipient population (Racimo et al. 2017)
Q90/Q95: 90/95% quantile of the distribution of derived allele frequencies in the recipient population that are of frequency below 1% in the outgroup and 100% in the donor population (Racimo et al. 2017)
Heterozygosity: Expected heterozygosity in the recipient population, measured by the mean of 2*p*(1-p), with p being the frequency of any given allele in the recipient population (Crow et al. 1970)
It is also possible that the high FPR in HLA and HYAL2 could be due to mutations in these genes being unusually deleterious compared to mutations in the other candidate AI regions. To test for this, we considered two summary statistics that quantify the amount of selection in local regions of the genome. Specifically, we examined the degree of background selection measured by the B-statistic inferred across the human genome (McVicker et al. 2009), and, second, we used the dN/dS ratio computed across primate species, including humans (Enard et al. 2016), as a proxy for the degree of selective constraint (i.e., selection coefficients at nonsynonymous mutations) in these genes. We found that HLA and HYAL2 have low B-values (McVicker et al. 2009) relative to other 5-Mb regions of the genome (Figure 6B and Figure S14), suggesting that these genes are experiencing more linked selection than the rest of the genome. However, HYAL2 and HLA have dN/dS ratios that are well within the genome-wide distribution (Figure 6B), suggesting that the strength of selection is not the main factor inflating the FPRs in these regions. Since the B-values are influenced by the combined effects of the density of functional elements in which deleterious mutations occur, recombination rate, and the selective effects of coding and noncoding regions, the fact that HLA and HYAL2 are outliers on this metric confirms our conclusion that high exon density, together with low recombination rate, are the major factors influencing false-positive inferences of AI due to recessive deleterious mutations. We calculated the critical values for all summary statistics using the most extreme 5% tail values under the two null models, and computed the P-values of the empirical data points for the statistics. Among the four genes we use as examples (Figure S15), the "outlier" genes (the HLA region and HYAL2) on average have higher P-values under the deleterious null models than under the neutral null models. This trend is reflected by the points falling mostly above the diagonal in Figure S15. The higher P-values when we use the Deleterious null model indicate that this model is more conservative for AI inference.
Note that, for the two "typical" AI genes, the P-values fall along the diagonal (Figure S15), suggesting that null models with and without deleterious mutations yield similar results.
Table 2 legend: For the deleterious model, we computed the false positive rates (FPRs) in 50-kb nonoverlapping windows using the most extreme 5% value from the neutral distribution as the critical value, and show the mean FPR in the third column. For the AI models (Mild-Pos and Strong-Pos), we computed the TPRs using the same neutral cutoff value in all windows, and show the TPR in the window that contains the adaptive mutation ("Focal TPR"). Note that a properly calibrated null model should have an FPR of 0.05.
To summarize the difference between the two null models, we computed the number of 50-kb windows that fell in the extreme 5% tail of the Neutral or Deleterious null distribution. We calculated the difference between the number of windows that are significant under the Neutral null model and the number of windows that failed to reach significance under the Deleterious null model, computed within a 500-kb core region that encompasses each AI candidate gene (Figure 7 and Figure S16). Promisingly, we find that most of the candidate regions (24/26) show similar P-values for most, if not all, of the statistics, regardless of whether a null model with deleterious mutations or neutral mutations is used. This observation further confirms the conclusion from an earlier section that recessive deleterious variants have a limited impact on the detection of the majority of modern human adaptive introgression candidates. However, two genes (HLA and HYAL2) do exhibit a reduced signature of AI under a deleterious null model. As shown in the previous section, these two genes have low recombination and high exon density, two factors that enhance the effect of heterosis. Therefore, these regions may not be adaptively introgressed, in contrast to previous findings (Ding et al. 2013; Vernot and Akey 2014; Racimo et al. 2017; Browning et al. 2018).
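A minimal sketch of this per-region window counting is given below, assuming per-window statistic values and the 5% critical values from the two null models are available; the sign convention (positive meaning the deleterious null is more conservative) and all names are assumptions for illustration.

```python
import numpy as np

def excess_neutral_hits(window_values, neutral_cutoff, deleterious_cutoff, upper_tail=True):
    """Number of windows significant under the Neutral null minus the number
    significant under the Deleterious null; positive values indicate that the
    Deleterious null model is the more conservative of the two."""
    vals = np.asarray(window_values)
    if upper_tail:
        sig_neutral = np.sum(vals >= neutral_cutoff)
        sig_deleterious = np.sum(vals >= deleterious_cutoff)
    else:
        sig_neutral = np.sum(vals <= neutral_cutoff)
        sig_deleterious = np.sum(vals <= deleterious_cutoff)
    return int(sig_neutral - sig_deleterious)
```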
Discussion
Our work represents one of the first comprehensive efforts to consider the influence of negative selection in the detection of AI in humans. We systematically examined whether recessive deleterious variants carried by populations prior to admixture can affect the robustness of signals in summary statistics that have been shown to be informative about AI. Through these simulations, we found that the presence of recessive deleterious mutations alone is sufficient to significantly increase the mean and variance of AI summary statistics in at least some genomic regions. These shifts in the distribution of statistics (Figure 3) lead to a higher probability of falsely identifying "AI candidates" when using a neutral demographic model to define the critical value for the AI summary statistics. However, most of the previously identified top AI candidates in humans are unaffected, due to the fact that their signals of AI are too strong to be accounted for by deleterious mutations and/or that the exon density and recombination rates of these regions decrease the chance that recessive deleterious mutations can generate false-positive signals. However, recessive deleterious mutations may still impede the detection of weaker signals of AI or for genes within a specific genomic context. In fact, by examining population genomic data, we show that such effects from recessive deleterious variants can result in spurious signals of AI in two candidate genes (HLA and HYAL2) in humans.
We tested which individual genomic and/or evolutionary parameters can explain why certain genes like HLA and HYAL2 are more susceptible to false-positives in humans, compared to the other candidates. We found that these two genes have a high exon density and low recombination rate when compared to the rest of the genome (Figure 6). High exon density effectively creates a larger mutational target, which leads to the accumulation of more deleterious mutations in a given genomic region. Low recombination rate lowers the probability of crossing over, so linked recessive deleterious variants are more likely to remain linked on a given haplotype. Effectively, both high exon density and low recombination rate maximize the heterosis effect because admixture with a distantly related population will bring in haplotypes carrying nondeleterious alleles at these positions. Therefore, the introgressed ancestry at these regions will increase in the recipient population despite carrying a different set of deleterious variants, leading to the elevation of FPRs in the AI summary statistics. This process acts in a similar manner as AI, except that no beneficial mutations are involved. Fortunately for human geneticists, the density of exons in the human genome is rather low, mitigating the effect of recessive deleterious mutations on generating false positive signals of AI for most (24/26) of the previously identified top candidates.
Other genomic factors, like the density of noncoding functional elements as well as the strength of natural selection acting on deleterious mutations, could, in principle, affect the FPR in certain genomic regions. To quantify the importance of these factors, we examined the distributions of B-statistics and dN/dS ratios. The B-statistic measures the strength of background selection due to linked variants (Hudson and Kaplan 1995; Charlesworth 2012), and its value is computed by combining information from the distribution of exons, noncoding variants, recombination rate, and selection coefficients (McVicker et al. 2009). Notably, HYAL2 exhibits a strikingly low B-statistic when compared to the rest of the genome, and HLA also has a below-average B-statistic, suggesting these genes experience more linked selection. To test whether the low B-statistics in these genes are driven by mutations in these genes being highly deleterious, rather than by their high exon density and low recombination rate, we examined the distribution of dN/dS ratios. Neither gene has unusually low dN/dS values, implying that the selection coefficients for nonsynonymous mutations in these genes are not more deleterious than those in most other genes. Taken together, these results argue that exon density and recombination rate are the important factors driving the elevated FPR for AI in these regions of the genome.
Figure 5 False positive rates (FPR) for summary statistics from human AI candidate regions. FPRs for several summary statistics are computed by simulating data under the Deleterious mutation model, using critical values determined from the neutral model. All simulations assume Model_h and the recombination rates and exon density of these regions of the genome. The HLA and HYAL2-like regions result in the highest FPRs, while the EPAS1 and BNC2-like regions have similar FPRs as the other regions simulated.
We also show that the demographic history of human populations, including a change in the recipient population size, does not play a major role in affecting the FPR of tests for AI. However, the near-exponential population growth in the recent history of modern humans may have increased the FPR in genes that are already susceptible to false-positive results due to deleterious mutations. This is consistent with the findings of Kim et al. (2018) where they showed that a recovery of population size after a bottleneck in the recipient population can exaggerate the heterosis effect. This is likely due to the fact that a large effective population size restricts the extent of genetic drift, leading to a more prominent effect of natural selection, including the complementation of deleterious alleles via the heterosis effect.
Our modeling approach makes a number of assumptions. For instance, we mainly considered the extreme case where deleterious variants are completely recessive (h = 0). The reason for this is that we set out to determine whether deleterious variants are a concern for AI signals when this effect is maximized. Kim et al. (2018) already studied the effect of additive variants and observed little effect on introgressed ancestry. In empirical genomic data, the distribution of dominance should lie between the two extremes (Lynch et al. 1999; Whitlock et al. 2000; Eyre-Walker and Keightley 2007; Lynch 2010; Agrawal and Whitlock 2011; Harris and Nielsen 2016; Kim et al. 2018). A current challenge is that the empirical values of dominance coefficients for deleterious mutations in humans remain unknown. We show in our simulations (Figure S13) that the genomic regions with elevated FPRs maintain this behavior under models with a wide range of dominance coefficients, including when the mutations are partially recessive (h-s relationship; Henn et al. 2016). It is promising that, even when the heterosis effect acts in its most extreme manner (assuming h = 0), the signature of AI in the top candidate regions persists. Other values of h would be unlikely to affect the conclusion that 24/26 candidates are robust to confounding by deleterious mutations.
Another simplifying assumption made in most of the simulations with genuine AI is that positive selection on the archaic variant began immediately after introgression. To explore whether the timing of positive selection affects the distribution of AI statistics for HLA and HYAL2, we performed additional simulations of these regions in which there is a gap between the timing of introgression and positive selection [Standing Archaic Variation (SAV); Supplemental Methods; Jagoda et al. 2017]. We observe that the AI summary statistics from this model, which in effect resemble a weaker positive selection signal, are even less distinguishable from the Deleterious model than the Mild-Pos model (Figure S8). Even though the signals in most of the top AI candidate genes in humans are unaffected by deleterious variation, there are several reasons why deleterious mutations should still be considered in null models for detecting AI. First, the combination of evolutionary parameters (low recombination rate and high exon density) that leads to an elevation of false-positives may occur much more commonly in other study systems. Moreover, even for modern humans, the demography used in simulations is an approximation of the modern Eurasian population history, which may not represent the true evolutionary history of all non-African populations. For example, when more than one introgression event occurs [e.g., Denisovan introgression in Asia (Browning et al. 2018; Jacobs et al. 2019)], and when the ancestral modern human populations were small, the heterosis effect from deleterious variants could have a different impact under a complex demography. And finally, subtle signals of true AI might not be as distinct from the signals left by deleterious mutations. For instance, a model where selection does not act immediately after introgression may lead to a weaker signature of adaptive introgression. We recommend caution in interpreting these weaker signals, especially in regions of the genome with low recombination rate and high exon density.
Future work to try to distinguish true AI from false-positives due to deleterious mutations in regions of the genome with low recombination rate and high exon density could use the spatial pattern of summary statistics across a genomic region. Indeed, Figure 3 shows that genuine AI leads to a more peaked elevation of U50 at the adaptive mutation compared to recessive deleterious mutations. However, these plots show the distribution over 200 simulation replicates. By visualizing the distribution of statistic values in randomly selected single replicates of simulations (Figure S12), an elevated statistical "peak" value, which is a typical signature of AI, can be generated at a random region of the genome by recessive deleterious mutations alone. Thus, the spatial pattern may not be a complete solution in any particular region of the genome with low recombination rate and high exon density.
Although heterosis upon admixture effectively reduces the deleterious effect of recessive variants, its mechanism and biological consequences are essentially different from those of adaptive introgression, which we expect to produce phenotypic variation in biologically meaningful genes under a given environment. It is thus important to distinguish the signals generated by the heterosis effect on recessive deleterious mutations from legitimate adaptive introgression. Therefore, improving null models to better distinguish between these two processes is important, especially when studying organisms that have compact genomic structures and/or distinct demographic events that may accelerate the dynamics of the heterosis effect after introgression.
Acknowledgments
The authors thank their colleagues from the Lohmueller laboratory at University of California Los Angeles (UCLA) and the Huerta-Sánchez laboratory at Brown University for helpful discussions during the development of this study. We also thank Fernando Racimo at the University of Copenhagen, Denmark for kindly sharing sample code for computing AI summary statistics. This work was supported by National Institutes of Health (NIH) grant R35GM119856 (to K.E.L.); E.H.-S. was supported by NIH grant R35GM128946 and National Science Foundation (NSF) grant DEB-1557151.
The difference in the number of significant hits in null models with and without deleterious mutations within a 500-kb region surrounding the HYAL2 and HLA genes. Each point represents the difference in the number of hits (y-axis: number of windows significant under a neutral model minus the number of windows significant under the deleterious null model) for the statistics shown on the x-axis. The positive values, highlighted in the gray-shaded area and colored by population, imply the deleterious null model is more conservative for a given statistic. If an AI candidate region shows points above zero for most of the summary statistics, such a candidate region is likely prone to false positives due to the heterosis effect, and the validity of adaptive introgression in this region requires further investigation.
Literature Cited
| 2020-06-04T09:03:59.832Z | 2020-06-02T00:00:00.000 | {
"year": 2020,
"sha1": "2ebec441a943aa2cc5df96fa22b72f209d4aee7b",
"oa_license": null,
"oa_url": "https://academic.oup.com/genetics/article-pdf/215/3/799/35514668/genetics0799.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "eab4ea384b72fec48e8698c577a7f6fb76feac8e",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
219281745 | pes2o/s2orc | v3-fos-license | Dental disorders in sows from Swedish commercial herds
Knowledge of dental disorders in commercial sows is limited, although such conditions may have important animal welfare implications. In a pilot study, the dental and periodontal health of 58 sows (Landrace*Yorkshire-crosses) from 8 Swedish commercial pig herds, slaughtered at one abattoir, was investigated. The oral cavity was inspected and abnormalities were recorded on a dental chart modified for pigs. Dental abnormalities, absence of teeth, supernumerary teeth, tooth fractures, signs of caries, and malalignment were recorded. The study revealed that 19% of the sows had supernumerary teeth and 59% of the sows were missing at least one tooth. Periodontitis, calculus and malalignment were observed in 33%, 45% and 17%, respectively. Tooth wear was very common both in incisors (total 83%) and in premolars/molars (total 84%). One or more tooth fractures (between 1 and 6 per sow) were found in 41%. Signs of caries were found in 9%. In order to assess oral health, three indices were used: calculus index (CI), periodontal index (PDI) and tooth wear index (TWI). Severe periodontitis, tooth wear in incisors and tooth wear in premolars/molars were found in 7%, 34% and 35%, respectively. With respect to animal welfare, the etiology and the effects of the disorders on health, stress and pain need to be investigated.
Findings
A Swedish study on wild boars showed that a high proportion of supplementary fed animals suffered from dental lesions [1]. For commercial pig herds, attention has been given to problems in piglets after teeth clipping [2], but there has been less focus on dental health issues in adult animals. Few studies on the tooth health of sows in commercial herds have been published [3][4][5]. In humans, it is well known that periodontal infections may lead to coronary heart disease [6], artery endothelial dysfunction and systemic inflammation [7], but whether this is the case in pigs is, to our knowledge, not known. In this study, the dental and periodontal health of sows (Landrace*Yorkshire crosses) from 8 Swedish commercial pig herds was investigated. The heads (n = 58) were collected at one abattoir at ordinary slaughter (permit no SE3801001912, Swedish Board of Agriculture). It was not possible to get detailed information about all individual sows due to loss of ear marks, so individual background data were excluded from the study. According to data from five herds, age varied between 4 and 7 years (n = 35, mean 6.1 ± 1.3 SD). To enable examination of the oral cavity, the jaws were opened by lateral incision through the masseter muscle and manually separated. The oral cavity was inspected and abnormalities were recorded on a dental chart modified for pigs (Additional file 1) [1]. All examinations were made by the same observer (AM). Dental abnormalities, absence of teeth, supernumerary teeth, tooth fractures, caries, and malalignments were recorded. In order to assess oral health, three indices were used: calculus index (CI, 0, 1-3), tooth wear (TW, 0, 1-3), and periodontal index (PDI, 1-3) (Additional file 2) [1]. The severity of the lesion increased with index number. Spearman rank correlations between the recorded dental disorders were calculated.
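As a rough illustration of this kind of analysis (not the authors' code), the sketch below computes Spearman rank correlations between invented per-sow scores for the three indices; only scipy's spearmanr call and the pairing logic are real, and all values are made up.

```python
# Illustrative sketch: Spearman rank correlations between per-sow dental scores
# such as the calculus index (CI), tooth wear (TW) and periodontal index (PDI).
# The example scores below are invented placeholder data.
from scipy.stats import spearmanr

sows = {
    "CI":  [0, 3, 2, 0, 1, 3, 2, 0],
    "TW":  [1, 3, 2, 1, 2, 3, 3, 0],
    "PDI": [1, 3, 2, 1, 1, 2, 3, 1],
}

pairs = [("CI", "TW"), ("CI", "PDI"), ("TW", "PDI")]
for a, b in pairs:
    rho, p = spearmanr(sows[a], sows[b])
    print(f"{a} vs {b}: rho = {rho:.2f}, p = {p:.3f}")
```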
The study showed that 19% of the sows (n = 11) had supernumerary teeth (Fig. 1) and that 59% (n = 34) were missing at least one tooth. About 50% of the missing teeth were premolars (Fig. 2). The cause of the missing teeth could not be assessed: by macroscopic observation, differentiation between hypodontia (congenital absence of one or more teeth), failure to erupt, and tooth loss for other reasons cannot be made.
Calculus was found in 45% of the sows (CI 1 = 3%, CI 2 = 16%, CI 3 = 26%) while periodontitis was found in 33% of the sows (PDI 1 = 7%, PDI 2 = 19%, PDI 3 = 7%). In the sows with PDI 3, defined as gingival recession exposing > 70% of the root, the teeth were loose. Dental malalignment was found in 28% of the sows. Tooth wear was also very common and was observed in incisors (83%) as well as in premolars/molars (84%). Severe tooth wear was found in both incisors (34%) and molars (35%). One or more tooth fractures (between 1 and 6 per sow) were detected in 41% (n = 24). Fractures were more common in incisors and were found more often in the mandible than in the maxilla. The most severe fractures were observed in incisors, but a few cases were also found in premolars/molars. Caries was found in 9%. There was a negative correlation between fracture and tooth wear (p < 0.001) and positive correlations between tooth wear in incisors and tooth wear in premolars/molars (p < 0.001), between periodontitis and tooth wear (p < 0.05) and between calculus and tooth wear (p < 0.05).
The results show that dental disorders are common among Swedish commercial sows and differ from those found in female wild boars in Sweden [1]. The domestic sows from commercial pig herds had some disorders that may be of genetic origin (e.g. supernumerary teeth, absence of teeth, malalignment) and which were uncommon in wild boars. All three of these disorders may lead to abnormal wear and also predispose to dental diseases such as caries and periodontitis, e.g. due to impaction of food between teeth. A genetic basis for certain anomalies of the teeth is well known in humans [8].
In the present study, high proportions of tooth wear were found in both incisors and premolars/molars. One possible explanation could be that the sows had been bar-biting, which may be a behaviour associated with feeding [9]. A high frequency of tooth wear (71%) was also reported in a recent Finnish study on commercial sows found dead or euthanized [5]. According to Davies et al. [10], tooth wear was found in both outdoor (28%) and indoor sows (30%). In wild boars, tooth wear was more common in molars than in incisors [1], which may be explained by the wild boars' rooting behaviour resulting in mastication of soil and gravel. Fractures were observed more often in incisors than in premolars/molars, which may also be due to the sows' behaviour of chewing on the stable interior.
Ala-Kurikka et al. [5] classified the dental disorders into 'tooth wear, fracture, periodontal disease and calculus' and showed that fractures were the second most common dental disease. The proportion of periodontitis was higher in the present study (33%) than in the Finnish study (26%) [5]. The reason may be a different assessment of tooth disorders but also the type of sows examined. In the present study the sows were sent to an abattoir, i.e. the sows were considered fit for transport and human consumption. In spite of this, many of the slaughtered sows (26%) had severe periodontitis (PDI 2 and PDI 3).
The present study, which was based on a limited sample size, clearly showed that tooth disorders are common in at least some Swedish commercial sow herds. More studies on adult pigs are needed to determine the effect of tooth disorders on sow welfare and health, the association between dental health and culling, and the effects of housing and feeding regimes. | 2020-06-04T14:30:42.932Z | 2020-06-04T00:00:00.000 | {
"year": 2020,
"sha1": "1d98b752e6392ba83e1d56f6792d06a16a0f8088",
"oa_license": "CCBY",
"oa_url": "https://actavetscand.biomedcentral.com/track/pdf/10.1186/s13028-020-00521-7",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1d98b752e6392ba83e1d56f6792d06a16a0f8088",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
21410513 | pes2o/s2orc | v3-fos-license | Possibilities for modifying risk factors for the development of hospital-acquired pneumonia in intensive care patients: results of a retrospective, observational study
Background. Hospital-acquired pneumonia (HAP) development is affected by a range of risk factors. Methods. A retrospective, observational study processing data on all consecutive intensive care patients older than 18 years of age between 1 January 2011 and 31 December 2015. The aim was to determine the incidence of potential risk factors and their impact on the development of HAP. Results. A total of 2229 patients. The overall mortality was 24.0%; the mean APACHE II score was 21.4. The mean length of ICU stay was 5.9 days and the mean length of hospital stay was 20.5 days. The criteria for HAP were met by 310 patients (13.9%). Early- and late-onset HAP was diagnosed in 45 (14.5%) and 265 (85.5%) patients, respectively. The mean APACHE II score was 22.1, the mean length of ICU stay was 7.6 days and the mean length of hospital stay was 23.5 days. The most important non-modifiable factors increasing the risk of HAP were multiple organ failure (OR 13.733; P<0.0001), coronary heart disease (OR 2.255; P<0.0001) and chronic renal failure (OR 2.194; P<0.002). The most common modifiable factors were intolerance to enteral nutrition (OR 3.055; P<0.0001), urgent tracheal intubation (OR 1.511; P<0.024), reintubation (OR 1.851; P<0.001), and bronchoscopy (OR 2.558; P<0.0001). Stress ulcer prophylaxis was administered to 83% of HAP patients and 68% of patients without HAP. Prophylaxis with famotidine was associated with a lower risk of HAP (used in 40.0% of HAP patients and 49.9% of non-HAP patients; OR 0.669; P=0.001) than prophylaxis with pantoprazole (used in 42.6% and 49.5% of patients, respectively; OR 0.756; P=0.027). Conclusions. Factors associated with the highest risk of the development of HAP can be determined. Pharmacological prophylaxis of gastric and duodenal stress ulcers was identified as an independent risk factor for HAP. The study was registered in the ClinicalTrials.gov database under the number NCT02779933.
INTRODUCTION
Early and correct identification of risk factors in the nursing process with respect to the development of nosocomial bacterial infections is a prerequisite for selecting rational and safe treatments in intensive care unit (ICU) patients. Moreover, effective preventive measures, together with adequate and early empirical antibiotic therapy, contribute to successful therapy, make it shorter and less expensive 1,2 , and also lead to reduced mortality rates 3 . One of the most common infections associated with healthcare in ICU patients is hospital-acquired pneumonia (HAP). According to the American Thoracic Society and the Infectious Diseases Society of America, HAP includes ventilator-associated pneumonia (VAP) (ref. 4 ). In ICU patients, HAP accounts for 10-47% of nosocomial infections 5 , with mortality rates ranging from 20% to 60% (ref. 6,7 ). Most frequently, HAP is associated with invasive airway management and mechanical ventilation, with the latter being referred to as VAP (ref. 8 ); this develops more than 48 h from endotracheal intubation. From an epidemiological perspective, two types of HAP are distinguished: early-and late-onset. The clinical and laboratory manifestations of early-onset HAP occur within 48 to 96 h from hospital admission; late-onset HAP develops from day 5 of a hospital stay but no later than 14 days after discharge 9 . The primary ways of transmission of etiological agents into the lower airways are most frequently microaspiration of microbes colonizing the oropharyngeal region or upper gastrointestinal tract 7 or transmission of infection from the environment. The risk factors for the development of HAP may be either non-modifiable (patient-related) or modifiable (hospital-related).
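The onset classification used throughout can be summarised in a few lines. The sketch below simply encodes the 48-96 h and day-5 thresholds quoted above; the interval between 96 h and day 5 is left explicitly unresolved because the text does not assign it to either category.

```python
# Minimal sketch of the early- vs late-onset HAP classification described in
# the text: manifestation 48-96 h after admission counts as early-onset HAP,
# and manifestation from day 5 of the stay as late-onset HAP. Anything before
# 48 h would not qualify as hospital-acquired.
def classify_hap(hours_since_admission: float) -> str:
    if hours_since_admission < 48:
        return "not HAP (incubating on admission or community-acquired)"
    if hours_since_admission <= 96:
        return "early-onset HAP"
    if hours_since_admission >= 5 * 24:
        return "late-onset HAP"
    return "indeterminate (between 96 h and day 5; not assigned by the definitions above)"

for hours in (36, 72, 100, 150):
    print(hours, "h:", classify_hap(hours))
```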
The present study focused on assessing risk factors that may contribute to the development of HAP, determining their prevalence and proposing their modification in an effort to reduce the risk of developing HAP in intensive care patients.
Setting and Study design
A retrospective, observational study was designed to obtain clinical and epidemiological data on ICU patients. The sample was divided into a subgroup of patients who developed HAP and a subgroup of those who did not. In both subgroups, the frequency of potential risk factors related to therapy and nursing care was investigated. The study was approved by the University Hospital Olomouc Ethics Committee (No. 63/16). Informed consent from patients enrolled in the study was not required. The study was registered in the ClinicalTrials.gov database under the number NCT02779933. The enrolment was influenced by neither the type of lower airway management (invasive/non-invasive) nor the result (positive/negative) of microbiological testing of samples collected from the lower airways (endobronchial aspirate or bronchoalveolar lavage), as it has been demonstrated that approximately one-third of collected samples may be microbiologically negative even if pneumonia is clinically manifested 10 .
Participants
Enrolled in the study were patients staying at the ICU of the Department of Anesthesiology and Intensive Care Medicine, Faculty of Medicine and Dentistry, Palacky University Olomouc and University Hospital Olomouc, between 1 January 2011 and 31 December 2015. The participants were all patients older than 18 years of age consecutively admitted to the ICU.
Definitions
Pneumonia is acute inflammation of the respiratory bronchioles, alveolar structures and pulmonary interstitium. Clinically it is defined as the presence of newly developed or progressive infiltrates on chest radiographs plus at least two other signs of respiratory tract infection: temperature >38 °C, chest pain, purulent sputum, leukocytosis or leukopenia, signs of inflammation on auscultation, cough and/or respiratory insufficiency 11 . HAP is defined as pneumonia that occurs 48 h or more after admission, which was not incubating at the time of admission 4 .
Outcome Assessment
The primary outcome was investigation of the relationship between individual risk factors and the development of early- and late-onset HAP. The risk factors were assessed based on their presence in the time interval between hospital admission and the moment of fulfilling the criteria for pneumonia. The risk factors were classified into two subgroups: patient-related (non-modifiable) or hospital-related (modifiable). The patient-related factors were gender (male/female), age at enrolment (years), multiple organ failure (MOF), hypertension (HN), coronary heart disease (CHD), chronic renal failure (CRF), continuous renal replacement therapy (CRRT), acute kidney injury (AKI), diabetes mellitus (DM), chronic obstructive pulmonary disease (COPD), immunosuppression (immuno) or leukopenia (WBC < 1.5 × 10^9/L), impaired consciousness (GCS < 8) and craniocerebral trauma (CCT).
Statistical Methods
No replacement of missing values or outliers was performed in order to minimize bias due to altering the content of retrospective clinical records. Standard descriptive statistics were applied to summarize the primary data: continuous variables as means and 95% confidence intervals or medians and ranges; categorical variables as absolute and relative frequencies. Multivariate logistic regression was adopted for adjusting univariate results for age and for defining the final multivariate model. The selection of variables for the multivariate model was based on univariate P<0.1 and redundancy analysis of these preselected predictors. P≤0.05 was adopted as the level of statistical significance for all analyses. In the tables, the odds ratio (OR) with 95% confidence interval was calculated. Statistical significance (P-value) was assessed with Fisher's exact test. Factors with OR and P-value in bold type are statistically significant (the confidence interval does not include 1). The association of risk factors with HAP was also verified with multivariate logistic regression. As independent predictors, the model included variables with a univariate P-value below 0.2 (P<0.2) and risk factors present in both subgroups. The independent predictors were proton pump inhibitor (PPI), H2 antagonist (H2 antag), MOF, HN, CHD, CRF, CRRT, AKI, DM, immuno, COPD, GCS<8, tracheostomy (TS), CCT, thor, aspir, urg TI, re TI, BSC, GT, intol EN, trans and phys. The dependent variable was HAP (early-/late-onset). The model was constructed using the forward stepwise method involving 4 steps. SPSS 21 (IBM Corporation, 2012) was the software used.
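For readers who want to reproduce the univariate step on their own data, a minimal sketch is given below. It assumes a hypothetical 2×2 table for one risk factor and uses Fisher's exact test plus a Wald-type confidence interval for the odds ratio; the original analysis was performed in SPSS 21, so this is only an approximate re-implementation, and the counts are invented.

```python
# Hedged sketch of the univariate analysis described above: a 2x2 table per
# risk factor, the sample odds ratio with an approximate 95% CI, and Fisher's
# exact test for the P-value. Counts are hypothetical placeholders.
import numpy as np
from scipy.stats import fisher_exact

# rows: factor present / absent; columns: HAP / no HAP (invented counts)
table = np.array([[120, 560],
                  [190, 1359]])

odds_ratio, p_value = fisher_exact(table)
a, b = table[0]
c, d = table[1]
se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)          # Wald standard error
ci_low = np.exp(np.log(odds_ratio) - 1.96 * se_log_or)
ci_high = np.exp(np.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.3f} (95% CI {ci_low:.3f}-{ci_high:.3f}), P = {p_value:.4f}")
```

The multivariate step (age-adjusted, forward stepwise logistic regression) could be approximated with a package such as statsmodels, but the exact SPSS stepwise criteria are not reproduced here.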
Patients and descriptive data
During the above period, a total of 2229 patients, of whom 761 (34.1%) were females and 1468 (65.9%) were males, were admitted to an ICU for a total of 13,139 days. Their mean age was 58.7 ± 17.2 years (median, 63 years), specifically 62.9 ± 18.1 years (median, 67 years) for females and 57.5 ± 17.4 years (median, 62 years) for males. Their mean APACHE II score was 21.4. The mean length of ICU stay was 5.9 days and the mean length of hospital stay was 20.5 days. Based on their admission diagnosis, the participants were classified as non-surgical (1195 patients; 53.6%) or surgical (1034 patients; 46.4%). The overall mortality was 24.0% (535 patients irrespective of their diagnosis), of whom 170 were females and 365 were males.
The criteria for HAP were met by 310 patients (13.9%), 108 females and 202 males. Their mean age was 60.7 ± 17.2 years (median, 64 years), specifically 64.9 ± 17.8 years (median, 68 years) for females and 59.5 ± 16.5 years (median, 63 years) for males. Females were statistically significantly older than males (P=0.022). Early- and late-onset HAP was diagnosed in 45 (14.5%) and 265 (85.5%) patients, respectively. The flow chart is shown in Figure 1. No statistically significant relationship was found between the HAP type and patient gender (P=1.000). The difference in age between patients with early- and late-onset HAP did not reach statistical significance (P=0.062). No association was found between the APACHE II score and HAP type. The mean length of HAP patients' stay in the ICU was 7.6 days and their mean hospital stay was 23.5 days.
Abbreviations used in Tables 1-3 (presence of factors in the last 7 days prior to the onset of HAP): PPI - administration of the proton pump inhibitor pantoprazole in therapeutic doses, H2 antag - the H2 antagonist famotidine at therapeutic doses, MOF - multiple organ failure, HN - hypertension, CHD - coronary heart disease, CRF - chronic renal failure, CRRT - continuous renal replacement therapy, AKI - acute kidney injury, DM - diabetes mellitus, immuno - immunosuppression, COPD - chronic obstructive pulmonary disease, GCS < 8 - Glasgow Coma Scale < 8, TS - tracheostomy, CCT - craniocerebral trauma, thor - thoracotomy, aspir - aspiration into the lower airways, urg TI - urgent tracheal intubation, re TI - reintubation, BSC - bronchoscopy, GT - gastric tube, intol EN - intolerance of enteral nutrition, trans - transport outside the ICU, phys - physiotherapy.
Main results
The absolute and relative frequencies of modifiable/non-modifiable factors related to therapy and nursing care with respect to the risk of developing early- and late-onset HAP are shown in Table 1.
Stress ulcer prophylaxis was administered to 83% of HAP patients and 68% of patients without HAP.
The absolute and relative frequencies of modifiable/non-modifiable factors related to therapy and nursing care with respect to the presence/absence of HAP are shown in Table 2.
After the relationships between therapy/nursing care factors and the risk of developing HAP were assessed by multivariate logistic regression, statistically significant predictors of HAP were identified, as shown in Table 3.
DISCUSSION
Data are presented from a long-term study of a large sample of ICU patients comparing the risk posed by individual factors of therapy and nursing care with respect to the development of early- and late-onset HAP; the impact that the two most common types of stress ulcer prophylaxis have on the incidence of HAP is also documented. The study showed that certain factors, both modifiable and non-modifiable, significantly increase the risk of HAP. The most important non-modifiable factors increasing the risk of developing HAP are MOF, CHD and, at a lower level of statistical significance, the presence of CRF. Patient-related risk factors were investigated, among others, in a large study of 8657 ICU patients which identified atrial fibrillation as a significant risk factor for the development of HAP. However, gender, smoking, CHD, DM, rheumatic heart disease, non-rheumatic valvular disease, myocardiopathy/myocarditis, hyperlipidemia, electrolyte disturbance and congenital heart disease were not significant risk factors for HAP (ref. 12). Another study of ICU patients with HAP in association with Staphylococcus aureus showed that significant risk factors are diseases such as liver cirrhosis or DM. On the other hand, the study failed to show a relationship to COPD, hypertension or CRF (ref. 13). Consistent with the present study, Vardakas et al. did not identify DM as a risk factor for HAP (ref. 14). Finally, monitoring of residual gastric volume was not a factor significantly reducing the risk of developing HAP (ref. 15).
Among hospital-related, or modifiable, factors included in the present study, intolerance of enteral nutrition was the most significant, with urgent tracheal intubation, reintubation and bronchoscopy showing a lower level of statistical significance. If well tolerated, enteral nutrition is not a risk factor. This was documented, for example, in a study of polytrauma patients showing that enteral nutrition can decrease the incidence of nosocomial pneumonia 16 . By contrast, the presence of an inserted GT is considered a significant risk factor, as seen from a recent large study of 4427 patients documenting that mechanical ventilation and the use of a GT were the most significant risk factors for the development of HAP (ref. 17). Apart from the insertion of a GT, patient immobility is a stronger risk factor for HAP than dysphagia, as shown by Brogan et al. 18 . Also consistent with the present study are the results of a Polish study of 1227 ICU patients showing a statistically significant correlation between the development of HAP and the incidence of reintubation, tracheostomy and bronchoscopy 19 . Another study showed the effect of a history of pre-hospital aspiration or the presence of blood and emesis in the airways after intubation on the development of HAP (aspiration 16% vs. no aspiration 4%) (ref. 20). In the HAP group 10% of patients and in the non-HAP group 8% of patients had undergone thoracic surgery; the difference was not statistically significant and therefore the present study failed to identify thoracic surgery as a risk factor for HAP. Some studies have reported an incidence of pneumonia subsequent to thoracic surgery of 3.3-25%. Similarly, a study of 604 patients undergoing resection of bronchogenic carcinoma showed a 5% incidence of HAP (ref. 21). In a group of major heart surgery patients, however, the incidence of HAP was 46% in those requiring more than 48 h of mechanical ventilation. The independent risk factors for HAP were age older than 70 years, perioperative transfusions, days of mechanical ventilation, reintubation, previous cardiac surgery, emergent surgery and intraoperative inotropic support 22 . In the present study, previous aspiration was a risk factor only at a low level of statistical significance. Craniocerebral trauma or neurosurgical intervention was associated with the development of HAP in 13% of patients in the present study. In neurosurgery patients, univariate analysis demonstrated that a low GCS, long hospital stay, use of broad-spectrum antibiotics, mechanical ventilation, total parenteral nutrition and reoperation were risk factors for nosocomial infections 23 . Similarly, in abdominal surgery patients, an ICU stay of 7 days or longer and a postoperative hospital stay of 15 days or more were the predictive factors most strongly associated with lung infection 24 . The varied results for individual risk factors are also illustrated by a study on the incidence of HAP in non-ICU patients, in which malnutrition, CRF, anemia, depression of consciousness, previous hospitalization and thoracic surgery were significant risk factors for HAP 25 . The mean age was lower in patients with early-onset HAP (55.3 years) than in those with late-onset HAP (61.9 years). We attribute this to the fact that late-onset HAP affects more weakened, polymorbid patients, who are more susceptible to infections caused by MDR pathogens.
Further, in the present study bronchoscopy was associated with a higher incidence of HAP, but bronchoscopy itself cannot be considered a risk factor for HAP, because the true risk factors are the conditions that led to its performance. These were most commonly massive congestion, aspiration into the lungs, chronic lung disease or esophagotracheal fistula. Intolerance of enteral nutrition is also associated with a higher incidence of HAP, probably because capillary action around the gastric tube and more frequent retention of gastric fluid in the space above the obturation balloon (cuff) of the tracheal tube cause silent microaspiration.
An important outcome of the present study is assessment of the impact of the two most common types of stress ulcer prophylaxis on the incidence of HAP. Stress ulcer prophylaxis was administered to 83% of HAP patients and 68% of patients without HAP. Famotidine prophylaxis was associated with a lower risk of HAP (used in 40% of HAP patients; OR 0.669; P=0.001) than pantoprazole prophylaxis (used in 43% of HAP patients; OR 0.756; P=0.027), although the difference was not as pronounced as in another similar study, in which the rate of HAP was lower for an H2 antagonist (10%) than for a PPI (30%) and administration of the H2 antagonist was also associated with fewer hospital days (5.6 vs. 17.6) (ref. 26). A statistically significant difference in the incidence of HAP was found in a study comparing the effects of sucralfate (14%) and a PPI (36%) (ref. 27). Similar findings were also reported by the authors of a large retrospective study of 21,214 cardiac surgery ICU patients, with the incidence of HAP being higher in patients receiving a PPI than in those receiving an H2 antagonist 28 . However, the administration of stress ulcer prophylaxis itself is linked to a higher risk of HAP, as documented, for example, by a large study of 63,878 patients showing that acid-suppressive medication was associated with a higher incidence of HAP, the association being significant for PPIs and non-significant for H2 antagonists 29 .
The present study found differences in the incidence of certain modifiable/non-modifiable factors between early- and late-onset HAP (Table 1). In early-onset HAP, the statistically significantly more frequent factors were CHD, COPD and physiotherapy, while in late-onset HAP, AKI, CRRT and TS were statistically significantly more common. However, the presence of physiotherapy in early-onset HAP is not considered a factor increasing its incidence; rather, it reflects physiotherapy provided to at-risk patients who subsequently develop HAP. Similarly, higher TS rates in patients with late-onset HAP are considered a sign of more severe pneumonia requiring longer ventilator use. TS was performed prior to study inclusion and prior to the development of HAP, and the reason for performing it was unrelated to the current episode of HAP. Most often the reason was long-term impairment of consciousness, respiratory insufficiency after previous severe pneumonia or long-term ventilator dependence in patients with chronic pulmonary disease. The presence of TS probably increases the incidence of HAP because the upper airway, which acts as a natural bacterial filter, is bypassed.
CONCLUSION
Epidemiological data on ICU patients obtained over the five-year period show that the highest risk of HAP is associated with the patient-related factors MOF, CHD and CRF and the following hospital-related factors: urgent tracheal intubation, reintubation, bronchoscopy and intolerance of enteral nutrition. Additionally, stress ulcer prophylaxis was found to be an independent risk factor for the development of HAP. Prophylaxis with famotidine was associated with a lower risk than prophylaxis with pantoprazole. | 2018-04-03T04:20:16.825Z | 2017-04-26T00:00:00.000 | {
"year": 2017,
"sha1": "4e3167740462c485eca9a514317f2dd62f32d272",
"oa_license": "CCBY",
"oa_url": "http://biomed.papers.upol.cz/doi/10.5507/bp.2017.019.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4e3167740462c485eca9a514317f2dd62f32d272",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119397425 | pes2o/s2orc | v3-fos-license | de Sitter geodesics
The geodesics on the $(1+3)$-dimensional de Sitter spacetime are considered, studying how their parameters are determined by the conserved quantities in the conformal Euclidean, Friedmann-Lema\^itre-Robertson-Walker, de Sitter-Painlev\'e and static local charts with Cartesian space coordinates. Moreover, it is shown that there exists a special static chart in which the geodesics are genuine hyperbolas whose asymptotes are given by the conserved momentum and the associated dual momentum.
Introduction
The free geodesic motion on the (1+3)-dimensional de Sitter (dS) spacetime was considered by many authors, who studied the role of the conserved quantities [1,2] or the so-called dS relativity in a dS local chart where the Lorentz transformations of the SO(1, 4) isometry group have the same form as in special relativity [3].
Recently we proposed our own version of relativity on anti-de Sitter [4,5] and dS [6,7] backgrounds which solves completely the problem of relative geodesic motion thanks to the Lorentzian isometries that can relate any of the local charts in which we study geodesic trajectories [5,6]. Our approach is based on the classical conserved quantities given by the Killing vectors associated to isometries. With their help we marked the different local charts as fixed or mobile frames, finding the parametrization of the Lorentzian isometries relating them.
In the case of the dS spacetimes we studied the conserved quantities associated to the SO(1, 4) isometries [8,9,10] showing that apart from the energy, momentum and angular momentum, there exists a dual momentum which plays an important role in understanding the significance of the principal classical invariant that in the flat limit gives the mass condition of special relativity [10,6]. These conserved quantities which helped us to build the dS relativity play an important role in determining the geodesic trajectories. Here we would like to concentrate on this problem completing thus our previous investigations with a systematic study of the dS geodesics.
In general, on the dS spacetime the geodesics depend on the initial condition and the conserved momentum, which determine the trajectory parameters and, implicitly, the conserved quantities along geodesics. The inverse problem we intend to study here is how the initial conditions depend on the conserved quantities when these are given. In other words, we would like to investigate how the form of the geodesic trajectories depends on these conserved quantities, eliminating the undetermined arbitrary initial conditions. This goal restricts the investigation area of the present paper, which thus remains a technical review rather than a major original contribution. Nevertheless, here we obtain new results concerning the properties of the above mentioned conserved quantities and their role in determining geodesic parameters. Moreover, we introduce a new conserved vector which offers some technical advantages when we use exclusively Cartesian space coordinates, as we do in this paper.
We start in the second section by presenting the physical meaning of the classical conserved quantities related to the dS isometries and the local charts we use. In the next section we discuss the properties of these quantities in the conformal Euclidean chart and we introduce the new conserved vector that helps us to separate the contribution of the initial conditions to the geodesic equations of this chart. Section 4 is devoted to the geodesics in comoving charts with Cartesian space coordinates, i.e. the conformal Euclidean, Friedmann-Lemaître-Robertson-Walker (FLRW) and de Sitter-Painlevé (dSP) ones. The geodesics in the static and special static charts with similar space coordinates are studied in the next section. Finally, in section 6, special attention is paid to the form of the null cones of all the local charts we consider here. The presence of the event horizons [11] in the dSP and static charts is pointed out and briefly commented upon. A few concluding remarks are presented in the last section.
Preliminaries
The de Sitter spacetime (M, g) is defined as a hyperboloid of radius 1/ω in the five-dimensional flat spacetime (M^5, η^5) of coordinates z^A (labeled by the indices A, B, ... = 0, 1, 2, 3, 4) having the pseudo-Euclidean metric η^5 = diag(1, −1, −1, −1, −1). The local charts {x} of coordinates x^μ (α, μ, ν, ... = 0, 1, 2, 3) can be introduced on (M, g) giving the set of functions z^A(x) which solve the hyperboloid equation η^5_{AB} z^A(x) z^B(x) = −1/ω², where ω denotes the Hubble de Sitter constant, since in our notation H is reserved for the energy (or Hamiltonian) operator [10]. The de Sitter isometry group is just the gauge group G(η^5) = SO(1, 4) of the embedding manifold (M^5, η^5) that leaves invariant the metric η^5 and, implicitly, the hyperboloid equation. Therefore, given a system of coordinates defined by the functions z = z(x), each transformation g ∈ SO(1, 4) defines the isometry x → x′ = φ_g(x) derived from the system of equations z[φ_g(x)] = g z(x). For these isometries we use the canonical parametrization with skew-symmetric parameters, ξ^{AB} = −ξ^{BA}, and the covariant generators S_{AB} of the fundamental representation of the so(1, 4) algebra carried by M^5, with the usual matrix elements of this vector representation. The principal so(1, 4) basis generators with an obvious physical meaning [10] are the energy H = ω S_{04}, the angular momentum J_k = (1/2) ε_{kij} S_{ij}, the Lorentz boosts K_i = S_{0i}, and a Runge-Lenz-type vector R_i = S_{i4} generating rotations involving the z^4 axis. The effect of the SO(1, 4) isometries depends on the concrete coordinates we use, as we showed in Refs. [6].
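As a numerical aside (not part of the original paper), the generators can be built explicitly once a concrete form of their matrix elements is assumed. The sketch below uses the standard vector-representation form (S_{AB})^C_D = i(δ^C_A η_{BD} − δ^C_B η_{AD}), which is an assumption on our part, and checks the antisymmetry S_{AB} = −S_{BA} implied by the canonical parametrization.

```python
# Exploratory sketch: build 5x5 generators of so(1,4) assuming the standard
# vector-representation matrix elements (S_{AB})^C_D = i(d^C_A eta_{BD} - d^C_B eta_{AD}),
# then verify the antisymmetry S_{AB} = -S_{BA} used above. Illustration only.
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0, -1.0])  # eta^5 = diag(1,-1,-1,-1,-1)

def S(A, B):
    gen = np.zeros((5, 5), dtype=complex)
    for C in range(5):
        for D in range(5):
            gen[C, D] = 1j * (float(C == A) * eta[B, D] - float(C == B) * eta[A, D])
    return gen

for A, B in [(0, 4), (1, 2), (0, 1), (3, 4)]:
    assert np.allclose(S(A, B), -S(B, A))   # antisymmetry in the index pair

print("S_04 (proportional to the energy generator H = omega*S_04):")
print(S(0, 4))
```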
The corresponding classical conserved quantities can be derived with the help of the Killing vectors k_{(AB)} whose covariant components in an arbitrary chart {x} of (M, g) are defined as k_{(AB)μ} = z_A ∂_μ z_B − z_B ∂_μ z_A, where z_A = η_{AB} z^B. The principal conserved quantities along a timelike geodesic of a point-like particle of mass m and momentum P [10] have the general form K_{(AB)} = ω k_{(AB)μ} m u^μ, where u^μ = dx^μ(s)/ds are the components of the covariant four-velocity that satisfy u² = g_{μν} u^μ u^ν = 1. The conserved quantities with physical meaning [10] are the conserved energy E, the usual components of angular momentum L_i, and the quantities K_i and R_i, which are related to the conserved momentum P and its associated dual momentum Q [10]. In what follows we use only Cartesian space coordinates, which satisfy z^i ∝ x^i, such that the SO(3) symmetry becomes global, any quantity bearing space indices transforming under rotations as SO(3) vectors and tensors. Under such circumstances, we may use the vector notation for the SO(3) vectors, including the position vector. The norms of the conserved vectors will be denoted simply as V = |V|.
Three sets of coordinates are under consideration here: those of the conformal Euclidean chart (called here simply the conformal chart), denoted by (t_c, x_c); the dSP coordinates (t, x); and the static coordinates (t_s, x_s), where t_s is the static time of the usual static chart {t_s, x} while x_s are the special static space coordinates defined in Refs. [12,13]. All these coordinates are related to one another and can be combined to define various local charts, in which we have to study the geodesic equations either in covariant parametric form, x = x(λ), or in the closed form x = x(t).
Conserved quantities in conformal charts
Let us start with the conformal charts, {t_c, x_c}, with the conformal time t_c and Cartesian space coordinates x^i_c (i, j, k, ... = 1, 2, 3). These charts cover the expanding part of M for t_c ∈ (−∞, 0) and x_c ∈ R^3, while the collapsing part is covered by similar charts with t_c > 0. In both cases we have the same conformally flat line element, ds² = (1/(ω² t_c²)) (dt_c² − dx_c · dx_c). We stress that here we restrict ourselves to the expanding portion (with t_c < 0), which is a possible model of our expanding universe. In this chart the contravariant components of the Killing vectors can be calculated according to the definition given above. Taking into account that the particle of mass m has momentum P, we deduce the components of the four-velocity and derive the rectilinear timelike geodesic trajectory [10], which is completely determined by the initial condition x_c(t_c0) = x_c0 and the conserved momentum P. Consequently, the conserved quantities at an arbitrary point (t_c, x_c(t_c)) on the geodesic depend only on this point and the momentum P [10,6]. These quantities are not independent, since the vectors x_c(t_c), P and Q lie in the same plane, orthogonal to L. Moreover, one may verify the identity corresponding to the first Casimir operator of the so(1, 4) algebra [10]. In the flat limit, ω → 0, when −ωt_c → 1 and Q → P, this identity becomes just the usual mass-shell condition p² = m² of special relativity. Thus we can conclude that there are only six independent conserved quantities, say the components of the vectors (P, Q), which form a basis generating freely all the other conserved quantities.
On the other hand, we remark that the geodesic equation can be split as X(t_c, x_c) = X(t_c0, x_c0) for any arbitrary initial condition if we introduce a new vector X, a useful auxiliary conserved quantity that offers us the possibility of changing the basis (P, Q) into the new one (X, P), in which the above identity holds for any X and P. The conserved basis vectors (P, Q) or (X, P) determine the plane of the geodesic trajectory, in which it is convenient to consider the Cartesian frame in O whose orthonormal basis, {n_⊥, n_P}, is formed by n_P = P/P and its orthogonal complement, n_⊥. In this frame we use the local Cartesian coordinates (x_⊥, x_∥) such that any position vector can be written as x = x_⊥ n_⊥ + x_∥ n_P. Notice that when L = 0 the geodesic passes through the origin O and, consequently, n_⊥ remains undetermined.
Geodesics in comoving charts
Turning back to our problem of the relation between the initial conditions and the conserved quantities, we study first the comoving charts, i.e. the conformal, FLRW, and dSP ones.
In the conformal Euclidean chart, the conserved energy allows us to put the auxiliary vector X in a convenient form. Moreover, one finds a conserved vector giving the position of the particle at the time t_c⊥ when it passes through the point A, of position vector x_c(t_c⊥) = x_cA, where the energy takes the form E = √(m² + ω² P² t_c⊥²). Now we can solve our problem by choosing the initial condition t_c0 = t_c⊥ and x_c0 = x_cA, which allows us to write X(t_c, x_c) = X(t_c⊥, x_cA) and to derive the geodesic trajectory. Thus we succeed in expressing the geodesic equation in terms of conserved quantities without resorting to an explicit initial condition. The function x_c(t_c) describes the motion along the direction n_P between finite limits, since −∞ < t_c ≤ 0.
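A trivial numerical check of the energy formula quoted above can be written as follows; the values of m, P, ω and t_c⊥ are arbitrary, and the last value, t_c⊥ = −1/ω, illustrates the special choice used later for the null cones, for which E reduces to the special-relativistic form √(m² + P²).

```python
# Small numerical illustration (not from the paper): the conserved energy
# E = sqrt(m^2 + omega^2 * P^2 * t_cperp^2) along a timelike geodesic in the
# conformal chart, in natural units. All parameter values are arbitrary.
import math

def conserved_energy(m, P, omega, t_cperp):
    return math.sqrt(m**2 + omega**2 * P**2 * t_cperp**2)

m, P, omega = 1.0, 0.5, 0.1
for t_cperp in (-10.0, -5.0, -1.0 / omega):
    print(f"t_cperp = {t_cperp:7.2f}  ->  E = {conserved_energy(m, P, omega, t_cperp):.4f}")
print("special-relativistic check sqrt(m^2+P^2) =", math.sqrt(m**2 + P**2))
```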
The above results can be extended to any comoving chart with Cartesian space coordinates whose time depends on t_c but remains independent of x_c. The best example is the dSP chart {t, x}, whose coordinates can be introduced directly by the substitution t_c = −(1/ω) e^{−ωt}, x_c = x e^{−ωt}, where t ∈ (−∞, ∞) is the proper time while x^i are the 'physical' Cartesian space coordinates. This chart is useful in applications since in the flat limit (when ω → 0) its coordinates become just the Cartesian ones of Minkowski spacetime. Performing this substitution in the conformal-chart geodesic we obtain the geodesic equation in the dSP chart, whose transverse part is no longer conserved. Since in this chart t ∈ R, the space domain of the geodesic is bounded. The mobile reaches the point A, of coordinates (x_A, 0), at time t_⊥, which means that x_A = x_⊥(t_⊥). Another chart frequently used in applications is the FLRW one, which combines the proper time with the conformal space coordinates, {t, x_c}. In this chart the geodesic equation determines a rectilinear trajectory as in the conformal charts. The space domain remains the same as that of the conformal chart, while the point A of coordinates (t_⊥, x_A) is reached now at the proper time t_⊥ given above.
The conclusion is that in comoving charts the geodesic trajectories are rectilinear only in the charts with conformal space coordinates, namely the conformal and FLRW ones. In the dSP charts these trajectories are, in general, curvilinear, as in the right panel of Fig. 1, becoming rectilinear along the momentum direction only when A = O, since then L = 0. Otherwise, the geodesic trajectory approaches a rectilinear one only in the ultrarelativistic regime, for P ≫ m. An example is the geodesic d in the right panel of Fig. 1.
Geodesics in static charts
The above investigation cannot be extended to other types of charts since the geodesic equations x = x(t) are not invariant under general diffeomorphisms involving simultaneously the time and space coordinates. Therefore we must complete our study considering, in addition, parametric geodesic equations.
Let us first turn back to the conformal chart, where we introduce a new parameter λ which increases monotonically when t_c decreases. With its help we derive the parametric geodesic equations, observing that the point A corresponds to the value λ_A = 1 since t_c(λ)|_{λ=1} = t_c⊥ and x_c(λ)|_{λ=1} = x_cA. Similarly, we obtain the geodesic equations in the dSP coordinates by inverting the substitution given above. However, apart from these simple examples, our principal goal here is to obtain the geodesic equations in the static charts with the static time and different types of Cartesian space coordinates. Using the same parameter λ we find the corresponding parametric equations and the static time at which the particle reaches the point A.
The usual static chart {t_s, x} has the dSP Cartesian space coordinates and a static line element. In this chart the parametric geodesic equations give trajectories similar to those in the right panel of Fig. 1. Moreover, the parameter λ can be eliminated, but the closed form we obtain is complicated and therefore of little use in current applications. For this reason, we look for other Cartesian coordinates in which the geodesic equations become simpler. A useful identity suggests using the special Cartesian coordinates of Refs. [12,13], which give a new line element. In this chart the position of the point A is given by the coordinates (x_sA, 0).
Figure 2: The geodesic in the special static chart for ω = 0.1 and the initial condition t_s0 = 0 is a hyperbola whose principal axes are rotated by the angle α = (1/2)·angle(P, Q) = −π/8 with respect to the axes (x_s⊥, x_s∥).
The final task is to put the geodesic equation in closed form by eliminating the parameter λ. We then obtain the desired form of the geodesic equation in the chart {t_s, x_s}, which reads x_s(t_s) = n_⊥ (L/P) e^{ω t_s} + n_P [ (E² − m² − ω² L²) e^{ω t_s} − P² e^{−ω t_s} ] / (2ωEP).
This result is remarkable since it can be rewritten in a simpler, equivalent form. We thus obtain the Cartesian version of our previous result, obtained recently in spherical coordinates [7], according to which the geodesics in the chart {t_s, x_s} are hyperbolas whose asymptotes are in the directions of −P and Q, as in Fig. 2.
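Assuming the reconstructed closed form above, the hyperbolic shape is easy to verify numerically; the sketch below evaluates the transverse and parallel components for arbitrary (not necessarily mutually consistent) values of the conserved quantities, and at large |t_s| each exponential dominates separately, which is what produces straight asymptotes.

```python
# Sketch of the special-static-chart geodesic, using the reconstructed form
#   x_perp(t_s) = (L/P) * exp(omega*t_s)
#   x_par(t_s)  = [ (E^2 - m^2 - omega^2 L^2) exp(omega*t_s) - P^2 exp(-omega*t_s) ] / (2 omega E P)
# Values of the conserved quantities below are arbitrary placeholders.
import numpy as np

omega, m, E, P, L = 0.1, 1.0, 1.3, 0.6, 2.0
t_s = np.linspace(-30.0, 30.0, 601)

x_perp = (L / P) * np.exp(omega * t_s)
x_par = ((E**2 - m**2 - omega**2 * L**2) * np.exp(omega * t_s)
         - P**2 * np.exp(-omega * t_s)) / (2.0 * omega * E * P)

print("x_perp range:", x_perp.min(), "to", x_perp.max())
print("x_par  range:", x_par.min(), "to", x_par.max())
```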
Null cones and horizons
On the dS spacetime the null (or light) cones are important since the Killing vector k_(04) giving the conserved energy is time-like only inside the null cone. Outside the light cone it can be space-like, which means that the energy cannot be correctly defined in that domain [10]. However, this is not an impediment, since physical observation can be done only inside the null cones, where we meet the timelike worldlines of massive particles, or on the null cone, which is defined by the worldlines of massless particles. In Fig. 3 we give the example of the worldlines of a massive and a massless particle in the static and special static charts. For analysing the form of the null cones, we focus first on the null geodesics, setting t_c⊥ = −1/ω since then the energy takes the familiar form of special relativity, E = √(m² + P²). In this case the initial conditions in the charts considered here become simple, with x_c0 = x_0 (as in Fig. 1). Under such circumstances, the null geodesics with m = 0 but arbitrary L ≠ 0 can be written down explicitly in each chart.
Notice that these null geodesics are obtained directly from the geodesic equations in closed form given above, apart from the static chart, where we have to use the parametric equations, which simplify accordingly for m = 0. Hereby we may deduce the form of the null cones in the charts under consideration, focusing only on the rectilinear geodesics that pass through the origin, having L = 0, Q = P and standard initial conditions. We then find the corresponding worldlines in each chart, where x_c, x and x_s are the space coordinates along the geodesic direction n_P. These equations give only one of the intersections of the null cone with the plane (t, x), while the second one can be obtained by applying the parity transformation along the direction n_P.
The null cones in different charts have specific forms depending on the coordinates we use, as in Figs. 4-6. As observed before, the dSP chart is the 'physical' one whose space coordinates measure the physical distances. In this chart we observe the event horizon of radius 1/ω, which is the limit of the space domain from which past events can be observed [11]. In the static chart, the condition |x| ≤ 1/ω gives an event horizon and, in addition, restricts the future motions up to a border which plays the role of an 'expectation' horizon (Fig. 6 left), indicating the limit that can be reached by a light beam emitted at the origin when t → ∞. Another horizon of this type is met in the case of the FLRW charts (Fig. 5 left).
Concluding remarks
We studied the geodesic equations in terms of conserved quantities using exclusively Cartesian space coordinates x ∝ z, since in this case the SO(3) symmetry becomes global, such that the Cartesian coordinates and all the conserved vectors we met here transform alike under rotations. The principal advantage of this choice is that we can write the geodesic equations in intuitive, simple forms as, for example, the closed form above, which lays out explicitly the positions of the asymptotes of the geodesic trajectory in the special static charts.
Technically speaking, the global SO(3) symmetry helps us to introduce associated spherical coordinates, x → (r, θ, φ), with r = |x|, whose angular variables are the same in all the charts we met here. This means that the changes of variables will involve only the pair (t, r), which, according to Eq. (10), transform as r_c = r e^{−ωt} = r_s e^{−ωt_s}.
Another advantage is that in all these charts we may consider the same 3-dimensional orthonormal Cartesian basis as in M^5, which helps us to control the position of the plane (P, Q) of the geodesic trajectory. We specify that this basis must not be confused with that of the tetrad vector fields, whose orthogonality is defined with respect to the dS metric.
Finally, we note that all the new results presented here were obtained without using the covariant formalism of general relativity. In other words, when we have conserved quantities we can proceed as in classical mechanics, exploiting first integrals instead of integrating geodesic equations in covariant form. | 2017-12-19T15:46:06.000Z | 2017-11-08T00:00:00.000 | {
"year": 2017,
"sha1": "3851383a5d79d0c4265de71ef46a06faeca467ab",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1711.02956",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "3d120879cfc66d27066a02e55f3b5f86d2bd8e80",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
15449720 | pes2o/s2orc | v3-fos-license | Heavy Baryons and electromagnetic decays
In this talk I review the theory of electromagnetic decays of the ground state baryon multiplets with one heavy quark, calculated using Heavy Hadron Chiral Perturbation Theory. The M1 and E2 amplitudes for (S^{*}->S gamma), (S^{*}->T gamma) and (S->T gamma) are separately analyzed. All M1 transitions are calculated up to O(1/\Lambda_\chi^2). The E2 amplitudes contribute at the same order for (S^{*}->S gamma), while for (S^{*}->T gamma) they first appear at O(1/(m_Q \Lambda_\chi^2)) and for (S->T gamma) are completely negligible. Once the loop contributions are considered, relations among different decay amplitudes are derived. Furthermore, one can obtain an absolute prediction for the widths of Xi^{0'(*)}_c->Xi^{0}_c gamma and Xi^{-'(*)}_b->Xi^{-}_b gamma.
Introduction
In Heavy Hadron Chiral Perturbation Theory (HHCPT) one constructs an effective Lagrangian whose basic fields are heavy hadrons and light mesons [2]-[5]. In ref. [6], the formalism is extended to also include electromagnetism. In this talk I describe how, using this formalism, one can calculate the electromagnetic decay widths of some baryons containing a c or a b quark. The details of this computation are reported in ref. [1]; here I limit myself to tracing its guidelines. In order to classify these baryons, one observes that the light degrees of freedom in the ground state of a baryon with one heavy quark can be either in an s_l = 0 or in an s_l = 1 configuration. The first one corresponds to J^P = 1/2^+ baryons, which are annihilated by the T_i(v) fields, which transform as a 3-bar under the chiral SU(3)_{L+R} and as a doublet under the HQET SU(2)_v. In the second case, s_l = 1, the spin of the heavy quark and the light degrees of freedom combine to form J^P = 3/2^+ and J^P = 1/2^+ baryons which are degenerate in mass in the m_Q → ∞ limit. The spin-3/2 ones are annihilated by the Rarita-Schwinger field S^{*ij}_µ(v) while the spin-1/2 baryons are destroyed by the Dirac field S^{ij}(v). They transform as a 6 under SU(3)_{L+R} and as a doublet under SU(2)_v and are symmetric in the i, j indices. I consider the decays S* → Sγ and S^(*) → Tγ. For most of these decays the available phase space is small, so that the emission of a pion is suppressed or even forbidden and the electromagnetic process becomes relevant. Moreover, these kinds of decays are starting to be measured [7]. In the case of S* → Sγ, all contributions up to order O(1/Λ_χ²) are calculated for the M1 and E2 transitions. All divergences and scale dependence can be absorbed in the redefinition of one O(1/Λ_χ) coupling for each type of process (M1, E2). Eliminating the unknown constants, it is possible to find relations among the amplitudes which are valid up to the considered order. An analogous calculation can be performed for S* → Tγ. In this case, the E2 contribution has to be computed up to order O(1/(m_Q Λ_χ²)), implying the intervention of two new constants. Finally, for S → Tγ the M1 amplitude is calculated up to order O(1/Λ_χ²), while the E2 contribution is found to be extremely suppressed. In the case of S^(*) → Tγ there exists a process which does not receive any contribution from local terms in the Lagrangian, and therefore its width is described by a finite chiral loop calculation. In the following I comment on these results and refer to ref. [1] for the formalism and for a more complete comparison with other results existing in the literature. A similar formalism can be applied to the study of the magnetic moments of the same baryons [8]. (Talk presented at the 4th International Conference on Hyperons, Charm and Beauty Hadrons, Valencia, June 2000. I thank M.C. Bañuls and A. Pich for collaboration. This work has been supported in part by the European Union TMR Network "EURODAPHNE" (Contract No. ERBFMX-CT98-0169). Report: IFIC/00-64.)
Results for S* → Sγ decays
The decay amplitudes are decomposed into M1 and E2 parts, with the corresponding M1 and E2 operators defined accordingly. The resulting M1 amplitudes can be written as a sum of terms with coefficients a_i(B*). Table 1: Contributions to the M1 amplitudes for S* → Sγ. The values of a_g3 can be deduced from those of a_g2 with the substitution I_i → m_i/m_K (i = π, K).
In Table 1 we show the values of the coefficients a_i(B*) for the decays of baryons containing one charm or bottom quark. In the table, ∆_ST is the mass difference between the S and T baryons. Due to flavor symmetry, all contributions are equal for charm and bottom baryons, with the only exception of the term proportional to the heavy quark electric charge (Q_c = +2/3, Q_b = −1/3). The main things to be observed are the following:
• the corrections proportional to g_2² are obtained by performing a one-loop integral (fig. 1 with an S baryon running in the loop) that has to be renormalized. It can be demonstrated [1] that the scale µ dependence of the loop integrals is exactly canceled by the corresponding dependence of the coefficient c_S(µ);
• the contribution proportional to g_3² involves a loop integral with a baryon of the T multiplet running in the loop. Since the Lagrangian does not have any mass term for T baryons, the result of the integral is convergent and proportional to the mass of the light mesons.
Looking at table 1 one sees that relations among the decay amplitudes in which all unknown constants are eliminated can be easily found. A complete list of them is reported in ref. [1].
The M1 and E2 amplitudes have identical SU(3) structure. The only difference is that there are no 1/m_Q terms contributing to E2. Therefore, one can construct for the E2 amplitudes exactly the same relations as in the M1 case.
The E2 amplitudes come at higher chiral order with respect to the M1 ones. Therefore, the E2 contribution to the total width is suppressed by a factor (E_γ/Λ_χ)² ∼ 5%. In principle, it should be possible to determine experimentally the ratio A_E2/A_M1 by studying the angular distribution of photons from the decay of polarized baryons [9][10][11]. The Fermilab E-791 experiment has reported [12] a significant polarization effect in the production of Λ_c baryons, which perhaps could be useful in future measurements of these electromagnetic decays. In ref. [1] it has also been observed that the loop contribution can strongly enhance the decay widths. In other words, the coupling of the photon to the light mesons can give the main contribution to the decay widths.
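A back-of-the-envelope estimate of this suppression can be made with standard two-body kinematics, E_γ = (M_i² − M_f²)/(2M_i); the masses below are placeholders and the scale Λ_χ ≈ 1 GeV is only indicative, so the numbers should not be read as the paper's results.

```python
# Kinematic helper (not from the paper): photon energy in a two-body radiative
# decay B_i -> B_f gamma and the naive chiral suppression factor (E_gamma/Lambda_chi)^2
# discussed in the text. Masses and the chiral scale are placeholder values in GeV.
def photon_energy(m_initial: float, m_final: float) -> float:
    return (m_initial**2 - m_final**2) / (2.0 * m_initial)

def e2_suppression(e_gamma: float, lambda_chi: float = 1.0) -> float:
    return (e_gamma / lambda_chi) ** 2

m_i, m_f = 2.52, 2.29   # GeV, placeholder heavy-baryon masses
e_gamma = photon_energy(m_i, m_f)
print(f"E_gamma ~ {e_gamma * 1e3:.0f} MeV, (E_gamma/Lambda_chi)^2 ~ {e2_suppression(e_gamma):.3f}")
```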
Results for S* → Tγ decays
The M1 and E2 operators for these decays are defined as in Eq. (2). Similarly to what we have done in the previous paragraph, we write the M1 amplitude for S* → Tγ decays in terms of a set of coefficients; the values of the parameters entering this expression can be found in ref. [1]. The final result does not depend on the heavy quark mass or charge. All unknown constants can be eliminated in suitable relations among the amplitudes. The decay Ξ_c^{0*} → Ξ_c^0 γ does not depend on c_ST. Since at O(1/Λ_χ²) this decay does not get any contribution from local terms, its M1 amplitude results from a finite chiral loop calculation (it cannot be divergent because there is no possible counter-term to renormalize it), so that we have an absolute prediction for its value in terms of g_2 and g_3. Using the experimental value of g_3 [13,15] and the corresponding value of g_2 [16] derivable from the quark model, one finds a definite numerical prediction (see also ref. [17]), where the dominant error comes from the uncertainty on g_{2,3}.
Results for S → T γ
The calculation of the M1 amplitude for $S \to T\gamma$ decays is analogous to that of the previous section. Now the M1 operator is defined as in Eq. (12), and the corresponding amplitude can be written in a form [equation omitted] whose coefficients satisfy $a_\chi(B) = a_\chi(B^*)$, $a_g(B) = a_g(B^*)$ (14). Therefore, the relation (6) is also valid in this case. The widths of the decays $\Xi_c^{0\,\prime} \to \Xi_c^0\gamma$ and $\Xi_b^{-\,\prime} \to \Xi_b^-\gamma$ can be predicted through a finite loop calculation, and we find [equation omitted, Eq. (16)]. Again, the dominant error in Eq. (16) is given by the uncertainty on $g_{2,3}$.
For these decays the E2 amplitude is further suppressed than in the previous cases. The lowest-order contribution appears at $O(1/m_Q^3 \Lambda_\chi^2)$ and, therefore, can be neglected. | 2014-10-01T00:00:00.000Z | 2000-10-25T00:00:00.000 | {
"year": 2000,
"sha1": "27dbec3930d3a5084199a0d6fc6f9c29d03be7a3",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-ph/0010285",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "27dbec3930d3a5084199a0d6fc6f9c29d03be7a3",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
249303312 | pes2o/s2orc | v3-fos-license | Impact of Elevated Levels of Dissolved CO2 on Performance and Proteome Response of an Industrial 2′-Fucosyllactose Producing Escherichia coli Strain
Large-scale microbial industrial fermentations have significantly higher absolute pressure and dissolved CO2 concentrations than otherwise comparable laboratory-scale processes. Yet the effect of increased dissolved CO2 (dCO2) levels is rarely addressed in the literature. In the current work, we have investigated the impact of industrial levels of dCO2 (measured as the partial pressure of CO2, pCO2) in an Escherichia coli-based fed-batch process producing the human milk oligosaccharide 2′-fucosyllactose (2′-FL). The study evaluated the effect of high pCO2 levels in both carbon-limited (C-limited) and carbon/nitrogen-limited (C/N-limited) fed-batch processes. High-cell density cultures were sparged with 10%, 15%, 20%, or 30% CO2 in the inlet air to cover and exceed the levels observed in the industrial scale process. While the 10% enrichment was estimated to achieve pCO2 levels similar to or higher than those of the large-scale fermentation, it did not impact the performance of the process. The product and biomass yields started being affected above 15% CO2 enrichment, while 30% impaired the cultures completely. Quantitative proteomics analysis of the C-limited process showed that 15% CO2 enrichment affected the culture on the protein level, but to a much smaller degree than expected. A more significant impact was seen in the dual C/N limited process, which likely stemmed from the effect pCO2 had on nitrogen availability. The results demonstrated that microbial cultures can be seriously affected by elevated CO2 levels, albeit at higher levels than expected.
Introduction
Human milk oligosaccharides (HMOs) constitute important and highly abundant components of mother's milk that provide many health benefits to the neonate including the growth of beneficial gut bacteria and the improved function of the intestinal barrier [1][2][3]. Out of the HMOs in mother's milk, 2′-fucosyllactose (2′-FL) is the most abundant [4] and therefore the most interesting from a commercial point of view. Today, 2′-FL is almost exclusively produced by fermentation where it is formed in vivo by the decoration of lactose with fucose through the action of a heterologous fucosyl transferase (Figure 1). E. coli has been the organism of choice for 2′-FL biosynthesis from the very beginning. In addition to being a well-known and easily modifiable workhorse in industrial biotechnology, it has the advantages of having a native lactose uptake system and a native colanic acid pathway to produce the activated GDP-L-fucose required for the fucosyl transferase reaction. Using E. coli as the production host, fermentations with 2′-FL titers of up to 180 g/L have been reported [5]. Figure 1. Simplified overview of the 2′-FL pathway in the E. coli production strain. The general modifications required for efficient 2′-FL synthesis include the expression of a heterologous fucosyltransferase (encoded by futC), lacZ deletion to avoid breakdown of lactose, overexpression of the colanic acid pathway genes for efficient production of GDP-L-fucose, and the deletion of wcaJ to avoid further conversion of GDP-L-fucose to colanic acid. In addition to 2′-FL, the byproduct difucosyllactose (DFL) can be formed by the addition of a second fucose unit. Its formation rate is relative to that of 2′-FL and depends on the kinetics and rates of the reactions described above.
Fermentation based biomanufacturing has enabled 2 -FL production in E. coli at large industrial scales and it is now routinely produced in fermentation vessels of 200-400 m 3 [5,6], where scale-dependent parameters play an important role and often create unexpected challenges. The long mixing times resulting from such enormous scales can be in excess of 60 s and lead to the formation of gradients in the substrate [7], dissolved oxygen [8,9] and pH [10], which in turn can affect the overall performance of the process [11]. In addition, the hydrostatic pressure together with a typically increased operating pressure increases the solubility of gasses such as O 2 and CO 2 [12]. Of these gasses, CO 2 , which has a relatively high solubility, is known to have negative effects on the stability, yield, and productivity of microbial processes when accumulating to high levels [13,14]. The level of a dissolved gas such as CO 2 is often quantified by its partial pressure above the liquid (pCO 2 ). This measurement is an approximation based on a proportional relation between the partial pressure of the gas and its dissolved level as described by Henry's law [15].
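As a rough illustration of the proportional relation just described, the short Python sketch below converts a measured pCO2 into an approximate dissolved CO2 concentration via Henry's law. The Henry's law constant used is an assumed, generic literature-style value for CO2 in water near the cultivation temperature, not a value reported in this work, and the real broth composition would shift it.

```python
# Rough estimate of dissolved CO2 from its partial pressure via Henry's law.
# The constant kh_M_per_atm is an assumed approximate value for CO2 in water
# at ~33 C; it is NOT taken from this study.

def dissolved_co2_mM(p_co2_mbar: float, kh_M_per_atm: float = 0.026) -> float:
    """Return the dissolved CO2 concentration in mmol/L for a given pCO2."""
    p_atm = p_co2_mbar / 1013.25          # mbar -> atm
    return kh_M_per_atm * p_atm * 1000.0  # mol/L -> mmol/L

if __name__ == "__main__":
    for p in (40, 110, 150, 260):         # mbar, spanning lab- and large-scale levels
        print(f"pCO2 = {p:4d} mbar -> ~{dissolved_co2_mM(p):.2f} mM dissolved CO2")
```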
While many studies have addressed the effect of the various gradients arising from the size of industrial vessels focusing both on appropriate scale-down model development and physiological characterization [8][9][10][16,17], only a handful have dealt with the impact of high pCO 2 levels in E. coli processes [18][19][20][21]. Among these, only Knoll et al. have reported on a substrate-limited fed-batch process [21], which is the preferred mode of operation in fermentation-based manufacturing. The underlying mechanisms of how CO 2 impacts fermentation performance are manifold. The kinetics of fundamental carboxylase and decarboxylase reactions that interconnect cellular anabolism, catabolism, and energy metabolism are directly affected by pCO 2 (Figure 2). Direct toxic effects on membranes, cell structures, and proteins have also been reported [22][23][24][25], and since dissolved CO 2 is in equilibrium with carbonic acid and bicarbonate it also acidifies the broth. CO 2 thereby affects both osmolarity and pH since it triggers the addition of a titrant in pH-controlled fermentations. Increased pCO 2 levels can therefore result in physiological effects stemming from osmotic pressure changes, pH changes, or by the direct impacts of pCO 2 itself [19,26-28]. Since very little has been published on the impacts of pCO 2 in fed-batch processes, the effects on the performance, physiology, and the proteome resulting from an extended exposure to the pCO 2 levels typically encountered in industry are so far largely unknown or kept secret.
With this study we aimed to characterize the impact of elevated pCO 2 on an E. coli based industrial fermentation process. Product yields were determined at different CO 2 enrichment levels and compared to the performance in large-scale operations. The global proteome levels of the laboratory-scale runs were then studied under selected conditions. In addition, as the industrial process was limited on both carbon and nitrogen, the impact of enriched pCO 2 on a C/N-limited process was evaluated and compared to a C-limited process to isolate the impact of CO 2 from any potential differences in nitrogen limitation. To the best of our knowledge, this work shows the impacts of pCO 2 on process performance and bacterial physiology in a high yielding industrial fed-batch process for the first time.
Strain
The strain used in all experiments was derived from E. coli K12 DH1 with the genotype: F -, L-, gyrA96, recA1, relA1, endA1, thi-1, hsdR17, supE44. Additional modifications were made to generate the Strain 0: (i) deletion of lacZ to abolish β-galactosidase activity and prevent hydrolysis of lactose to glucose and galactose; (ii) deletion of the galactoside O-acetyltransferase gene lacA, which encodes for an enzyme that acetylates the galactose residues of oligosaccharides and would thereby lead to increased carbohydrate-type impurities; (iii) deletion of the wcaJ gene that encodes a lipid carrier transferase involved in colanic acid biosynthesis (colanic acid is an extracellular polysaccharide containing fucose and its overproduction dramatically increases the viscosity of the culture medium and acts as a drain on GDP-L-Fucose); (iv) deletion of the glucan biosynthesis glucosyltransferase H gene mdoH (the MdoH enzyme is involved in the biosynthesis of periplasmic glucans, the presence of which complicates the isolation and purification of targeted oligosaccharides); (v) deletion of the transcriptional repressor glpR to achieve higher expression levels of the genes controlled by the modified PglpF promoter used for 2 -FL synthesis; (vi) deletion of the lactose repressor lacI to remove the need for addition of isopropyl β-D-1-thiogalactopyranoside to induce expression of the Plac promoter controlled lactose permease encoded by lacY. Strain 0 was further engineered to generate the 2 -FL producing strain ( Figure 1) used for the experiments by chromosomally integrating two copies of the alpha-1,2-fucosyltransferase futC from Helicobacter pylori 26,695 (homologous to NCBI Accession nr. WP_080473865.1 with two additional amino acids (LG) at the C-terminus) under the control of a modified PglpF promoter [29], and an additional copy of the colanic acid operon (gmd-wcaG-wcaH-wcaI-manC-manB) under the control of the same modifed PglpF promoter. The modified PglpF promoter and the Plac promoter, sans lacI, were both automatically induced in the absence of catabolite repression. Thus, high level expression of the colanic acid genes, futC and lacY, were initiated when the cultures transitioned from catabolically repressed exponential growth in the batch phase into glucose limited growth in the fed-batch phase. Cryovials containing the strain in 25% (v/v) glycerol solution were stored at −80 • C prior to use.
Precultures
Precultures were prepared in two steps and were cultivated overnight at 33 • C with 200 rpm shaking. The frozen E. coli stock was used to inoculate a preculture with 10 mL minimal glucose media in a 50 mL Falcon tube. The medium was composed of 10 g/L NH 4 H 2 PO 4 , 5 g/L KH 2 PO 4 , 1 g/L citric acid, 2.35 g/L NaOH, 1.65 g/L KOH, 5 g/L K 2 SO 4 , 10 g/L trace metal solution, and finally 1 g/L MgSO 4 .7H 2 O and thiamine solutions, which were sterilized and added separately. A second preculture in a 250 mL baffled shake flask with 50 mL of the same minimal glucose medium was inoculated with the overnight culture to a final optical density at 600 nm (OD 600 ) of 0.25. The shake flask was then incubated at 33 • C and 200 rpm for 6-9 h until a final OD 600 of 3-5 and thereafter used to inoculate the main culture.
Fed-Batch Bioreactor Cultivations
Lab-scale fermentations were carried out in 2 L Sartorius Biostat B fermenters equipped with an MFCS-monitoring system (Sartorius). The fermentations were glucose limited fed-batch processes using the same minimal medium composition and feed profile as the large-scale process (confidential) with a starting mass of 1.2 kg. Main cultures were inoculated with liquid precultures to a 2% (v/v) final ratio. Both the fermentation media and the feed contained glucose and lactose. The DO was controlled by a stirring (700-2000 rpm) and airflow (1-3 VVM) cascade set to 23%. The pH level was kept at 6.8 by NH 4 OH titration.
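The DO cascade described above can be pictured as a split-range mapping in which the controller output first drives stirring and only then airflow. The sketch below is purely illustrative: the 50/50 split of the controller range and the linear ramps are assumptions, not the actual Sartorius Biostat B controller logic, and only the actuator limits (700-2000 rpm, 1-3 VVM) are taken from the text.

```python
# Illustrative split-range mapping for the DO cascade described above.
# Setpoint handling and PID tuning of the real controller are not shown.

def cascade_actuators(controller_output: float) -> tuple[float, float]:
    """Map a 0-1 DO controller output to (stirrer rpm, airflow in VVM)."""
    u = min(max(controller_output, 0.0), 1.0)
    if u <= 0.5:                                   # first half of the range: ramp stirring only
        rpm = 700 + (u / 0.5) * (2000 - 700)
        vvm = 1.0
    else:                                          # stirring saturated: ramp airflow
        rpm = 2000.0
        vvm = 1.0 + ((u - 0.5) / 0.5) * (3.0 - 1.0)
    return rpm, vvm

print(cascade_actuators(0.3))   # stirring still below its maximum
print(cascade_actuators(0.8))   # stirring saturated, airflow increased
```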
The system was equipped with pO 2 (Hamilton), pCO 2 (only for a few experiments, iSense 5000i, Mettler and Toledo, Columbus, OH, USA), temperature, and pH (Hamilton) sensors. Statistical analysis of fermentation data was performed in SAS JMP.
CO 2 Enrichment
In the CO 2 enriched fermentations, pure CO 2 was added to the airflow inlet and was initiated when the fed-batch phase started. The CO 2 concentration in the inlet air was kept constant by an airflow controller. The pCO 2 levels were either monitored with a probe or estimated by superimposing the pCO 2 in the inlet gas stream with the pCO 2 levels that were measured in the reference fermentations without enrichment. An overview of all the fermentations performed in this study is presented in Table A1 in the Appendix A.
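A minimal sketch of the pCO2 estimation used for the runs without a probe is given below: the partial pressure contributed by the CO2-enriched inlet gas is superimposed on the pCO2 trace measured in an unenriched reference fermentation. The assumed headspace pressure of ~1013 mbar and the example reference trace are illustrative values, not data from this study.

```python
# Superimpose the inlet-gas CO2 partial pressure on a reference pCO2 trace.

def estimate_pco2(reference_pco2_mbar: list[float],
                  co2_fraction_in: float,
                  headspace_pressure_mbar: float = 1013.0) -> list[float]:
    """Estimated pCO2 trace for an enriched run (e.g. co2_fraction_in = 0.10 for 10%)."""
    inlet_contribution = co2_fraction_in * headspace_pressure_mbar
    return [p_ref + inlet_contribution for p_ref in reference_pco2_mbar]

# Example: a hypothetical reference trace with 10% CO2 enrichment in the inlet air
print(estimate_pco2([30.0, 45.0, 60.0, 50.0], co2_fraction_in=0.10))
```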
Calibration of CO 2 Probe
As a proof of concept, the level of pCO 2 was monitored with a probe (i5000 Mettler and Toledo). The probe was autoclaved in a separate vessel with a batch phase fermentation mineral medium, as described above. After sterilization, the pH was adjusted to 6.8 by NH 4 OH, the temperature was set to 33 • C, and the stirring was set to 700 rpm. A two-point calibration was performed by measuring the pCO 2 level after sparging media with gas mixtures of 20/80% CO 2 /N 2 and 8/10/82% CO 2 /O 2 /N 2 . The calibration process was monitored using i5000 software and saturation was assumed when the values became stable. The probe was then moved to the fermenter used for the experiments inside a sterile laminar flow bench to avoid contamination. In the industrial scale fermenter, the same process calibration was performed before the probe was mounted in the fermenter prior to sterilization.
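The principle of the two-point calibration can be sketched as a simple linear mapping from raw sensor readings to pCO2, as below. The raw readings in the example are hypothetical, and the i5000 firmware performs its own (likely non-linear) internal calibration; this only illustrates the idea of anchoring the probe at two known gas mixtures.

```python
# Minimal two-point linear calibration: raw readings at two known CO2 gas
# fractions are mapped to pCO2. All numeric values below are made up.

def two_point_calibration(raw_lo, pco2_lo, raw_hi, pco2_hi):
    """Return a function converting raw sensor readings to pCO2 (same units as pco2_*)."""
    slope = (pco2_hi - pco2_lo) / (raw_hi - raw_lo)

    def to_pco2(raw):
        return pco2_lo + slope * (raw - raw_lo)

    return to_pco2

# Example: hypothetical raw readings recorded at the 8% and 20% CO2 gas mixes
# (at ~1013 mbar these correspond to roughly 81 and 203 mbar CO2)
convert = two_point_calibration(raw_lo=412.0, pco2_lo=81.0, raw_hi=1030.0, pco2_hi=203.0)
print(convert(700.0))
```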
Sampling and Analytical Procedures
The 2 -FL, DFL, and lactose levels in the fermentation broth samples were quantified by HPLC. Samples taken from the vessel were immediately diluted with deionized water and boiled for 20 min. After the heat-treatment, the samples were centrifuged for 3 min at 17,000× g and the resulting supernatant analyzed by HPLC (Dionex Ultimate 3000 RS, Thermo Scientific, Waltham, MA, USA) using a Supelco TSK gel Amide-80 HPLC column with a 68% acetonitril isocratic solvent. The biomass was monitored as bio wet mass (BWM), defined as the weight ratio of the pellet to the pellet and the supernatant after 3 min centrifugation at 17,000× g. The BWM values were converted into dry cell weight (CDW) using a ratio that was determined in previous experiments (data not shown). Samples for acetate measurements were taken by a 3 mL syringe (HENKE-JECT ® , Henke, Sass, Wolf GmbH, Tuttlingen, Germany) and were directly filtered through a 0.45 µm cellulose syringe filter (30 mm diameter, Thermo Fisher, Waltham, MA, USA). NH 4 + and phosphate levels were estimated from supernatant samples by using Quantofix ® .
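The biomass bookkeeping described above can be summarized in a short sketch. The BWM-to-CDW conversion factor and the broth density used below are placeholders, since the actual ratio from the authors' earlier experiments is not given.

```python
# Sketch of the BWM definition and its conversion to dry cell weight.
# cdw_per_bwm and broth_density_g_per_L are assumed placeholder values.

def bio_wet_mass(pellet_g: float, supernatant_g: float) -> float:
    """Bio wet mass (BWM) as the weight fraction of the pellet after centrifugation."""
    return pellet_g / (pellet_g + supernatant_g)

def cdw_g_per_L(bwm_fraction: float,
                broth_density_g_per_L: float = 1000.0,
                cdw_per_bwm: float = 0.25) -> float:
    """Convert a BWM fraction to dry cell weight using an assumed ratio."""
    return bwm_fraction * broth_density_g_per_L * cdw_per_bwm

bwm = bio_wet_mass(pellet_g=0.35, supernatant_g=0.90)
print(f"BWM = {bwm:.3f}, CDW ~ {cdw_g_per_L(bwm):.1f} g/L (illustrative)")
```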
Proteomics Analysis
Cells were harvested at different time points after feed start in the fed-batch process (6, 30, 80, 120 h). The precise timepoints of each condition are listed in Appendix A, Table A1. Fermentation broth was sampled directly into a syringe filled with ice cold 0.9% NaCl solution which diluted the broth approximately 3-fold. The syringe was measured before and after adding the fermentation broth to calculate the dilution factor and the cell weight. The solution was kept on dry ice until transport to the centrifuge. The solution was centrifuged at 4100× g for 10 min at 4 • C. The pellet was then washed in ice-cold 0.9% NaCl solution and pelleted again by centrifugation for 5 min at 6000× g at 4 • C. The pellet was then immediately placed on dry ice and transferred into a −80 • C freezer where it was stored until the analysis.
Proteome analysis was performed at the DSM Biotechnology Center in Delft, NL. Lysis buffer (PreOmics) was added to the frozen cell pellets and the solutions heated for 15 min at 95 • C. For proteomics analysis, lysates were normalized to an equivalent of 10 mg lysed cells followed by reduction, alkylation, and digestion using trypsin. Samples were analyzed in technical triplicates by liquid chromatography tandem mass spectrometry (LC-MS/MS) using a Vanquish UHPLC coupled to a Q Exactive Plus Orbitrap MS (Thermo Fisher Scientific). Peptides were separated via reverse-phase chromatography using a gradient of water with 0.1% formic acid (solvent A) and 80% acetonitrile with 0.1% formic acid (solvent B) from 5% B to 40% B in 20 min with a flow rate of 400 µL/min. Dataindependent acquisition (DIA) was performed with a resolution setting at 17,500 within the 400-to 1200-m/z range and a maximum injection time of 20 ms, followed by 8 high-energy collision-induced dissociation activated (HCD) MS/MS scans with a resolution setting at 17,500 covering the mass range from 400 to 875 m/z using 60Da collision windows.
Data was analyzed with Spectronaut version 14.10 (Biognosys, Schlieren, Switzerland) [30], using the direct DIA approach with a protein database for the specific strain used in the study allowing Trypsin/P specific peptides including 2 missed cleavages, an oxidation on methionine, carbamidomethylated cysteines, and deamidated asparagine and glutamine. Label-free quantification was performed using the top three unique peptides measured for each protein. Retention time alignment was performed on the most abundant signals obtained from peptides measured in all samples, and results were filtered by FDR of 1% followed by normalization of the result using the median ion intensities measured for each sample.
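The arithmetic behind the label-free quantification steps mentioned above (top-three peptide quantification followed by median normalization) can be illustrated with the toy re-implementation below. Spectronaut's actual algorithms are more involved; the peptide intensities shown are hypothetical.

```python
# Toy illustration: protein quantity from the three most intense unique peptides,
# then per-sample median normalization.
from statistics import median

def protein_quantity(peptide_intensities: list[float]) -> float:
    """Mean of the top-3 peptide intensities for one protein in one sample."""
    top3 = sorted(peptide_intensities, reverse=True)[:3]
    return sum(top3) / len(top3)

def median_normalise(sample_quantities: dict[str, float]) -> dict[str, float]:
    """Scale one sample so that its median protein quantity equals 1."""
    m = median(sample_quantities.values())
    return {protein: q / m for protein, q in sample_quantities.items()}

# Example with two proteins and made-up peptide intensities
sample = {"GltA": protein_quantity([5.0e6, 3.2e6, 2.9e6, 0.4e6]),
          "FutC": protein_quantity([1.1e6, 0.9e6])}
print(median_normalise(sample))
```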
Data Analysis
Proteomics data analysis and differential expression analysis (DEA) were performed in R, using a custom-made package based on limma in R. The protein counts were centralized, and DEA was used to compare the protein expressions at given time points between control and CO 2 enriched conditions. Three different time points were compared from the C-limited fermentations: 6, 30, and 120 h after feed start and two in the C/N-limited fermentations 30 and 120 h after feed start.
The DEA analysis was performed in R using the limma package with linear models. To determine the differentially expressed proteins, standard filtering conditions based on fold change (FC) and significance were used with the following settings: FC > 1.5, p < 0.05.
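The final thresholding step of this differential expression analysis can be sketched as below. The model fits themselves were done with limma in R; this Python fragment only illustrates the |FC| > 1.5, p < 0.05 filter on hypothetical (log2 fold change, p-value) pairs.

```python
# Apply the differential-expression filter: |fold change| > 1.5 and p < 0.05.
import math

def is_differentially_expressed(log2_fc: float, p_value: float,
                                fc_cutoff: float = 1.5, p_cutoff: float = 0.05) -> bool:
    return abs(log2_fc) > math.log2(fc_cutoff) and p_value < p_cutoff

# Hypothetical limma-style output: protein -> (log2 fold change, p-value)
results = {"GadA": (1.4, 0.003), "SucA": (0.45, 0.21), "FlgM": (-1.3, 0.012)}
de_proteins = {name for name, (fc, p) in results.items()
               if is_differentially_expressed(fc, p)}
print(de_proteins)   # GadA and FlgM pass, SucA does not
```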
pCO 2 Levels in Industrial and Laboratory Scale
The pCO 2 levels of an industrial 2 -FL process performed in a 450 m 3 fermentation vessel were measured by a commercial probe located close to the bottom of the vessel and compared to a corresponding laboratory process. As expected, the measured pCO 2 levels were significantly higher in the large fermenter with a two-three-fold increase and a peak level of 150-160 mbar ( Figure 3). These levels were in a range where previous studies had reported negative effects on E. coli cultures, albeit in a batch growth [19]. Follow-up runs using scale-down models with 15% and 30% CO 2 enrichment in the inlet sparging gas were also measured with the CO 2 probe. In addition, scale-down runs with 10% and 20% enrichment were carried out without a probe but had their pCO 2 levels estimated. The measured and estimated pCO 2 levels can be seen in Figure 3.
Dual Limitation in the Fermentation Process
The fermentation process in this study had an unusual trait. While initially being C-limited, the process naturally became N-limited approximately 30 h into the fed-batch phase, whereafter it settled into an oscillatory state seemingly shifting back and forth between N-limitation coupled with a slight overflow metabolism and pure C-limitation. This behavior was observed in both large-and laboratory-scale processes and its onset could be observed by following the [NH 4 + ], but also by the emergence of obvious oscillations of the on-line parameters such as the pH, CO 2 evolution, and dissolved oxygen levels (see example in Figure 4A). As N-limitation and C-limitation have very different regulations of gene expression, [31] any change to the degree of N-limitation was expected to have a profound impact on the physiology of the E. coli strain. This could include shifts in maintenance requirements, metabolic pathway regulation, and the transcriptome and proteome profiles, which in turn could lead to shifts in biomass and product yields. Since the pCO 2 level affects pH and thereby indirectly the N-level via the pH titrant NH 4 OH, it could potentially reduce or even relieve the impact of N-limitation and thereby obscure other effects caused by increased pCO 2 . To introduce a control for this potential bias, laboratory cultivations with and without pCO 2 enrichment with excess nitrogen were also included in the study. In these fermentations, additional NH 4 + was supplemented via the base titrant in the form of (NH 4 ) 2 SO 4 to keep the [NH 4 + ] between 2-3 g/L in the fermenter (Figure 4). Having extra nitrogen available for growth also led to a higher biomass ( Figure 5) after 50 h of the fed-batch phase. A carbon allocation comparison showed that the carbon from this extra biomass was predominantly taken from the product formation, whereas the CO 2 evolution was similar ( Figure 5). While the allocation to product formation at a large-scale was initially similar to either C-or C/N-limited laboratory-scale fermentations, it became significantly lower and had a slightly lower biomass after the onset of C/N-limitation. Instead, CO 2 evolution was much higher.
Fermentation Performance with and without pCO 2 Enrichment
To investigate the impact of high pCO 2 levels, both C-and C/N-limited fermentations were compared at different levels of CO 2 enrichment in at least two independent fermentations. Product yields were evaluated as the sum of the carbon allocated to the produced HMOs (2 -FL + DFL) per glucose added. The summary of the 15 fermentations performed during this study is presented in Table 1. The fermentation that had the closest pCO 2 level to large-scale fermentations was the 10% CO 2 enrichment, which was approximately similar or higher in pCO 2 . (Figure 3). However, this enrichment did not lead to any observable difference in the performance as measured in the product and biomass yields (Table 1, Figure 6C,D). Increasing the enrichment to 15% pCO 2 led to a slightly increased biomass yield for the C/N limited process and increasing the enrichment further to 20% pCO 2 led to an additional biomass increase (Table 1, Figure 6C,D). The pCO 2 enrichment first had an impact on biomass after 40 h, which corresponded with the onset of N-limitation. In addition to biomass, product yields were also affected at 15% and 20% pCO 2 . Again, this effect started at the onset of N-limitation. A further increase of pCO 2 to 30% caused a marked increase in the base consumption right from the onset of the enrichment and led to a complete loss of culture viability approximately 20 h later ( Figure 6C,D). A sample taken at 25 h of fermentation revealed 32.5 g/L acetic acid and 3.7 g/L glutamic acid, showing that the much larger base pull (Appendix C, Figure A1) was caused by acid accumulation. The 15% pCO 2 enrichment was selected as the focus for the proteomics study and the C-vs. C/N comparison even though it had a higher average pCO 2 level than what was measured in the large-scale fermentation (Figure 3). This decision was taken since it was the lowest pCO 2 level that had a discernible impact on the fermentation. It was therefore considered to have a higher likelihood of undergoing a physiological change that could be resolved in the proteomics data and was still at a level with industrial relevance. In the C-limited study, no impact on biomass yield could be seen with the 15% pCO 2 enrichment ( Figure 6B). The enrichment initially led to a lower product yield, but this difference diminished and eventually disappeared over time ( Figure 6A). It should be noted that the deviation was high between the three replicates in the control group caused by a potential outlier ( Figure 6A, black full diamonds). C-limitation on its own decreased the product yield compared to the regular C/N-limited process with and without CO 2 enrichment. (Table 1).
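The yield metric used at the start of this section (carbon allocated to the produced HMOs per glucose added) can be illustrated on a carbon-mole basis as below. The carbon counts follow from the molecular formulas (glucose C6, 2′-FL C18, DFL C24); the molar masses are approximate and the example amounts are hypothetical, not results from Table 1.

```python
# Carbon-mole yield of HMOs (2'-FL + DFL) per carbon-mole of glucose added.
MW = {"glucose": 180.16, "2FL": 488.44, "DFL": 634.58}   # g/mol, approximate
C_ATOMS = {"glucose": 6, "2FL": 18, "DFL": 24}

def hmo_carbon_yield(g_2fl: float, g_dfl: float, g_glucose_added: float) -> float:
    """C-mol of HMO produced per C-mol of glucose added."""
    cmol_hmo = (g_2fl / MW["2FL"]) * C_ATOMS["2FL"] + (g_dfl / MW["DFL"]) * C_ATOMS["DFL"]
    cmol_glc = (g_glucose_added / MW["glucose"]) * C_ATOMS["glucose"]
    return cmol_hmo / cmol_glc

# Hypothetical example: 120 g 2'-FL and 6 g DFL produced from 500 g glucose
print(f"{hmo_carbon_yield(120, 6, 500):.2f} C-mol HMO per C-mol glucose")
```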
Proteome Analysis
A timeseries proteomics study was conducted to evaluate the effect of pCO 2 enrichment under C-and C/N limited conditions. The samples from the enriched processes were compared to their respective 0% references to gain an overview of the impact of elevated pCO 2 levels on the bacterial physiology. For the 15% pCO 2 enrichment analysis, cells were harvested from three or four time points in at least duplicate fermentations. The 10% and 20% pCO 2 enriched fermentation data was derived from single experiments; however, the data in general were very reproducible and therefore single determinations were still included in the data analysis. In the study, which was not optimized for membrane proteins, a total of 1546 proteins were detected.
Identification of Differentially Expressed Proteins
The number of significantly differentially expressed (DE) proteins between 0% and 15% CO 2 enriched conditions at given timepoints is presented in Table 2. In general, there were higher numbers of differentially expressed proteins in the C/N limited condition. A functional Gene Ontology (GO) enrichment analysis was performed to find patterns and used to divide these differentially expressed proteins into groups (Figure 7). GO enrichment analysis revealed that the identified functional groups from the middle and late fermentation phases were similar. The highest-ranking groups were related to nitrogen and carbon metabolism and transportation. In the middle-fermentation phase, tricarboxylic acid (TCA) cycle and arginine biosynthesis related proteins were upregulated with CO 2 enrichment, while transport and glutamate related proteins were downregulated. At the late fermentation stage, transport (especially ABC transporters) and glutamate related proteins were downregulated, whereas glycolysis and pyruvate metabolism related proteins were upregulated (Figure 7).
Table 2. Summary of differentially expressed proteins at various timepoints. Upregulation means higher expression in the CO 2 enriched condition. Significant differential expression was defined as p < 0.05 and FC > 1.5. After the number of differentially expressed (DE) proteins, the numbers of upregulated and downregulated proteins are specified (indicated by ↑ and ↓, respectively).
Far fewer differentially expressed proteins were identified in the C-limited condition (Table 2). It was therefore not feasible to quantitatively group the differentially expressed proteins based on their functions. In the mid-fermentation phase, many of the upregulated proteins were related to acid stress response, such as GadA, GlsA, and GadB. In the late fermentation phase, there were only five differentially expressed proteins left. Three proteins were upregulated with CO 2 enrichment: Ada (3-fold), DmlA (1.7-fold), and EutM (1.5-fold); two were downregulated: FlgM (2.5-fold) and DadA (4-fold). Interestingly, flagellin synthesis-related proteins were downregulated at all timepoints. On the other hand, CsgD, curli operon transcriptional regulatory protein, LolB, outer membrane lipoprotein, and PspC phage shock proteins were all upregulated. These could potentially serve as a protection from the increased pCO 2 .
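The core of a GO over-representation analysis like the one referenced above is a hypergeometric test asking whether a functional term is enriched among the DE proteins relative to the detected proteome. The sketch below illustrates that test; only the 1546 detected proteins match this study, while the term size and DE counts are placeholder numbers.

```python
# Hypergeometric over-representation test for one GO term.
from scipy.stats import hypergeom

def go_term_pvalue(n_detected: int, n_term: int, n_de: int, n_de_in_term: int) -> float:
    """P(observing >= n_de_in_term term members among the DE proteins by chance)."""
    return hypergeom.sf(n_de_in_term - 1, n_detected, n_term, n_de)

# Example: 1546 detected proteins, a hypothetical term with 60 members,
# 74 DE proteins of which 12 belong to the term
print(go_term_pvalue(1546, 60, 74, 12))
```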
There were only a few identified proteins that were commonly changed for the C-and C/N-samples as a result of pCO 2 enrichment. These proteins were: GlsA, DmlA, GltA and Mdh. Thus, malate dehydrogenase and glutamine/glutamate metabolisms were affected by the increased CO 2 levels regardless of the nitrogen limitation.
Time Course Expression Changes of 2 -FL Production and TCA Related Proteins
To investigate the underlying cause of the lower product yields in the enriched fermentations, the expression profiles of proteins involved in the 2 -FL production pathway, TCA cycle, and carboxylation reactions were compared under the different conditions. The targets are listed in Appendix B, Table A2.
In the proteome data, the C/N limited samples with 15% and 20% CO 2 enrichment grouped together while the 10% samples were closer to the control group (Figures 8-10). Figure 9. Protein expression level in the TCA cycle under different fermentation conditions: C/N-limited with 0%, 10%, 15%, and 20% pCO 2 enrichment and C-limited with 0% and 15% pCO 2 enrichment. Y-axis = relative expression in log2, X-axis = fermentation age after feed start in hours.
Proteins from the 2 -FL Production Pathways Were Not Significantly Affected by CO 2 Enrichment
In general, the proteins directly involved with 2 -FL production were not drastically affected by the elevated pCO 2 levels, regardless of the limitation state ( Figure 8). In the C/N limited dataset, Fcl (WcaG), ManA, and FutC had a slightly higher abundance under CO 2 enrichment at the late stage of the fermentation (Figure 8). Under C-limitation, a clear difference was observed in Gmd and FutC abundancy, which both had lower expressions throughout the fermentation (Figure 8).
CO 2 Enrichment Increased TCA Cycle Protein Expression
In general, TCA related proteins were stably expressed under all tested conditions, but many of them were observed to be expressed at a higher level under CO 2 enrichment (Figure 9). Although these patterns were highly reproducible, most of the fold changes did not reach the minimum threshold of abs|FC| = 1.5 in the DEA. Therefore, in strict terms, none of the TCA target proteins had a significantly changed expression under the C-limited condition. On the other hand, several TCA enzymes such as GltA, SucABC, and SdhAB were affected in the C/N limited condition.
Enzymes Involved in Carboxylation and Decarboxylation Reactions
It was hypothesized that pCO 2 levels could affect the expression of enzymes involved in carboxylation and decarboxylation reactions. Therefore, a total of 11 proteins involved in carboxylation (Ppc and Psd) and decarboxylation (MaeAB, PoxB, SucAB, Icd, AceF, PyrF, Lpd, NadC, and HemE) reactions were specifically examined in the dataset. CO 2 enrichment under C-limited conditions led to higher SucAB expression throughout the fermentation process. However, this increase was also observed for the other proteins from the TCA cycle and is therefore not directly linked to changing decarboxylation kinetics (Figure 10). A similar pattern was observed under C/N limited conditions, but only until the middle part of the fermentation. At the late stage, the expression profiles of the CO 2 enriched groups dropped and became similar to that of the control. On the other hand, PyrF, an orotidine-5′-phosphate decarboxylase catalyzing the last step in pyrimidine synthesis, started low but had an increasingly higher expression after 30 h (Figure 10). This behavior was not observed in the C-limited dataset. The expression pattern of other decarboxylases such as Psd, HemE, and NadC was similar to PyrF in the C/N limited dataset but the differences between the conditions were not enhanced to the same extent (Appendix D, Figure A2). Surprisingly, except for Ppc, none of the targeted carboxylases were impacted by the pCO 2 levels under solely C-limitation. For Ppc, the abundance was higher with CO 2 enrichment under both C and C/N limited conditions (Figure 10).
Nitrogen Uptake Proteins
As expected, proteins involved in nitrogen assimilation were differentially expressed when comparing the C-and C/N-limited datasets. This was shown in the expression profiles of GlnA, GltB, and GltD (Appendix E, Figure A3). In the C/N limited condition, GlnA expression was increased at the timepoint when the culture reached ammonium limitation approximately 40 h into the fed-batch phase. It was also clear that the high expression of GlnA started later in the CO 2 enriched samples showing that CO 2 enrichment delayed the start of the nitrogen limitation.
Discussion
Based on the results from previous studies [18,19] and observed differences between large-and laboratory-scale fermentations, we expected to see increased pCO 2 levels effect product and biomass yields at a lower level of enrichment than what was actually observed. The absence of any clear impact at 10% enrichment was surprising considering the measured pCO 2 level in the large-scale fermentation was substantially lower than what this enrichment yielded throughout most of the fermentation (Figure 3). However, as the results reported by [18,19] were performed under very different physiological conditions with cultures grown in batch-mode and producing a protein instead of a metabolite, this could indicate that the impact of pCO 2 is different depending on growth rate, and perhaps imposed production demand, medium composition, and nutrient availability. The observed differences in biomass and product yields between factory and laboratory even after CO 2 enrichment thus provided a hint of other scale dependent factors being at work. It should be noted that we were not able to closely replicate the large-scale CO 2 profile in the scale-down reactor as we could only enrich with a fixed percentage in the inlet gas stream. The large-scale vessel would therefore always have a more dynamic CO 2 profile with larger differences between the peaks and troughs (Figure 3). Nonetheless, the pCO 2 level in the large-scale fermentation was within the range encompassed by the 0% and 10% enrichments but for a few hours at the very peak in the early fermentation phase (Figure 3). While the 10% enrichment did not show a significant impact, this could be achieved by increasing the CO 2 enrichment to 15% or 20%. While these enrichment levels resulted in a CO 2 level that was higher than what was observed in our process, they were still within a range that is encountered in industrial operations [19].
A further increase of the CO 2 enrichment up to 30% was also tested. This led to rapid acetic acid accumulation and a loss of culture viability shortly after the enrichment was initiated ( Figure 6C,D). A likely explanation is that the high pCO 2 level impacted the growth rate of the strain. The feeding profile used in this study was designed to avoid an accumulation of acetate from overflow metabolism [32,33]. However, if the µmax was significantly reduced by the elevated pCO 2 levels, the threshold growth rate where acetate accumulation started was likely also reduced. Since the accumulation of acetate also reduces the µmax [32,[34][35][36], this can quickly lead to a negative spiral with ever more acetate formation and growth rate reduction, eventually leading to a complete loss of the culture. This was precisely what was seen with very high base titration indicating that acid accumulation was already at the onset of the CO 2 enrichment which continued to increase until the cultivation collapsed (Appendix C, Figure A1). The formation of high levels of acids was confirmed by an end point measurement of 32.5 g/L (542 mM) acetic acid, a level that is toxic to E. coli and highly inhibitory to growth [36,37]. This behavior also closely mimicked what we have observed when we increased feed rates in the past. These results together with reported results in the literature show that the precise onset of pCO 2 growth inhibition is highly dependent on the organism and the growth conditions. Castan et al. reported a negative impact of 9.75% CO 2 enrichment with E. coli K12, that was further reduced to 19.48%, whereas Baez et al. reported a positive impact at 20 mbar, which turned negative at 70 mbar also using a K12 strain [19]. The negative impact was then magnified when increasing the pCO 2 level furter to 150 mbar and 300 mbar. In contrast, Knoll et al. surprisingly did not report any negative impacts on growth even when CO 2 accumulated to a level of 800 mbar in an aerobic glycerol-limited fed-batch process under highly elevated pressure [21]. This pCO 2 level was much higher than the maximum level of 260 mbar that was measured in the 30% enriched fermentations that led to a rapid culture loss with our process. The growth rate resulting from the feeding profile they used was also significantly higher following a feeding profile corresponding to a µ of 0.153 h −1 , which should make the situation even worse. However, it should be noted that we have observed that the heavy burden of metabolic pathway overexpression and metabolite production can reduce the threshold growth rate where overflow metabolism starts quite substantially and increase the sensitivity towards runaway acetic acid caused culture failures (data not shown). This has necessitated the use of less aggressive feeding profiles in our process. A key difference was also that they used a stepwise increase in pressure and thereby pCO 2 level and that their fermentation was much shorter. Indeed, they did observe a dramatic increase in osmotic pressure after 22 h together with an accumulation of mixed acids indicating that the growth would not be sustainable for long at this pressure and pCO 2 level.
Due to the particular traits of the process, this study also looked into the impact of sole carbon and C-N double limitation and how these limitations interacted with increased pCO 2 levels. The potential for combining C-limitation with another nutrient limitation to redirect part of the carbon and energy consumption from the biomass into product formation is well known and can be an attractive choice for industrial production [38], [39]. Here, C/N-limited conditions were indeed shown to reduce biomass formation and increase product yield compared to C-limitation alone. In contrast to C/N-limitation, under Climitation, no impact on biomass yield could be seen with the 15% pCO 2 enrichment. This implied that the increased biomass in the 15% pCO 2 enriched C/N-limited runs were indeed a result of increased nitrogen availability. The product yields for the C-, C/N-and, large-scale runs were also very close until approximately 30 h, which corresponded with the onset of N-limitation ( Figure 6). After this point, the large-scale fermentation increased its carbon allocation to CO 2 and decreased its allocation to the biomass and products. Thus, maintenance energy requirements were unanticipatedly increased. It is unknown whether this was caused by a change in physiology at the onset of C/N-limitation to one that was less well suited to the large-scale environment or if it coincided with a change in the mixing regime resulting in increased gradients in the large vessel as the volume increased and different impellers were engaged. In light of this result, it would be interesting to see if a relief of N-limitation could improve the yields post 30 h in large-scale fermentations.
The proteome analysis revealed that high pCO 2 levels induced a greater number of differentially expressed proteins under C/N-limited conditions than C-limited. This was no surprise considering that the elevated CO 2 level indirectly affected the degree of Nlimitation by increasing the NH 4 OH titration, which was expected to have a major impact on the physiology. We did observe an increased GadBCE and GlsA glutaminase expression under both C-limited and C/N-limited conditions. Though the differential expression did not show up after filtering in the C-limited samples and only after 120 h in the C/N-limited fermentations. This response suggested that the intracellular pH was acidified by dissolved CO 2 . This has also been reported in other studies where CO 2 triggered an acid response [40] or an increased GadABC expression level [18]. A general trend of higher expression values for TCA related proteins when exposed to CO 2 enrichment was also observed for both C-and C/N-limited cultures. Since CO 2 enrichment did not result in increased biomass formation under C-limitation, the higher TCA expression of especially GltA, Ppc, SdhAB, and SucABC was not likely a result of increased anaplerosis. However, it must be noted that reactions affected by high pCO 2 levels would not necessarily lead to enzyme level changes.
A general trend of higher expression values for TCA related proteins when exposed to CO 2 enrichment was also observed for both C-and C/N-limited cultures. This observation was the opposite of what Baez et al. found in their study, which showed lower carbon flux to TCA and reduced biomass yield under batch conditions. This was not unexpected as the growth rate in our fed-batch fermentations was much lower than the non-limited growth rate under batch conditions and was therefore not exhibiting overflow metabolism.
Interestingly, the C-limited samples showed a higher expression of most TCA enzymes even in the control condition at all timepoints. A higher expression of SucABC in the CO 2 enriched samples also suggested a higher flux in the TCA cycle and/or increased nitrogen assimilation. A higher TCA cycle flux could be related to increased Ppc (phosphoenolpyruvate carboxylase) activity, which has been observed to be upregulated in Saccharomyces cerevisiae under high CO 2 concentrations [41]. Ppc fixates CO 2 by carboxylation of the less reactive bicarbonate anion (HCO 3 -) in the cytoplasm to form oxaloacetate from phosphoenolpyruvate [42]. In addition to being upregulated in our dataset, its reaction would be favored by higher pCO 2 levels and increase the supply of oxaloacetate to the TCA cycle.
Surprisingly, along with the upregulation of the L-malate dehydrogenase Mdh in the TCA cycle, a decarboxylating D-malate dehydrogenase DmlA was also upregulated, whereas MaeAB, a decarboxylating L-malate dehydrogenase, did not change its expression. The DmlA enzyme reduces D-malate to pyruvate and CO 2 under anaerobic conditions and it is also involved in L-leucine biosynthesis [43]. Lukas et al. found that DmlA was essential for growth on D-malate under aerobic conditions and other C4-dicarboxylates were also seen to induce dmlA such as L-and meso-tartrate, while succinate did not trigger the expression [43]. From the data we have, we could not find an explanation for why it was induced under the tested conditions.
Conclusions
This study was designed to test how elevated pCO 2 levels affect E. coli physiology and whether high pCO 2 concentration observed in a very large industrial fed-batch process could account for the reduced product yield compared to the same process at the laboratoryscale. While it was not possible to obtain a close mimic of the pCO 2 profile in the laboratory, it was observed that a process with 10% enrichment, which produced an equal or higher pCO 2 level (of around 110 mbar) compared to the 450 m 3 vessel throughout most of the process, did not affect the product formation. Therefore, other factors, alone or in combination with elevated pCO 2 , are required to account for the observed yield difference between the scales. The main candidate would be chemical gradients formed by the longer mixing times in the large vessel where particular variations in glucose concentration would be a prime suspect. However, increasing the pCO 2 concentration beyond the level that was seen with the 10% enrichment by using 15% enrichment did impact product and biomass yields and increasing it further to 30% caused a full collapse of the culture. While both C and C/N limited cultures saw reductions in product yield, the CO 2 enrichment only affected the biomass yield for the C/N-limited cultures indicating that this effect was due to a reduction of the degree of N-limitation. This was also reflected in the proteomics analysis which revealed surprisingly few changes for the C-limited condition. Here, the major differentially expressed proteins were in the TCA cycle, the two 2 -FL pathway related proteins Gmd and FutC, and the proteins involved in the acid stress response. Of these, changes to the TCA cycle and the 2 -FL pathway could potentially impact yields. However, the C-limited yield difference mainly manifested in the beginning of the fermentation when the proteomics differences in the 2 -FL pathway were very small. Thus, changes to the energetics, in which the TCA cycle plays a part, seems a more likely candidate.
Author Contributions: G.G. and T.J. designed the study and wrote the manuscript. G.G. acquired and analyzed the fermentation data. G.G. acquired and analyzed the proteomics data. A.V. performed proteomics analysis. G.G. performed visualization. P.B., M.K. and A.V. revised the manuscript for scientific content. All authors have read and agreed to the published version of the manuscript.
Data Availability Statement:
Restrictions apply to the availability of these data. The limited dataset that supports the findings of this study are available upon reasonable request from the authors and with the permission of Royal DSM.
Acknowledgments:
We thank our colleagues at DSM Hørsholm and Delft helping with analytical support and fruitful discussions.
Conflicts of Interest:
The authors declare no conflict of interest.
Appendix A Table A1. Overview of the fermentations in the study. The number of replicate fermentations carried out for each condition is summarized as well as the number of fermentations used for the proteomics study. Sample timepoints for the proteomics study are listed. | 2022-06-03T15:22:51.566Z | 2022-06-01T00:00:00.000 | {
"year": 2022,
"sha1": "57ee0136e3240224b7b377c4f7d62a30a56e6277",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-2607/10/6/1145/pdf?version=1654161699",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9ae2236732dc12bfe780f5fa7e84f9d35992cad1",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
240869136 | pes2o/s2orc | v3-fos-license | Stay at Home: Malaysian Youth Perception towards Online Shopping as The New Norms
The main purpose of this study is to examine Malaysian youths' online shopping perceptions and their intention to purchase during the trying time of the Covid-19 pandemic in Malaysia. We assess four independent variables, namely perception of web design, perception of reliability, perception of privacy, and perception of customer service, and four hypotheses were postulated in this study. To analyze the findings, the researcher applied the Partial Least Squares (PLS) approach, and a cross-sectional study was conducted with 198 respondents selected through judgmental purposive sampling among Malaysian youth. Results showed that all the variables have a positive relationship with Malaysian youths' purchase intention.
Introduction
The World Health Organization's Director-General, Dr. Tedros Adhanom Ghebreyesus, announced on 10 April 2020 that the COVID-19 outbreak had affected 213 countries, with 1,524,162 confirmed positive cases and 92,941 deaths. The virus first spread rapidly among residents of Wuhan City, Hubei province, China. As for Malaysia, the government announced a Movement Control Order 1.0 (MCO) that ran from 18 March 2020 until 31 August 2020. The virus was first identified as spreading in Malaysia on 25 January 2020 (Borneo Post, 2020).
During the MCO 1.0, all businesses needed to close so that Malaysians would remain at home, and this affected sales activities. Working from home became a new norm across all forms of employment. Due to this, more businesses started to sell their products and services online. Malaysia's online retail recorded an increase of 28.9% in April 2020 (The Star, 2020). This showed that most consumers had started to move from physical stores to online stores. It was added that the demand for online shopping kept increasing and most consumers started to buy grocery items such as food staples, personal care, and household products online.
According to Tung (2012), most sellers believe that online shopping will become an effective method to grow their business and has high potential to attract more customers. The connection of online shopping with globalization, technology, and the internet has become the new trend among consumers (Pappas et al., 2014). Many companies are implementing online retail to reach consumers across the globe and allow them to buy products online (Gehrt et al., 2012). This shows that the internet is offering innovative methods for businesses to manage information and better serve their customers (Okasha, 2019). Successful companies such as Alibaba, Tencent, Amazon, and Groupon have shifted their business model from brick-and-mortar to brick-and-click (Lim, Osman, Salahuddin, Romle, & Abdullah, 2016). Additionally, young consumers' perception of online shopping has emerged as an important topic (Malviya & Sawant, 2014). Online retailers have a high interest in marketing their websites and products among young consumers since they hold a big chunk of the share in online shopping (Shah Alam, Bakar, Ismail, & Ahsan, 2008). Hence, realizing the drastic increase in and opportunity of online shopping during MCO 1.0, the researcher wanted to identify Malaysian youths' perceptions regarding their intention to shop.
Literature Review Purchase Intention
Purchase intention can be defined as the consumer's preference to purchase a product or service in the future after initially learning about it (Sheng & Kim, 2019). Internet China (2017) defined purchase intention as the willingness of the consumer to buy products via the internet or by using online technology. Online shopping intention is also related to the desire to take part in online transactions within a website (Octavia and Tamerlane, 2017). An individual's intention can be influenced by many factors and can change at any time (Wang et al., 2007). One of these factors is the website design or characteristics, which are able to affect purchase intention (Doulatabadi & Sheng, 2020). Another factor is perceived value, which is also able to influence consumers' decisions (Thomas et al., 2018). Consideration of brand image and product features also influences purchase intention (Liu et al., 2016).
Perception of Web Design
In online business, it is important for the seller to focus on website design because it may be one of the most important factors in attracting the customer's attention with good content and images (Dang & Pham, 2018). Consumer perception of web design is also affected by two factors, namely "ease of use" and "information content" (Demangeot & Broderick, 2010). A good quality website has good features and characteristics as it considers the needs and wants of the consumers (Al-Debei, Akroush, & Ashouri, 2014). The functionality of a website is determined by browsing, ordering, and information locating, and by the speed with which these activities complete the transaction. The findings from Dong and Seon (2010) in the Republic of Korea showed that features that include graphics and colors provide a better shopping experience for consumers and directly affect purchase intention (Mansori, Liat, & Shan, 2012). Also, an informative website contributes to ease of use, allowing consumers to make comparisons and increasing their satisfaction when purchasing products. If consumers feel happy using the website, they will likely shop at the website again (Jie, Peiji, & Jiaming, 2007).
Perception of Reliability
Reliability refers to the ability of the seller to build consumer trust in their products or services (Dang & Pham, 2018). Reliability can be related to the dependability of the seller or retailer in getting consumers to trust an online website and have confidence while using it (Mittal & Agrawal, 2016). Online shopping is now a new tool for purchasing compared to physical stores (Dang & Pham, 2018). Unfortunately, online shopping can be a challenge as consumers are unable to feel the product physically (Dang & Pham, 2018). Most consumers choose to purchase online because of their belief in the reliability of the online shop (Liu & Arnett, 2000). Furthermore, reliability is a significant factor in consumer trust towards websites' quality (Ha & Stoel, 2009).
Perception of Privacy
Security and privacy are two factors that concern consumers when using online shopping platforms and can hinder them from continuing to use them (Levy & Weitz, 2016). Privacy concern is also related to perceived risk, attitude toward online purchase, and consumer behavior (Dang & Pham, 2018). When consumers buy something online, they will feel worried if personal information is exposed without their consent (Kotler & Armstrong, 2016). As for online shopping, since most of the transactions involve online banking or credit cards, consumers may fear that their bank or credit card information will be stolen and used for online shopping fraud (Dang & Pham, 2018). When consumers feel secure sharing their personal information and performing online transactions, it will encourage them to continue to purchase through the website (Nasni Naseri, Othman, & Wan Ibrahim, 2020).
Perception of Customer Service
Customer service is an important element in every business sector, whether it is an offline or online business (Eng, 2008). A lack of or limited communication with consumers may affect online sellers severely, and it should always be their main priority in business (Hamad et al., 2017). Here, customer service refers to services provided for consumers before, during, and after a purchase (Wolfinbarger & Gilly, 2003). Good and efficient customer service becomes a competitive advantage that helps a business sustain itself (Dang & Pham, 2018), since customer service contributes to high satisfaction for consumers when purchasing online (Lee & Lin, 2005). It is important to provide a user-friendly site and create easy-to-use, accessible tools to help sellers improve their service quality (Lin & Sun, 2009). Based on the above, the following hypotheses were postulated:
H1: There is a positive relationship between perception of web design and purchase intention.
H2: There is a positive relationship between perception of reliability and purchase intention.
H3: There is a positive relationship between perception of privacy and purchase intention.
H4: There is a positive relationship between perception of customer service and purchase intention.
Methodology
To determine a minimum sample size for this study, the researcher used a GPower analysis, and it was found that 89 respondents would be adequate. However, for the purpose of this study, 300 questionnaires were distributed and 198 responses were received. Two filter questions were designed to ensure the respondent criteria were fulfilled: 1) the respondent's age should be between 21 and 30 years old, and 2) the respondent must have shopped online at least once during MCO 1.0. In terms of demographics, more than half of the survey respondents were female (77.7%), most were aged between 21 and 24 years old (73.6%), and most were Malay (93.3%).
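An a-priori sample-size calculation of the GPower type can be reconstructed as below for a multiple regression F-test with four predictors. The effect size, alpha, and power settings used by the authors are not reported, so Cohen's medium effect f² = 0.15, α = 0.05, and power = 0.80 are assumed here, which yields a figure in the same range as the reported 89.

```python
# Hedged reconstruction of an a-priori sample-size calculation for a
# fixed-model multiple regression F-test (assumed settings, not the authors').
from scipy.stats import f as f_dist, ncf

def required_n(f2: float = 0.15, n_predictors: int = 4,
               alpha: float = 0.05, target_power: float = 0.80) -> int:
    n = n_predictors + 2
    while True:
        dfn, dfd = n_predictors, n - n_predictors - 1
        f_crit = f_dist.ppf(1 - alpha, dfn, dfd)
        power = ncf.sf(f_crit, dfn, dfd, f2 * n)   # noncentrality lambda = f2 * N
        if power >= target_power:
            return n
        n += 1

print(required_n())   # roughly 85 under these assumed settings
```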
Data were collected using a structured questionnaire. The statements measuring the constructs were rated on a five-point Likert scale anchored from "1 = completely disagree" to "5 = completely agree", with "3 = neutral." Regarding measures, the items for perception of web design, perception of reliability, perception of privacy, and perception of customer service were adapted from the work of Chiu et al. (2009), while the purchase intention items were adapted from Hsu et al. (2006). The reliability values for all of the variables ranged between 0.814 and 0.877, which is acceptable.
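The internal-consistency check behind the reported reliability range can be illustrated with Cronbach's alpha computed from item scores, as in the sketch below. The response matrix is a tiny made-up example of 5-point Likert answers, not data from this survey.

```python
# Cronbach's alpha from an item-score matrix (rows = respondents, columns = items).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

scores = np.array([[4, 5, 4, 4],
                   [3, 3, 4, 3],
                   [5, 5, 5, 4],
                   [2, 3, 2, 3],
                   [4, 4, 5, 4]])
print(round(cronbach_alpha(scores), 3))
```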
Results & Findings
The research model was analysed with a partial least squares (PLS) approach using the SmartPLS 3.0 software (Ringle, Wende & Becker, 2015). A two-stage analytical procedure was applied, consisting of the measurement model (validity and reliability) and the structural model (hypothesis relationship testing) (Ramayah et al., 2015). Finally, a bootstrapping method (5,000 resamples) was employed to test the path coefficients and loadings for this study (Hair et al., 2014). In terms of the measurement model, the loadings from this study were more than 0.6 and the AVE was also higher than 0.5, which is in accordance with the suggestion by Hair et al. (2014). The discriminant validity of the measures was examined using the Fornell-Larcker (1981) criterion, which requires the square root of the AVE to exceed the correlations with all other constructs, and the heterotrait-monotrait ratio of correlations (HTMT), where an HTMT value above 0.90 indicates a discriminant validity problem. Both criteria were fulfilled in this study and the adequacy of the discriminant validity was verified.
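The Fornell-Larcker check described above can be sketched as follows: discriminant validity holds when the square root of each construct's AVE exceeds its correlations with the other constructs. The AVE and correlation values below are made up for illustration and are not the study's results.

```python
# Fornell-Larcker discriminant validity check on illustrative values.
import numpy as np

constructs = ["WebDesign", "Reliability", "Privacy", "CustService", "Intention"]
ave = np.array([0.62, 0.58, 0.66, 0.60, 0.64])
corr = np.array([
    [1.00, 0.48, 0.41, 0.52, 0.55],
    [0.48, 1.00, 0.45, 0.50, 0.51],
    [0.41, 0.45, 1.00, 0.46, 0.49],
    [0.52, 0.50, 0.46, 1.00, 0.57],
    [0.55, 0.51, 0.49, 0.57, 1.00],
])

sqrt_ave = np.sqrt(ave)
for i, name in enumerate(constructs):
    off_diag = np.delete(corr[i], i)
    ok = sqrt_ave[i] > off_diag.max()
    print(f"{name:12s} sqrt(AVE)={sqrt_ave[i]:.3f}  max corr={off_diag.max():.2f}  pass={ok}")
```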
Assessment of Hypotheses Using the Structural Model
All four hypotheses were supported. The finding for the perception of web design is consistent with earlier work (2014), which showed that web quality has a significant relationship with consumers' attitudes towards online shopping. As for the perception of reliability, the hypothesis is supported by Shah Alam, Bakar, Ismail, & Ahsan (2008), who found that reliability has a significant relationship with purchase intention. In terms of the perception of privacy, a significant relationship between privacy and purchase intention was also reported by Kasuma, Kanyan, Mohd Khairol, Sa'ait, & Panit (2020), who showed the same result. Lastly, the perception of customer service is an important key element in attracting consumers to purchase their preferred products through online shops (Dang & Pham, 2018).
The new norms due to the COVID-19 pandemic have changed every aspect of our lives. Specifically, this study sought to determine the factors that attract young consumers to purchase online. From the results, the perception of web design was the most significant factor in the decision to buy via the internet. Businesses whose target market is young consumers need to highlight this element to attract consumers to repeat purchases in their online shops. 'Brick and mortar' strategies are less relevant during this trying time of pandemic. The study suggests that online shopping is the best option for businesses to stay relevant and compete. Yet online stores require their own unique features, and this study examines youth consumers' preferences in online shopping.
Algebras and Hilbert spaces from gravitational path integrals: Understanding Ryu-Takayanagi/HRT as entropy without invoking holography
Abstract: Recent works by Chandrasekaran, Penington, and Witten have shown in various special contexts that the quantum-corrected Ryu-Takayanagi (RT) entropy (or its covariant Hubeny-Rangamani-Takayanagi (HRT) generalization) can be understood as computing an entropy on an algebra of bulk observables. These arguments do not rely on the existence of a holographic dual field theory. We show that analogous-but-stronger results hold in any UV-completion of asymptotically anti-de Sitter quantum gravity with a Euclidean path integral satisfying a simple and familiar set of axioms. We consider a quantum context in which a standard Lorentz-signature classical bulk limit would have Cauchy slices with asymptotic boundaries $B_L \sqcup B_R$ where both $B_L$ and $B_R$ are compact manifolds without boundary. Our main result is then that (the UV-completion of) the quantum gravity path integral defines type I von Neumann algebras ${\cal A}^{B_L}_L$, ${\cal A}^{B_R}_{R}$ of observables acting respectively at $B_L$, $B_R$ such that ${\cal A}^{B_L}_L$, ${\cal A}^{B_R}_{R}$ are commutants. The path integral also defines entropies on ${\cal A}^{B_L}_L, {\cal A}^{B_R}_R$. Positivity of the Hilbert space inner product then turns out to require the entropy of any projection operator to be quantized in the form $\ln N$ for some $N \in {\mathbb Z}^+$ (unless it is infinite). As a result, our entropies can be written in terms of standard density matrices and standard Hilbert space traces. Furthermore, in appropriate semiclassical limits our entropies are computed by the RT-formula with quantum corrections. Our work thus provides a Hilbert space interpretation of the RT entropy. Since our axioms do not severely constrain UV bulk structures, they may be expected to hold equally well for successful formulations of string field theory, spin-foam models, or any other approach to constructing a UV-complete theory of gravity.
Introduction
The last few years have seen significant progress in our understanding of gravitational entropy. Some important steps forward were the discovery of non-trivial quantum-extremal surfaces in the context of black hole evaporation [1,2], as well as the understanding of their relation to gravitational replica calculations [3,4]. These results in turn relied on the general connections between gravitational replicas and (quantum) extremal surfaces derived in [5][6][7]. As is by now well-known, these observations led to gravitational computations consistent with the so-called Page curve [8,9], which is expected from the ideas that black holes are unitary quantum systems with a finite number of internal states and that the number of such states is well-approximated by the exponential of the appropriate Bekenstein-Hawking entropy S BH . The analysis of Hawking radiation is particularly clean in settings where the emitted Hawking radiation is transferred from an asymptotically locally anti-de Sitter (AlAdS) gravitational system to a non-gravitational quantum mechanical system; i.e., to a system which can depend on a metric only as a fixed non-dynamical background. Such systems have often been called 'baths' in the recent literature. In this context, and in appropriate semiclassical limits following [5][6][7], the above results imply that the usual von Neumann entropy of the bath can be studied using quantum extremal surfaces describing what [10] termed 'islands', and that it is given by a formula which is a special case of the quantum-corrected Ryu-Takayanagi/Hubeny-Rangamani-Takayanagi (RT/HRT) formula [11][12][13], with quantum corrections understood in the sense of [14].
While such arguments were motivated by considerations related to the AdS/CFT correspondence [15] (or equivalently from gauge/gravity duality or gravitational holography), the final versions of the arguments rely only on properties of the gravitational path integral. In particular, at least for bath entropies described by the Island Formula, one may safely interpret the result in terms of standard von Neumann entropies without assuming the gravitational bulk system to admit a holographic field theory dual. The only subtlety here is that (see e.g. [16][17][18][19]) the semiclassical bulk gravitational theory appears to allow baby-universe superselection sectors (often called $\alpha$-sectors) of the form described in [20,21], and that the Island Formula in fact characterizes the von Neumann entropy $S_\alpha$ of the bath state $\rho_\alpha$ in a typical $\alpha$-sector [17,18] by describing an average over such bulk $\alpha$-sectors. This explains the observation of [22] that the computation fails to take the form expected for the von Neumann entropy of the bath computed in the total bath state $\rho_{\rm total}$ (which in the above notation takes the form $\rho_{\rm total} = \oplus_\alpha\, \rho_\alpha$).
The fact that purely bulk arguments suffice to safely interpret quantum extremal surface computations for a bath in terms of standard bath entropies suggests that this lesson may also hold more generally. In particular, in order to avoid divergences, let us consider a boundary region $B_L$ (in the sense of Ryu and Takayanagi [11,12]) that is both compact and without boundary ($\partial B_L = \emptyset$); see figure 1. Here the notation $B_L$ denotes the fact that, in the main text below, we will refer to $B_L$ as the 'left' part of the boundary, while the complementary boundary region $B_R$ will be called the 'right' part of the boundary (which we will also require to be compact and to satisfy $\partial B_R = \emptyset$).

Figure 1. We consider boundaries $B_L$, $B_R$ that are complete in the sense that $\partial B_L = \emptyset = \partial B_R$. We also require $B_L$, $B_R$ to be compact.

In this context we might expect that purely-bulk arguments can be used to construct a Hilbert space $\mathcal{H}_L$ associated with $B_L$, or perhaps a set of such Hilbert spaces $\mathcal{H}^\mu_L$ (labelled by some index $\mu$), such that the associated RT/HRT formula can be understood in terms of
$$S_{vN}(\rho^\mu_L) := -\operatorname{Tr}_\mu \rho^\mu_L \ln \rho^\mu_L, \qquad (1.1)$$
where $\rho^\mu_L$ is the density matrix describing the bulk quantum state on $\mathcal{H}^\mu_L$ and $\operatorname{Tr}_\mu$ is the standard Hilbert space trace on $\mathcal{H}^\mu_L$. This is the challenge to be addressed below. In certain limiting cases, related results were recently established by Chandrasekaran, Penington, and Witten [23] (building on [24] and [25,26]), and especially by Penington and Witten [27]; see also [28] and [29]. However, the fact that their von Neumann algebras were type II rather than type I meant that their entropies were not given directly by standard Hilbert space traces. A related comment is that the results of [23,27] were valid only in a bulk semiclassical limit in which Hilbert space densities of states diverge and thus that their entropies correspondingly agree with the quantum-corrected Ryu-Takayanagi formula only up to an additive constant.
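To see why type II algebras compute entropies only up to an additive constant, recall the standard algebraic fact (not specific to the present paper) that a type II trace is defined only up to an overall rescaling. If $\operatorname{Tr}' = \lambda \operatorname{Tr}$ is an equally good trace, the density matrix and entropy of a fixed state transform as
$$\rho' = \rho/\lambda, \qquad S' = -\operatorname{Tr}'(\rho' \ln \rho') = S + \ln\lambda,$$
so all entropies shift by the same constant $\ln\lambda$. On a type I algebra the trace can be normalized so that minimal projections have unit trace, which removes this ambiguity; this is the normalization implicit in the standard Hilbert space trace $\operatorname{Tr}_\mu$ of (1.1).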
In contrast, we wish to consider a context in which Hilbert space densities of states are finite so that the above entropies will not require renormalization. This should allow all constants to be determined. However, it would also require appropriate couplings to be finite. In this finite-coupling regime, a primary question will be to understand how the choice of boundary region $B_L$ can define the desired Hilbert spaces $\mathcal{H}^\mu_L$. In particular, we will be far from the semiclassical regime in which entanglement wedges are well-defined; see e.g. the discussion in the final paragraph of [30].
Of course, the bulk path integral at finite coupling is poorly understood. Rather than attempt to find and study a UV-completion for any specific model, we instead proceed by simply supposing that we are given a UV-complete finite-coupling bulk asymptotically-locally-AdS (AlAdS) theory with an object that can be called a 'Euclidean path integral,' and that this path integral satisfies a simple set of axioms:

1. Finiteness: The path integral gives a well-defined map $\zeta$ from boundary conditions defined by smooth manifolds to the complex numbers $\mathbb{C}$.
2. Reality: This $\zeta$ is a real function of (possibly complex) boundary conditions.
3. Reflection-Positivity: $\zeta$ is reflection-positive.

4. Continuity: $\zeta$ satisfies a rather weak continuity condition described in section 2.

The first three axioms are commonly assumed for asymptotically-AdS gravitational theories, and were in particular used in [17]. In addition, the continuity axiom will be seen in section 2.2 to be extremely weak. In practice, it seems uncontroversial. The main subtlety is thus that we assume factorization of the path integral over disconnected closed boundaries, which means that any effects due to spacetime wormholes must either lead to the above-mentioned superselection sectors (in which factorization holds sector-by-sector, so that our analysis can still be applied in that sense), or that such effects must be cancelled by other contributions (as in e.g. [34]). Adopting this axiomatic framework allows us to answer the challenge associated with equation (1.1) by constructing von Neumann algebras $\mathcal{A}^{B_L}_L$, $\mathcal{A}^{B_R}_R$ of observables associated with the boundary region $B_L$ and the complementary boundary $B_R$, and by then showing these algebras to contain only type I factors. (We will refer to this property by saying that the entire algebra is of type I.) The elements of these algebras may be called 'boundary observables' in the sense of [35], though we again emphasize that they are defined without assuming the existence of a dual field theory. Indeed, it seems natural to expect the required axioms to hold for successful UV-completions of general asymptotically-AdS gravitational systems, whether the completion be called string field theory, spin-foam loop quantum gravity, or by some other name. An important role in our analysis will turn out to be played by the trace inequality recently discussed in [36], which we show to be a consequence of our axioms.
Our construction also leads to associated von Neumann entropies on $\mathcal{A}^{B_L}_L$, $\mathcal{A}^{B_R}_R$ which can be studied using a standard gravitational replica trick. As usual, when the bulk has an appropriate semiclassical limit that can be described in terms of a local metric theory of gravity, this entropy is given by an RT/HRT-like formula with corrections from both quantum [37] and higher-derivative effects (see e.g. [38][39][40]). Furthermore, since $\mathcal{A}^{B_L}_L$, $\mathcal{A}^{B_R}_R$ are of type I, they decompose into direct sums/integrals of type I von Neumann factors. As a result, the Hilbert space on which these algebras act must also decompose into a sum/integral of terms $\mathcal{H}^\mu$ (say, labelled by an index $\mu$), each of which is a tensor product $\mathcal{H}^\mu_L \otimes \mathcal{H}^\mu_R$ such that $\mathcal{A}^{B_L}_L$ acts only on the left factor and $\mathcal{A}^{B_R}_R$ acts only on the right factor. We also show that $\mathcal{A}^{B_R}_R$ and $\mathcal{A}^{B_L}_L$ are commutants of each other. It will then follow that the RT/HRT prescription computes appropriate semiclassical limits of an entropy defined by the $S_{vN}(\rho^\mu_L)$ of (1.1), a Shannon term built from the probabilities $p_\mu$ to be in the Hilbert space $\mathcal{H}^\mu$, and a set of positive constants $n_\mu$. Finally, a quantization argument will show that the index $\mu$ must be discrete, and that the $n_\mu$ are positive integers. As a result, the effect of the constants $n_\mu$ can be absorbed by including certain finite-dimensional 'hidden sectors' in the bulk Hilbert space.
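Although the precise combination is fixed only by the analysis later in the paper, the three ingredients just listed suggest the schematic form (our paraphrase, written only to orient the reader; the role of the integer constants $n_\mu$ is explained by the quantization argument mentioned above)
$$S \;\simeq\; \sum_\mu p_\mu\, S_{vN}(\rho^\mu_L) \;-\; \sum_\mu p_\mu \ln p_\mu \;+\; \sum_\mu p_\mu \ln n_\mu,$$
i.e., a density-matrix entropy within each sector $\mathcal{H}^\mu$, a Shannon term over the sectors, and a contribution that can be absorbed into finite-dimensional 'hidden sectors' of dimension $n_\mu$.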
We begin in section 2 with an overview of our axioms for (a UV-completion of) a Euclidean gravitational path integral and the construction of the relevant sectors of the gravitational Hilbert space. The von Neumann algebras $\mathcal{A}^{B_L}_L$, $\mathcal{A}^{B_R}_R$ are defined in section 3 for the case where $B_L$ is chosen for simplicity to be diffeomorphic to $B_R$. The type I structure and the associated decomposition of appropriate sectors of the bulk Hilbert space are then derived in section 4. Some examples are briefly discussed in section 5. We conclude in section 6 by summarizing results, describing potential generalizations, and discussing open issues.
The Path Integral and the Hilbert Space
The goal of this section is to write down a set of axioms for an object that we will call the Euclidean path integral for a UV-completion of an AlAdS theory of gravity, and to then use those axioms to construct the sectors of the Hilbert space that we will study in sections 3 and 4 below. We emphasize that we will require only that such axioms hold, and that any object satisfying the axioms may be called a Euclidean path integral, regardless of whether it is in fact computed as an integral over anything resembling Euclidean geometries. We also emphasize that there may well be many other properties that a good bulk theory should satisfy and which are not captured by our axioms. In other words, we suggest our axioms to be necessary, though probably not at all sufficient, for a theory to be satisfactory. What we find to be of most interest is just how much can be derived from the simple Axioms 1-5 below.
Section 2.1 describes some brief motivations that stem from considering path integrals that might be defined by sums over Euclidean geometries. This discussion is informal, but it inspires us to formulate certain axioms that we record in section 2.2, and which then become the starting point for careful analysis in the remainder of this work. The relevant Hilbert space sectors are then constructed in section 2.3. Much of the analysis below follows [17].
Motivations from sums over geometries
While in the end we will not actually require that our path integral be formulated as a sum over Euclidean geometries, we would like our axioms to apply to any such cases that might exist. We dedicate this section to brief comments on such hypothetical objects, which we take as motivations for axioms to be stated below. We emphasize that this phase of our discussion is informal and, due to the dearth of examples, it is necessarily imprecise. Formal discussion will commence in section 2.2 with the formulation of our axioms, from which the rest of our analysis will follow.
Let us thus briefly consider a path integral that actually integrates over a set of fields, among which is the (Euclidean-signature) metric. The bulk fields may also include scalars, fermions, gauge fields, etc. We will take the above-mentioned sum over metrics to include a sum over all possible topologies. The bulk fields will be collectively denoted $\phi$, for which the corresponding Euclidean action will be $S[\phi]$. To every smooth closed AlAdS boundary $M$ at which appropriate (potentially complex) boundary conditions are specified, a Euclidean path integral would then assign the complex number
$$\zeta(M) = \int_{\phi \sim M} \mathcal{D}\phi \; e^{-S[\phi]}. \qquad (2.1)$$
Here we use the symbol $M$ to denote not just the boundary manifold, but also the relevant boundary conditions for the bulk fields $\phi$. The notation $\phi \sim M$ in (2.1) indicates that we integrate only over bulk fields $\phi$ satisfying such conditions. In order to avoid overuse of terms involving the word 'boundary,' we henceforth refer to the boundary conditions on bulk fields as sources, and we refer to $M$ as a (boundary) source manifold to remind the reader of our inclusive terminology. This terminology will seem natural to practitioners of AdS/CFT, though long experience in that context has established that, even without invoking such a duality, the boundary conditions for bulk fields play precisely the same role as sources for familiar non-gravitational quantum field theories. In the AlAdS$_d$ context with $d$ even, the appropriate notion of sources/boundary conditions will typically be given by equivalence classes under Weyl transformations.
It is reasonable to expect $\zeta(M)$ to be finite for smooth $M$, and for $\zeta(M)$ to enjoy some degree of continuity under appropriately-small deformations of the boundary conditions described by $M$. For the present purposes we allow the sources described by $M$ to be complex, though one can also restrict the discussion to real boundary conditions (or to complex linear combinations thereof). For complex sources, due to the expected reality property $[S(\phi)]^* = S(\phi^*)$, expression (2.1) suggests that $[\zeta(M)]^* = \zeta(M^*)$, where $^*$ denotes complex conjugation and, in particular, $M^*$ is the same manifold as $M$ but with complex-conjugated sources.
Let us imagine that we cut the path integral (2.1) into two parts along a slice $\Sigma_{\rm bulk}$ through the bulk spacetime. By this we mean that we slice each configuration $\phi$ that enters into the path integral into two parts, and that in all cases we call the cut $\Sigma_{\rm bulk}$ even though the geometry of $\Sigma_{\rm bulk}$, and in fact the topology of $\Sigma_{\rm bulk}$, will depend on $\phi$. We will, however, require the intersection $\partial\Sigma_{\rm bulk}$ of $\Sigma_{\rm bulk}$ with the AlAdS boundary $M$ to be independent of $\phi$. In the usual way, it is natural to take each of the two resulting pieces of the path integral to compute the wavefunction (or the complex conjugate of a wavefunction) of a state in a Hilbert space $\mathcal{H}_{\partial\Sigma_{\rm bulk}}$ defined by the choice of $\partial\Sigma_{\rm bulk}$. The original (uncut) path integral then computes the inner product in $\mathcal{H}_{\partial\Sigma_{\rm bulk}}$ of the two states thus defined; see figure 2. In particular, when the states are identical, the original uncut path integral computes the norm squared of the state and thus should be required to give a non-negative result.

Figure 2. A slice $\Sigma_{\rm bulk}$ (red) of the path integral intersects the (here, spherical) AlAdS boundary $M$ at a codimension-2 surface $\partial\Sigma_{\rm bulk}$ (red circle) which splits $M$ into two hemispheres $N_1$ and $N_2^*$. Each half of the path integral defines a quantum state $|\psi_i\rangle$ by computing the wavefunction of $\psi_i$ on $\Sigma_{\rm bulk}$. These wavefunctions can be thought of as the result of Euclidean evolution from the boundary conditions $N_i$, and the full path integral defined by $M$ can then be regarded as computing the inner product $\langle\psi_2|\psi_1\rangle$.
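Written out schematically, the slicing just described (and shown in figure 2) says that
$$\zeta(M) = \int \mathcal{D}\phi\big|_{\Sigma_{\rm bulk}}\; \Psi^*_{N_2}\big[\phi\big|_{\Sigma_{\rm bulk}}\big]\, \Psi_{N_1}\big[\phi\big|_{\Sigma_{\rm bulk}}\big] = \langle\psi_2|\psi_1\rangle,$$
where $\Psi_{N_i}$ denotes the wavefunction computed by the half of the path integral with boundary conditions $N_i$. This is only a compact restatement of the informal discussion above, not an additional assumption; in particular, the measure $\mathcal{D}\phi|_{\Sigma_{\rm bulk}}$ is schematic.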
Furthermore, it is natural to generalize the above discussion by replacing $M$ with a finite formal linear combination of source manifolds
$$M = \sum_{I=1}^n \gamma_I M_I \qquad (2.2)$$
for some $n \in \mathbb{Z}^+$ with $\gamma_I \in \mathbb{C}$, in which case we simply use linearity to define
$$\zeta(M) = \sum_{I=1}^n \gamma_I\, \zeta(M_I). \qquad (2.3)$$
In this case we also define $M^* = \sum_{I=1}^n \gamma_I^*\, M_I^*$, so that $^*$ acts anti-linearly on such formal sums. In particular, such formal sums $M$ can sometimes again be 'sliced' (or, perhaps better, factorized) into two pieces (factors) and, when the two pieces are isomorphic up to the appropriate complex conjugation, we again expect $\zeta(M)$ to compute a non-negative norm squared. Below, we will use the notation $X_d$ to denote the set of smooth $d$-dimensional closed (i.e., compact and without boundary) source manifolds $M$ appropriate to some given theory. We then use the underlined notation $\underline{X}_d$ to denote formal finite linear combinations of such manifolds with coefficients in $\mathbb{C}$ as in (2.2) (with $M_I \in X_d$). Members of both $X_d$ and $\underline{X}_d$ will be denoted $M$ to avoid cumbersome notation. This should not cause confusion since, as above, we will extend any function $\zeta : X_d \to \mathbb{C}$ to the domain $\underline{X}_d$ by linearity; i.e., via (2.3).
Some axioms for the UV-completion of a bulk path integral
The above brief discussion motivates the following axioms for the UV-completion of any (d+1)-dimensional AlAdS Euclidean quantum gravity path integral ζ(M ). We also expect our axioms to apply to UV-completions of bulk gravitational theories of spacetimes asymptotic to M d+1 × X where M d+1 is AlAdS d+1 and X is a fixed compact manifold of arbitrary dimension, as well as to other asymptotic structures such as those described in [41]. However, for simplicity of discussion we will refer only to the AlAdS context below. We also emphasize again that we make no attempt to state a complete set of such axioms. Thus, while we expect our axioms to be satisfied in well-behaved contexts, they are almost certainly insufficient to fully characterize the desired UV-completions.
Our first four axioms are as follows:

Axiom 1. Finiteness: For some space of $d$-dimensional closed (and thus compact) source manifolds $X_d$, we are given a function $\zeta : X_d \to \mathbb{C}$; i.e., $\zeta(M)$ is well-defined and finite for every $M \in X_d$. Although we do not specify the detailed nature of the allowed sources, the sources should be given by fields (or equivalence classes thereof) on an underlying manifold. Furthermore, $X_d$ should include any smooth closed manifold with smooth source fields of the allowed types.

Axiom 2. Reality: As in the overview above, $\zeta$ is a real function of (possibly complex) sources; i.e., $\zeta(M^*) = [\zeta(M)]^*$ with $^*$ as described in section 2.1.

Axiom 3. Reflection Positivity: Suppose for some $n \in \mathbb{Z}^+$ that $M \in \underline{X}_d$ can be written in the form $M = \sum_{I,J=1}^n \gamma_I^*\gamma_J\, M_{I,J}$, where $\gamma_I \in \mathbb{C}$, $\gamma_I^*$ denotes the complex conjugate of $\gamma_I$, and where each $M_{I,J}$ can be sliced into two parts $N_I^*$, $N_J$; see figure 3. By such a slicing, we mean that there is a smooth codimension-1 hypersurface $\Sigma_{I,J}$ in $M_{I,J}$ that partitions $M_{I,J}$ into $N_I^*$ and $N_J$, so that $N_I^*$ and $N_J$ are source manifolds with boundaries. Specifically, the above notation requires that the same source-manifold-with-boundary $N_I^*$ is obtained from slicing $M_{I,J}$ for each $J$, and the same source-manifold-with-boundary $N_J$ is obtained by slicing $M_{I,J}$ for each $I$. In particular, slicing the diagonal closed manifold $M_{I,I}$ along $\Sigma_{I,I}$ yields $N_I^*$ and $N_I$. The notation $N_I^*$ indicates that each diagonal source manifold $M_{I,I}$ admits a diffeomorphism $\phi_{I,I}$ that both acts as a reflection about $\Sigma_{I,I}$ and complex-conjugates all sources. When these conditions hold, $\zeta(M)$ is a non-negative real number.
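The simplest instance of Axiom 3, with $n = 1$ and $\gamma_1 = 1$, is the one used repeatedly in section 2.3 below; written out explicitly (our paraphrase of that special case, introducing no new assumptions), it states that
$$\zeta(M_{1,1}) \geq 0 \qquad \text{whenever the closed manifold } M_{1,1} \text{ can be sliced into } N_1^* \text{ and } N_1.$$
For general $n$, the axiom similarly guarantees that arbitrary finite linear combinations $N = \sum_I \gamma_I N_I$ give rise to non-negative norms, which is exactly the positive semi-definiteness used for the (pre-)inner product in section 2.3.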
Figure 3. A reflection-symmetric linear combination $M \in \underline{X}_d$ of smooth source manifolds. The representation on the left side of the equality makes reflection-symmetry manifest. On the right side of the equality, the same $M$ is shown as an explicit sum of terms, each proportional to a source manifold $M_{I,J}$ that can be cut into $N_I^*$ and $N_J$, and with coefficients of the form $\gamma_I^*\gamma_J$. Here and in all figures below, we make no attempt to distinguish a given source from its complex conjugate. Thus $N_I^*$ appears as simply a reflected version of $N_I$. The 'diagonal' manifolds $M_{I,I}$ are individually reflection-symmetric. Axiom 3 requires that such reflection-symmetric $M$ have $\zeta(M) \geq 0$.

Axiom 4. Continuity: Suppose that a source manifold $M \in X_d$ contains a region diffeomorphic to an (orthogonal) cylinder source-manifold-with-boundary $C_{\epsilon_0}$ of some length $\epsilon_0 > 0$; see figure 4. The term "(orthogonal) cylinder source-manifold-with-boundary" indicates that $C_{\epsilon_0}$ is topologically of the form $B \times [0,\epsilon_0]$, and that $C_{\epsilon_0}$ admits a Killing field $\xi$ which generates a local symmetry of $C_{\epsilon_0}$ and the sources that it represents, with $\xi$ orthogonal to $\partial C_{\epsilon_0}$. By a local symmetry, we mean that at each point in the interior of $C_{\epsilon_0}$ the flow along $\xi$ is well-defined for at least some finite values of the Killing parameter, though of course the boundaries $\partial C_{\epsilon_0}$ prohibit $C_{\epsilon_0}$ from enjoying a translational symmetry along these flows. The statement that $C_{\epsilon_0}$ has length $\epsilon_0$ means that the two copies of $B$ in $\partial C_{\epsilon_0}$ are related by flow through a Killing parameter $\epsilon_0$, though the actual value of $\epsilon_0$ is not meaningful since we have not fixed a preferred normalization for the Killing field $\xi$. For simplicity we will drop the qualifier 'orthogonal' when discussing cylinders $C_\epsilon$ below.
Let us now write the above $M$ as $M_{\epsilon_0}$ and define a related family of manifolds $M_\epsilon$ by replacing the $C_{\epsilon_0}$ contained in $M_{\epsilon_0}$ with the analogous cylinder $C_\epsilon$. The resulting $\zeta(M_\epsilon)$ is then required to be a continuous function of $\epsilon$ at all $\epsilon > 0$.

Figure 4. The source manifold $M_{\epsilon_0}$ contains a cylinder $C_{\epsilon_0}$ of length $\epsilon_0$. Changing the length of this cylinder to $\epsilon$ defines a new source manifold $M_\epsilon$. Here and in all figures below, the symbols $\times$ denote potential features of smooth sources whose details are not shown but which serve to distinguish certain boundary points. For example, such features might be local peaks in the Kretschmann scalar of the boundary metric, extrema of a smooth scalar source, or points where a fermionic source becomes large. The main role of these features in our figures is to provide a simple and clean visualization of effects that arise when boundary-sources break symmetries of the simple cases that we choose to depict.
The reader will note that our continuity condition is extremely weak, and that one generally expects rather stronger continuity conditions to hold. However, Axiom 4 has the benefit of being simple to state for general boundary dimension d, and it will turn out to be sufficient for our purposes below.
Axioms 1, 2, and 3 are requirements explicitly stated in [17]. Continuity was not discussed in [17], but the mild hypothesis stated in Axiom 4 is a natural addition. However, in addition to Axioms 1-3, in its discussion of general theories Ref. [17] also implicitly used additional assumptions to deal with spacetime wormholes. In particular, as explained in [17], Axioms 1-3 imply the set of real source manifolds M ∈ X d to be associated with a collection of symmetric operators defined on a common dense domain in a natural quantum gravity Hilbert space. These axioms also further imply that any two such operators commute on this domain. It was then suggested in [17] that each of these symmetric operators should have a self-adjoint extension to the full Hilbert space, and that the physically correct such extensions were again mutually commuting. This outcome will seem natural to many physicists, though the above results are not in fact sufficient to prove that it actually occurs. See in particular [42] for an example where symmetric operators that commute on a common invariant dense domain are essentially self-adjoint (so that they have a unique self-adjoint extension to the full Hilbert space) but where their self-adjoint extensions nevertheless fail to commute. Furthermore, while this issue may seem to some like an abstruse technical concern, it may have some connection to the ambiguity in constructing ensembles dual to Jackiw-Teitelboim (JT) gravity discussed in e.g. [43][44][45][46][47][48][49][50][51]. It would be very interesting to understand the potential instabilities discussed in such works in terms of the algebraic language used here.
Despite the above caveats, if the suggestion of [17] does hold, then the self-adjoint extensions can be simultaneously diagonalized on the full quantum gravity Hilbert space. The resulting simultaneous eigenspaces of these operators are then called "baby universe superselection sectors," and they have the property that any $M \in X_d$ defines an operator proportional to the identity on each such sector. As a result, one may show (see again [17] as well as the explicit discussion of [19] for JT gravity) that considering any given sector on its own may be thought of as working with a modified path integral that exhibits the factorization property
$$\zeta(M_1 \sqcup M_2) = \zeta(M_1)\,\zeta(M_2) \qquad (2.5)$$
for closed source manifolds $M_1, M_2 \in X_d$. Here the symbol $\sqcup$ denotes the disjoint union of source-manifolds-without-boundary. Such superselection sectors are often called $\alpha$-sectors, and they play the role of the $\alpha$-states described in [20,21]. Once this structure is established, it is then sufficient to deal with each baby universe superselection sector individually. It is tempting to expect bulk path integrals of UV-complete theories to be equivalent to collections of such superselection sectors. Here we include the case suggested in [52] where there is only one such superselection sector in the collection, so that each $M \in X_d$ defines an operator proportional to the identity on the entire Hilbert space. However, we emphasize that in the presence of multiple such superselection sectors, it would be natural to simply work with each such sector separately. Doing so would allow us to frame arguments in terms of path integrals satisfying (2.5). As a result, rather than introduce further complicated axioms which would require the path integral to be a sum over superselection sectors as described above, we will simply assume that we start with a path integral satisfying the factorization property (2.5). In particular, we include the following axiom:

Axiom 5. Factorization: For source manifolds $M_1, M_2 \in X_d$, the function $\zeta$ satisfies (2.5). Note that this is equivalent to requiring (2.5) to hold for $M_1, M_2 \in \underline{X}_d$.
We will investigate consequences of our axioms below.
Sectors of the quantum gravity Hilbert space
As noted at the beginning of this section, one expects to be able to obtain states of any quantum gravity theory by 'cutting open' the associated path integral. The associated formal construction from our axioms will be described shortly. This construction is standard, and in particular closely parallels the quantum field theory (QFT) case described in e.g. [53]. As remarked in the introduction, our approach will be to remain agnostic about the inner workings of the path integral, and simply to view it as a function ζ : X d → C satisfying Axioms 1-5.
We refer the reader to the literature for further discussion of what it means to cut open a quantum gravity path integral; see e.g. [17]. However, at an abstract level it is clear that doing so requires that we cut any closed AlAdS boundary M into two pieces N 1 , N 2 with ∂N 1 = ∂N 2 . We should then associate quantum gravity states with these two pieces such that the inner product of the two states is ζ(M ).
However, there are several subtleties in this process that merit discussion. The first such subtlety arises when there are open sets in N 1 , N 2 that contain ∂N 1 = ∂N 2 and which admit non-trivial symmetries. In that case, there is more than one way to glue the pieces N 1 , N 2 back together to obtain a smooth manifold. Furthermore, each such gluing g generally leads to a different closed manifold M g , only one of which can be the original M from which the pieces N 1 , N 2 were cut. As a result, it is not sufficient to think of N 1 , N 2 as diffeomorphism equivalence classes of source manifolds with boundaries. Instead, we see that we should think of the points on ∂N 1 = ∂N 2 as being labelled, so that M is reconstructed by gluing N 1 to N 2 along their boundaries in the manner dictated by matching identical labels. As a result, we will henceforth use the notation N to denote a manifold with boundary ∂N , together with a labelling of points on ∂N .
Before proceeding, it may be useful to illustrate the labelling of points on $\partial N$ with some simple examples. The simplest case occurs for $d = 1$, where $\partial N$ has dimension $d - 1 = 0$ and so consists only of discrete points. Each such point must be assigned a distinct label. For example, for a given source-manifold-with-boundary $N_{\rm I}$, the number of such boundary points might be 2 and $N_{\rm I}$ might simply be a line segment (say, of some fixed length $\beta$) with the two points in $\partial N_{\rm I}$ labelled 0 and 1. If $N_{\rm II}$ is a diffeomorphic source-manifold-with-boundary but with boundary points labelled 1, 2, then it is considered to be a different source-manifold-with-boundary and we write $N_{\rm I} \neq N_{\rm II}$. Similarly, suppose that $N_{\rm III}$, $N_{\rm IV}$ are again line segments of length $\beta$ with boundary points 0, 1, and that we in fact introduce a coordinate $\theta$ on both line segments that measures $\beta^{-1}$ times the proper distance from the boundary point 0. Let us also suppose that $N_{\rm III}$ comes equipped with some scalar source $\phi(\theta)$ which increases monotonically for $\theta \in [0,1]$, and that $N_{\rm IV}$ has a corresponding scalar source $\phi(1-\theta)$. Then while $N_{\rm III}$ and $N_{\rm IV}$ are related by a source-preserving diffeomorphism $\theta \to 1-\theta$, this diffeomorphism fails to preserve the labelling of points in $\partial N_{\rm III}$, $\partial N_{\rm IV}$. We will thus again write $N_{\rm III} \neq N_{\rm IV}$.
Note that we can also upgrade the above examples to $d = 2$ by replacing any boundary point (say, that is labelled by some $i$ above) with a circle, and by labeling points on this circle $(i,\theta)$ where $\theta \in [0,2\pi)$ is a standard angular coordinate while $i$ now effectively labels the circle as a whole. We can also use a similar notation to define a disjoint union operation $\sqcup$ that will be of frequent use below. For any boundary $\partial N$ for which its points have labels $\{I\}$, we define the disjoint union $\partial N \sqcup \partial N$ to have labels $(i,I)$ where $i = 1$ on the first copy of $\partial N$ and $i = 2$ on the second copy, and where the labels $I$ are again assigned just as in $\partial N$. In particular, if $\partial N = S^1$, then we may label $S^1 \sqcup S^1$ with $(i,\theta)$ for $i = 1,2$ and $\theta \in [0,2\pi)$ as above.
Returning to our main discussion, let us now suppose that we are given two manifolds N 1 , N 2 with labelled boundaries ∂N 1 , ∂N 2 , such that the boundary labels define a diffeomorphism ϕ : ∂N 1 → ∂N 2 . (Recall that diffeomorphisms are required to be surjective.) We can then use this ϕ to glue N 1 to N 2 to define a closed manifold M without boundary. However, there is no guarantee that the resulting boundary fields on M will be smooth, or indeed even that they will be continuous. As a result, ζ(M ) may not be well-defined.
We will deal with this issue by using the following simple expedient: rather than attempting to construct the entire quantum gravity Hilbert space, we will instead construct only sectors that are associated with certain types of data on the codimension-2 boundaries ∂N . In particular, we will consider only source-manifolds-with-boundary N that are rimmed in the following sense: Definition 1. A source manifold N with boundary ∂N will be said to be rimmed when there is a neighborhood N ϵ of ∂N such that N ϵ is diffeomorphic to some cylinder source manifold C ϵ of the form defined in Axiom 4 and satisfies the reality condition C * ϵ = C ϵ with * defined as in Axiom 3. The region N ϵ is then called a rim of N .
We note that, in order for rimmed manifolds to exist, real cylinders (satisfying C * ϵ = C ϵ ) must also exist. For certain systems this may require specific conventions involving factors of i = √ −1. For example, suppose that our theory requires the source manifolds to be oriented. Then the reflection inherent in * will reverse orientations. In such cases, cylinders satisfying C * ϵ = C ϵ can exist only if we declare the orientation to be an imaginary source. Its sign will then be changed a second time by the complex-conjugation inherent in * , so that C * ϵ and C ϵ have the same orientation as desired.
We also make the following definitions: Definition 2. We will say that two rimmed source-manifolds N 1 , N 2 with boundaries ∂N 1 , ∂N 2 agree on their boundaries when they admit rims N 1ϵ , N 2ϵ that are related by a diffeomorphism that preserves sources and which also preserves the labels on ∂N 1 , ∂N 2 . By the local translation symmetry, the data on all of N 1ϵ , N 2ϵ is determined by data at ∂N 1 , ∂N 2 , so we will write ∂N 1 = ∂N 2 to denote the above agreement on the rims N 1ϵ , N 2ϵ . We will similarly use the symbol ∂N to denote the manifold at the boundary of the source-manifold-with-boundary N together with enough information about the sources on N to reconstruct sufficiently small rims N ϵ .
The utility of restricting to rimmed source-manifolds is that, when two rimmed source-manifolds $N_1$, $N_2$ agree at their boundary (in the sense that $\partial N_1 = \partial N_2$), it is then clear that a reflection of $N_1$ can be glued to $N_2$ to define a new smooth source-manifold-without-boundary. However, since $N_1^*$ already incorporates the required reflection of $N_1$, and since Definition 1 required sources on the rims to be real, it is natural to simply discuss gluing $N_1^*$ to $N_2$. For $\partial N_1 = \partial N_2$, we denote the result of this gluing by $M_{N_1^* N_2}$. For future use, we note that our gluing operation acts symmetrically on $N_1$, $N_2$ up to the action of $^*$ so that we have
$$M_{N_1^* N_2} = \big(M_{N_2^* N_1}\big)^*,$$
where "=" means that the two source manifolds are related by a source-preserving diffeomorphism; see figure 5. Due to this symmetry, from Axiom 2 we also have
$$\zeta\big(M_{N_1^* N_2}\big) = \Big[\zeta\big(M_{N_2^* N_1}\big)\Big]^*. \qquad (2.7)$$

Figure 5. Gluing $N_1^*$ to $N_2$ and gluing $N_2^*$ to $N_1$ defines source-manifolds-without-boundary $M_{N_1^* N_2}$ and $M_{N_2^* N_1}$ that are related by a diffeomorphism that complex-conjugates sources. As depicted here, the relevant diffeomorphism acts as a reflection across the shaded plane. Thus $(M_{N_1^* N_2})^* = M_{N_1 N_2^*}$, where "=" means that the two are related by a source-preserving diffeomorphism.

In particular, for a given such choice of $\partial N$ (in the sense of Definition 2) we can define a sector $\mathcal{H}_{\partial N}$ of the quantum gravity Hilbert space. To do so, define $Y^d_{\partial N}$ as the space of compact rimmed source manifolds $N$ having the given boundary $\partial N$. From $Y^d_{\partial N}$, we can then
construct the space $\underline{Y}^d_{\partial N}$ of finite formal linear combinations $N = \sum_{I=1}^n \gamma_I N_I$ with $\gamma_I \in \mathbb{C}$ and $N_I \in Y^d_{\partial N}$. The next step is to associate a (not necessarily distinct) state $|N\rangle$ with each $N \in \underline{Y}^d_{\partial N}$. Two such states $|N_1\rangle$, $|N_2\rangle$ are defined to have (pre-)inner product
$$\langle N_1 | N_2 \rangle := \zeta\big(M_{N_1^* N_2}\big). \qquad (2.8)$$
The (pre-)inner product is Hermitian due to (2.7), and is positive semi-definite by Axiom 3. We may then say that (2.8) defines a pre-Hilbert space. Taking the quotient by the space $\mathcal{N}_{\partial N}$ of null vectors and completing the result (in the standard way, using equivalence classes of Cauchy sequences $\{|N_m\rangle\}$, where two such sequences are equivalent if the norm of their difference approaches 0) then yields a Hilbert space $\mathcal{H}_{\partial N}$ that we call the $\partial N$-sector of the full quantum gravity Hilbert space. Below, we will use the notation $|N\rangle$ to denote both elements of the pre-Hilbert space and the associated equivalence class in $\mathcal{H}_{\partial N}$, though the distinction should always be clear from the context. Indeed, since $\underline{Y}^d_{\partial N}$ allows only finite linear combinations, it may often be the case that $\mathcal{N}_{\partial N}$ is empty and the quotient is trivial.
The above expedient will allow us to proceed quickly to constructing and studying algebras of operators on $\mathcal{H}_{\partial N}$ without characterizing in detail the degree of differentiability of sources on $M$ that is required for $\zeta(M)$ to be finite, and also without analyzing the manner in which divergences arise when such conditions fail. If our goal is to construct states associated with static Lorentz-signature boundary conditions, then one may expect that our restriction to rimmed surfaces gives the full such Hilbert space. One argument for this comes from AdS/CFT, in which case the rims correspond to insertions of $e^{-\epsilon H}$ for some $\epsilon$. Since $e^{-\epsilon H}$ is invertible, even at fixed finite $\epsilon$ the rimmed surfaces will generate a complete set of states. However, even without relying on AdS/CFT, since we allow the rim $N_\epsilon$ to be arbitrarily small, if our path integral is sufficiently continuous in $\epsilon$ then rimmed surfaces may still provide full information about the sector of the theory associated with a given $\partial N$ of the type described above. (Note, however, that Axiom 4 requires only continuity when the length $\epsilon > 0$ of some cylinder is slightly deformed and, in particular, it does not necessarily require continuity when the limiting source-manifold no longer contains a cylinder of non-zero length.)

Nonetheless, two shortcomings to our approach should be discussed. The first is that we obtain no information about inner products $\langle N_1|N_2\rangle$ when $\partial N_1 \neq \partial N_2$. Such inner products do not necessarily vanish, especially in low dimensions. Indeed, for $d = 1$ both AdS/CFT and the associated semiclassical bulk computations suggest that the inner product can be nonzero even when the sources on $M_{N_1^* N_2}$ are discontinuous (so long as $M_{N_1^* N_2}$ is a well-defined topological manifold).
The second shortcoming is that, while we expect the above restrictions to allow us to construct all states associated with static Lorentz-signature boundaries, at least in high dimensions we expect to miss sectors of the quantum gravity Hilbert space associated with non-static boundaries. Based on both the AdS/CFT context and the divergences that manifest themselves in the associated semiclassical bulk computations, we expect this issue to be related to what one finds when studying quantum fields on curved spacetime, where in high dimensions the space of states on a given Cauchy slice Σ (say, specified by the correlation functions of fields and their derivatives on Σ) can depend not only on the metric induced on Σ but also on various normal derivatives of background fields (sources) evaluated at Σ. It would be interesting to return to both of these issues in the future, though a full investigation of the second issue seems likely to require a Lorentz-signature analysis.
Operator Algebras from the Path Integral
We have thus far described how our path integral $\zeta$ can be used to construct sectors $\mathcal{H}_{\partial N}$ of the quantum gravity Hilbert space. But it can also be used to construct operators, and this construction will be useful in understanding the further structure of $\mathcal{H}_{\partial N}$ and the relation to RT entropy. To define such operators, let us again consider the space of compact rimmed surfaces $Y^d_{\partial N}$ for some choice of codimension-2 boundary $\partial N$. We will now further suppose that $\partial N$ is the disjoint union of two pieces, $\partial N = B_{\rm in} \sqcup B_{\rm out}$, with both $B_{\rm in}$, $B_{\rm out}$ being compact and closed (in the sense that $\partial B_{\rm in} = \partial B_{\rm out} = \emptyset$). Then for any appropriate additional boundary (which for later purposes we call $B_R$), one sees that any $a \in Y^d_{\partial N}$ defines an operator from $\mathcal{H}_{B_{\rm in}\sqcup B_R}$ to $\mathcal{H}_{B_{\rm out}\sqcup B_R}$ by gluing surfaces along $B_{\rm in}$. We may thus construct operators that preserve a sector of the form $\mathcal{H}_{B_L\sqcup B_R}$ by considering the case $B_{\rm in} = B_{\rm out} = B_L$. Here we refer to $B_L$ as the 'left' part of $B_L \sqcup B_R$ while $B_R$ is the 'right' part.
For any $B$, we may then endow the surfaces $Y^d_{B\sqcup B}$ with a multiplication operation which promotes the space of formal linear combinations $\underline{Y}^d_{B\sqcup B}$ to an algebra $\mathcal{A}^B_L$. In fact, we will also introduce an analogous 'right algebra' $\mathcal{A}^B_R$ in section 3.1 below. Choosing $B = B_L$ or $B = B_R$ then allows us to define four algebras
$$\mathcal{A}^{B_L}_L, \quad \mathcal{A}^{B_L}_R, \quad \mathcal{A}^{B_R}_L, \quad \mathcal{A}^{B_R}_R.$$
However, as we will see, only the two algebras $\mathcal{A}^{B_L}_L$, $\mathcal{A}^{B_R}_R$ have natural actions on $\mathcal{H}_{B_L\sqcup B_R}$. For these algebras, we obtain representations on $\mathcal{H}_{B_L\sqcup B_R}$. Our path integral also defines useful notions of trace on each of these algebras.
Figure 6. For two elements $a, b \in Y^d_{B\sqcup B}$ as shown in the top row, we define the left and right products $a \cdot_L b$ and $a \cdot_R b$ by the gluing procedures shown in the bottom row.
We will then show in section 3.3 that the associated representations of the algebras $\mathcal{A}^{B_L}_L$, $\mathcal{A}^{B_R}_R$ can be extended from the pre-Hilbert space to its Hilbert space completion $\mathcal{H}_{B_L\sqcup B_R}$. In particular, because our axioms turn out to enforce a trace inequality of the form recently discussed in [36], all operators in these representations must be bounded. As a result, we may use the associated representations to construct von Neumann algebras.

We then specialize to the case $B_L = B_R = B$, in which case we denote the resulting von Neumann algebras by $\mathcal{A}^B_L$ and $\mathcal{A}^B_R$. Some key properties of these algebras are then studied in section 3.5. In particular, we show there that the above trace operations can be extended to both $\mathcal{A}^B_L$ and $\mathcal{A}^B_R$.
Surface algebras
Consider for the moment a given compact closed boundary $B$ (with $\partial B = \emptyset$), which might represent either $B_L$ or $B_R$ above. For each such $B$ we will define two algebras, $\mathcal{A}^B_L$ and $\mathcal{A}^B_R$. If we thus allow a choice of $B = B_L$ or $B = B_R$, we could in fact define four such algebras, though only the two choices $\mathcal{A}^{B_L}_L$ and $\mathcal{A}^{B_R}_R$ will play an important role in our construction below.

Let us now consider general such $B$. To understand the difference between $\mathcal{A}^B_L$ and $\mathcal{A}^B_R$, recall that points on $B \sqcup B$ are labelled, which in particular means that the two copies of $B$ are distinguished. We will refer to the first copy as the 'left boundary' and the second copy as the 'right boundary.' On the set $Y^d_{B\sqcup B}$ we may define the left product ($\cdot_L$) as the operation that takes as input an ordered pair of rimmed surfaces $a$ and $b$, and which constructs the surface $a \cdot_L b$ that results from gluing the left boundary of $b$ to the right boundary of $a$ (see figure 6).
For simplicity, we will adopt the notation $ab := a \cdot_L b$. We similarly define the right product ($\cdot_R$) as the operation that, given an ordered pair of surfaces $a$ and $b$, glues the right boundary of $b$ to the left boundary of $a$. Note that we have $a \cdot_R b = b \cdot_L a$. We can also extend this product to linear combinations $a, b \in \underline{Y}^d_{B\sqcup B}$ by defining it to satisfy the distributive law. The set $\underline{Y}^d_{B\sqcup B}$ equipped with the left product then forms an algebra $\mathcal{A}^B_L$ which we call the left $B$-surface algebra, or simply the left surface algebra where confusion will not arise. Similarly, the right product on $\underline{Y}^d_{B\sqcup B}$ leads to the right $B$-surface algebra $\mathcal{A}^B_R$. Since every element of $Y^d_{B\sqcup B}$ has a finite rim at each boundary, gluing two surfaces $a$, $b$ together always results in a surface larger than either $a$ or $b$, so that neither of these algebras can contain an identity element.
However, the algebras $\mathcal{A}^B_L$ and $\mathcal{A}^B_R$ do admit a natural involution $\star$ satisfying
$$(a \cdot_L b)^\star = a^\star \cdot_R b^\star, \qquad (3.3)$$
so that $\star$ defines an anti-linear isomorphism between the left and right algebras. To define the operation $\star$, recall that Axiom 3 introduced a complex conjugation operation $^*$ (which is different from the $\star$ that we are about to define) on $N \in Y^d_{B\sqcup B}$. In particular, $N^*$ was defined so that $M_{N^* N}$ has a reflection symmetry that complex conjugates all sources. This means that $N^*$ is the same manifold as $N$ (with the same labels on $\partial N$), and that $^*$ acts on scalar sources by standard complex-conjugation (though the operation on vector, tensor, and spinor sources is more complicated due to the reflection). In addition, $Y^d_{B\sqcup B}$ admits a natural transpose operation $t$ that simply swaps the labels 'left' and 'right' attached to the boundaries of any $N \in Y^d_{B\sqcup B}$ while preserving all sources and leaving the labels on $\partial N$ otherwise unchanged. The transpose and complex conjugation operations commute, and for any $a$ in either algebra we may then define
$$a^\star := (a^t)^* = (a^*)^t.$$
Due to the inclusion of the transpose operation, we then immediately find (3.3).
A trace and a trace inequality for surface algebras
An important consequence of the labelling of points on $B$ is that, by writing $\partial N = B \sqcup B$, we also mean that the labels on the two copies of $B$ agree up to the distinction between the left and right boundaries. To be precise, we mean that these labels define a diffeomorphism $\phi_{LR}$ from the left boundary to the right boundary that preserves enough information about sources near each boundary to reconstruct sufficiently small rims at each $B$. This $\phi_{LR}$ can then be used to identify the left boundary of any $a \in Y^d_{B\sqcup B}$ with its right boundary, and thus to define a closed source manifold $M(a)$. We can also extend this operation to linear combinations $a \in \underline{Y}^d_{B\sqcup B}$ by linearity, so that we then find $M(a) \in \underline{X}_d$.

Figure 7. For $a, b \in Y^d_{B\sqcup B}$, we can construct $a^\star$ and compute $\operatorname{tr}(a^\star b)$ as shown in the lower panel. Note that $\operatorname{tr}(a^\star b) = \operatorname{tr}(b a^\star)$, and that this relation is equivalent to (3.6) with $a$ replaced by $a^\star$.
This observation allows the path integral to define a useful trace operation $\operatorname{tr}$ on both $\mathcal{A}^B_L$ and $\mathcal{A}^B_R$ via
$$\operatorname{tr}(a) := \zeta\big(M(a)\big),$$
which satisfies the cyclic property
$$\operatorname{tr}(a \cdot_L b) = \operatorname{tr}(b \cdot_L a) \qquad (3.6)$$
and similarly for the right product; see figure 7. While the trace operation is defined directly for any $a \in \underline{Y}^d_{B\sqcup B}$ (without using properties of either the left or right algebras), the result (3.6) makes it reasonable to refer to this operation as a trace on both $\mathcal{A}^B_L$ and $\mathcal{A}^B_R$. Before proceeding to the next step of our analysis, it will be useful to note that, as also shown in figure 7, the closed source manifold $M(a^\star b)$ coincides with the manifold $M_{a^* b}$ that computes the inner product of the states $|a\rangle$, $|b\rangle$, and thus using (2.8) we find
$$\langle a | b \rangle = \operatorname{tr}(a^\star b). \qquad (3.8)$$
This relation will be used to translate certain Hilbert space statements into operator statements and vice versa. In particular, for $a = b$ we have
$$\operatorname{tr}(a^\star a) = \langle a | a \rangle \geq 0, \qquad (3.9)$$
where we remind the reader that the inequality on the right follows from Axiom 3 (reflection positivity). The inequality (3.9) turns out to have interesting and important consequences when we apply it to Hilbert space sectors defined by boundaries with multiple connected components.
We will come back to this idea several times during our work, but we start with a simple case that involves choosing two sums-of-surfaces $a, b \in \underline{Y}^d_{B\sqcup B}$. Note that such sums can be used to define an element $a \sqcup b \in \underline{Y}^d_{(B\sqcup B)\sqcup(B\sqcup B)}$. Here the parentheses in the subscript on $\underline{Y}^d_{(B\sqcup B)\sqcup(B\sqcup B)}$ are intended to indicate that points of $\partial a$ are labelled to match the first pair of boundaries $(B \sqcup B)$, while points of $\partial b$ are labelled to match the second pair of boundaries $(B \sqcup B)$. We might also write such an $a \sqcup b$ using the more explicit notation $a \sqcup b \in \underline{Y}^d_{B_{L_1}, B_{R_1}, B_{L_2}, B_{R_2}}$, which indicates that the left and right boundaries of $a$ are associated with the first two copies of $B$ (in the specified order) while the boundaries of $b$ are associated with the second two copies of $B$. In writing $\underline{Y}^d_{B_{L_1}, B_{R_1}, B_{L_2}, B_{R_2}}$ we have replaced the usual disjoint union symbols $\sqcup$ by commas for notational simplicity.
Note in particular that there are distinct states $|a \sqcup b\rangle$ and $S_{L_1,L_2}|a \sqcup b\rangle$ in the above construction; see the lower panel of figure 8. Here $S_{L_1,L_2}$ is the 'swap' operator that exchanges the labels $L_1$, $L_2$ on the relevant two copies of $B$. Since the (pre-)inner product on $\mathcal{H}_{B_{L_1},B_{R_1},B_{L_2},B_{R_2}}$ is defined by identifying corresponding boundaries in the bra- and ket-surfaces, as shown in figure 8 the norms of these states are computed by path integrals defined by the disconnected source manifold $M(a^\star a) \sqcup M(b^\star b)$. We thus find
$$\langle a \sqcup b \,|\, a \sqcup b\rangle = \zeta\big(M(a^\star a) \sqcup M(b^\star b)\big) = \operatorname{tr}(a^\star a)\,\operatorname{tr}(b^\star b), \qquad (3.10)$$
with the same result for the norm of the swapped state. In contrast, the (pre-)inner product between $|a \sqcup b\rangle$ and its swapped partner is computed by a connected gluing and gives
$$\langle a \sqcup b \,|\, S_{L_1,L_2}\,|\, a \sqcup b\rangle = \langle b^\star a \,|\, b^\star a\rangle = \operatorname{tr}(a a^\star b b^\star). \qquad (3.11)$$
Here the first equality shows that (3.11) is real and non-negative (since the middle form is a norm squared), so that (3.11) is in particular equal to its own complex conjugate. On the other hand, the Cauchy-Schwarz inequality requires
$$|\langle \psi_2 | \psi_1\rangle| \leq \big\||\psi_1\rangle\big\|\, \big\||\psi_2\rangle\big\|, \qquad (3.12)$$
where $\||\psi\rangle\| := \sqrt{\langle\psi|\psi\rangle}$ is the norm. Combining (3.10) and (3.11) with (3.12) immediately yields
$$\operatorname{tr}(a a^\star b b^\star) \leq \operatorname{tr}(a^\star a)\,\operatorname{tr}(b^\star b), \qquad (3.13)$$
which is the trace inequality recently discussed in [36]. Here we see that it holds for our surface algebras as a consequence of Axioms 1-5. The inequality (3.12) will play a key role in our discussion below. We will also return to higher analogues of this inequality in a later section.

Figure 8. Upper panel: the norms of $|a \sqcup b\rangle$ and of its image under the swap $S_{L_1,L_2}$; as shown, the two norms agree. Lower panel: Tracing the diagram verifies that the inner product between these two states is equal to both $\operatorname{tr}(a^\star b b^\star a) = \operatorname{tr}(a a^\star b b^\star)$ and $\langle b^\star a | b^\star a\rangle$. Since the latter is manifestly real (and non-negative), this diagram also computes the complex conjugate of the same inner product.
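As a purely finite-dimensional sanity check (this is an analogy, not part of the argument above), the inequality (3.13) is familiar for matrices: taking $a$, $b$ to be complex $n \times n$ matrices, $\star$ to be the Hermitian conjugate, and $\operatorname{tr}$ the ordinary matrix trace, one has
$$\operatorname{tr}(a a^\dagger b b^\dagger) = \operatorname{tr}\big((b^\dagger a)^\dagger (b^\dagger a)\big) = \|b^\dagger a\|_{HS}^2 \leq \|a\|_{HS}^2\, \|b\|_{HS}^2 = \operatorname{tr}(a^\dagger a)\,\operatorname{tr}(b^\dagger b),$$
where $\|\cdot\|_{HS}$ is the Hilbert-Schmidt norm and the inequality uses its submultiplicativity. The surface-algebra derivation above plays the same role, with the swap trick and Axioms 3 and 5 replacing these matrix manipulations.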
Representation of the surface algebras on $\mathcal{H}_{B_L\sqcup B_R}$
So far we have defined the surface algebras as abstract vector spaces equipped with multiplication; now we define how they act on states. In particular, the operations on surfaces described above can now be used to define a representation $\pi^{B_L\sqcup B_R}_L$ of the surface algebra $\mathcal{A}^{B_L}_L$ that acts on the Hilbert space $\mathcal{H}_{B_L\sqcup B_R}$. In the notation of sections 3.1-3.2 we now specialize to the case $B = B_L$. Though the notation $\pi^{B_L\sqcup B_R}_L$ is somewhat awkward, it emphasizes the important point that this representation depends both on the choice of $B_L$ and on the choice of $B_R$, even though $\mathcal{A}^{B_L}_L$ was defined by $B_L$ alone.
The first steps of our construction are to consider $a \in \underline{Y}^d_{B_L\sqcup B_L}$ and to define an associated operator $\hat a_L$ that acts on the pre-Hilbert space via
$$\hat a_L |b\rangle := |ab\rangle; \qquad (3.14)$$
see figure 9. Here we have used the condensed notation $ab := a \cdot_L b$ defined in section 3.1 above. When $a$ is a simple surface $a \in Y^d_{B_L\sqcup B_L}$, the associated $\hat a_L$ acts on $|b\rangle$ by just gluing the surface $a$ to the left boundary of $b$. Note that this action is a representation in the sense that
$$\hat a_L \hat b_L = \widehat{(ab)}_L, \qquad (3.15)$$
where $\widehat{(ab)}_L$ is the operator associated to $(ab)$. The next step in showing that our representation acts on the Hilbert space $\mathcal{H}_{B_L\sqcup B_R}$ is to establish that $\hat a_L$ preserves any null space $\mathcal{N}_{B_L\sqcup B_R}$ of pre-Hilbert space states with vanishing norm, so that $\hat a_L$ yields a well-defined operator on the quotient of the pre-Hilbert space by $\mathcal{N}_{B_L\sqcup B_R}$. We will also need to extend the definition of $\hat a_L$ to the full Hilbert space $\mathcal{H}_{B_L\sqcup B_R}$ in a manner consistent with (3.15). Both of these steps are straightforward due to the trace inequality. To see this, recall
$$\|\hat a_L |b\rangle\|^2 = \langle ab|ab\rangle = \operatorname{tr}\big((ab)^\star (ab)\big) = \operatorname{tr}\big(a^\star a\, b b^\star\big) \leq \operatorname{tr}(a^\star a)\,\operatorname{tr}(b^\star b) = \operatorname{tr}(a^\star a)\,\langle b|b\rangle. \qquad (3.16)$$
In the second step of (3.16) we have used (3.8) with $a$ and $b$ both replaced by $ab$. The third step used cyclicity of the trace (3.6), and the fourth and fifth steps then follow directly from (3.13) and another use of (3.8). The result is that $\hat a_L$ is bounded by $\sqrt{\operatorname{tr}(a^\star a)}$ on $\mathcal{H}_{B_L\sqcup B_R}$. In particular, if $|b\rangle \in \mathcal{N}_{B_L\sqcup B_R}$ then $\langle b|b\rangle = 0$. The result (3.16) then clearly requires $\hat a_L|b\rangle$ to have zero norm as well. Thus $\hat a_L$ preserves $\mathcal{N}_{B_L\sqcup B_R}$ and induces an operator on the quotient of the pre-Hilbert space by $\mathcal{N}_{B_L\sqcup B_R}$. It thus admits a unique continuous extension to the entire space $\mathcal{H}_{B_L\sqcup B_R}$, which is again bounded by $\sqrt{\operatorname{tr}(a^\star a)}$; see e.g. [42]. We will continue to use the symbol $\hat a_L$ for this extension; thus we write the bound on its operator norm as
$$\|\hat a_L\| \leq \sqrt{\operatorname{tr}(a^\star a)}. \qquad (3.17)$$
Continuity implies that such extensions also satisfy (3.15), which makes clear that we have constructed a representation of $\mathcal{A}^{B_L}_L$ on $\mathcal{H}_{B_L\sqcup B_R}$ as desired. We call this representation $\pi^{B_L\sqcup B_R}_L$ or, in what we hope is an obvious shorthand, we say that we have constructed a representation $\pi_L := \pi^{B_L\sqcup B_R}_L$.
Since the operators in $\pi_L$ are bounded, it is easy to discuss their adjoints (which must exist and are also bounded). The adjoint $\hat a^\dagger_L$ is defined by $\langle b|\hat a^\dagger_L|c\rangle := \big(\langle c|\hat a_L|b\rangle\big)^*$, and for $|b\rangle$, $|c\rangle$ in the pre-Hilbert space we may compute
$$\big(\langle c|\hat a_L|b\rangle\big)^* = \big(\langle c|ab\rangle\big)^* = \langle ab|c\rangle = \operatorname{tr}\big((ab)^\star c\big) = \operatorname{tr}\big(b^\star (a^\star c)\big) = \langle b|a^\star c\rangle = \langle b|\widehat{(a^\star)}_L|c\rangle. \qquad (3.18)$$
Here the middle steps follow from (3.8) and the relation $(\langle c|d\rangle)^* = \langle d|c\rangle$ with $d = ab$. Since (3.18) holds on a dense set of states, and since both $\widehat{(a^\star)}_L$ and $\hat a^\dagger_L$ are bounded, we must in fact have $\hat a^\dagger_L = \widehat{(a^\star)}_L$.

The representation $\pi_L$ is not necessarily faithful. This is precisely characterized by a null space $\mathcal{N}_L$ consisting of all $a \in \mathcal{A}^{B_L}_L$ whose associated $\hat a_L$ is the zero operator. Thus, when we construct $\pi_L$ from $\mathcal{A}^{B_L}_L$, we have effectively taken a quotient $\mathcal{A}^{B_L}_L/\mathcal{N}_L$, in the sense that $\pi_L$ is isomorphic to (and a faithful representation of) $\mathcal{A}^{B_L}_L/\mathcal{N}_L$.

In direct analogy, elements $a \in \mathcal{A}^{B_R}_R$ define operators $\hat a_R$ on the pre-Hilbert space by gluing $a$ to the right boundary, $\hat a_R|b\rangle := |ba\rangle$. As required for a representation of $\mathcal{A}^{B_R}_R$, this action satisfies
$$\hat a_R \hat b_R = \widehat{(a \cdot_R b)}_R.$$
The extension to the full Hilbert space $\mathcal{H}_{B_L\sqcup B_R}$ then proceeds precisely as above. The discussion of adjoints is analogous to the left case and we again find $\widehat{(a^\star)}_R = \hat a^\dagger_R$. Perhaps the most interesting point to mention concerning $\pi_R$ is that its operators commute with operators in the left-representation $\pi_L$. In particular, for $a \in \mathcal{A}^{B_R}_R$, $b \in \mathcal{A}^{B_L}_L$, and $c$ in the pre-Hilbert space we clearly have
$$\hat a_R \hat b_L |c\rangle = |bca\rangle = \hat b_L \hat a_R |c\rangle. \qquad (3.21)$$
Furthermore, the operators $\hat a_R$, $\hat b_L$ are bounded (and thus continuous) on $\mathcal{H}_{B_L\sqcup B_R}$ and the above states $|c\rangle$ are dense in $\mathcal{H}_{B_L\sqcup B_R}$. We may thus take limits to conclude that (3.21) in fact holds for all $|c\rangle \in \mathcal{H}_{B_L\sqcup B_R}$.
Diagonal sectors are special
We now restrict to the "diagonal" special case B L = B R , in which context we introduce the shorthand notation H LR = H B L ⊔B R and similarly for Y d LR , the pre-Hilbert space H LR , and so forth. It will also be useful to note that our trace operation tr on A B L L defines a trace on the representation L . We of course wish to declare that trâ L := tr a. (3.22) The important property of the definition (3.22) (which we show below) is that it is well-defined on L in the sense that it satisfies tr a = tr b wheneverâ L =b L on H LR . This is equivalent to saying that tr a = 0 whenâ L = 0 on H LR . Note that this property is non-trivial because the representation L is not necessarily faithful; i.e., whenâ L = 0, the element a could be non-zero, but nonetheless we claim that tr a = 0. Note also that this property relies on the action on a particular Hilbert space, so the condition that an analogous trace be well-defined on the representation of the left-algebra on some other H B L ⊔B R is generally quite distinct.
In particular, we make no claim that the trace is well-defined on representations defined by non-diagonal sectors with B L ̸= B R . For a diagonal Hilbert space H LR with B L = B R , the desired property can be established using the continuity axiom 4. In particular, for any a ∈ Y d LR , let C β ∈ Y d LR be the cylinder of length β defined by B L = B R as in Axiom 4, and consider aC 2β ∈ Y d LR . Since C 2β = C β C β , and since we have restricted to B L = B R for which cylinders are real (C * β = C β ), we have tr(aC 2β ) = tr(C β aC β ) = ⟨C β |â L |C β ⟩. (3.23) Clearly the right-hand side vanishes for all β if â L = 0. However, Axiom 4 requires the β → 0 limit of (3.23) to give tr(a): tr(a) = lim β↓0 ⟨C β |â L |C β ⟩, (3.24) where the notation β ↓ 0 emphasizes that C β is defined only for β > 0 so that the limit is necessarily taken from above. Thus, as desired, we find tr(a) = 0 when â L = 0. The trace is of course also defined on R where its properties are analogous. We can also easily establish the converse of the above property for manifestly positive 7 operators â L of the form â L = γ̂ † L γ̂ L with γ̂ L ∈ L ; i.e., if tr(a) = 0 then â L = 0. This follows immediately from the operator norm bound (3.17): if 0 = tr(a) = tr(γ̂ † L γ̂ L ) = tr (γ ⋆ γ) L = tr(γ ⋆ γ), then the operator norm of γ̂ L is bounded by √ tr(γ ⋆ γ) = 0 so that γ̂ L = 0 and thus â L = 0. In a slight abuse of terminology, we will refer to this property by saying that our trace is faithful on the representation L . The usual definition of the term faithful would require this property to hold for all positive operators (and not just those whose positivity is manifest). This stronger notion of faithfulness will also turn out to hold, though we defer its discussion to section 4.1.
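A toy model may make (3.23) and (3.24) concrete. In the sketch below we do not use surfaces at all: we simply model the cylinder of length β as C β = e^(−βH) for some positive matrix H (an assumption of the illustration only) and take the inner product to be the Hilbert-Schmidt one, so that tr(aC 2β ) = ⟨C β |â L |C β ⟩ and the β → 0 limit recovers tr(a).

```python
# Toy illustration of (3.23)-(3.24).  Assumption of this sketch only: model the
# cylinder of length beta as C_beta = exp(-beta * H) for a positive matrix H,
# and use the Hilbert-Schmidt inner product.  Then C_beta C_beta = C_{2 beta},
# tr(a C_{2 beta}) = <C_beta| a_L |C_beta>, and the beta -> 0 limit gives tr(a).
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
n = 4
H = rng.normal(size=(n, n)); H = H @ H.T        # positive "boundary Hamiltonian"
a = rng.normal(size=(n, n))

for beta in (1.0, 0.1, 0.01, 0.001):
    C = expm(-beta * H)
    print(beta, np.trace(C @ a @ C))            # = tr(a C_{2 beta})
print("tr(a) =", np.trace(a))                   # the beta -> 0 limit
```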
The argument above used the fact that H B L ⊔B R for B L = B R contains cylinder states |C β ⟩ of the form that enter into the continuity axiom. We have no analogous construction for B L ̸ = B R , so our trace may not be well-defined on other representations. At the level of the full von Neumann algebra defined below, it seems natural for the associated left algebras defined by B L ̸ = B R to be projections of the left algebra defined by the diagonal case B L = B R (and similarly for the right algebras). However, we reserve this analysis for future work [54].
The von Neumann algebras
We are now ready to define a von Neumann algebra A B L using the representation L := B⊔B L on a diagonal sector of the Hilbert space with B L = B R = B (and to similarly construct a related von Neumann algebra A B R from the right representation R := B⊔B R ). Although it is straightforward to define analogous von Neumann algebras using non-diagonal representations, we leave the investigation of such algebras for future work; see again the comment at the end of the previous subsection.
We define the von Neumann algebra A B L to be the closure of L within B(H LR ) in the weak operator topology or, in what is known to be equivalent in the present context, the strong operator topology 8 . Here B(H LR ) denotes the algebra of bounded operators on our Hilbert space. Note that the identity operator 1 lies in the closure due to Corollary 4 of appendix A. Due to the von Neumann bicommutant theorem (see e.g. section 0.4 of [57]), we can also equivalently define A B L as the double commutant of L within B(H LR ). This in particular means that each operator in the resulting von Neumann algebra A B L is again bounded. Of course, corresponding statements hold for the right algebras as usual.
For every operator a in a von Neumann algebra, the adjoint a † also lies in the von Neumann algebra. So the adjoint operation continues to act as an involution on A B L . Note that we previously used the symbols a, b, c, . . . to denote elements of Y d B⊔B , but that we henceforth also use them to denote generic operators in A B L/R . We will continue to use â L , b̂ L , ĉ L , . . . to denote operators in L .
We also introduced a trace operation tr on the operators in L in (3.22). In particular, we showed tr to be well-defined and finite on L . We now wish to extend this trace to the A B L . In the theory of von Neumann algebras one generally allows traces of some operators to diverge. Nevertheless, even in this sense, a trace is usually well-defined only on positive elements of the von Neumann algebra, where it takes values in the closed interval [0, +∞]; i.e., allowing +∞. The restriction to positive elements is closely related to the familiar fact that, when an infinite-dimensional square matrix A i j is not positive, the infinite sum of the form i A i i can be oscillatory and need not converge in any sense. In contrast, for positive infinitedimensional matrices A i j , the fact that each diagonal element A i i is non-negative means that if i A i i fails to converge to a finite number, then we may say that it 'converges' to +∞. (Of course, the quantity i A i i is manifestly well-defined for any finite-dimensional square matrix A i j .) We will thus attempt to extend our notion of tr only to positive elements a ∈ A B L , which in this context means that a is a positive operator on H LR . However, we note for future reference that this condition is equivalent to requiring that a be of the form γ † γ for some γ ∈ A B L (where we can in fact take γ to be the positive square root of a, as this operator must also lie in A B L ). To define a useful extension of our trace, we need to find a function mapping the positive elements a ∈ A B L to [0, +∞] that agrees with our previous definition of tr on L and which satisfies other properties to be discussed below. It will thus be productive to consider alternative representations of the operation tr on L . We begin by returning to the relation (3.24), which was argued above to hold for all a ∈ Y d LR . This will turn out to be a step toward the definition of our trace on A B L , though we will now pause briefly to further rewrite the identity (3.24) in order to make certain properties manifest.
It will be convenient to introduce the normalized cylinders C̃ β ∈ Y d LR defined by C̃ β := C β /∥C β ∥, (3.25) where ∥C β ∥ denotes the operator norm of (C β ) L on H LR . This norm should be more properly written ∥ (C β ) L ∥, but for simplicity we will use just ∥C β ∥. One may expect that the continuity axiom (Axiom 4) requires ∥C β ∥ → 1 as β → 0 and in fact that ∥C β ∥ = (∥C 1 ∥) β . Both expectations are correct, but the proofs are somewhat technical. We thus relegate them to appendix A. As a further remark, note that C̃ β is normalized so that (C̃ β ) L has operator norm 1, but that the state |C̃ β ⟩ is typically still not normalized with respect to the Hilbert space inner product. In fact, the norm of |C̃ β ⟩ generally diverges as β → 0; see (3.56). For a ∈ Y d LR , we may use (3.25) and (3.24) to write tr(a) = lim β↓0 ∥C β ∥ 2 ⟨C̃ β |â L |C̃ β ⟩ = lim β↓0 ⟨C̃ β |â L |C̃ β ⟩. (3.26) Here the second step uses the fact that both ∥C β ∥ 2 and ⟨C̃ β |â L |C̃ β ⟩ have finite limits, and that ∥C β ∥ 2 → 1.
The formulation in terms of C̃ β is useful because the operator norm of (C̃ β ) L is 1 (by construction). We show below that for positive â L this requires ⟨C̃ β |â L |C̃ β ⟩ to be a decreasing function of β, which means that for positive â L we can also write (3.26) as a supremum over β: tr(a) = sup β>0 ⟨C̃ β |â L |C̃ β ⟩. (3.27) As we will see, this is an improvement over (3.24) because two supremum operations always commute (while showing that more general limits commute can be notoriously subtle). To see that ⟨C̃ β |â L |C̃ β ⟩ is a decreasing function of β, note that for β ′ > 0 we have C̃ β+β ′ = C̃ β C̃ β ′ , where we have used the relation ∥C β ∥ ∥C β ′ ∥ = ∥C β+β ′ ∥ which follows from Corollary 5 of appendix A. Thus we may write ⟨C̃ β+β ′ |â L |C̃ β+β ′ ⟩ = ⟨C̃ β | (C̃ β ′ ) † R â L (C̃ β ′ ) R |C̃ β ⟩. Let us also recall from (3.21) that the right representation (C̃ β ′ ) R of C̃ β ′ commutes with any â L , and thus in particular with the positive â L of interest here. Furthermore, since both (C̃ β ′ ) R and â L are positive, both operators are self-adjoint. We may then use the fact that commuting self-adjoint operators can be diagonalized to introduce a complete set of common eigenstates |λ, κ⟩ where λ ≥ 0 is the eigenvalue of (C̃ β ′ ) R and κ ≥ 0 is the eigenvalue of â L . Since the operator norm of (C̃ β ′ ) R is 1, the parameter λ takes values only in the interval [0, 1]. We will also define a measure dµ(λ, κ) that gives a resolution of the identity 1 = ∫ dµ(λ, κ)|λ, κ⟩⟨λ, κ|.
The argument is now straightforward as we may use self-adjointness of (C̃ β ′ ) R to write where we pass from the 3rd to the 4th line by using λ 2 ≤ 1. This shows that ⟨C̃ β |â L |C̃ β ⟩ increases monotonically as β decreases, and thus that (3.27) holds for positive elements â L of L . We may then extend tr to any positive element in the left von Neumann algebra A B L via the analogous expression tr(a) := sup β>0 ⟨C̃ β |a|C̃ β ⟩, (3.31) and similarly for A B R . In particular, for all positive operators a, the quantity ⟨C̃ β |a|C̃ β ⟩ is non-negative, so that the supremum on the right-hand side must lie in [0, +∞] as desired. It is worth noting that our argument above actually showed that ⟨C̃ β |a|C̃ β ⟩ is a decreasing function of β for all positive a ∈ A B L , and therefore if we wish, we may replace the supremum in (3.31) by a limit and write tr(a) = lim β↓0 ⟨C̃ β |a|C̃ β ⟩, (3.32) with the understanding that the limit could be +∞. Now, in the theory of von Neumann algebras, what we have shown thus far is sufficient to qualify the operation tr as what is called a weight on A B L . For tr to qualify as what is usually called a trace requires an additional property, which is that it gives identical results for both a † a and aa † for any a ∈ A B L . This is the form of the familiar cyclic property that is relevant in the context of general von Neumann algebras.
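The monotonicity statement can again be checked in the toy model where cylinders are modeled as C β = e^(−βH); normalizing to unit operator norm gives C̃ β = e^(−β(H−E_min)). The following sketch is only an illustration under these assumptions, not the paper's argument: it verifies numerically that ⟨C̃ β |a|C̃ β ⟩ decreases in β for positive a, so that its supremum is the β ↓ 0 limit tr(a), as in (3.31)-(3.32).

```python
# Toy check (same modeling assumption as above: C_beta = exp(-beta*H)) that
# f(beta) = <C~_beta| a |C~_beta> is decreasing in beta for positive a, where
# C~_beta = C_beta / ||C_beta||_op = exp(-beta*(H - E_min)).  The supremum over
# beta is therefore the beta -> 0 limit, which reproduces tr(a).
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
n = 5
H = rng.normal(size=(n, n)); H = H @ H.T
E_min = np.linalg.eigvalsh(H).min()
g = rng.normal(size=(n, n)); a = g @ g.T        # positive operator

def f(beta):
    Ct = expm(-beta * (H - E_min * np.eye(n)))  # normalized cylinder C~_beta
    return np.trace(Ct @ a @ Ct)

vals = [f(b) for b in (0.01, 0.1, 0.5, 1.0, 2.0, 5.0)]
assert all(x >= y - 1e-10 for x, y in zip(vals, vals[1:]))
print("f is decreasing in beta; sup_beta f(beta) -> tr(a) =", np.trace(a))
```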
To show this, it will be useful to find yet another characterization of our trace on A B L . We begin by again recalling that C 2β ′ L has operator norm 1, so that 1 − C 2β ′ L is positive and thus a † a − a † C 2β ′ L a is also positive. As a result, for any |b⟩ ∈ H LR we have Taking |b⟩ = |C β ⟩ then gives for all β, β ′ > 0. In particular, taking supremums yields We can in fact show that the inequality in (3.35) is always saturated by using our continuity axiom and the fact that A B L can be characterized as the closure of L in the strong operator topology. This will then give the desired reformulation of our trace that will allow us to prove tr (a † a) = tr (aa † ).
To establish this result, we first note that the characterization of A B L as a strong closure means that for any a ∈ A B L , for fixed β, and for any ϵ > 0 there is an operatorâ L ∈ L such that a|C β ⟩ −â L |C β ⟩ has magnitude less than ϵ. Using ∥C 2β ′ ∥ = 1, a short computation then yields (3.36) Note that this bound also holds if the operators C 2β ′ L are replaced by 1. Moreover, since Axiom 4 requires ⟨C β |â † L C 2β ′ Lâ L |C β ⟩ to be continuous in β ′ , the same continuity holds for Combining (3.36) (as written, and also with the operator C 2β ′ L replaced by 1) with (3.37) for small enough β ′ then yields which clearly vanishes as ϵ → 0. This shows that sup β ′ >0 ⟨C β |a † C 2β ′ L a|C β ⟩ cannot be smaller than ⟨C β |a † a|C β ⟩, and thus that the inequality in (3.35) is saturated. As a result, we have established that for all a ∈ A B L (or correspondingly A B R ) our trace may be written in the form To establish cyclicity, we will now show that for any β, β ′ > 0, we have To show this, we first derive the intermediate result Our first step is to use the fact that a is the strong operator topology limit of some net of operators { a ν L } in L , where ν takes values in some directed index set J (see again footnote 8). This means that the net of states { a ν L |ψ⟩} converges in the Hilbert space norm to a|ψ⟩ for all |ψ⟩, and in particular for the two choices |ψ⟩ := |C β ⟩, |ψ ′ ⟩ := C 2β Lb L |C β ′ ⟩ (defined by the desired β, β ′ , andb L ). For any ϵ > 0, we may then consider the balls B ϵ , B ′ ϵ of radius ϵ in H LR that are respectively centered on the states a|ψ⟩, a|ψ ′ ⟩. Convergence of the nets { a ν L |ψ⟩} and { a ν L |ψ ′ ⟩} to a|ψ⟩ and a|ψ ′ ⟩ means that we can always find a value of ν such that we have both a ν L |ψ⟩ ∈ B ϵ and a ν L |ψ ′ ⟩ ∈ B ′ ϵ . By choosing a sequence (ϵ n ) in R + with ϵ n → 0, we can thus construct a subsequence ( a nL ) of the net { a ν L } for which we have both a nL |ψ⟩ → a|ψ⟩ and a nL |ψ ′ ⟩ → a|ψ ′ ⟩, or more explicitly This is a small extension of the standard argument that every metrizable topology is sequential. The first limit in (3.42) then allows us to write In passing to the second line we have used the fact that bounded operators and normalizable states define continuous functions on the Hilbert space to take the limit outside the inner product. Similarly, the second limit in (3.42) yields Furthermore, (3.44) and (3.46) are equal since (3.47) where the middle step uses cyclicity of the trace (3.6) on L . Thus we have shown the desired intermediate result (3.41).
We are now ready to derive (3.40) from (3.41). In fact, we can derive the stronger result from which (3.40) follows immediately by setting b = a † . To obtain (3.48), we use an argument similar to the one above to find a sequence ( b nL ) in L that satisfies both of the conditions Then (3.48) follows by writing where the middle step follows by applying (3.41) to each b nL . Having established (3.40), we take the supremum over β and β ′ on both sides of this relation and use (3.39) to obtain the desired cyclic identity tr a † a = tr aa † , ∀a ∈ A B L . (3.55) We emphasize that our trace will generally give +∞ for some positive elements of A B L . In particular, according to (3.32) the trace of the identity operator 1 is where in the last step we used lim β↓0 ∥C β ∥ = 1 from Lemma 4 of appendix A. The righthand side is the trace of a cylinder of vanishing length, which certainly diverges in familiar semiclassical theories of gravity.
Type I von Neumann Factors, Hilbert Space Structure, and Entropy
As indicated above, the trace operation tr will turn out to be the key to unlocking the structure of any von Neumann algebra A B L defined as above by a diagonal Hilbert space sector H LR = H B⊔B , as well as to unlocking the structure of H LR = H B⊔B itself. Our work in section 3 established that tr satisfies the following two properties on A B L : 1. Linearity: tr(a + b) = tr(a) + tr(b), and tr(λa) = λtr(a) for any positive a, b ∈ A B L and λ ≥ 0.
2. Cyclicity: tr(aa † ) = tr(a † a). This in particular implies that the trace is invariant under the action of unitaries in the sense that for positive b ∈ A B L and unitary U ∈ A B L we have tr(U bU † ) = tr(b). We can also establish three further properties:
3. Faithfulness: if a ∈ A B L is positive and tr(a) = 0, then a = 0.
4. Semifiniteness: for any non-zero positive a ∈ A B L there is a non-zero positive b ∈ A B L with a − b positive and tr(b) < +∞.
5. Normality: for any bounded increasing net of positive operators a ν ∈ A B L with least upper bound a, we have tr(a) = sup ν tr(a ν ).
The faithfulness property was shown to hold on L in section 3.3, but here we wish to show that it holds on the full von Neumann algebra A B L . We will give a similar proof in section 4.1 after showing that the trace inequality also extends to A B L . The proofs of properties 4 and 5 are short, but they are somewhat technical. To avoid distraction from the main results we thus relegate them to appendix B. Of course, each property above has an analogue for A B R . As noted in section 3, properties 1 and 2 are the minimal requirements for the function tr to be called a trace on a von Neumann algebra. The faithfulness property then gives a sense in which our trace is non-degenerate. Semifiniteness guarantees that not all non-zero operators have infinite trace, and the normality condition describes a sense in which the trace is continuous.
These latter properties are important since there is no faithful normal semifinite trace on a type III von Neumann factor. Establishing 3, 4, and 5 above thus tells us that our von Neumann algebra contains only type I and type II factors. Furthermore, for such factors there is a unique faithful, normal, semifinite trace up to an overall factor (about which more will be said below); see e.g. [56].
As noted above, our argument for faithfulness will rely on extending the trace inequality (3.13) to the full von Neumann algebra. It turns out that this can be accomplished by an extension of the argument of section 3.2. This will be done in section 4.1 below. Section 4.2 will then use this result to show that type II factors are excluded (implying that A B L contains only type I factors) and to analyze the implications for the structure of H B⊔B . The entropy defined by tr is then discussed in section 4.4.
The trace inequality on A B L
We wish to extend the argument of section 3.2 to establish the trace inequality on A B L . It will first be useful to establish the following regularized version of the trace inequality, whose derivation will have much in common with the argument of section 3.2.
Lemma 1. For any β, β ′ > 0 and any a, b ∈ A B L , we have L . For later use we note that for any β, β ′ > 0 this implies that the nets of states {â ν,L |C β ⟩} and {b κ,L |C β ′ ⟩} converge respectively to a|C β ⟩ and b|C β ′ ⟩, where here the limits are taken using the standard Hilbert space topology. As in the proof of (3.55), we can then find sequences (â n,L |C β ⟩) and (b m,L |C β ′ ⟩) that also satisfŷ a n,L |C β ⟩ → a|C β ⟩, andb m,L |C β ′ ⟩ → b|C β ′ ⟩. (4.2) In the notation of section 3.2, consider again the '4-boundary' Hilbert space H B L 1 ,B R 1 ,B L 2 ,B R 2 and the associated pre-Hilbert space H B L 1 ,B R 1 ,B L 2 ,B R 2 . Note that these spaces both contain the states (C β ) L 1 ,R 1 , (C β ′ ) L 2 ,R 2 and (C β ) L 2 ,R 1 , (C β ′ ) L 1 ,R 2 . Acting with the sequences (â n,L ) and (b m,L ), we define the states where the operators act at the boundaries indicated by the subscripts L 1 , L 2 . These states again lie in both the Hilbert space H B L 1 ,B R 1 ,B L 2 ,B R 2 and the pre-Hilbert space Note that, as in section 3.2, the two states considered here are related by the action of the 'swap' operator S L 1 ,L 2 that exchanges the labels L 1 , L 2 on the relevant two copies of B; see figure 10.
We will now use (4.2) to show that the associated diagonal sequences {|Ψ 1 (n, n)⟩}, {|Ψ 2 (n, n)⟩} are both Cauchy sequences in H B L 1 ,B R 1 ,B L 2 ,B R 2 , so that their limits define states in H B L 1 ,B R 1 ,B L 2 ,B R 2 that we may call To see that these sequences are Cauchy, we first compute the (pre-)inner products ⟨Ψ 1 (n, m)|Ψ 1 (n ′ , m ′ )⟩ = tr(C β a ⋆ n a n ′C β )tr( = ⟨C β |â † n,Lâ n ′ ,L |C β ⟩⟨C β ′ |b † m,Lb m ′ ,L |C β ′ ⟩. Due to the convergence of (4.2), for any ϵ > 0 there are integers n 0 , m 0 such that for all n, n ′ > n 0 and m, m ′ > m 0 we have There is thus some n 1 such that for all n, m, n ′ , m ′ > n 1 we have The usual computation then shows that we also have In particular, the sequence {|Ψ 1 (n, n)⟩} is Cauchy. The argument for {|Ψ 2 (n, n)⟩} is identical. It is also clear from the work above that we have the norms Note that it was not really necessary to take the limits along the diagonal. The above argument also establishes the relations so that we may take these limits in any order that we like.
Having represented the states |Ψ 1 ⟩, |Ψ 2 ⟩ in terms of the above limits, we may use continuity of the inner product to write ⟨Ψ 1 |Ψ 2 ⟩ as a limit of inner products ⟨Ψ 1 (n, m)|Ψ 2 (n ′ , m ′ )⟩. Moreover, we can use the above freedom to choose the order of limits to first take n, n ′ → ∞ while saving the limit m, m ′ → ∞ for later. Thus we write where the second line can be read off from figure 10. In addition, the final step used (4.2) and the fact that the operatorsb m ′ ,L , C 2β ′ L , andb † m,L are bounded. We may then use (3.48) (with a, b replaced first byb † m,L a, a †b m ′ ,L and then by b † a, a † b to write Note that (4.13) is manifestly real and non-negative since it is the norm squared of C β ′ L b † a|C β . The desired Lemma now follows by applying the Cauchy-Schwarz inequality to |Ψ 1 ⟩, |Ψ 2 ⟩ and using (4.10) and (4.13).
Having derived (4.1) for any β, β ′ > 0, we now take supremums of both sides over both β and β ′ . By (3.39), taking sup β,β ′ >0 on the left-hand side gives tr(a † bb † a). On the right-hand side we simply have with the convention that if one supremum (or trace) is 0 while the other supremum (or trace) is +∞, their product is defined as 0. Thus for all a, b ∈ A B L we obtain tr(a † bb † a) ≤ tr(a † a) tr(b † b). (4.15) This is our trace inequality on the von Neumann algebra A B L . If we were allowed to cyclically permute a † bb † a inside the left trace to aa † bb † , then this would be of the form tr(AB) ≤ tr(A)tr(B) for two positive operators A = aa † , B = bb † . Such a cyclic permutation is not in fact allowed because the trace tr on A B L is defined only on positive elements of A B L . While the operator a † bb † a is positive, the operator AB is known to be positive if and only if A and B commute. Nevertheless, the trace inequality (4.15) will suffice for our purposes below.
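For orientation, note that in the familiar case where tr is the ordinary matrix trace, the inequality (4.15) reduces to ∥b†a∥²_HS ≤ ∥a∥²_HS ∥b∥²_HS, i.e. submultiplicativity of the Frobenius norm. The quick numerical check below is only of this finite-dimensional statement, not of the von Neumann algebra version derived above.

```python
# Numerical check of the finite-dimensional counterpart of (4.15) with tr the
# ordinary matrix trace: tr(a^dag b b^dag a) = ||b^dag a||_HS^2
#                                           <= ||a||_HS^2 ||b||_HS^2
#                                            = tr(a^dag a) tr(b^dag b).
import numpy as np

rng = np.random.default_rng(4)
n = 6
for _ in range(1000):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    b = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    lhs = np.trace(a.conj().T @ b @ b.conj().T @ a).real
    rhs = np.trace(a.conj().T @ a).real * np.trace(b.conj().T @ b).real
    assert lhs <= rhs + 1e-8
print("tr(a^dag b b^dag a) <= tr(a^dag a) tr(b^dag b) on all samples")
```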
Before continuing, however, we pause to note three useful corollaries of the above argument. The first is Corollary 1. For any a ∈ A B L with tr(a † a) finite, the limit lim β↓0 a|C β ⟩ exists in H B⊔B ; calling this limit |a⟩, we have ⟨a|a⟩ = tr(a † a). (4.16) Proof. Note that for β ′ > β we have commutes with any operator in the left algebra A B L . The right-hand side approaches a finite limit tr(a † a) as β, β ′ → 0, and thus we may use steps much like those above to find that any sequence β n → 0 defines a Cauchy sequence a|C βn ⟩ in H B⊔B , and that the limit is the same for all such sequences. Calling the limit |a⟩ and using (3.32) then immediately gives (4.16).
We also have Corollary 2. The trace tr is faithful on A B L ; i.e., if b ∈ A B L satisfies tr(bb † ) = 0, then b = 0. Proof. Together with Lemma 4 from appendix A, the continuity axiom (Axiom 4) implies that for any rimmed surface a ∈ Y d LR the states â L |C β ⟩ = |aC β ⟩ converge in the Hilbert space norm to |a⟩ as β → 0. Since any b ∈ A B L is bounded, we also find b † â L |C β ⟩ → b † |a⟩, and thus ∥b † |a⟩∥ 2 = lim β↓0 ⟨C β |â † L bb † â L |C β ⟩ ≤ tr(a ⋆ a) tr(bb † ), (4.18) where in the last step we have used the trace inequality (4.15). It follows that if tr(bb † ) vanishes for any b, then (4.18) requires b † |a⟩ to vanish for all a ∈ Y d LR . But b † is bounded, and the states |a⟩ define a dense subspace of the Hilbert space, so we must have bb † = 0. This establishes faithfulness on A B L , and the argument on A B R is identical.
Finally, we have Corollary 3. For any boundary B ′ and any n ∈ Z + , there is a natural isomorphism between H ⊗n B ′ and a subspace of H ⊔ n i=1 B ′ . Proof. We first prove this for n = 2. We wish to show that any |a⟩ ⊗ |b⟩ ∈ H B ′ ⊗ H B ′ is naturally mapped to a state in H B ′ ⊔B ′ . Since |a⟩, |b⟩ ∈ H B ′ , we can find sequences |a m ⟩, |b m ⟩ in H B ′ that converge to |a⟩, |b⟩, respectively. Using steps much like the ones used above in showing |Ψ 1 (m, m)⟩ to be Cauchy, we find that |a m ⊔ b m ⟩ is a Cauchy sequence in H B ′ ⊔B ′ . Moreover, its limit in H B ′ ⊔B ′ is independent of the choices for the sequences |a m ⟩, |b m ⟩ so long as they converge to |a⟩, |b⟩. Thus we may call this limit |a ⊔ b⟩, and so we have defined a natural map from H B ′ ⊗ H B ′ to H B ′ ⊔B ′ by mapping |a⟩ ⊗ |b⟩ to |a ⊔ b⟩. Moreover, this map is linear and preserves the inner product. Therefore, it provides a natural isomorphism between H B ′ ⊗ H B ′ and a subspace of H B ′ ⊔B ′ . This argument clearly generalizes to all n ∈ Z + , thus establishing this corollary.
This corollary allows us to embed H ⊗n B ′ into H ⊔ n i=1 B ′ as a subspace. Thus, for any operator acting on any one of the n tensor factors of H B ′ , we can now also allow it to act on this subspace of H ⊔ n i=1 B ′ . For the case of n = 2 and B ′ = B ⊔ B, we can use this fact to write |Ψ 1 ⟩ from (4.5) in the form: where we have used (4.2) and (4.3). In the limit β, β ′ → 0, |Ψ 1 ⟩ converges to |a L 1 ,R 1 , b L 2 ,R 2 ⟩ (called |a ⊔ b⟩ in the previous paragraph) and the inner product (4.13) becomes ⟨Ψ 1 |Ψ 2 ⟩ = tr(a † bb † a). (4.20) Note, however, that Corollary 3 states only that H ⊗n B ′ embeds into H ⊔ n i=1 B ′ ; the space H ⊔ n i=1 B ′ may well be strictly larger than ⊗ n i=1 H B ′ . This is in particular true for the topological model of [17] without end-of-the-world branes, as well as for models with end-of-the-world branes studied in [17] when considered in baby universe superselection sectors where the partition function is larger than the number of flavors of such branes. As noted in [17], this discussion is directly analogous to the considerations of [58] (see also [59]), so the issue may be called the 'Harlow factorization question'. When such extra states exist, and if one wishes to insist that there be a dual formulation as a standard non-gravitating quantum field theory (for which locality would strictly require H ⊔ n i=1 B ′ = ⊗ n i=1 H B ′ ), one might wish to call this phenomenon the 'Harlow factorization problem'.
A B L and A B R contain only type I factors
We can now say much more about the structure of the von Neumann algebras A B L and A B R defined by a diagonal Hilbert space H B⊔B . This is the part of the paper where we have developed enough control over our algebras, and in particular over our trace tr, to reach into the mathematics literature and make use of powerful results (even if they are nevertheless elementary by the standards of theorems about von Neumann algebras). It turns out that much of the study of a von Neumann algebra A can be reduced to the study of projections P ∈ A. Here as usual a projection is defined as an operator that satisfies P = P † and P 2 = P . It will thus be useful to better understand the implications of our results for such P .
Let us in particular apply our von Neumann algebra trace inequality (4.15) to the case a = b = P ∈ A B L . Since a † bb † a = P 4 = P , we quickly obtain trP ≤ (trP ) 2 . (4.21) Since tr is faithful, unless P is the trivial projection P = 0 we must have trP ≥ 1. (4.22) Thus the trace of any non-zero projection in A B L is bounded below by 1. Now, any von Neumann algebra A can be decomposed as the direct sum/integral of so-called von Neumann factors A µ which are just von Neumann algebras with trivial centers. Furthermore, any faithful normal semifinite trace on A induces a faithful normal semifinite trace on every factor A µ . Since it is known that any von Neumann factor is of type I, II, or III, and since there is no such trace on any type III factor, all factors of our A B L must be of type I or type II; see e.g. [56]. (As usual, we should in principle allow for exceptions on sets of measure zero. However, Lemma 2 below will show that A B L is a discrete direct sum of factors so that no interesting such exceptions can arise.) However, (4.22) quickly leads to a much stronger result. The crucial point is that for any faithful normal semifinite trace on a non-trivial type II von Neumann factor, there is a decreasing family of non-zero projections P λ with λ ∈ R + such that tr(P λ ) → 0 as λ → 0; see e.g. proposition 8.5.5 of [60]. But for small enough λ such P λ clearly violate (4.22). We thus see that A B L must contain only type I factors. Such von Neumann algebras are said to be of type I. This is the first key result of this paper.
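The bound (4.22) is of course familiar from the case of the ordinary matrix trace, where the trace of a non-zero orthogonal projection is its rank and hence an integer greater than or equal to 1. A small numerical sanity check of that familiar case:

```python
# Familiar finite-dimensional case of (4.21)-(4.22): with the ordinary matrix
# trace, a non-zero orthogonal projection P satisfies tr P = rank P >= 1, and
# hence also tr P <= (tr P)^2.
import numpy as np

rng = np.random.default_rng(5)
n, k = 8, 3
q, _ = np.linalg.qr(rng.normal(size=(n, k)))    # k orthonormal columns
P = q @ q.T                                     # rank-k orthogonal projection
tP = np.trace(P)

assert np.allclose(P @ P, P) and np.allclose(P, P.T)
assert tP >= 1 - 1e-9 and tP <= tP**2 + 1e-9
print("tr P =", round(tP, 6), " (rank k =", k, ")")
```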
Another important result from the literature is the so-called commutation theorem for semifinite traces; see e.g. theorem 2.22 of [56]. This theorem states that a von Neumann algebra A with a semifinite trace tr is the commutant of its opposite algebra A op (A with reversed multiplication rule) when acting on the Hilbert space H = a ∈ A : tr(a † a) < ∞ . Here the two algebras act by left and right multiplicationb L |a⟩ = |ba⟩ andb R |a⟩ = |ab⟩, and the above notation means that a dense subspace of H is defined by operators with finite tr(a † a), and that in this subspace we have ⟨a|a⟩ = tr(a † a) as in (4.16). This is precisely the structure of any diagonal Hilbert space H B⊔B , on which the algebras A B L and A B R act as opposites. It thus follows that A B L and A B R are commutants on H B⊔B . Alternately, without using (4.16) in Corollary 1, one can check that our algebras satisfy the conditions for the commutation theorems in [61] The above observations now tell us much about the structure of a diagonal Hilbert space sector H B⊔B . In analyzing this structure, it is useful to consider the center Z B L of A B L , which is defined to the subalgebra of operators in A B L that commute with all operators in A B L ; i.e., Z B L := {a| a ∈ A B L , ab = ba ∀b ∈ A B L }. In particular, any z ∈ Z B L commutes with its adjoint z † . This means that central operators are normal and can be diagonalized on H B⊔B . In fact, since all elements of Z B L commute with each other, we can simultaneously diagonalize all operators in Z B L on the Hilbert space H B⊔B . Interestingly, we can use the bound (4.22) to show that any central operator z ∈ Z B L has a purely discrete spectrum. For clarity, we state this as the following lemma: Lemma 2. The spectrum of any z ∈ Z B L is purely discrete in the sense that H B⊔B is the closure of the linear span of all normalizable eigenstates of z.
Proof. Without loss of generality, let us take z to be self-adjoint. To establish the desired result, let us first write H B⊔B = H D B⊔B ⊕ H ⊥D B⊔B where H D B⊔B with superscript D for "discrete spectrum") is the closure of the linear span of all normalizable eigenstates of z. Note that the projection P z 0 onto normalizable states with eigenvalue z 0 also defines a central element of our von Neumann algebra A B L . Summing over all such z 0 then shows that the projection P D onto H D B⊔B again lies in Z B L , so that the complementary projection P ⊥ D = 1 − P D onto H ⊥D B⊔B must lie in Z B L as well. Now assume that P ⊥ D is not the zero operator. The semifinite property of our trace then implies that there is some non-zero positive operator a ∈ A B L with finite trace such that P ⊥ D − a is positive, and thus in particular for which a annihilates all states in H D B⊔B . Consider now the projection P a>ϵ onto the part of the spectrum of a with eigenvalues greater than ϵ for some ϵ > 0 and note that P a>ϵ ∈ A B L . Since a has finite trace and a − ϵP a>ϵ is positive, the normality property of our trace then requires that tr(P a>ϵ ) is again finite. Furthermore, since a is not the zero operator, the spectral theorem (say, in the form of theorem 5.2.2 of [55]) implies that P a>ϵ must be non-vanishing for some ϵ > 0.
It will also be useful to construct the operator zP a>ϵ . Since z commutes with P a>ϵ , the operator zP a>ϵ is self-adjoint and so can be diagonalized in the sense of the spectral theorem (see again theorem 5.2.2 of [55]). Furthermore, zP a>ϵ can have no normalizable eigenvector |ψ⟩ with non-zero eigenvalue z 0 since then P a>ϵ |ψ⟩ would be a (necessarily nonvanishing) normalizable eigenvector of z in H ⊥D B⊔B . Let us now define λ max := ∥zP a>ϵ ∥ to be the operator norm of zP a>ϵ . Note that λ max cannot be zero since then zP a>ϵ would vanish and any |ψ⟩ ∈ H B⊔B not annihilated by P a>ϵ would define a normalizable eigenvector P a>ϵ |ψ⟩ of z in H ⊥D B⊔B with eigenvalue 0. Moreover, at least one of λ max or −λ max is a spectral value of zP a>ϵ (see e.g. proposition 3.2.15 of [55]). Without loss of generality, we assume that λ max is a spectral value of zP a>ϵ (if not, simply replace zP a>ϵ by −zP a>ϵ below). Then since zP a>ϵ has no normalizable eigenvector of eigenvalue λ max , the above spectral theorem implies that for any λ 0 < λ max and any positive integer n ∈ Z + there are real numbers λ 1 , λ 2 , . . . , λ n with λ 0 < λ 1 < λ 2 < · · · < λ n < λ max for which the projections P [λ i−1 ,λ i ] onto the spectral intervals [λ i−1 , λ i ] of zP a>ϵ are non-vanishing for i = 1, . . . n. If we choose λ 0 > 0, we have P [λ 0 ,λmax] ≤ P a>ϵ (as any state annihilated by P a>ϵ must be annihilated by zP a>ϵ , and thus also by P [λ 0 ,λmax] ). Since tr(P a>ϵ ) is finite, normality of our trace again requires tr(P [λ 0 ,λmax] ) to be finite, which yields the bound (4.23) But since all of these traces are positive (and, in particular, non-zero since the projections are non-trivial), for n > tr(P [λ 0 ,λmax] ) some P [λ i−1 ,λ i ] must have trace less than 1. This contradicts the bound (4.22), so that P ⊥ D must in fact vanish. Thus z has purely discrete spectrum in the sense that H B⊔B is the closure of the linear span of all normalizable eigenstates of z.
The analogous statement will again hold when we simultaneously diagonalize all central operators in Z B L . Let us denote the simultaneous eigenspaces by H µ B⊔B for µ in some index set I. In particular, for each µ ∈ I there is a set of complex numbers z µ such that the states |ψ⟩ in H µ B⊔B are precisely the set of states for which z|ψ⟩ = z µ |ψ⟩ for any z ∈ Z B L . The Hilbert space H B⊔B then decomposes as a direct sum (not a more general integral) over such eigenspaces: There is of course a corresponding resolution of the identity on H B⊔B in terms of orthogonal projections P µ onto H µ B⊔B : where P µ P ν = P µ δ µν and P † µ = P µ . The fact that A B L is weakly closed means that the projections P µ lie in A B L , and thus in fact also lie in the center Z B L . As a result, the decomposition (4.24) also has an analogue at the level of the von Neumann algebra A B L . To see this, simply note that for each µ ∈ I we can define a subalgebra A B L,µ of operators of the form P µ a for a ∈ A B L . We may thus use the resolution of the identity (4.25) to write It is also clear that each A B L,µ annihilates any H ν B⊔B with µ ̸ = ν, and that the subalgebras A B L,µ are von Neumann algebras in their own right (acting on H µ B⊔B ). Furthermore, consider any operator z µ in the center of A B L,µ . Then since P µ projects the full von Neumann algebra onto A B L,µ , the operator P µ z µ will commute with all a in the full von Neumann algebra A B L and thus lies in the original center. But we have already diagonalized all operators in the center of A B L . So, on the subspace H µ B⊔B , the operator P µ z µ must act as a multiple of the identity 1 µ . But on H µ B⊔B the operator P µ is already proportional to 1 µ (with non-zero coefficient), so this must be true of our z µ as well. It follows that each A B L,µ is a von Neumann algebra with trivial center.
A von Neumann algebra with trivial center is known as a von Neumann factor. Such factors can be classified as being of type I, II, or III, and our arguments above showed that each A B L,µ must be of type I. There is, of course, also a corresponding decomposition of A B R . In fact, since A B L and A B R are commutants of each other, their central subalgebras must define the same set of operators on H B⊔B ; i.e., any operator on H B⊔B that lies in the center of A B L must also lie in the center of A B R . The decomposition of A B R into A B R,µ thus uses precisely the same index set I, and it is associated with the identical decomposition of the Hilbert space (4.24). Furthermore, the subalgebras A B L,µ , A B R,µ are both type I von Neumann factors that are commutants of each other on the corresponding Hilbert space H µ B⊔B . Now, it is also known that any type I von Neumann factor is isomorphic to the algebra B(H) of all bounded operators on some Hilbert space H. Since this is true of both A B L,µ and A B R,µ , the fact that these two algebras are commutants on H µ B⊔B can be used to show that this Hilbert space admits a factorization such that the action of every a ∈ A B L,µ on (4.27) is of the form a L ⊗ 1 µ,R , where 1 µ,R is the identity on H µ B⊔B,R . Furthermore, any bounded operator on (4.27) of the form a L ⊗ 1 µ,R is a member of A B L,µ . The operators a ∈ A B R,µ on (4.27) are analogously the set of operators of the form 1 µ,L ⊗ a R . Since we can extend (3.3) to show that † defines an anti-linear isomorphism between A B L,µ and A B R,µ , it follows that the Hilbert space factor H µ B⊔B,L is similarly isomorphic to H µ B⊔B,R . Nevertheless, we maintain the labels L, R for clarity below. This is precisely the structure advertised in the introduction. In particular, by restricting it to H B⊔B , any density matrix on the quantum gravity Hilbert space clearly defines a density matrix ρ B⊔B on the sector H B⊔B . For convenience, let us suppose that ρ B⊔B is normalized in the sense that it gives an expectation value of 1 for the identity on H B⊔B . This is equivalent to saying that the standard Hilbert space trace of ρ B⊔B (defined by summing diagonal matrix elements of ρ B⊔B over an orthonormal basis of the Hilbert space H B⊔B ) yields 1. This ρ B⊔B then defines density matrices ρ µ for which where the p µ are probabilities given by the expectation values of the operators P µ in the state ρ B⊔B and the ρ µ are normalized density matrices on H µ B⊔B . The key point is that each such ρ µ now induces normalized density matrices ρ µ L , ρ µ R on the Hilbert space factors H µ B⊔B,L , H µ B⊔B,R . If one thinks of density matrices as positive linear functionals on the algebra of observables then ρ µ L , ρ µ R are the restrictions of ρ µ to the left and right von Neumann algebras A B L,µ , A B R,µ . But one may equivalently think of ρ µ L as the trace of ρ µ over H µ B⊔B,R , and ρ µ R is similarly the trace of ρ µ over H µ B⊔B,L . This structure will allow us to discuss entropies in what one might call "standard physics terms" in section 4.4 below. However, it will simplify our discussion of entropies if we first make a brief digression (section 4.3) in order to more carefully analyze the normalization of the trace tr.
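The factorized structure described here is the standard one for type I factors and is easy to illustrate in finite dimensions: operators of the form a ⊗ 1 commute with operators of the form 1 ⊗ b, and it is the former that model the role of A B L,µ on H µ B⊔B . A minimal sketch:

```python
# Minimal illustration of the factorized structure: on H_L (x) H_R, the "left"
# operators a (x) 1 commute with the "right" operators 1 (x) b.  These model
# the commuting algebras A^B_{L,mu} and A^B_{R,mu} acting on a mu-sector.
import numpy as np

rng = np.random.default_rng(6)
dL, dR = 3, 4
a = rng.normal(size=(dL, dL))
b = rng.normal(size=(dR, dR))

A = np.kron(a, np.eye(dR))      # left operator  a (x) 1
B = np.kron(np.eye(dL), b)      # right operator 1 (x) b
assert np.allclose(A @ B, B @ A)
print("operators a (x) 1 and 1 (x) b commute")
```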
The normalization of tr
We saw in section 4.2 that the von Neumann algebra A B L defined by a diagonal Hilbert space sector H B⊔B can be written in terms of type I von Neumann factors A B L,µ . Since the operators in A B L,µ are just operators in A B L of the form P µ a, the trace tr on A B L is also defined on positive operators in A B L,µ . However, as noted above, our A B L,µ can be thought of as the algebra of all bounded operators on the Hilbert space factor H µ B⊔B,L . Faithful, normal, semifinite traces on such algebras are known to be unique up to an overall normalization constant. In particular, there must be real numbers C µ > 0 such that for all positive a ∈ A B L,µ we have where |i⟩ L is a (discrete) orthonormal basis 9 for H µ B⊔B,L , and where the right-hand side defines the operation Tr µ on our von Neumann algebra. Similarly, for all positive a ∈ A B R,µ we have where |i⟩ R is a (discrete) orthonormal basis for H µ B⊔B,R . In particular, since the left and right algebras and Hilbert spaces are isomorphic, and since the trace on positive operators is invariant under this isomorphism (i.e., the trace on the right algebra acts in just the same way as the trace on the left algebra), the constants C µ in (4.29) are identical to the constants C µ in (4.30).
The trace inequality (4.15) can be seen to constrain the values of the constants C µ . In particular, let P be a one-dimensional projection onto a state in H µ B⊔B,L . Then Tr µ (P ) = 1, so tr(P ) = 1/C µ . But we saw in section 4.2 that any non-zero projection must have tr(P ) ≥ 1. Thus we see that the constants C µ satisfy C µ ≤ 1, ∀µ. (4.31) However, there are also further constraints on the constants C µ . To see this, recall that section 4.1 derived the trace inequality implying (4.31) by considering a standard consequence (the Cauchy-Schwarz inequality) of positivity of the inner product on the Hilbert space sector H B⊔B⊔B⊔B associated with 4 copies of the boundary B. It is thus natural to ask if further information about the C µ can be obtained by considering Hilbert spaces associated with even more copies of B.
This turns out to be the case. To proceed, note that by Corollaries 1 and 3 from section 4.1, for any allowed B and any operators a 1 , . . . , a n ∈ A B L with finite tr(a † i a i ) there is a state that we may call |a 1 , . . . , a n ⟩ in the Hilbert space H ⊔ 2n i=1 B associated with n copies of the boundary B ⊔ B. Furthermore, the Cauchy-Schwarz inequality used to derive the trace inequality (4.15) follows in the standard way from positivity of the norm squared of the state One would thus like to investigate the positivity of analogous totally anti-symmetric combinations of states for general n.
For simplicity, we do so here only for the simple case when all operators a 1 , . . . , a n agree and where they are equal to a finite-dimensional projection P . In particular, we will establish the following lemma which strengthens the bound (4.22): Lemma 3. For any non-zero finite-dimensional projection P ∈ A B L , the trace tr(P ) is a positive integer.
Proof. To begin, recall that our trace is normal and semifinite. These properties imply that for any non-zero positive operator (and thus any non-zero projection P ) there is some non-zero positive operator Q with P − Q positive such that Q has finite non-zero trace. In particular, for any one-dimensional projection P on a Hilbert space, any Q with P − Q positive must annihilate states orthogonal to the image of P . As a result, Q = αP for some real number α with 1 ≥ α > 0. As a result, tr(P ) = α −1 tr(Q) must be finite, showing that any onedimensional projection does indeed have a finite trace. The same is then necessarily true for any finite-dimensional projection P by linearity.
It is then useful to define the notation |P ⊗n ⟩ := |P L 1 R 1 , . . . , P LnRn ⟩. For any permutation π on n labels we may also define the (left) n-boundary swap operator S L π which acts on H ⊔ 2n i=1 B by permuting the labels of the n left boundaries as dictated by π. We then wish to consider the norm of the state |P ∧n ⟩ := π (−1) π S L π |P ⊗n ⟩, (4.33) where the sum is over all n-object permutations and (−1) π is +1 (−1) for even (odd) permutations π.
To compute this norm, it will be useful to write (4.33) in the form where δ i,1 is a Kronecker δ symbol and, in analogy with section 4.1, we have defined the operators S L 1 ,L i that swap the labels on the 1st and ith left boundaries (so that, in particular, S L 1 ,L 1 is the identity). The norm-squared of (4.34) is where we pass from the first to the second line by using that, when i, j, and 1 are all distinct, we may write Here in going to the 3rd line, we used that P ∧(n−1) L 2 ,R 2 ,...,Ln,Rn is odd under S L i ,L j when i ̸ = j. The 4th line then follows from a derivation similar to how we derived (4.20); see again figure 10. In particular, the swap operator S L 1 ,L i on the 3rd line effectively pulls the P L 1 ,R 1 in the bra and ket into the middle of the 4th line, where they now act at L i . In going to the 5th line, we used that P ∧(n−1) L 2 ,R 2 ,...,Ln,Rn is invariant under any P † L i = P L i since P 2 = P . Now (4.35) must be non-negative. But ⟨P ∧(n−1) |P ∧(n−1) ⟩ is also non-negative. Therefore, for any n ∈ Z + with n > 1, unless |P ∧(n−1) ⟩ = 0, we must have n − 1 ≤ ⟨P |P ⟩ = tr(P 2 ) = tr(P ). (4.37) But finiteness of tr(P ) means that (4.37) must fail for some n.
The above argument then requires the state |P ∧(n−1) ⟩ to vanish for such n. On the other hand, (4.35) also calculates the norm of |P ∧n ⟩ recursively in terms of the norms of the states |P ∧m ⟩ with m < n. We thus see that this norm can vanish only if tr(P ) = tr(P 2 ) = ⟨P |P ⟩ is a non-negative integer, so that the trace of any non-zero finite-dimensional projection P must be some positive integer n.
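The role of the antisymmetrized states in Lemma 3 has a familiar exclusion-principle flavor: if P projects onto a k-dimensional space, the totally antisymmetric combination of n copies is non-zero only for n ≤ k. The sketch below checks this counting in ordinary finite-dimensional quantum mechanics; there tr P = k is an integer by construction, whereas the paper's argument runs in the opposite direction, using positivity of these norms to force integrality of tr P.

```python
# Exclusion-principle counting behind Lemma 3, checked in ordinary quantum
# mechanics: the antisymmetrizer on (C^k)^{(x) n} has rank C(k, n), which is
# non-zero for n <= k and vanishes for n > k.
import numpy as np
from itertools import permutations
from math import comb, factorial

def antisym_dim(k, n):
    """Trace (= rank) of the antisymmetrizer acting on (C^k)^(x n)."""
    dim = k ** n
    A = np.zeros((dim, dim))
    for perm in permutations(range(n)):
        sign = round(np.linalg.det(np.eye(n)[list(perm)]))   # parity of perm
        op = np.zeros((dim, dim))
        for idx in range(dim):
            digits = np.unravel_index(idx, (k,) * n)
            op[np.ravel_multi_index(tuple(digits[p] for p in perm), (k,) * n), idx] = 1.0
        A += sign * op
    return int(round(np.trace(A) / factorial(n)))

for k in (2, 3):
    for n in (1, 2, 3, 4):
        assert antisym_dim(k, n) == comb(k, n)
print("dim of antisymmetric subspace = C(k, n); it vanishes exactly for n > k")
```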
We may now apply Lemma 3 to a one-dimensional projection P onto a state in some H µ B⊔B,L . Writing tr(P ) = n, we then have 1 = Tr µ (P ) = C µ tr(P ) = nC µ , (4.38) which requires each constant C µ to be of the form The quantization condition (4.39) allows us to give a particularly nice physical and mathematical description of our trace tr. For n ∈ Z + , let us introduce the n-dimensional Hilbert spaces H n . We may then define the extended Hilbert space factors (4.40) and the modified summed Hilbert space Furthermore, the operatorsã again define a faithful representation of A B L,µ on H µ B⊔B,L . We have thus found a representation of our trace tr in terms of a standard Hilbert space sum over diagonal matrix elements 10 . Physically, one might say that our trace tr gives the Hilbert space trace in a context where there are 'hidden sectors' H nµ on which operators in A B L,µ act as the identity.
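The 'hidden sector' description is easy to realize concretely: if tr = n µ Tr µ on a given factor (so that C µ = 1/n µ ), then tr agrees with the ordinary Hilbert-space trace of ã = a ⊗ 1 on an enlarged space on which the algebra acts trivially in the second factor. A minimal sketch, with illustrative values of the dimension and multiplicity:

```python
# Concrete 'hidden sector' realization: if tr = n_mu * Tr_mu on a type I factor
# (so C_mu = 1/n_mu), then tr agrees with the ordinary trace of a~ = a (x) 1
# on the enlarged space H (x) H_{n_mu}.  Dimensions below are illustrative only.
import numpy as np

rng = np.random.default_rng(7)
d, n_mu = 4, 3
g = rng.normal(size=(d, d))
a = g @ g.T                              # positive operator in the factor

a_ext = np.kron(a, np.eye(n_mu))         # a~ = a (x) 1 on the enlarged space
assert np.isclose(n_mu * np.trace(a),    # tr(a) = n_mu * Tr_mu(a) ...
                  np.trace(a_ext))       # ... = ordinary trace of a~
print("tr(a) =", n_mu * np.trace(a))
```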
We now make a final further comment by again using Corollary 3. In particular, let us consider the tensor product H B ⊗ H B of two Hilbert spaces that are each associated with a single copy of B. Corollary 3 then guarantees that H B ⊗ H B is a subspace of the Hilbert space H B⊔B associated with two copies of B. Furthermore, it is clear that any a ∈ A B L must act on this space as some a L ⊗ 1 H B , and analogously for elements of A B R . As a result, this H B ⊗ H B must be one of the terms H µ B⊔B = H µ B⊔B,L ⊗ H µ B⊔B,R in the decomposition (4.24). Let us denote this term by µ = ⊗. Then in particular we have Assuming that H B is not empty, this observation will also tell us that the trace normalization factor C ⊗ = 1/n ⊗ associated with this subspace must be 1. To see this, consider any a ∈ Y d B (associated with just one copy of B) that is not in the null space N B . This a of course defines a non-zero state |a⟩ ∈ H B . For convenience, let us normalize a so that ⟨a|a⟩ = 1. From this a we can of course construct a ⊔ a * ∈ Y d B ⊗ Y d B ⊂ Y d B⊔B , which then defines an operatorP a,L := (a ⊔ a * ) L ∈Â B⊔B L . We use the notationP a,L because a short computation (see figure 11) yieldsP † a,L =P a,L , andP 2 a,L =P a,L , (4.46) showing that it is a projection. We also find Furthermore, it is clear that for any b ∈ Y d B⊔B (with two B-boundaries), the stateP a,L |b⟩ = |a⟩ ⊗ b t |a * ⟩ lies in H ⊗ B⊔B = H B ⊗ H B ; see again figure 11. Self-adjointness then requires that P a,L annihilate the orthogonal complement of H ⊗ B⊔B . In particular, we see that any such operator is of the formP a,L = (|a⟩⟨a| ⊗ 1 H B ) P ⊗ , (4.48) where P ⊗ is the projection onto the product sector H ⊗ B⊔B = H B ⊗ H B in (4.24). As a result, the trace operation Tr ⊗ defined by H ⊗ B⊔B,L = H B is well-defined onP a,L and we find Tr ⊗ (P a,L ) = ⟨a|a⟩ = tr(P a,L ), (4.49) where the last step is again apparent from figure 11. We must thus also have C ⊗ = 1 = n ⊗ as claimed above.
Hidden sectors and entropy from algebras
As noted above, the extended Hilbert space H B⊔B is mathematically useful. We now turn to the question of whether this space may be physically useful as well.
To do so, let us suppose for the moment that the B ⊔ B sector of the physical Hilbert space of our quantum gravity theory were actually of the form (4.41), and in particular that it contained factors H nµ in some H µ B⊔B such that all of the operators defined by our path integral act trivially on these H nµ . Note that we suppose this to be true not only for operators in our algebras A B L and A B R , but also for any operator on H B⊔B defined by any surface in Y d B⊔B⊔B⊔B (i.e., defined by any surface that has four B-boundaries), and in fact even for operators defined by surfaces in Y d B⊔B⊔B for generalB that map H B⊔B to other sectors of the Hilbert space. So long as these are the only observables to which we have access, we will never know that such 'hidden sectors' actually exist. Furthermore, none of our observables would be able to change the parts of the state associated with such hidden sectors no matter how hard we might try.
Indeed, let us suppose that for each µ ∈ I (i.e., for each term H µ B⊔B in the decomposition (4.24)) there is a preferred (normalized) maximally entangled state |χ µ ⟩ ∈ H nµ ⊗ H nµ . We can use such states to map any state |ψ⟩ ∈ H B⊔B isometrically to the state This is the relation that arises when our original H B⊔B defines a code subspace of H B⊔B for a quantum error correcting code 11 with two-sided recovery associated with the algebras A B L and A B R [30]. In particular, if we call the above isometric embedding χ : H B⊔B → H B⊔B , then χ can be used to translate operators on H B⊔B into operators that act on the image of χ in H B⊔B .
The insertion of the maximally entangled state |χ µ ⟩ will clearly lead to differences in quantitative measures of left-right entanglement as defined by the Hilbert spaces H B⊔B and H B⊔B . The two descriptions of the Hilbert space thus also clearly lead to different notions of entropy. This is in particular familiar from the discussion of [30]. Our setting here is slightly more general than that of [30] (see again footnote 11), so we will postpone writing detailed formulas until we define further notation and terminology.
In the theory of von Neumann algebras one can introduce a notion of entropy whenever one has a faithful normal semifinite trace tr on (positive elements of) a von Neumann algebra A. For positive a ∈ A with tr(a) = 1, one may attempt to define Since a is a positive bounded (and thus self-adjoint) operator on some Hilbert space, the operator a ln a can be defined using the spectral representation of a, and since it is bounded, it must also lie in A. In a truly general von Neumann algebra the operator −a ln a need not be positive, which means that (4.51) is not obviously well-defined for all such positive a. But (4.15) and (4.16) imply that the operator norm ∥a∥ of a is bounded by tr(a † a) = tr(a 2 ) which is in turn bounded by (tr a) 2 = 1, so in our context −a ln a is positive and (4.51) is well-defined so long as we allow it to take the value +∞.
On the other hand, in physics we typically wish to discuss entropies defined by states, which in the present context we take to mean (normalized) pure states |ψ⟩ in some Hilbert space H. The connection to the above entropy for positive elements of a von Neumann algebra A acting on H is of course through the concept of a density matrix which, in the general context, is more properly called a density operator. The point is that any physics encoded in |ψ⟩ that can be extracted using observables in A is described by the expectation values ⟨ψ|a|ψ⟩ for a ∈ A. In particular, since A is closed under the product operation, this includes all possible correlation functions (which are described by the case a = a 1 a 2 . . . a n ). In fact, since any a can be written in terms of its self-adjoint and anti-self-adjoint parts, and since self-adjoint operators can be written in a standard spectral representation in terms of projections onto their spectral intervals, it suffices to know ⟨ψ|a|ψ⟩ for all projections a ∈ A. We will be slightly more general than this below, but it will still be useful to henceforth restrict discussion of such expectation values to positive operators a ∈ A.
Suppose now that we are given a (faithful normal semifinite) trace tr on A. For the purposes of computing such expectation values, we can replace |ψ⟩ with a density operator ρ ψ ∈ A if we can find a positive ρ ψ such that for all positive a ∈ A we have where the positive square root ρ 1/2 ψ of the positive operator ρ ψ can as usual be defined using a spectral decomposition. In familiar physics contexts the cyclic property of the trace would be used to write the right-hand side as tr(ρ ψ a), but our trace does not allow this since it is defined only on positive operators and ρ ψ a need not be positive. When such a ρ ψ exists, we can use (4.51) to define an entropy on A for the state |ψ⟩ ∈ H.
Let us now apply this discussion to states |ψ⟩ in one of our diagonal Hilbert space sectors H B⊔B . Then we in fact have three potentially useful notions of traces on e.g. A B L . The first is the trace tr defined by (3.31). The second is defined by the collection of Hilbert space traces Tr µ . Here we make use of the fact that the decomposition of (4.26) of A B L in terms of factors allows us to analogously decompose any a ∈ A B L as a = ⊕ µ∈I a µ . We may thus define a trace We could also have chosen to insert some additional positive coefficient f (µ) that changes the weights assigned to each µ in (4.53), but we have explicitly chosen not to do so in defining Tr. However, our third trace Tr is equivalent to such a reweighting of (4.53) and is defined by the expression Tr(a) := µ∈I Tr µ (a µ ) = µ∈I n µ Tr µ (a µ ), (4.54) where the n µ are the positive integers defined in section 4.3. As noted in that section, for any a µ we in fact have tr(a µ ) = Tr µ (a µ ), which by the linearity of tr means that our first trace tr is identical on A B L to our third trace Tr. Nevertheless, it will be useful to continue to use both symbols tr and Tr below to allow us to emphasize different conceptual points of view. Now, given a normalized state |ψ⟩ ∈ H B⊔B , we can use (4.24) to write |ψ⟩ = µ∈I |ψ µ ⟩ with |ψ µ ⟩ ∈ H µ B⊔B . Then for any a ∈ A B L we have Let us now ask whether we can construct a density operator ρ ψ such that the expectation values (4.55) are reproduced using the Tr operation through Working in any given µ-sector, the factorization of the Hilbert space H µ B⊔B into H µ B⊔B,L and H µ B⊔B,R , and the fact that A B L,µ is precisely the algebra of bounded operators on H µ B⊔B,L , mean that we can do this via the usual computation that considers the operator |ψ µ ⟩⟨ψ µ | on H µ B⊔B and then traces over H µ B⊔B,R ; i.e., we can define ρ µ ψ as an operator on H µ B⊔B,L by giving a formula for its matrix elements between states |α⟩ L and |β⟩ L in H µ B⊔B,L . To do so, it will be useful to introduce an orthonormal basis |i⟩ R for H µ B⊔B,R , and to use the notation |α, i⟩ LR := |α⟩ L ⊗ |i⟩ R (4.57) for the tensor product of any |α⟩ L ∈ H µ B⊔B,L and the basis state |i⟩ R . We will also introduce p µ = ⟨ψ µ |ψ µ ⟩. Working in those µ sectors where p µ ̸ = 0, we take the matrix elements of ρ µ ψ between the above states to be Normalizability of |ψ µ ⟩ implies this sum to converge, and ρ µ ψ is positive since its expectation value in any state is given by setting α = β in (4.58), in which case the right-hand side is a sum of non-negative terms. We also clearly have Tr µ (ρ µ ψ ) = 1, so ρ µ ψ is bounded. Defining then yields (4.56) in the usual way as desired.
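In each µ-sector this is just the usual partial-trace construction. The sketch below is a finite-dimensional version of (4.56)-(4.58): writing |ψ⟩ as a matrix ψ[α, i] over a left basis and a right basis, the reduced density matrix ψψ† reproduces expectation values of operators of the form a ⊗ 1.

```python
# Finite-dimensional version of (4.56)-(4.58): with |psi> written as a matrix
# psi[alpha, i] over a left and a right basis, the reduced density matrix
# rho_L = psi psi^dagger reproduces <psi| a (x) 1 |psi> for left operators a.
import numpy as np

rng = np.random.default_rng(8)
dL, dR = 3, 4
psi = rng.normal(size=(dL, dR)) + 1j * rng.normal(size=(dL, dR))
psi /= np.linalg.norm(psi)                       # normalized state

rho_L = psi @ psi.conj().T                       # (rho_L)_{ab} = sum_i psi[a,i] psi*[b,i]
a = rng.normal(size=(dL, dL)) + 1j * rng.normal(size=(dL, dL))
a = a + a.conj().T                               # self-adjoint left observable

vec = psi.reshape(-1)                            # |psi> as a vector on H_L (x) H_R
expect_state = (vec.conj() @ np.kron(a, np.eye(dR)) @ vec).real
expect_rho = np.trace(rho_L @ a).real
assert np.isclose(expect_state, expect_rho)
print("<psi| a (x) 1 |psi> = Tr(rho_L a) =", expect_rho)
```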
The above construction of ρ ψ ∈ A B L using the Hilbert space trace Tr may seem natural. But one can of course repeat precisely the same construction using the state |ψ⟩ = χ|ψ⟩ in the enlarged Hilbert space H B⊔B that includes the hidden sectors H nµ . In this case we use the trace Tr to write the final result in the form ⟨ψ|a|ψ⟩ = Tr(ρ 1/2 ψ aρ 1/2 ψ ), (4.60) whereρ ψ = ⊕ µ∈I p µρ µ ψ forρ µ ψ defined by where |α⟩ L , |β⟩ L are states in H µ B⊔B,L , the states |ĩ⟩ R form an orthonormal basis for H µ B⊔B,R , and |ψ µ ⟩ is a state in H µ B⊔B defined by the decomposition |ψ⟩ = µ∈I |ψ µ ⟩. In analogy with our previous notation, we have defined |α,ĩ⟩ LR := |α⟩ L ⊗ |ĩ⟩ R . Furthermore, we emphasize that the probabilities are identical to those used in (4.58) due to the fact that our map χ : H B⊔B → H B⊔B is an isometry. Comparing (4.56) and (4.60), and recalling that the traces Tr and Tr do not agree, we find that despite -and one might even say, because of -agreement between the left-hand sides of (4.56) and (4.60), the density operators ρ ψ andρ ψ will generally represent distinct elements of A B L . As a result, using ρ ψ and Tr to define the entropy of |ψ⟩ via (4.51) generally leads to different results than usingρ ψ and Tr. We emphasize here that while the definition ofρ ψ used the state |ψ⟩ as an intermediate step, this |ψ⟩ was constructed from |ψ⟩ using the isometric embedding χ, soρ ψ is still uniquely determined by the original state |ψ⟩.
Of course, since Tr and tr are identical functions on A B L , we may choose to write (4.60) in the form ⟨ψ|a|ψ⟩ = tr(ρ We may correspondingly use (4.51) with the path integral trace tr to define a notion of von Neumann entropy S L vN associated with the left algebra A B L for general (normalized) states |ψ⟩ ∈ H B⊔B . We may also perform the standard computation to relate the total entropy to the average of entropies in each µ-sector: Here the superscript L emphasizes that while the state |ψ⟩ is pure, we are considering a notion of entropy associated only with the left algebra A B L . The important points of our discussion above are that there is aρ ψ that for a ∈ A B L correctly computes expectation values in the original state |ψ⟩, and that (4.64) can be represented as a standard entropy defined by 'tracing out the right tensor factor' of each µ-sector of the Hilbert space H B⊔B . Again, the Hilbert space H B⊔B is not a tensor product of Hilbert spaces, though it could be written as a direct sum of such spaces so that the full density × × a ψ ψ ⋆ Figure 12. When a and ψ are both simple two-boundary surfaces, the path integral shown computes both tr(ψ ⋆ aψ) and ⟨ψ|a|ψ⟩. For such a, ψ, this shows thatρ ψ = (ψψ ⋆ ) L satisfies (4.63) since the former also agrees with tr(ρ matrixρ ψ is defined by a direct sum over theρ µ ψ . As a result, there is a natural 'entropy of mixing' contribution given by the last term on the right-hand side of (4.64).
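The decomposition (4.64) is the standard entropy formula for a block-diagonal density matrix and is easy to verify numerically: the total von Neumann entropy is the p µ -weighted average of the in-sector entropies plus the entropy of mixing. A minimal check (sector dimensions and probabilities below are illustrative):

```python
# Check of the entropy decomposition (4.64) for a block-diagonal density matrix
# rho = (+)_mu p_mu rho_mu:  S(rho) = sum_mu p_mu S(rho_mu) - sum_mu p_mu ln p_mu.
import numpy as np

def vn_entropy(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log(w)).sum())

rng = np.random.default_rng(9)
dims = [2, 3]                              # two mu-sectors (illustrative sizes)
p = np.array([0.3, 0.7])                   # sector probabilities p_mu
blocks, sector_S = [], []
for d, prob in zip(dims, p):
    g = rng.normal(size=(d, d)); r = g @ g.T
    r /= np.trace(r)                       # normalized rho_mu
    blocks.append(prob * r)
    sector_S.append(vn_entropy(r))

rho = np.block([[blocks[0], np.zeros((dims[0], dims[1]))],
                [np.zeros((dims[1], dims[0])), blocks[1]]])

lhs = vn_entropy(rho)
rhs = float(np.dot(p, sector_S) - np.dot(p, np.log(p)))
assert np.isclose(lhs, rhs)
print("S(rho) =", lhs)
```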
Having defined the entropy S L vN for general states |ψ⟩, we would now like to discuss this entropy in a theory that admits a bulk semiclassical limit described by a familiar theory of gravity (say, Einstein-Hilbert or Jackiw-Teitelboim plus perturbative corrections). In general, such a limit will give a good description only of appropriately semiclassical states |ψ⟩. With this as motivation, let us thus consider a normalized |ψ⟩ defined by the path integral with boundary conditions given by some single smooth source-manifold-with-boundary ψ in the sense that it is an element of Y d B⊔B multiplied by a normalization constant to ensure ⟨ψ|ψ⟩ = 1. We can of course easily extend the analysis to the case of finite linear combinations described by Y d B⊔B so long as the number of terms and the coefficients remain fixed in the desired semiclassical limit, but we expect general states |ψ⟩ to be more difficult to study using semiclassical techniques.
When |ψ⟩ is defined by such a boundary-source-manifold ψ, we can use (4.63) to show thatρ ψ is the operator in A B L defined by the boundary-source-manifold ψψ ⋆ . In L we might call this operator (ψψ ⋆ ) L , though we will refer to it below as simply ψψ † since we wish to regard it as a member of A B L . This identification will not be a surprise to most readers, as when a ∈ Y d B⊔B the argument in figure 12 establishes (4.63). However, for completeness we defineρ ψ := ψψ † and present the following more general argument that holds for any positive a ∈ A B L (which we write in the form a = bb † ): In this argument, the 1st and 3rd steps use cyclicity of the trace (3.55) on the von Neumann algebra, the 4th step uses (3.32), and the final step uses the facts that the states |ψC β ⟩ = C β R |ψ⟩ converge to |ψ⟩ (according to Lemma 4 and Corollary 4 of appendix A) and that the operator bb † is bounded. Comparing the beginning and end of (4.65) shows thatρ ψ satisfies (4.63) as desired, and thus that it is the correct density operator for the state |ψ⟩.
The important consequence of this observation is that for all n ∈ Z + we may also write where (ψψ ⋆ ) n is just the product (say, using the left product · L ) of n copies of ψψ ⋆ . In this form we see that the traces on the left-hand side of (4.66) are computed by applying what may be called the 'gravitational replica trick' to ψψ ⋆ ; see figure 13. Figure 13. The manifolds M ([ψψ ⋆ ] n ) that define path integrals computing tr(ρ n ψ ) are constructed by cyclically gluing together n alternating copies of ψ and ψ ⋆ . The case shown here has n = 3.
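In a finite-dimensional stand-in, the quantities tr(ρ n ψ ) computed by the replica manifolds of figure 13 are simply moments of the eigenvalues of the density operator, and the von Neumann entropy is recovered from the continuation in n. The short sketch below (Python; the density matrix is hypothetical) illustrates this bookkeeping only; it is of course not a substitute for the gravitational computation.

```python
import numpy as np

rho = np.diag([0.5, 0.3, 0.2])                     # hypothetical unit-trace density matrix
evals = np.linalg.eigvalsh(rho)

# Integer replica moments: finite-dimensional analogues of the path integrals
# over the glued manifolds M([psi psi*]^n).
moments = {n: float(np.sum(evals ** n)) for n in range(1, 6)}
print(moments)                                      # tr(rho^1) = 1 and the moments decrease

# Entropy from the continuation S = -d/dn tr(rho^n) at n = 1 (finite difference),
# compared with the direct spectral formula -sum(lambda ln lambda).
dn = 1e-6
S_replica = -(np.sum(evals ** (1 + dn)) - np.sum(evals ** (1 - dn))) / (2 * dn)
print(float(S_replica), float(-np.sum(evals * np.log(evals))))
```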
As a result, if such path integrals admit an appropriate semiclassical limit described by saddles of either Einstein-Hilbert or Jackiw-Teitelboim gravity with perturbative corrections, then we may argue as in [5] that in this limit the von Neumann entropy (4.64) is given by the Ryu-Takayanagi entropy A[γ]/4G, where γ is the minimal surface homologous to the left B in the bulk saddle that dominates the path integral defined by M(ψψ ⋆ ). This limiting result then receives perturbative corrections from quantum effects in the bulk as described in [7,14,37], and from higher-derivative terms in the classical action as described in [38][39][40].
This representation of the Ryu-Takayanagi entropy as the semiclassical limit of an entropy defined by standard Hilbert space operations on H B⊔B is the second key result of this work.
Examples
Let us now discuss examples of theories that satisfy the axioms of section 2.2 and which illustrate the above results. A large class of examples is provided by boundary CFTs via the AdS/CFT correspondence; in this context, the CFT partition function provides a ζ(M ) that satisfies all our axioms. However, ζ(M ) defined in this way is generally not written directly in the language of bulk gravitational variables. Thus below we will discuss examples in which ζ(M ) is defined directly in the bulk. We will focus on models that are known to exist, which means in practice that they must be extremely simple. We will thus consider only twodimensional bulk systems 12 , defined either by the topological model of [17] (see section 5.1 below) or by an appropriate completion of Jackiw-Teitelboim (JT) gravity (see section 5.2), perhaps coupled to end-of-the-world (EOW) branes or matter defined by some quantum field theory (or by some proxy for such a QFT; see section 5.3). This discussion will be brief, but it illustrates how the decomposition (4.24) and the associated hidden-sector dimensions n µ can be non-trivial, as well as what the implications might be for understanding semiclassical entropy computations. For such models it will be useful to note that any codimension-2 boundary B is a zero-dimensional manifold, which means that it is a discrete collection of points. We will focus on the case where this collection is finite so that the number of points is some m ∈ Z + which we will call 'the number of boundaries'.
An important point, however, is that taking a strict semiclassical limit of such models generally leads to algebras that are not of type I, or at least that contain continuous spectra for central operators. This indicates that there is some sense in which one can take a class of models that satisfy our axioms for finite values of their couplings and, by taking an appropriate limit, one can nevertheless arrive at models which violate our axioms. The nature of such limits and the manner in which they violate our axioms will be discussed in section 5.4.
2d topological gravity
Let us begin with the topological gravity model of [17], first without EOW-branes. Here the allowed closed source manifolds are disjoint unions of circles, and the topological nature of the model means that the path integral depends on the number of circles but is completely independent of their lengths. The associated source-manifolds-with-boundary are thus unions of line segments. In particular, they always have an even number of boundary points so that the m-boundary sectors with m odd are all empty. This is in particular true for the oneboundary Hilbert space (m = 1). Thus, if we use B to denote a single point, then H B = ∅ and, as a consequence, the product sector H ⊗ B⊔B = H B ⊗ H B is empty as well.
It is important to recall that the framework described above applies separately to each baby universe superselection sector of such models. Recall that for any such α-sector, there is exactly one state in the two-boundary sector H B⊔B . This is the state defined by taking the path integral boundary conditions to be given by a line segment. This state plays a role similar to the cylinders C β discussed above for higher dimensions, so we will denote the line-segment state of this topological model as simply |C⟩. Since the model is topological, changing the length of the line segment does not affect the state. And since there are no matter fields in the model, all line segments of a given length are diffeomorphic to each other. However the norm ⟨C|C⟩ depends on the choice of α-sector, and in fact turns out to fully characterize any α-sector in this simple model [17].
Since the bulk theory has a one-dimensional two-boundary Hilbert space H B⊔B , all operators on this space are proportional to the identity operator. This is in particular true of both the left and right von Neumann algebras A B L and A B R . The Hilbert space H B⊔B ∼ = C then factorizes in a trivial way according to C = C ⊗ C, where A B L and A B R are indeed both isomorphic to the (rather trivial) algebra of all bounded operators on C.
Despite the uniqueness of |C⟩ in H B⊔B ∼ = C, the entropy of |C⟩ defined by (4.64) is non-zero. In the language of section 4.3, we may note that C 2 = C = C † (since all of these are line segments and the length of the segment is irrelevant) so that C is a projection. By the argument of that section we must then have tr(C) = n for some n ∈ Z + , but n is not generally equal to unity 13 . It is thus the rescaled cylinder c = C/n that has unit trace, though we see that c 2 = C 2 /n 2 = c/n is not a projection. In particular, since ln c = ln C − ln n and C ln C = 0 (as is always the case for a projection), the normalized state n −1/2 |C⟩ has left density matrixρ = C/n = c for which the entropy is S vN = tr(−c ln c) = ln n. This entropy can be reproduced by embedding our one-dimensional Hilbert space in H n ⊗ H n by mapping n −1/2 |C⟩ to n −1/2 |C⟩⊗|χ⟩ for some normalized maximally entangled state |χ⟩. In other words, this model provides an explicit example where the hidden sectors of section 4.4 are required to give a strict Hilbert space interpretation of what might here be called the Ryu-Takayanagi entropy.
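The ln n entropy just described can be restated in elementary terms: embedding C as a rank-n projection on a hypothetical n-dimensional hidden sector and rescaling to unit trace gives exactly ln n. The toy computation below (Python) does nothing more than this.

```python
import numpy as np

n = 5                          # hypothetical hidden-sector dimension, i.e. tr(C) = n
C = np.eye(n)                  # rank-n projection: C^2 = C = C^dagger with trace n
c = C / n                      # unit-trace rescaling; note c^2 = c/n is not a projection
w = np.linalg.eigvalsh(c)
print(float(-np.sum(w * np.log(w))), np.log(n))    # S_vN = ln n
```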
One way to make the model more interesting is to add some number k of flavors of endof-the-world branes. This case was also discussed in [17]. End-of-the-world branes lead to a non-trivial one-boundary sector H B , where the dimension of H B is min(k, n) with n = tr(C) (this tr(C) was called d in [17]). Now consider the two-boundary sector H B⊔B . For k ≥ n all two-boundary states lie in the tensor product sector: H B⊔B = H B ⊗ H B . But for k < n there is precisely one new state in H B⊔B that does not lie in the tensor product sector. It is given by the part of |C⟩ that is orthogonal to all two-boundary states defined by having two end-of-the-world branes. The decomposition (4.24) thus takes the two-term form with H ⊥ B⊔B ∼ = C. Of course, we can again make the trivial tensor product decomposition H ⊥ B⊔B ∼ = C = C ⊗ C for this sector of the Hilbert space. The story of entropy in this sector is then similar to what occurs in the topological model without EOW branes discussed above, though the hidden sectors are now of dimension n − k.
Pure asymptotically-AdS JT gravity
Another example to consider is the UV-completion of pure JT gravity. Again the allowed closed source manifolds are disjoint unions of circles, though now the path integral does depend on their lengths. There are no one-boundary states in this model, and because JT gravity has no local degrees of freedom there is a basis of 2-boundary states of the form |E, E⟩, so that the left and right energies E necessarily agree in all states. Furthermore, all operators in our algebras are again defined by line segments, though now we find a distinct operator for every possible length of the boundary. For a segment of length β we may call this operator e −βH . As a result, the left von Neumann algebra A B L is now the abelian algebra defined by bounded functions of the Hamiltonian H. This means that the factors in (4.24) are labeled by the allowed energies E, and the algebra contains a separate factor for each value of E. We can thus use the eigenvalues E to label sectors H µ B⊔B in our general decomposition (4.24). We may thus replace all labels µ with µ = E below. Each of the associated factors is again just the trivial algebra C, which is also known as the type I 1 factor. Each H E B⊔B is the correspondingly-trivial one-dimensional Hilbert space of states proportional to |E, E⟩. At finite values of the couplings, we have shown that our axioms require the set of µ labels to be discrete, which means that in any given baby universe α-sector the spectrum of H must be discrete as well. However, this model has a semiclassical limit in which we can compute Ryu-Takayanagilike entropies. Such entropies can be studied in microcanonical thermofield-double-like states defined by some window of energy eigenvalues [E 1 , E 2 ]. Such computations are not semiclassical in the regime where the window contains only one eigenvalue E, but they can be semiclassical when this window is relatively large. As a result, semiclassical methods are not sufficient to compute the integers n E associated with a given value of E. But if we assume these integers to be small (or, at least, not exponentially large), we find that the dominant contribution to a large RT entropy must come from the final entropy-of-mixing term in (4.64). A large RT entropy then indicates that the probabilities p µ are exponentially small, and thus that there is an exponential density of energy states even if hidden sectors are not included in the Hilbert space.
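The role of the entropy-of-mixing term here can be illustrated with a toy spectrum (Python sketch below; the spectrum, window, and uniform weights are all hypothetical stand-ins). With all n E of order one, the mixing entropy of a microcanonical window is just the logarithm of the number of eigenvalues it contains, so a large Ryu-Takayanagi-like entropy requires exponentially many such eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(1)
spectrum = np.sort(rng.uniform(0.0, 1.0, size=500))      # hypothetical discrete spectrum
in_window = (spectrum >= 0.40) & (spectrum <= 0.60)       # microcanonical window [E1, E2]

# Uniform weights over the window as a stand-in for a microcanonical
# thermofield-double-like state; with all n_E = 1 the entropy (4.64) is pure mixing.
N = int(in_window.sum())
p_E = np.full(N, 1.0 / N)
print(float(-np.sum(p_E * np.log(p_E))), np.log(N))       # mixing entropy = ln N
```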
One might also ask what the construction of ensembles dual to JT gravity tells us about potential bulk hidden sectors of the form described in section 4.4. Recall that the present work has focused on a single member of such an ensemble rather than the ensemble as a whole, and that the detailed properties of the model can differ from one member of the ensemble to another. We have in fact already seen this dependence on the member of the ensemble in discussing the topological model of [17] in which the dimension n of the hidden sectors was given by tr C, so that such sectors are trivial (and thus unnecessary) in members of the ensemble with tr C = 1.
To discuss the issue for JT, we should in fact understand that the construction of the dual ensemble may require extra information that is not obviously present in the original bulk theory; see again the discussions in [43][44][45][46][47][48][49][50][51]. It is thus better to simply assume that we are given a dual matrix ensemble and to then attempt to interpret this ensemble in bulk terms.
In fact, for reasons explained above, in the present context we should assume that we are simply given a single dual matrix. Now, as described previously, from the bulk point of view there is only one observable that can be studied in JT gravity. This observable is the energy, which is the bulk dual of the given matrix. But the quantities that can actually be studied in the bulk are the eigenvalues of the matrix so, as was already noted, each such eigenvalue thus uniquely determines a (one-dimensional) Hilbert space sector H E B⊔B (where each B is a single point), or equivalently a unique normalized bulk state |E, E⟩.
On the other hand, a basis for the Hilbert space of the dual matrix theory is given by the full set of eigenvectors of the matrix. For a generic diagonalizable matrix, the eigenvalues can be used to label its eigenvectors so that there is no harm in calling the matrix eigenvectors |E, E⟩ as well (since here the second E in |E, E⟩ is a redundant label that is not even in principle allowed to differ from the first). However, when the matrix has degenerate eigenvalues such a labelling cannot be complete. Instead, we must introduce an additional degeneracy index (say, I E ) for any degenerate eigenvalue, in which case the dual matrix eigenvectors might be denoted |E, E, I E ⟩. Note that on the JT gravity side of the duality there is simply no way to resolve the distinction between states labelled by distinct values of the index I E .
In order to agree with the bulk Gibbons-Hawking entropy computation, such degeneracy indices I E can range only over some finite number of values n E for each E. The addition of the index I E is thus precisely equivalent to saying that the space of matrix eigenvectors with eigenvalue E is given by the tensor productH E B⊔B := H E B⊔B ⊗ H n E of a hidden sector H n E with the bulk sector H E B⊔B constructed by the path integral. It is reasonable to expect that non-trivial hidden sectors often arise through such accidental degeneracies. However, as is true in the current case, when this occurs such degeneracies should arise only in a measure zero subset of the dual ensemble and may thus be safely neglected.
JT gravity with "matter"
Finally, it will be useful to discuss JT gravity coupled to quantum matter fields, or at least a simple toy model thereof. Since simple path integrals for JT coupled to quantum fields do not define finite partition functions [16], we will proceed here by simply writing down by fiat algebras that we deem plausible for UV-completions of such models.
In particular, let us consider a putative UV-complete model in which there are again no one-boundary states, but where the two boundary states are now spanned by |E, E ′ ⟩ for some discrete set of values for (E, E ′ ); note that discreteness is guaranteed by the finiteness of tr(C β ) = Tr(C β ). In particular, E is allowed to differ from E ′ . Let us also assume that the left von Neumann algebra A B L includes all bounded functions of the left Hamiltonian, and that it contains operators that can change any |E 1 , E ′ ⟩ to any |E 2 , E ′ ⟩. We will refer to such operators as matter operators. In this case, we can introduce Hilbert spaces H L , H R which respectively have bases {|E⟩}, {|E ′ ⟩}. These are not the one-boundary Hilbert space H B , as H B has already been declared to be empty. Nevertheless, we see explicitly that We also see that A B L can change any state |α⟩ ⊗ |β⟩ into any other tensor product state |α ′ ⟩ ⊗ |β⟩ with some other |α ′ ⟩ and the same |β⟩. As a result, A B L acts on the entire Hilbert space as the algebra of bounded operators B(H L ) on H L .
Despite the explicit factorization of H B⊔B , this Hilbert space is not what we called the product sector H B ⊗ H B , since the product sector is trivial in this model. As a result, the trace tr(P ) of a one-dimensional projection P can in principle be any positive integer n ∈ Z + , though here it must in fact be the same positive integer n for all such projections P . This integer would need to be computed in any given model, and n > 1 would suggest that the model be augmented by the addition of hidden sectors.
For any such n this model yields a single type I factor. As a result, entropy in (4.64) comes entirely from the first term and does not involve any entropy of mixing. The pure JT case, where (up to contributions from hidden sectors) the entropy was entirely given by the entropy-of-mixing term in (4.64), might thus seem sharply different. But this distinction is not really so large in the sense that (assuming the energy eigenvalues to be non-degenerate and the number of such eigenvalues involved to be large in comparison with the above integer n) in the theory with 'matter' we can replace tr by Tr and then compute this trace in the energy basis to find an entropy-of-mixing-like formula S L vN (ψ) ≈ − E p E ln p E , where p E is the probability of finding the system to have left-energy E. We thus see that while the two terms in (4.64) are distinct in a given model, small changes in the model can move a given physical contribution from one term to another. This should not really be a surprise as the spectrum of µ-sectors is generally defined by the spectra of operators in the center Z of (say) the left algebra, and one might think that -much as in our discussion of degenerate eigenvalues for pure JT gravity -the existence of any non-trivial central operators at all requires a fine tuning to set commutators to zero.
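The point that the same physical entropy can be attributed to different terms of (4.64) in the two bookkeepings is easy to illustrate numerically. In the sketch below (Python; the left-energy probabilities are hypothetical), the 'pure JT' accounting places everything in the entropy-of-mixing term while the single-factor 'matter' accounting places it in the first term, with identical totals.

```python
import numpy as np

p_E = np.array([0.4, 0.3, 0.2, 0.1])      # hypothetical left-energy probabilities

# Pure-JT-style bookkeeping: each E labels its own one-dimensional mu-sector, so the
# first term of (4.64) vanishes and the total entropy is pure entropy of mixing.
S_pure_jt = float(-np.sum(p_E * np.log(p_E)))

# 'Matter'-style bookkeeping: one type I factor whose density matrix happens to be
# diagonal in the left-energy basis, so the total sits entirely in the first term.
w = np.linalg.eigvalsh(np.diag(p_E))
S_matter = float(-np.sum(w[w > 0] * np.log(w[w > 0])))

print(S_pure_jt, S_matter)                 # the same number, attributed to different terms
```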
Axiom violations in semiclassical limits
As noted in the introduction to the current section, semiclassical gravity generally leads to algebras that are not of type I, or at least that contain continuous spectra for central operators. Let us suppose that such models arise from limits of UV-complete models at finite couplings, and let us also suppose that such finite-coupling UV-complete models are to satisfy our axioms. Then there must be a sense in which one can take a class of models that satisfy our axioms for finite values of their couplings and, by taking an appropriate limit, one can nevertheless arrive at models which violate our axioms. It is useful to describe how such violations arise in the context of definite simple models.
Let us therefore consider again either pure JT gravity or our imagined UV-complete theory of JT gravity coupled to quantum fields. JT gravity contains a parameter S 0 that controls the semiclassical Gibbons-Hawking entropy of the ground state, and which weights contributions to the path integral by e S 0 χ where χ is the Euler characteristic of the spacetime. Taking the limit S 0 → ∞ thus suppresses contributions from higher topologies. In this limit, in the case with quantum fields, Penington and Witten argued that the left von Neumann algebra on H B⊔B is of type II [27]. For pure JT gravity the von Neumann algebra is an abelian algebra whose factors are necessarily of type I, though in the semiclassical limit the spectrum of the central operators becomes continuous 14 (see again [27]). Either of these results would be forbidden by our analysis, so in both cases the limit must violate at least one of our axioms.
There are in fact at least 4 different ways that one might attempt to discuss the large S 0 limit of a JT-gravity theory. We now discuss each of them in turn, though the discussion of each will be quite short.
The first approach is simply to keep S 0 finite, but to take it to be larger than other quantities of interest. In this case, the results of [27] would tell us only that the algebra when coupled to matter is approximately of type II, or that the spectra for pure JT are approximately continuous in the sense that any spacing between energy levels is much smaller than other parameters of interest. This, of course, does not require any actual violation of our axioms. Instead, it suggests only that there is some 'near violation' whose form will become clear by discussing the other approaches below.
The second approach is to note that discussions of semiclassical physics tend to focus on disk amplitudes, and to recall from the above discussion that the disk amplitude is weighted by e S 0 χ(disk) = e S 0 . To keep the disk amplitude finite as S 0 → ∞, one might thus rescale the gravitational path integral ζ by defining ζ̃ 2 = e −S 0 ζ. This rescaling will preserve most of our axioms, though it will necessarily violate the factorization axiom. In particular, since ζ satisfies factorization we have ζ̃ 2 (M 1 ⊔ M 2 ) = e −S 0 ζ(M 1 ) ζ(M 2 ) = e S 0 ζ̃ 2 (M 1 ) ζ̃ 2 (M 2 ). (5.2) Furthermore, this violation becomes arbitrarily strong in the limit S 0 → ∞. It is thus no surprise that the S 0 → ∞ limit does not have the properties described in this work. A further concern in the second approach just described is that only the single-disk amplitudes are finite in the limit S 0 → ∞. In particular, as one can see from (5.2), amplitudes that involve larger numbers of disks will still diverge. The third approach is designed to remedy this problem and to also maintain factorization. To do so, we rescale the path integral by e −mS 0 where m is the number of circles that define the boundary conditions; i.e., we define ζ̃ 3 (M ) = e −mS 0 ζ(M ) where m is the number of connected components of the closed (and compact) boundary source-manifold M and then consider the limit of ζ̃ 3 as S 0 → +∞.
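The exponential bookkeeping behind these rescalings can be made explicit in a toy model that keeps only the e S 0 χ weights (everything else about the path integral is suppressed, so this is a caricature rather than a model of JT gravity; in it every component is a disk with one boundary circle, so the number of circles m equals the number of components). The sketch below (Python) checks that ζ factorizes, that ζ̃ 2 violates factorization by exactly e S 0 , and that ζ̃ 3 restores it.

```python
import numpy as np

S0 = 3.0

def zeta(chis):
    """Toy stand-in for the path integral: a weight e^{S0 * chi} per connected component
    and nothing else, so zeta factorizes over disjoint unions by construction."""
    return float(np.exp(S0 * np.sum(chis)))

disk = [1.0]                    # Euler characteristic of a single disk
two_disks = disk + disk

print(np.isclose(zeta(two_disks), zeta(disk) * zeta(disk)))          # zeta factorizes

zeta2 = lambda chis: np.exp(-S0) * zeta(chis)                         # zeta_2 = e^{-S0} zeta
print(np.isclose(zeta2(two_disks),
                 np.exp(S0) * zeta2(disk) * zeta2(disk)))             # violation by e^{S0}

zeta3 = lambda chis: np.exp(-len(chis) * S0) * zeta(chis)             # zeta_3 = e^{-m S0} zeta
print(np.isclose(zeta3(two_disks), zeta3(disk) * zeta3(disk)))        # factorization restored
```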
While this third approach is more satisfactory with regard to both finiteness and factorization, it can run afoul of reflection positivity. In particular, let us recall that the proof of the trace inequality (4.15) involved positive-definiteness of the inner product on the fourboundary Hilbert space. Let us then further recall that the relevant computation turned out to involve both path integrals with what in the current context would be m = 1 boundary circle, as well as path integrals with m = 2 boundary circles. The computation is thus sensitive to the fact that, in definingζ 3 , we have changed the relative weights between these two terms by a factor of e −S 0 .
In fact, for any computation that involves only disks, definingζ 3 (M ) = e −mS 0 ζ(M ) is equivalent to simply using the original path integral ζ with S 0 = 0. On the other hand, performing this rescaling and taking S 0 → ∞ suppresses the contributions of all non-disk topologies, even though these would have made extremely important contributions to the original path integral ζ if we had in fact set S 0 = 0. In this way we see that reflectionpositivity can easily fail for the rescaled path integralζ 3 even if it holds for the original path integral ζ at all finite values of S 0 . And, indeed, the argument leading to (4.15) shows that the failure of the trace inequality for the rescaled path integral is directly equivalent to such reflection-positivity violations.
This then brings us to the fourth and final (and perhaps the most sensible) approach to discussing the large S 0 limit of JT gravity. In the Hilbert space sector with 2k boundaries, if we really wish to keep all amplitudes finite without sacrificing reflection-positivity, then we should simply rescale the path integral by e −kS 0 ; i.e., we might defineζ 4,2k = e −kS 0 ζ. We emphasize here that the rescaling depends on the choice of codimension-2 boundary that defines the Hilbert space sector, and not directly on the number of spacetime boundaries that define the path integral. We also emphasize that, as we saw explicitly in figure 8, computations in a given Hilbert space sector can involve path integrals with varying numbers of codimension-1 boundaries. As a result, as indicated by the notationζ 4,2k , after performing this rescaling we no longer have a single path integral that defines the entire theory. Instead, we have effectively separated the sectors of the Hilbert space associated with different numbers of boundaries and declared each to be its own separate theory. We see that taking S 0 → ∞ will mean that the inner product in the 2k-boundary sector is determined entirely by amplitudes with k disks. In particular, while the one-disk amplitudes will make finite contributions to the inner product in the two-boundary sector, their contributions to the inner product in any higher-boundary sector of the Hilbert space has been set to zero. While this is a natural semiclassical treatment of the system, it clearly violates our first (and from some perspectives most trivial) axiom which simply states that all computations are controlled by the same path integral. It should thus again be no surprise that type II behavior and/or continuous spectra for central operators can arise in the limit S 0 → ∞.
We expect similar comments to apply to the G → 0 limit of higher dimensional UVcomplete theories, though in that case there is no clear analogue of the 2nd approach.
Discussion
For the convenience of the reader, the results of the somewhat lengthy preceding sections will now be briefly summarized in section 6.1 below. We will then provide some further remarks concerning these results in section 6.2. Finally, we will conclude with a short discussion of open issues and future directions in section 6.3.
Summary
The work above considered the possibility that quantum theories of gravity admit UVcompletions associated with objects that can be called 'Euclidean path integrals'. We took such objects to satisfy 5 simple axioms that we call finiteness, reality, reflection positivity, continuity, and factorization. The first of these axioms states that the path integral defines a map ζ to C from the space of smooth closed d-dimensional boundary-source-manifolds for some d. Here the bulk theory is thought of as being of some dimension D > d. The reality, reflection positivity, and factorization axioms were of the standard form. In particular, the factorization axiom required ζ(M 1 ⊔ M 2 ) = ζ(M 1 )ζ(M 2 ) for closed source-manifolds M 1 , M 2 . An interesting path integral that naively violates this assumption may be decomposable into so-called baby universe superselection sectors in which factorization is satisfied, in which case our arguments apply to such theories sector-by-sector. The remaining axiom of continuity was extremely weak and required only continuity in ϵ in contexts where it was possible to insert a 'cylinder' C ϵ of the form B × [0, ϵ] into the boundary source manifold while maintaining smoothness of the path integral boundary conditions. The details of the axioms were described in section 2.2.
Section 2 used these axioms to construct Hilbert space sectors H B associated with closed boundary manifolds B of dimension d − 1. In particular, the reflection positivity axiom implies that the inner product is positive-definite on all such H B for any B (whether or not B is connected). When B is of the form B = B 1 ⊔ B 2 , where B 1 , B 2 are also closed and compact, section 3 then defined a surface algebra of operators A B 1 L acting at B 1 and a second algebra of operators A B 2 R acting at B 2 . The path integral also defined a trace operation on these algebras. Importantly, our axioms imply both algebras to be represented by bounded operators when acting on H B 1 ⊔B 2 . This was shown to be a consequence of positivity of the inner product on the higher-boundary Hilbert spaces H B 1 ⊔B 1 ⊔B 1 ⊔B 1 and H B 2 ⊔B 2 ⊔B 2 ⊔B 2 , which in particular implied the trace inequality (3.13) recently discussed in [36].
Since these representations involved only bounded operators, they could be completed to von Neumann algebras . Although the original algebra A B 1 L was independent of B 2 , its von Neumann algebra completion A B 1 ,B 2 L does generally depend on the B 2 that defines the H B 1 ⊔B 2 used to construct the completion. We analyzed only the diagonal case B 1 = B 2 = B, leaving the more general case for future work. In the diagonal case one can denote the algebras more simply as A B L , A B R . The above trace then admits an extension to a trace tr on the full von Neumann algebras A B L and A B R as shown at the end of section 3. Critically, section 4 showed that this extended trace also satisfies a trace inequality of the form (4.15). Together with the results in appendix B, this implies the extended trace to be faithful, normal, and semifinite. Using the trace inequality again then immediately implied our (diagonal) von Neumann algebras to be of type I, meaning that they are direct sums of type I factors A B L,µ (or A B R,µ on which A B L,µ (A B R,µ ) acts non-trivially only on the left (right) factor. Deriving (6.1) also made additional use of the trace inequality in showing the index set I to be discrete; i.e., in showing that (6.1) contains only a direct sum and not a more general direct integral. The decomposition (6.1) provides at least one sharp sense in which we can show that quantum gravity Hilbert spaces (at least those associated with a given value of µ) factorize into products of Hilbert spaces associated with natural subsystems; see e.g. [22,[64][65][66] for discussion of related issues.
Using positivity of the inner product on the Hilbert space sectors associated with 2n copies of B, section 4.3 generalized the argument that led to the trace inequality (4.15) to show that the path-integral-trace (tr) of any non-zero finite-dimensional projection must be a positive integer. As a result, for some n µ ∈ Z + it agrees on A B L,µ with the standard Hilbert space trace defined by summing diagonal matrix elements over an orthonormal basis in the extended Hilbert space H µ B⊔B,L = H µ B⊔B,L ⊗H nµ , where H nµ is a 'hidden sector' Hilbert space of dimension n µ .
As a result of the type I structure, the trace tr can be used to define a notion of 'left entropy' (or entropy with respect to the left algebra A B L ) on pure states |ψ⟩ ∈ H B⊔B . Furthermore, due to the relation between tr and the Hilbert space traces on both H µ B⊔B,L and the corresponding right extended factor H µ B⊔B,R , this entropy can be interpreted in terms of an entropy of mixing term together with the familiar entropies S µ,L vN := Tr µ (−ρ µ ψ lnρ µ ψ ) defined by considering the projections |ψ µ ⟩ of |ψ⟩ to H µ B⊔B , isometrically embedding a normalized version of |ψ µ ⟩ in the extended Hilbert space H µ B⊔B = H µ B⊔B,L ⊗ H µ B⊔B,R , tracing out the right factor in the usual way to define the density matrixρ µ ψ , and then summing expectation values of −ρ µ ψ lnρ µ ψ over an orthonormal basis of H µ B⊔B,L . The final result for the left entropy of |ψ⟩ then takes the form where p µ := ⟨ψ µ |ψ µ ⟩ is the norm of |ψ µ ⟩ in H µ B⊔B . We then observed at the end of section 4 that, if our theory admits an appropriate limit described by semiclassical bulk Einstein-Hilbert or Jackiw-Teitelboim gravity, the corresponding limit of (6.2) is given by the Ryu-Takayanagi entropy of the left B as defined by the corresponding bulk saddle. Quantum and higher derivative corrections are of course also incorporated in the usual way.
This then provides what one might call a Hilbert space interpretation of the Ryu-Takayanagi formula. We emphasize that it uses the extended Hilbert space of (6.3). The factors H nµ are naturally called 'hidden sectors' since -aside from their connection to the trace tr and the associated entropy -they are invisible to the algebras of observables defined by the original path integral. We emphasize that nowhere in this work did we require the existence of a dual field theory. Of course, the axioms of section 2.2 will be true when such a dual theory exists, but the existence of a dual formulation would also entail much more structure. In particular, none of our axioms require any form of locality for a hypothetical dual formulation (beyond the rather weak constraints implied by the factorization axiom). We would thus expect our axioms to hold for any Euclidean UV-completion of a theory of quantum gravity, whether it be called string field theory, spin-foam loop quantum gravity, or by some other name. What we find interesting about the above construction is just how much structure can be obtained with the simple and limited Axioms 1-5.
The final section (section 5) above discussed both topological and JT gravity examples in order to illustrate general features of our construction. In particular, they provided contexts in which the decomposition (6.1) is non-trivial in the sense that we required more than one value of µ. These examples also featured cases with non-trivial hidden sectors, and the JT example illustrated the idea that such sectors can arise due to accidental degeneracies in the bulk description. We also discussed the fact that defining a semiclassical limit of JT gravity by taking a strict limit S 0 → ∞ leads to violations of our axioms, so that it is no surprise that [27] finds the semiclassical theory to have a type II algebra and/or continuous spectra for central operators.
Remarks
Before concluding, we wish to make a few further remarks. The first of these concerns constraints on the decomposition (6.1) that can be deduced by requiring that the theory admit a familiar semiclassical limit. In particular, at least when the boundary B is a sphere or a torus, and when the system is coupled to an external bath that can absorb radiation, standard semiclassical physics tells us that large black holes can evaporate at least until they are microscopically small (when quantum gravity effects then fail to be under strict control). This is the case even if the original black hole is a two-sided Kruskal extension of e.g. a large AdS-Schwarzschild black hole with its famous Einstein-Rosen bridge connecting two distinct asymptotic regions. By the usual arguments, this should be proportional to the semiclassical limit of the state that we have called |C β ⟩ (or, equivalently, proportional to |C β ⟩) for inverse temperatures β less than the critical value β HP set by the Hawking-Page transition [67].
We now couple one side of our system -say the left side B -to a non-gravitational bath. Thus, time evolution will mix the gravity and bath degrees of freedom and, in this sense, will then change the state of the bulk. However, so long as this coupling is describable by a real-time version of our path integral, the time evolution operator should be a unitary operator built from elements of A B L and operators acting on the bath. Then for any subspace H µ B⊔B , since the projection P µ onto this subspace is in the center and commutes exactly with all elements of A B L , it will also commute with the time evolution operator even in the context of coupling to a bath. Furthermore, since the time evolution is unitary, it will preserve the probability p µ = ⟨ψ|P µ |ψ⟩ associated with any subspace H µ B⊔B , where |ψ⟩ is the normalized state on the entire gravity-with-bath system. The decay of a large black hole to a small entropy object then tells us something about the parameters that describe whatever µ-sectors were present in the original state.
To understand such constraints, we should first consider what values of µ will have nonzero probabilities p µ . Since the probabilities are preserved in time, this can be determined from the initial state N −1 |C β ⟩ where N = ⟨C β |C β ⟩. The question thus reduces to asking when P µ |C β ⟩ is a state of non-zero norm. But faithfulness of the trace (Corollary 2) means that tr(P µ ) ̸ = 0, and since P 2 µ = P µ , the limit lim β↓0 ⟨C β |P µ |C β ⟩ is tr(P µ ) according to (3.32). Thus P µ |C β ⟩ must be non-zero for small enough β. It follows that our black hole evaporation scenario will contain at least some information about all possible subspaces H µ B⊔B . For simplicity, let us focus on the case where B is a sphere. In that context, semiclassical physics suggests that the evaporation continues until the area of the black hole is of order the Planck scale, so that (using the connection to the Ryu-Takayanagi formula described at the end of section 4.4) the entropy S L vN on the left B satisfies S L vN ∼ A/4G = O(1) at this point in the evaporation. This tells us that (4.64) can take values as small as O(1) in states with probabilities p µ determined by the initial gravitational state N −1 |C β ⟩ with small β. Since both of the terms in (4.64) are positive, this must also be true of each term separately. As a result, values of µ for which p µ is exponentially small in 1/G can contribute at most a total probability of order G to the state. Similarly, since for each µ we must have tr(−ρ µ ψ lnρ µ ψ ) ≥ ln n µ , our n µ can be exponentially large in 1/G only in a part of the state that contributes at most a total probability of order G. In this sense one might say that 'typical' values of µ must be associated with values of n µ that are not exponentially large. In such sectors the exact value of n µ would then contribute to only a small part of the Ryu-Takayanagi entropy in standard situations where the entropy is O(1/G). It would thus be interesting to better understand whether similar constraints arise for other choices of the boundary B, or whether in some cases one finds instead that black hole areas are bounded below by a constant greater than zero 15 .
15 For example, the black holes with hyperbolic boundaries studied in [68][69][70] have areas that are bounded away from zero. It would be interesting to better understand whether this can be reduced by e.g. turning on scalar sources at the boundary. It would also be interesting to better understand if this phenomenon is related to the instabilities associated with cases where such black holes have finite areas [71].

The second remark concerns the quantization condition derived in section 4.3 for the trace tr(P ) of any non-zero finite-dimensional projection P . In general, we might say that the unit-trace operators ρ := P/tr(P ) define various notions of microcanonical ensemble (not necessarily specified by an energy), so that our quantization condition tr(P ) = n ∈ Z + requires the quantity tr(−ρ ln ρ) to take the value ln(n). Note that the quantities tr(−ρ ln ρ) defined by our path integral are then microcanonical and non-perturbative analogues of the semiclassical (canonical ensemble) partition functions studied by Gibbons and Hawking in their classic Euclidean path integral study of black hole entropy [72]. It is thus natural to refer to the above result as 'quantization of the Gibbons-Hawking density of states.' This quantization allowed us to construct a Hilbert space H B⊔B (via the inclusion of finite-dimensional hidden sectors) on which the entropy tr(−ρ ln ρ) = ln(n) directly measured the rank n of the projection P on appropriate left-factors of the Hilbert space H B⊔B . Since the inclusion of hidden sectors can only add states to the theory, we find that exp (tr(−ρ ln ρ)) = n also bounds the rank of P in the context without hidden sectors; i.e., on the left-factors of the Hilbert space H B⊔B . A similar result was previously derived in section 4.1 of [17] using methods that also involved examining higher-boundary Hilbert spaces. Here we have extended this result by deriving the above quantization condition and thus showing that the appropriate addition of hidden sectors will saturate the bound of [17].
As described in section 4.3, our quantization is a direct result of positivity of the inner product on the gravitational Hilbert space. This should not be a surprise, as classic textbook classifications of e.g. unitary representations of the angular momentum algebra in fact take a similar form. In that context one often begins with what will turn out to be a highest weight state. One then acts repeatedly with lowering operators. Assuming all of the states generated by this process to be non-zero would then lead to the construction of a state with negative norm, so at some point this process must terminate. The condition that this occurs then enforces quantization conditions on the spectra of the relevant operators, and in particular on eigenvalues associated with the original state. This is very much in parallel with our argument, which could be phrased in terms of first supposing some value for tr(P ) and then, starting with the state |P ⟩, constructing more complicated states on Hilbert spaces associated with more and more boundaries. Eventually, at some point determined by tr(P ), one finds that such states must have negative norm unless they are trivial. The required triviality then imposes tr(P ) = n ∈ Z + .
One of the interesting lessons from this work is thus that, in quantum gravity, important such constraints are imposed by considering contexts with large numbers of boundaries. In particular, in a semiclassical limit the operators P that describe interesting microcanonical ensembles would be expected to have entropies of order 1/G, and thus values of tr(P ) that are exponentially large in 1/G. Our work suggests that such values of tr(P ), even if not integers, would nonetheless appear to be consistent unless one performs computations that involve a similarly exponentially large number of boundaries. This would clearly be a monumental task. On the mathematical side, this in particular supports suggestions recently enunciated by Witten [73] that while some notion of analytic continuation of integer n results to non-integer cases should violate positivity, such violations of positivity would be invisible in any notion of an asymptotic expansion of such results at large n. See also [36] for another recently-discussed sense in which constraints from positivity become invisible in the limit of a large density of states.
Future directions
It is traditional to close with a discussion of open issues and future directions and, indeed, it seems that there is still much to explore. One such direction would be to understand the analogues of the arguments given above for Lorentzian path integrals, perhaps allowing special codimension-2 singularities as described in [31]. Since our Axioms 1-5 all appear to admit ready extensions to complex sources (which, from the Euclidean perspective, would include real Lorentzian boundary conditions), the main issue is likely to be how to properly state the sense in which the boundary conditions are required to be "of Lorentz signature" and in particular how to handle any singularities that arise.
Another limitation of the analysis above was that it studied only diagonal Hilbert space sectors H B⊔B for which B is a compact closed manifold (without boundary). It would clearly be of interest to understand the non-diagonal case in more detail. We will return to this issue in future work [54].
A more interesting generalization might be to drop the requirement that B be compact and closed, and to instead investigate the case where some B and its complementB meet at some ∂B = ∂B. If there is a dual CFT, then the usual field theory arguments lead us to expect that any von Neumann algebra associated with B should be of type III. However, it is important to understand how to derive this result from the bulk gravitational path integral. Furthermore, despite the fact that the algebra is expected to be of type III, we would like to show that states on the algebra have a reasonable notion of renormalized von Neumann entropy. We have only just scratched the surface with respect to this issue here, and there is much more to understand.
Another interesting path to explore would be to investigate whether small enlargements of the set of axioms might lead to significant enlargements in the class of results that can be derived. It would be particularly interesting to understand if there are simple axioms (say, regarding spacetime wormholes) that would allow us to take a general non-factorizing path integral and to write it as a direct sum/integral over 'baby universe superselection sectors' as described in section 2.2. It would similarly be interesting to find simple axioms which imply Harlow factorization, meaning that the direct sum (6.1) over µ would reduce to a single term. This term would then necessarily be of the form H B L ⊗ H B R (except in cases where these oneboundary Hilbert spaces are trivial). Developing additional examples would also be useful in this regard, so that the effect of new axioms can be more readily understood.
for interesting discussions. ZW thanks Tom Faulkner and Elliott Gesteau for valuable discussions. EC thanks Alexey Milekhin for many useful conversations. We also thank Xiaoyi Liu and Maciek Kolanowski for conversations that led to key parts of this work, and DM is grateful to the Perimeter Institute for its hospitality during important stages of the project. EC's participation in this project was made possible by a DeBenedictis Postdoctoral Fellowship and through the support of the ID# 62312 grant from the John Templeton Foundation, as part of the "Quantum Information Structure of Spacetime" Project (QISS). The opinions expressed in this project/publication are those of the authors and do not necessarily reflect the views of the John Templeton Foundation. The work of XD was supported in part by the U.S. Department of Energy, Office of Science, Office of High Energy Physics, under Award Number DE-SC0011702, and by funds from the University of California. The work of DM and ZW was supported by NSF grant PHY-2107939, and by funds from the University of California.
A Properties of the unnormalized cylinder operator C β

We now provide the proof of the following lemma:

Lemma 4. The operator norm ∥C β ∥ of the (unnormalized) cylinder operator satisfies ∥C β ∥ → 1 as β → 0.
Proof. To show that this is the case, recall that since C β ∈ Y d B⊔B it defines a bounded operator. Furthermore, since C * β = C β and C t β = C β , we have C β L † = C ⋆ β L = C β L , so that C β L is self-adjoint and can be diagonalized. In addition, since C β L = C β/2 † L C β/2 L , the eigenvalues of C β L are non-negative. Now consider the family of operators C β/n L for n ∈ Z + and some fixed β > 0. The norm ∥C β/n ∥ is the supremum of the set of eigenvalues of C β/n L . But the operators C β/n L have a common set of eigenstates |λ⟩ with eigenvalues λ 1/n for some bounded set of non-negative real numbers λ. In particular, we have This establishes that we can find sequences of C β with β → 0 for which ∥C β ∥ → 1. However, it remains to show that this convergence is sufficiently uniform that ∥C β ∥ converges for an arbitrary sequence of C β with β → 0.
The condition ∥C β 0 ∥ > 1 means that there is some state |ψ⟩ for which ⟨ψ| C 2β 0 L |ψ⟩ > ⟨ψ|ψ⟩. Now, recall that states of the form |a⟩ for a ∈ Y d B⊔B are dense in H B⊔B , so that any state |ψ⟩ can be approximated by such |a⟩. Since C 2β 0 L is bounded, the expectation value of C 2β 0 L is a continuous function of |ψ⟩. Thus there must also be some a ∈ Y d B⊔B for which ⟨a| C 2β 0 L |a⟩ = λ⟨a|a⟩ ̸ = 0 with λ > 1.
The above argument also leads to the corollaries below.
Corollary 4. As β → 0, the operators C β L converge in the strong operator topology to the identity 1 on any H B⊔B .
Proof. We wish to show C β L |a⟩ → |a⟩ for all |a⟩ ∈ H B⊔B . Due to Lemma 4, this is equivalent to C β L |a⟩ → |a⟩.
To see this, recall that any state |a⟩ is the n → ∞ limit of states |a n ⟩ for a n ∈ Y d B⊔B . We may thus define |ϵ n ⟩ = |a⟩ − |a n ⟩ and |ε β,n ⟩ = C β L |a n ⟩ − |a n ⟩ to write lim β→0 C β L |a⟩ = lim β→0 C β L |a n ⟩ + C β L |ϵ n ⟩ = |a n ⟩ + lim β→0 |ε β,n ⟩ + C β L |ϵ n ⟩ = |a⟩ − |ϵ n ⟩ + lim β→0 |ε β,n ⟩ + C β L |ϵ n ⟩ = |a⟩ + lim Here the last step used the fact that the norm of |ε β,n ⟩ vanishes for each n in the limit β → 0, due to Lemma 4 and the continuity axiom. Since the operator norm of ( C β L − 1) is bounded by 2 for all β, the norm of the remaining error term lim β→0 ( C β L − 1)|ϵ n ⟩ can be bounded by an arbitrarily small constant at large enough n. We are then free to take the limit n → ∞ to establish Corollary 4. But since m n and m+1 n both approach β, both the upper and lower bounds are (∥C 1 ∥) β . This establishes the desired result.
B The trace is normal and semifinite
This appendix establishes that the traces (3.31) on the von Neumann algebras A B L , A B R are both normal and semifinite. We call these properties Lemmas 5 and 6 below. Recall that normality and semifiniteness were defined in properties 4 and 5 at the beginning of section 4.

Lemma 5. The trace tr defined by (3.31) is normal on both A B L and A B R .

We will give the proof for A B L . The argument for A B R is directly analogous.
Proof. Consider a bounded increasing net of positive operators a ν ∈ A B L for ν in some directed index set J. Here 'increasing' means that a ν ≤ a ν ′ whenever ν ≤ ν ′ . For each a ν we have the definition tr a ν := sup β>0 ⟨C β |a ν |C β ⟩. (B.1) Furthermore, for an increasing net of positive operators, the expectation value in any state |ψ⟩ is also an increasing net. In particular, a ν ≤ a ν ′ implies ⟨ψ|a ν |ψ⟩ ≤ ⟨ψ|a ν ′ |ψ⟩ for all |ψ⟩. In fact, proposition 4.64 of [74] shows that the above is actually an equality: The key point in (B.4) is that taking the supremum over ν always commutes with taking the supremum over β since taking both supremums (in either order) is equivalent to taking the supremum over all pairs (ν, β). The result (B.4) is the desired normality property.
Lemma 6. The trace tr defined by (3.31) is semifinite on both A B L and A B R .
We will give the proof for A B L . The argument for A B R is directly analogous.
Proof. We need only show that every non-zero positive a ∈ A B L satisfies b ≤ a for some non-zero positive b ∈ A B L with finite trace, where the notation b ≤ a means that a − b is positive.
Let us begin by recalling that the normalized cylinder operator C 2β L was defined to have operator norm 1 (though it is not generally the identity). Thus 1 − C 2β L is positive. It then follows that γ † (1 − C 2β L )γ is also positive for any bounded operator γ, since the expectation value in any state |ψ⟩ will satisfy ⟨ψ|γ † (1 − C 2β L )γ|ψ⟩ ≥ 0. (B.5) The positivity of γ † (1 − C 2β L )γ is then equivalent to the statement γ † C 2β L γ ≤ γ † γ. (B.6) Next recall that, since a is positive, it is in fact of the form γ † γ for γ ∈ A B L . The above result then implies that our trace is semifinite if we can show that b := γ † C 2β L γ has finite trace and that b is non-zero for some β > 0. We have In writing (B.7), we have used (3.40) to pass from the first line to the second. The final step follows from (B.5). The right-hand side is clearly finite for any β > 0, so this establishes that our b has finite trace. Furthermore, since a = γ † γ is non-zero, and since Corollary 4 (together with Lemma 4) showed the operators C β to converge in the strong operator topology to the identity as β → 0, for small enough β the operator b = γ † C 2β L γ must be non-zero as well. This establishes that tr is semifinite as claimed.
"year": 2023,
"sha1": "7853d3373bcc6cdc260e84511dd93c0fafd64eb4",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "7853d3373bcc6cdc260e84511dd93c0fafd64eb4",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
We provide an initial characterization of pairwise concurrence in quantum states which are invariant under cyclic permutations of party labeling. We prove that maximal entanglement can be entirely described by adjacent pairs, then give explicit descriptions of those states in specific subsets of 4 and 5 qubit states - X states. We also construct a monogamy bound on shared concurrences in the same subsets in 4 and 5 qubits, finding that above non-maximal entanglement thresholds, no other entanglements are possible.
INTRODUCTION
Entanglement in quantum mechanics has been an exciting avenue of research in physics since its discovery. It plays a central role in quantum computing [1][2] and offers meaningful contributions to high energy theory [3] and condensed matter physics [4] [5]. Despite the attention that entanglement has received, its fundamental properties are still not fully understood. Constraints on entanglement are generally challenging to compute due to the fact that many entanglement measures involve extremizations which are difficult to handle analytically. Those measures which do have a closed function on state parameters are difficult to calculate for high dimensional systems and many particles.
A common approach to studying these large Hilbert spaces is to consider entanglement in some smaller subspace which reduces the number of state parameters. Entanglement has been studied in states which are invariant under permutation of party labeling [6], X-states [7], and matrix product states [8] among other subsets. This paper considers the pairwise concurrence entanglement measure, defined in [9], of n qubit states which are invariant under cyclic permutation of party labeling. These cyclically symmetric (CS) states are of significant interest to translation-invariant condensed matter systems [10][11] [12] and 1-D spin chains with periodic boundary conditions [13]. Their SLOCC properties were also examined in [14].
The CS subspace of an n qubit system offers a significant simplification to the entanglement picture by constraining the number of allowable distinct types of entanglement. Any subset or partitioning of parties to calculate entanglement among, no matter the measure of entanglement, would be equated to that of other sets of parties by the cyclic permutation invariance of the state. We narrow this picture by only examining pairwise entanglements as measured by the concurrence, C, which is chosen for its relative analytic simplicity and for its relationship to the entanglement of formation [11]. The cyclic symmetry implies that for any pairwise concurrence C i,j between parties i and j, C i,j = C i+k,j+k , where the party label subscripts are to be evaluated mod n. So each allowable pairwise concurrence in CS-states corresponds to the spacing between party labelings. As a point of notation, define C (n) k to be the pairwise concurrence between parties k-away in an n qubit CS-state. Note that k runs from 1 to ⌊n/2⌋ as any k > ⌊n/2⌋ is equivalent to the n − k spacing. The ⌊n/2⌋ distinct C (n) k are reduced from the n(n − 1)/2 distinct pairs in a general n qubit state. The entanglement picture in CS-states is further simplified by the fact that many C (n) k share the same properties. To see this, consider some m which is not a factor of n, and the associated permutation, π ∈ S n , π : i → mi mod n. (1) Note that π is invertible only when m is coprime to n. Where obvious, we will interchangeably use π to denote the permutation on the tensor factors, as well as the associated unitary operator acting on the state. Permuting the party labels of some CS-state, |ψ⟩, according to π −1 will leave the state in some new CS-state, |χ⟩ = π −1 |ψ⟩, which obeys C i,j (|ψ⟩) = C π(i),π(j) (|χ⟩). This means that any properties of C (n) k will be shared by C (n) mk for each m coprime to n. It then suffices to only examine the constraints on C (n) k for k|n. These simplifications, along with the natural reduction in state parameters, make an analytic description of the CS entanglement more approachable. This paper makes a preliminary attempt at analyzing the allowed pairwise concurrences in CS-states. First, we prove that maximal entanglement in CS-states can be entirely understood in terms of the maxima of C (n) 1 and explicitly determine the maxima on the X-state subspace for 4 and 5 qubits. We then discuss the bounds on multiple concurrences, again with an analytic description for X-states in 4 and 5 qubits. Due to the extensive nature of the calculations, significant portions of analysis are relegated to the appendices.
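As a small illustration of the above counting (not taken from the analysis itself), the helper below (Python) enumerates which spacings k are identified by the invertible relabelings i → mi mod n with gcd(m, n) = 1; for n = 5 all spacings collapse to a single class, while for n = 6 the representatives are the divisors of 6.

```python
from math import gcd

def spacing_classes(n):
    """Group the spacings k = 1, ..., floor(n/2) into classes identified by the
    invertible relabelings pi: i -> m*i (mod n) with gcd(m, n) = 1."""
    orbits = {}
    for k in range(1, n // 2 + 1):
        orbit = frozenset(min(m * k % n, n - m * k % n)
                          for m in range(1, n) if gcd(m, n) == 1)
        orbits.setdefault(orbit, []).append(k)
    return list(orbits.values())

print(spacing_classes(5))   # [[1, 2]]        -> a single independent concurrence for n = 5
print(spacing_classes(6))   # [[1], [2], [3]] -> representatives are the divisors of 6
```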
MAXIMALLY ENTANGLED STATES
A natural question when examining a subset of quantum states is which states maximize entanglement within that subset, and what is that maximal entanglement? As a result of the discussion in the previous section, we need only examine the maxima of $C^{(n)}_1$. A state achieving the maximal $C^{(n)}_k$ can be constructed as in equation (2), where $\{n/k\}$ represents the set of integers from 0 to $n/k - 1$. These integers, multiplied by k then incremented by i, indicate the party labelings in the overall state.
Proof. Consider some n qubit CS-state, $|\psi^{(n)}\rangle = \sum_{i \in \mathbb{Z}_2^n} \psi_i |i\rangle$, and some k|n. Examine the reduced density matrix $\rho_{k\{n/k\}}$, whose entries are indexed by basis elements a and b of the parties in $k\{n/k\}$, with the trace running over basis elements j of the remaining $n - n/k$ parties. Notably, this reduced state obeys, by definition, $C^{(n/k)}_1(\rho_{k\{n/k\}}) = C^{(n)}_k(\psi^{(n)})$. Now label any $\pi \in \mathbb{Z}_n \subset S_n$ by the cyclic shift m that it implements. We can then examine that, for any m, $\pi^{(n/k)}_m\, \rho_{k\{n/k\}} = \rho_{k\{n/k\}}$, where the first equality describes the action of the permutation on the parties in $k\{n/k\}$, the second extends that permutation to the n parties and rearranges using the sum over j, and the third uses the cyclic symmetry of $\psi^{(n)}$. And so, for any $\pi \in \mathbb{Z}_{n/k}$, $\pi \rho_{k\{n/k\}} = \rho_{k\{n/k\}} \pi = \rho_{k\{n/k\}}$.
Since $\rho_{k\{n/k\}}$ commutes with $\pi^{(n/k)}_1$, they can be simultaneously diagonalized into a basis $\{|\phi_j\rangle\}$. Since $\pi^{(n/k)}_1$ is unitary, its eigenvalue associated to each $|\phi_j\rangle$ can be labeled as $\lambda_j = e^{i\phi_j}$. We can then examine $\pi^{(n/k)}_1 \rho_{k\{n/k\}}$, which, according to equation (9), must be equal to the original $\rho_{k\{n/k\}}$. This is only possible if $e^{i\phi_j} = 1$ for each j, implying that the $|\phi_j\rangle$ are each CS-states. Lastly, order the eigenstates to be decreasing in $C^{(n/k)}_1(|\phi_j\rangle)$. By the convexity of the pairwise concurrence, it then follows that $C^{(n)}_k(\psi^{(n)})$ is bounded by the largest $C^{(n/k)}_1(|\phi_j\rangle)$, with the inequality being saturated by the state (2).
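For readability, the convexity step just invoked can be summarized as follows (a hedged restatement rather than a quotation of the original display; here $p_j$ denote the eigenvalues of $\rho_{k\{n/k\}}$ with CS eigenstates $|\phi_j\rangle$, ordered as above):

\[
C^{(n)}_k\!\left(\psi^{(n)}\right)
  = C^{(n/k)}_1\!\left(\rho_{k\{n/k\}}\right)
  \le \sum_j p_j\, C^{(n/k)}_1\!\left(|\phi_j\rangle\right)
  \le \max_{\phi\ \text{CS}} C^{(n/k)}_1\!\left(|\phi\rangle\right).
\]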
Interestingly, convexity was the only property of the concurrence used in the proof of Theorem 1, meaning that any convex entanglement measure would obey an analogous statement in CS-states.
Notably, (2) also agrees with the monogamy behavior examined in the next section, as each of the other concurrences $C^{(n)}_{j \neq k}(\psi^{(n)}_k) = 0$. As a result of Theorem 1, all that remains is to find $\max C^{(n)}_1$ for each n. For $n \le 3$, the CS subspace is equivalent to the totally symmetric one, where the maxima have previously been determined; this leads to $\max C^{(2)}_1 = 1$ and $\max C^{(3)}_1 = 2/3$ [20]. Turning to the $n \ge 4$ case, some notation needs to be established. Recall the Dicke basis [19] element for totally symmetric states, in which the sum runs over all party label permutations. This naturally extends to a CS basis element in the following manner. For any particular computational basis element, a CS-state must have the same coefficient for each cyclic permutation of that basis element. Let a normalized n qubit CS basis element be denoted with an overbrace, where $|\mathbb{Z}_n |i_1 i_2 \ldots i_n\rangle|$ denotes the cardinality of the orbit of $|i_1 i_2 \ldots i_n\rangle$ under the action of the $\mathbb{Z}_n$ cyclic permutation group. For example, consider the 4 qubit basis element generated from $|0001\rangle$. Using this basis notation, an arbitrary 4 qubit CS-state takes the form of a superposition with coefficients a through f, where $|a|^2 + |b|^2 + |c|^2 + |d|^2 + |e|^2 + |f|^2 = 1$. Likewise, an arbitrary 5 qubit CS-state would be written with the corresponding normalization. Unfortunately, even calculating C for arbitrary states is analytically challenging, let alone maximizing over that space. Instead, the calculation will be performed on the even-X-state subspaces for n = 4 and n = 5. Even-X-states (abbreviated X-states), introduced in [16], are superpositions of only computational basis elements containing an even number of '1' entries. Notably, the set of CS-states examined in [10] is a subset of the CSX-states. Arbitrary 4 and 5 qubit CSX-states then take the corresponding forms, the latter given in (22). The X-state subspace is a useful one as concurrence calculations on the space are rather simple. Two qubit reduced density matrices of X-states were shown in [16] to take the X form, and the square roots of the eigenvalues of $\rho\tilde{\rho}$ (as in the concurrence definition [9]) are given in (24). Either the first or third term is the largest eigenvalue, so the X-state concurrence follows, where $C^{(n)}_{k,\mu}$ and $C^{(n)}_{k,\nu}$ indicate the possible non-zero expressions for the CSX concurrence involving $\mu$ and $\nu$ respectively. Following this notation, the concurrences of arbitrary 4 and 5 qubit CSX-states can be calculated, as in equations (26)-(33). In determining the maximum of C over the X-state subspace, the maximization will need to be performed over both the $\mu$ and $\nu$ terms, with the overall maximum being the larger of the two resulting maxima. These maximizations are easily performed after setting all the coefficient phases equal to 0. This phase treatment maximizes each absolute value in equations (13)-(20) and simplifies the maximizations enough to readily calculate. The results are compiled in the table below. The overall maximum of $C^{(5)}_1 \approx 0.468$ occurs at $a = g = 0$, $c \approx 0.298$, and $d \approx 0.955$. These maxima, while calculated only over the CSX subspace, agree with the apparent maxima in numerical results for general CS-states as shown in Figure 3 in the next section. This $C^{(5)}_1$ maximum is also a notable improvement over the lower bound established in [10].
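Since the displayed X-state expressions did not survive extraction, the standard two-qubit X-state forms they rely on are restated here in a commonly used notation (a hedged reconstruction; the labels a, b, c, d, w, z are illustrative and need not match the paper's own labeling):

\[
\rho_X =
\begin{pmatrix}
a & 0 & 0 & w \\
0 & b & z & 0 \\
0 & z^{*} & c & 0 \\
w^{*} & 0 & 0 & d
\end{pmatrix},
\qquad
\{\lambda_i\} = \bigl\{\sqrt{ad} \pm |w|,\ \sqrt{bc} \pm |z|\bigr\},
\]
\[
C(\rho_X) = 2\max\bigl(0,\ |w| - \sqrt{bc},\ |z| - \sqrt{ad}\bigr),
\]

where the $\lambda_i$ are the square roots of the eigenvalues of $\rho_X\tilde{\rho}_X$.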
For n > 5, the CSX-state concurrences can be calculated, but the spaces prove too large and complicated to maximize over analytically.
CONSTRAINTS ON SHARED ENTANGLEMENT
The space of allowable pairwise concurrences, $\{C_{i,j}\}$ with i from 1 to n − 1 and j > i, for a general n qubit state is known to be constrained by monogamy relations [15]. The pairs of $\{C^{(n)}_k\}$ for CS-states obey constraints of a similar nature. Shown in Figure 1 are the k = 1 and k = 2 concurrences for $10^5$ randomly generated 4 and 5 qubit CS-states. Note that the 5 qubit concurrence space is symmetric due to the permutation properties discussed in the introduction. This first examination demonstrates the peculiar monogamous relationship between pairwise concurrences in CS-states. It appears that for both n = 4 and n = 5, above some threshold concurrence, the other concurrence must be equal to 0. This differs from typical monogamy relations [15][17], which also suggest that the maximally entangled states minimize entanglement with other parties, but that states with slightly less entanglement than the maximum may share other entanglements.
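For background, the best-known monogamy constraint on qubit concurrences, the Coffman-Kundu-Wootters inequality and its n-qubit extension, takes the form below (stated here as a standard result for comparison, not as a quotation of [15]):

\[
C^{2}_{1|2} + C^{2}_{1|3} \le C^{2}_{1|(23)},
\qquad
\sum_{j \neq i} C^{2}_{i,j} \le C^{2}_{i|\mathrm{rest}} \le 1 .
\]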
The following theorem provides some analytical context to the CS-state monogamy. Proof. Consider the state given in (34). The pure 2 qubit states with concurrence equal to 1 are equivalent to each other under local unitaries, so the set of such $\psi$ can be determined by examining those of (34). Now consider altering (34) by some infinitesimal perturbation of the form of (19). To show that the adjacent concurrence vanishes for the above state regardless of the perturbation, we first calculate the reduced density matrix between adjacent parties. It is clear that only the real part of the perturbation will affect the concurrence, so we continue assuming the coefficients of the perturbation are real. For simplicity, absorb the small prefactor into the perturbation coefficients. Continuing with the concurrence calculation in (37), the square roots of the eigenvalues of this matrix are all comparable in size, and the sum $\lambda_1 - \lambda_2 - \lambda_3 - \lambda_4$ will certainly be negative, so the concurrence is 0.
The monogamy of CS-states is more clearly observed by examining the subconcurrence, defined as $sC(\rho) = \lambda_1 - \lambda_2 - \lambda_3 - \lambda_4$, where $\lambda_i$ are the square roots of the eigenvalues of $\rho\tilde{\rho}$ in descending magnitude, as in the concurrence definition. More simply, the subconcurrence has the same definition as the concurrence, except it does not map negative sums of $\lambda_i$ to 0. The subconcurrences of randomly generated 4 and 5 qubit CS-states are displayed in Figure 2. Figure 2 clearly demonstrates the apparent thresholds in 4 and 5 qubits. For both n = 4 and n = 5, it appears that above some k = 2 subconcurrence, the k = 1 subconcurrence must be negative. Due to the symmetry discussed in the introduction, in 5 qubits, states with k = 1 subconcurrences above the same threshold will have negative k = 2 subconcurrence. For n = 4, however, the totally symmetric state $|W\rangle$ (the CS basis element generated by $|0001\rangle$) has the same $sC^{(4)}_1$ as (34) while also having positive $sC^{(4)}_2$. The analytic description of these monogamy thresholds will again be performed on the X-state subspace, where the calculations are much simpler. Shown in Figure 3 are the subconcurrences of randomly generated CSX-states overlaid on general CS-state subconcurrences. Based on these numeric results, it is apparent that CSX-states share the same monogamy thresholds and maximum concurrences as CS-states, making them a relevant subset for analysis. Looking only at CSX-states, we found the achievable concurrence boundaries in both 4 and 5 qubits. The full analysis is presented in the appendix, but the boundaries allow for a quick determination of the concurrence thresholds in the X-state subspace. The thresholds are compiled in Table II. Note that the $sC^{(4)}_1$ threshold only fully holds for CSX-states. Also recall that the concurrence symmetry in 5 qubits implies that $sC^{(5)}_1$ and $sC^{(5)}_2$ have the same threshold.
DISCUSSION
In the search for maximally entangled states in n qubit CS-states we have provided a state construction which reduces the problem to finding the states which maximize the concurrence between adjacent parties. Adjacent maxima are well understood in 2 and 3 qubits, and we have calculated the maximum for 4 and 5 qubits in the X-state subspace. Brute force calculations are obviously difficult for larger n, even in the X-state subspace. The development of a generalized basis for large n qubit CS-states, similar to the Dicke basis for totally symmetric states, would possibly enable more general statements without quite as much raw calculation. In addition, a canonical form resulting from local unitaries which leaves the state in some simpler, yet still cyclically symmetric, state would aid in calculation. Presently, no such canonical form is known for CS-states.
The work of this paper would be interesting and simple enough to repeat with alternate pairwise entanglement measures, such as the Negativity. In particular, Theorem 1 would still hold for the Negativity and would make conclusions about adjacent entanglement equally generalizable. It would also be simple enough to extend Theorem 1 and the other entanglement permutation relations to non-pairwise measures such as the 3-tangle or bipartite entanglement between bipartitions of the parties in the overall state. This work was supported, in part, by NSF grant PHY-1620846.
Appendix: X State Achievable Subconcurrence Boundaries
To find the boundary of CSX-state subconcurrences, the boundaries of each of the pairs of $sC$ expressions need to be found, with the overall boundary being a combination of the outermost boundaries from each pairing, since the subconcurrence is the larger of its $\mu$ and $\nu$ expressions. To simplify the search for the boundaries, note that for any 4 or 5 qubit CSX-state, the subconcurrence terms (26)-(33) are strictly increased by setting the coefficient phases to 0. This implies that the boundaries can be searched for among 4 and 5 qubit CSX-states with purely real coefficients.
5 Qubits
Following the methods from the previous section, start by considering an arbitrary 5 qubit CSX-state, (22), with real coefficients. The corresponding normalized state with a = 0 has larger or equal $sC^{(5)}_{1,\mu}$ and $sC^{(5)}_{2,\mu}$, so the boundary of the $(sC^{(5)}_{1,\mu}, sC^{(5)}_{2,\mu})$ pairs can be searched for among states with a = 0. For the other pairs, we will bound their subconcurrences by a sequence of lines which lie within the $(sC^{(5)}_{1,\mu}, sC^{(5)}_{2,\mu})$ boundary. We can now parametrize the remaining coefficients of (50) as $\{c, d, g\} \to \{\sin\theta\cos\phi, \sin\theta\sin\phi, \cos\theta\}$, and define the map from $(\theta, \phi)$ to $(sC^{(5)}_{1,\mu}, sC^{(5)}_{2,\mu})$ according to (30) and (32). By analyzing the boundaries of the domain and the zeroes of the determinant of the Jacobian of this map, three boundaries make up a maximal set, as plotted in Figure 5. These three boundaries are parametrized by $\theta = \pi/2$, $\phi = 0$, and $\phi = \pi/2$. The exact polynomials in $sC^{(5)}_{1,X}$ and $sC^{(5)}_{2,X}$ which describe these boundaries are easily determined by a Gröbner basis calculation performed on (52), but the results are quite lengthy. Turning now to the remaining subconcurrence pairings, it was shown in Table 1 that $C^{(5)}_{1(2),\nu} \le 0.366$. Another simple maximization shows that $sC^{(5)}_{1,\nu} + sC^{(5)}_{2,\nu} \le 2/5$. These three conditions bound the $(sC^{(5)}_{1,\nu}, sC^{(5)}_{2,\nu})$ pairs to a region well within the previous boundary, as shown in Figure 6.
Lastly, the remaining two pairs, $(sC^{(5)}_{1,\mu(\nu)}, sC^{(5)}_{2,\nu(\mu)})$, can be handled together due to the symmetry in 5 qubits. Similarly to the previous pair boundary, we will find a set of lines which bound these pairs. We can again take advantage of $sC^{(5)}_{2,\nu} \le 0.366$, as well as two new maximizations involving $sC^{(5)}_{2,\nu} + sC^{(5)}_{1,\mu}$. Note that these conditions concern the region where $sC^{(5)}_{2,\nu} \le 0$. But given that $sC^{(5)}_{2,\nu} \le 0$ for that region, the actual concurrences would be mapped to C | 2018-02-19T22:04:03.000Z | 2018-02-19T00:00:00.000 | {
"year": 2018,
"sha1": "0c5844b0e73917a6a5fa29fcc92b02141f6e5948",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1802.06877",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "b16a15586b53e0ee4cc10e91f452792e5591afe4",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
149125971 | pes2o/s2orc | v3-fos-license | Identifying evidence based teaching strategies that instructors use and practice in their classroom and their relationship with academic performance
The purpose of this study was to identify evidence based teaching strategies that instructors use and practice in their classroom and their relationship with academic performance. A sample of 390 students was selected from among undergraduate students who were enrolled in an introduction to psychology course offered by the department of humanities and social sciences at the Hashemite University. A questionnaire was developed to examine instructors' teaching practices in classroom settings. The questionnaire consisted of 10 statements, and responses to each statement were categorized into "less practiced", "sometimes practiced", and "most often". The findings of this study showed that most evidence based teaching strategies were practiced only sometimes in classroom settings at the scientific and humanitarian faculties of the Hashemite University. It also revealed that the level of academic achievement among undergraduate students at the scientific and humanitarian faculties was moderate.
Introduction
Learning is a lifelong process, and both individuals and organizations are concerned with evidence for what makes "good learning." Evidence-based learning describes a class of approaches, processes, and strategies that have been empirically demonstrated to produce learning outcomes. Educators are increasingly expected to be responsible not only for helping students to achieve the best possible outcomes, but also for using the most scientifically valid methods to achieve them. Many classroom instructors are faced with challenging student behaviors that impact their ability to facilitate learning in a productive way (Sugai & Horner, 2009). Therefore, when challenging student behaviors encroach on instruction, both instructors and students find themselves in a frustrating situation. Research has shown that instructors can minimize inappropriate and disruptive student behaviors and increase academic engagement through the use of evidence-based classroom management practices (Bergeny & Martens, 2006). According to Killian (2006), the following are the top ten evidence-based strategies proven to enhance academic outcomes. Strategy No 1: Clear Lesson Goals. It is crucial that you are clear about what you want your students to learn during each lesson. Clear lesson goals help you and your students to focus every other aspect of your lesson on what matters most. Hundreds of correlational and experimental studies show evidence that setting clear lesson goals increases success rates in various educational settings (Latham & Locke, 2007).
Strategy No 2: Tell & Show
Telling involves sharing information or knowledge with your students, while showing involves modeling how to do something. A large number of studies demonstrate that teachers can motivate their students to perform well in the classroom if they interact with them and share information or knowledge (Diedrich, 2010).
Strategy No 3: Questioning to Check for Understanding. Research suggests that instructors should spend a large amount of time asking questions and should always check for understanding before moving on to the next part of their lesson. According to Good and Brophy (2003), classroom questions are a very useful diagnostic tool for indicating students' academic progress. Croom and Staire (2005) note that appropriate questioning is positively associated with reinforcing students' understanding.
Strategy No 4: Summarize New Learning in a Graphical Way
Graphic outlines include things such as mind maps, flow-charts and Venn diagrams. You can use them to help students summarize what they have learned and to understand the interrelationships between the aspects of what you have taught them. Kia, Alipour and Ghaderi (2009) indicate that students with a visual learning style have the greatest academic achievement. Strategy No 5: Plenty of Practice. Practice helps students to retain the knowledge and skills that they have learned, while also allowing you another opportunity to check understanding. Ukpong and George (2012) recommended that students should set a study timetable with enough time for effective academic exercises in their private study and stick with it.
Strategy No 6: Provide Your Students with Feedback. Giving feedback involves letting your students know how they have performed on a particular task, along with ways they can improve. Ferris (2006) found that feedback has significantly positive effects on students in terms of academic achievement. Ellis (2008) discovered that giving feedback to students on their class assignments produces significantly better results for students.
Strategy No 7: Be Flexible About How Long It Takes to Learn. The idea that, given enough time, every student can learn is not as revolutionary as it sounds. You keep your learning goals the same, but vary the time you give each child to succeed. Trueman and Hartley (1996) find that time management skills and academic performance are positively related.
Strategy No 8: Get Students Working Together (In Productive Ways). Group work is not new, and you can see it in every classroom. However, productive group work is rare. To increase the productivity of your student groups, you need to be selective about the tasks you assign to them and the individual role that each group member plays, and ensure that every group member is personally responsible for one step. Many studies conducted in different educational settings using different kinds of cooperative learning techniques have indicated an appreciable relationship between cooperative learning and higher cognitive and affective outcomes (Johnson & Johnson, 2005).
Strategy No 9: Teach Strategies, Not Just Content. You can increase how well your students do in any subject by explicitly teaching them how to use relevant strategies. When teaching them mathematics, you need to teach them problem-solving strategies. Marzano, Pickering and Pollock (2001) focused their attention on successful instructional strategies and found twenty-one instructional strategies that can be useful and beneficial in enhancing student achievement.
Strategy No 10: Nurture Metacognition. Encouraging students to adopt strategies is important, but it is not meta-cognition. Meta-cognition involves thinking about your options, your choices and your results. When using meta-cognition, your students may think about what strategies they could use before choosing one, and they may think about how effective their choice was for their success. Metacognition is important in learning and is a strong predictor of academic success (Dunning, Johnson, Ehrlinger & Kruger, 2003).
Participants
A sample of 390 students was selected from among undergraduate students who were enrolled in an introduction to psychology course offered by the department of humanities and social sciences at the Hashemite University. Participants represented all faculties at the Hashemite University, with 196 (50%) participants representing all scientific faculties and 194 (49.7%) participants representing all humanities faculties.
Instrumentation
A questionnaire was developed to examine instructors' teaching practices in classroom settings. Almost all of the strategies are represented in the statements of the questionnaire. The questionnaire consisted of 10 statements, and responses to each statement were categorized into "less practiced", "sometimes practiced" and "most often".
Procedure
The questionnaire was distributed to all participants. The participants were instructed to put a tick mark (√) against each statement under whichever of the three categories best describes their instructors' teaching practices in the classroom. Table 1 shows that strategies 1, 2, 4, 5, 6, 7, 8, 9 and 10 were practiced sometimes in classroom settings in the scientific faculties, while strategy 3 was practiced most often. Table 2 shows that strategies 2, 3, 4, 5, 6, 7, 9 and 10 were practiced sometimes in classroom settings in the humanitarian faculties, while strategy 1 was practiced most often; finally, strategy 8 was practiced rarely in classroom settings. Table 3 shows that the mean and standard deviation of the Grade Point Average (GPA) were 2.75 and 0.45 for the scientific faculties, and 2.86 and 0.45 for the humanitarian faculties.
Figure 1. Strategy 1 and achievement
Figure 1 demonstrates that, for students with low academic achievement, the percentage of instructors practicing strategy 1 sometimes in classroom settings in the scientific faculties is 30%, whereas, for students with high academic achievement, the percentage of instructors practicing strategy 1 most often in classroom settings in the humanitarian faculties is 28%.
Figure 2. Strategy 2 and achievement
Figure 2 demonstrates that, for students with low academic achievement, the percentage of instructors practicing strategy 2 sometimes in classroom settings in the scientific faculties is 35%, whereas, for students with high academic achievement, the percentage of instructors practicing strategy 2 most often in classroom settings in the humanitarian faculties is 28%.
Figure 4 demonstrates that, for students with low academic achievement, the percentage of instructors practicing strategy 4 sometimes in classroom settings in the scientific faculties is 28%, whereas, for students with moderate academic achievement, the percentage of instructors practicing strategy 4 sometimes in classroom settings in the humanitarian faculties is 25%.
Figure 5 demonstrates that, for students with low academic achievement, the percentage of instructors practicing strategy 5 sometimes in classroom settings in the scientific faculties is 32%, whereas, for students with high academic achievement, the percentage of instructors practicing strategy 5 sometimes in classroom settings in the humanitarian faculties is 28%.
Figure 6 demonstrates that, for students with moderate academic achievement, the percentage of instructors practicing strategy 6 sometimes in classroom settings in the scientific faculties is 30%, whereas, for students with high academic achievement, the percentage of instructors practicing strategy 6 sometimes in classroom settings in the humanitarian faculties is 28%.
Figure 7 demonstrates that, for students with low academic achievement, the percentage of instructors practicing strategy 7 sometimes in classroom settings in the scientific faculties is 30%, whereas, for students with high academic achievement, the percentage of instructors practicing strategy 7 rarely in classroom settings in the humanitarian faculties is 25%.
Figure 9 demonstrates that, for students with low academic achievement, the percentage of instructors practicing strategy 9 sometimes in classroom settings in the scientific faculties is 30%, whereas, for students with high academic achievement, the percentage of instructors practicing strategy 9 sometimes in classroom settings in the humanitarian faculties is 30%.
Figure 10 demonstrates that, for students with low academic achievement, the percentage of instructors practicing strategy 10 most often in classroom settings in the scientific faculties is 24%, whereas, for students with high academic achievement, the percentage of instructors practicing strategy 10 rarely in classroom settings in the humanitarian faculties is 24%.
Conclusion
The results of this study revealed that the level of academic achievement of Hashemite University students in the scientific faculties is below a B average because the degree to which most of the effective evidence based teaching strategies are used in classroom settings is moderate. The results also revealed that the evidence based teaching strategy used most often in the classrooms of the scientific faculties is strategy No 3 (Questioning to check for understanding), because the course content in the scientific faculties is difficult and interactive. This result is consistent with Good and Brophy (2003), for whom classroom questions are a very useful diagnostic tool for indicating students' academic progress. The results also indicated that the degree of use of most of the evidence based teaching strategies in the classrooms of the humanitarian faculties is medium; as a result of this, the level of academic achievement at the Hashemite University in the humanitarian faculties is below a B average. The results also revealed that the evidence based teaching strategy used most often in the classrooms of the humanitarian faculties is strategy No 1 (clear lesson goals); many correlational and experimental studies show evidence that setting clear lesson goals increases success rates in various educational settings (Latham & Locke, 2007). Finally, the results also revealed that the teaching strategy used least often in the humanitarian faculties was strategy No 8 (get students working together), even though many studies conducted in different educational settings have emphasized an appreciable relationship between cooperative learning and higher cognitive and affective outcomes (Johnson & Johnson, 2005).
Figure 3. Strategy 3 and achievement
Figure 3 demonstrates that, for students with high academic achievement, the percentage of instructors practicing strategy 3 sometimes in classroom settings in the scientific faculties is 25%, whereas, for students with high academic achievement, the percentage of instructors practicing strategy 3 sometimes in classroom settings in the humanitarian faculties is 31%.
Figure 8. Strategy 8 and achievement
Figure 8 demonstrates that, for students with low academic achievement, the percentage of instructors practicing strategy 8 sometimes in classroom settings in the scientific faculties is 27%, whereas, for students with moderate academic achievement, the percentage of instructors practicing strategy 8 sometimes in classroom settings in the humanitarian faculties is 25%.
Table 3. Descriptive statistics of GPA among Humanitarian and Scientific Faculties
Identifying evidence based teaching strategies that instructors use and practice in their classroom and their relationship with academic performance.New Trends and Issues Proceedings on Humanities and Social Sciences.[Online].4(1), pp 563-573. | 2019-05-11T13:06:32.179Z | 2017-08-26T00:00:00.000 | {
"year": 2017,
"sha1": "3229d07f4c525ee8b5398de5dd0746fd861a5947",
"oa_license": "CCBY",
"oa_url": "https://sproc.org/ojs/index.php/pntsbs/article/download/2302/2462",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "3229d07f4c525ee8b5398de5dd0746fd861a5947",
"s2fieldsofstudy": [
"Psychology",
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
252204559 | pes2o/s2orc | v3-fos-license | A bibliometric analysis of linguistic research on COVID-19
Research on COVID-19 has drawn the attention of scholars around the world since the outbreak of the pandemic. Several literature reviews of research topics and themes based on scientometric indicators or bibliometric analyses have already been conducted. However, topics and themes in linguistic-specific research on COVID-19 remain under-studied. With the help of the CiteSpace software, the present study reviewed linguistic research published in SSCI and A&HCI journals to address the identified gap in the literature. The overall performance of the documents was described and document co-citations, keyword co-occurrence, and keyword clusters were visualized via CiteSpace. The main topic areas identified in the reviewed studies ranged from the influences of COVID-19 on language education, and speech-language pathology to crisis communication. The results of the study indicate not only that COVID-19-related linguistic research is topically limited but also that insufficient attention has been accorded by linguistic researchers to Conceptual Metaphor Theory, Critical Discourse Analysis, Pragmatics, and Corpus-based discourse analysis in exploring pandemic discourses and texts.
Introduction
The COVID-19 pandemic has impacted human beings in significant ways, and scientists and researchers have actively responded to the challenges in the post-pandemic era by investigating the phenomenon from the vantage point of their research domains. Since 2020, publications about COVID-19 have proliferated across disciplines. The COVID-19 research literature has also increased in bibliometric and scientometric studies (e.g., Chahrour et al., 2020;Deng et al., 2020;Colavizza et al., 2021), as well as systematic reviews and meta-analyses of a variety of COVID-19 pandemic-related topics, such as the risk factors for critical and fatal COVID-19 cases (Zheng et al., 2020) and considerations of whether asthmatic patients are at higher risk of contracting the virus (e.g., Morais-Almeida et al., 2020).
In response to the pandemic, linguistic researchers have provided multilingual public communication services or other helpful language services (Shen, 2020; Di Carlo et al., 2022). However, at this juncture, a clear need to map the contributions of the linguistic research community to pandemic literature was in evidence. Hence, the present study reviewed the COVID-19-related literature published in SSCI and A&HCI journals on the Web of Science over the past 2 years to address this need. The study used the CiteSpace bibliometric tool to analyze the current state of linguistic research on COVID-19.
CiteSpace is a tool for performing a visual analytic examination of the academic literature of a discipline, a research field, or both, referred to as a knowledge domain (Chen, 2004, 2006, 2020). A bibliometric analysis is significant for recognizing the expansion of literature in linguistics. It can aid scholars in gaining quantitative insights into the rise of linguistic research on the COVID-19 pandemic, taking into account the social impact of the disease. The findings can identify the frontiers and gaps in the linguistic study on COVID-19 and guide future research.
Previous studies
The COVID-19 pandemic has exercised a disruptive and profound impact on every aspect of human life. Scientific research papers concerning this pandemic have been growing exponentially. We searched publications related to this topic with "COVID" as the topic term in the Web of Science core collection and got 69,591 results . To help researchers assess the research trends and topics on this issue, several literature surveys have already been implemented. Based on scientometric indicators or bibliometric analyses, these reviews include a focus on research patterns from publications on COVID-19 (Sahoo and Pandey, 2020), the most productive countries and the international scientific collaboration (Belli et al., 2020), and the current hotspots for the disease and future directions (Zyoud and Al-Jabi, 2020). The majority of these studies, however, have concentrated on the medical elements of COVID-19, while paying little attention to the research in the social sciences.
In this context, a recent review by Liu et al. (2022) based on a scientometric analysis of the performance of social science research on COVID-19, covering the landscape, research fields, and international collaborations, represents a notable departure from the prevalent focus of earlier studies. Representing a linguistic focus, another recent study by Heras-Pedrosa et al. (2022) consisted of a systemic analysis of publications in health communication and COVID-19. It found that, in 2020, concepts related to mental health, mass communication, misinformation, and communication risk were more frequently used, and in the succeeding year (2021), vaccination, infodemic, risk perception, social distancing, and telemedicine were the most prevalent keywords.
Within the linguistic field, literature reviews focusing on COVID-19-related language education do exist. For instance, Moorhouse and Kohnke (2021) explore the lessons learned from COVID-19, and identify and analyze the primary knowledge produced by the English-language teaching community during the epidemic, also offering recommendations for further research on this particular subject. A systematic literature review of adult online learning during the pandemic has also been conducted by Lu et al.
Data collection
As the study was focused on the linguistic field, we searched the Social Science Citation Index (SSCI) and Arts and Humanities Citation Index (A&HCI) available on the Web of Science (WoS) platform. The data were collected through an advanced search. All collected articles/reviews were written in English, and we retrieved the data using the following fields:
1. Topic = ("covid*" OR "*nCoV" OR "SARS-CoV-2" OR "new coronavirus" OR "coronavirus disease 2019" OR "severe acute respiratory syndrome coronavirus-2" OR "novel coronavirus" OR "coronavirus 19"). These terms were only allowed in the title, abstract, or keywords.
2. Time span = 2020-2022
3. Document type = article OR review (the review articles do not include book reviews)
4. ("*") is a wildcard in WoS that represents any group of characters, including no character.
5. Research area = "linguistics"
Based on the search items listed above, 363 research and review articles were obtained from the Web of Science Core Collection on 25 May 2022. Through manual analysis, the documents completely unrelated to linguistic research, as well as conference abstracts, book reviews, correspondence, and other unrelated documents, were excluded. To guarantee the recall ratio, this study used the "remove duplicates (WoS)" function in CiteSpace to filter out duplicated studies from the collected data. After the cleaning procedure, the final dataset contained 355 documents.
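As an illustration only (the authors used the WoS web interface and CiteSpace's built-in deduplication, not a script), the search logic above could be expressed programmatically roughly as follows; TS, PY and DT are the usual WoS advanced-search field tags, though the exact syntax accepted by the interface may differ slightly, and the record structure in the deduplication helper is hypothetical.

topic_terms = [
    'covid*', '*nCoV', 'SARS-CoV-2', 'new coronavirus',
    'coronavirus disease 2019',
    'severe acute respiratory syndrome coronavirus-2',
    'novel coronavirus', 'coronavirus 19',
]

# TS = topic (title/abstract/keywords); PY = publication years; DT = document type
query = (
    'TS=(' + ' OR '.join(f'"{t}"' for t in topic_terms) + ')'
    ' AND PY=(2020-2022)'
    ' AND DT=(Article OR Review)'
)
print(query)

def deduplicate(records):
    """Drop records that share a DOI or a normalized title (hypothetical dicts)."""
    seen, unique = set(), []
    for rec in records:
        key = rec.get('doi') or rec.get('title', '').strip().lower()
        if key and key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique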
Instrument
The instrument deployed in this study was CiteSpace 6.1 R2, developed by Chen (2004) as a bibliometric analysis tool (Chen, 2004, 2006, 2017; Chen et al., 2010). The input in this software is a set of bibliographic data files in the field-tagged Institute for Scientific Information Export Format.
In this study, the files were downloaded from the WoS core collection. We chose "full record and cited references" as the record content, and the files can be recognized by CiteSpace software directly. When the files are added to the software, they are subjected to the following procedural steps: time slicing, thresholding, modeling, pruning, merging, and mapping (for more details, please see Chen, 2004). The outputs of this software are visualized co-citation networks, which is to say that each of the networks is presented in a separate interactive window interface. It can show the evolution of a knowledge field on a citation network, display the overall state of a certain field, and highlight some important documents in the development of a field. The strength of CiteSpace lies in the analysis and visualization of thematic structures and research hotspots. It can provide us with co-citation networks among references, authors, and countries, which is of pivotal importance given the research questions underpinning the present study. Hence, to locate important references, recognize research trends, and pinpoint research hotspots in the linguistic research on COVID-19, document co-citation and keyword co-occurrence analyses were conducted in this study through this software.
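To make the co-citation idea concrete, the toy sketch below (invented data, not CiteSpace's implementation) counts how often two references appear together in the reference lists of citing articles; in a co-citation map, node size tracks citation frequency and link strength tracks the co-citation count.

from collections import Counter
from itertools import combinations

# Toy illustration of document co-citation counting (hypothetical citing articles).
citing_articles = {
    "article_A": ["MacIntyre 2020", "Gacs 2020", "Gao & Zhang 2020"],
    "article_B": ["MacIntyre 2020", "Gacs 2020"],
    "article_C": ["MacIntyre 2020", "Hodges 2020"],
}

cocitation = Counter()
for refs in citing_articles.values():
    for a, b in combinations(sorted(set(refs)), 2):
        cocitation[(a, b)] += 1          # link weight: number of co-citing articles

citation_count = Counter(r for refs in citing_articles.values() for r in refs)

print(citation_count.most_common(3))      # node size ~ citation frequency
print(cocitation.most_common(3))          # strongest co-citation links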
Global distribution of articles on COVID-19
The overall distribution characteristics are presented below. Figure 1 displays the number of papers published each month since January 2020 when the World Health Organization formally declared the epidemic a global public health emergency. There was only one article about COVID-19 published in January 2020, whereas the publications show a peak in April 2022 with 30 publications. Overall, the results show that publications on the topic are increasing every month. Therefore, we might conclude that linguistic researchers have begun to be increasingly interested in COVID-19 linguistic research.
Tables 1, 2, respectively, indicate the top 10 most productive countries and institutions for COVID-19 publications. The USA was ranked as the top country in terms of the number of articles related to linguistic publications on COVID-19, with 111 publications in total, followed by China with 57 articles and England with 47 articles (Table 1). In terms of the number of linguistic research publications on COVID-19, Purdue University ranked as the top contributing institution (16 records), followed by the University of London (10 records) and the State University System of Florida (eight publications).
The 355 articles reviewed in the current study were published in 83 journals. The top 10 most productive journals are listed in Table 3. System ranked the top journal in the number of published articles, with 21 publications related to COVID-19, followed by American Journal of Speech Language Pathology and International Journal of Language Communication Disorders, with 20 and 18 publications, respectively. As we can see in Table 3, most of the top 10 journals are related to language education or speech-language pathology.
Based on the Global Citation Score in the WoS, the top 10 most-cited articles contributing to COVID-19 research are listed in Table 4. MacIntyre et al. (2020) ranked as the most-cited article with 127 citations. This article is published in System, which is also the most productive journal. The top four articles are all about online language teaching during the COVID-19 period.
Document co-citation analysis
The 355 bibliographic recordings from WoS were visualized and a 1-year time slice was selected for analysis. The size of the node is proportional to the frequency of the cited references. Different colors around nodes represent the frequency of references in different time periods. The labels shown in Figure 2 are all documents with more than three citations, and the connection between nodes shows the co-citation relationship.
The top 50 most cited articles every year were selected. There were 176 individual nodes and 562 links, representing cited articles and co-citation relationships among the whole data set, respectively. The results are illustrated in Figure 2. The results are somewhat different from those obtained from the Global Citation Score in the WoS (Table 4) since the Global Citation Score in the WoS is calculated based on all the citations in WoS, while the document co-citation analysis is based only on the 355 documents retrieved from WoS.
According to the document co-citation analysis, the most co-cited article was written by MacIntyre et al. (2020). This study explores the issue of language teachers' coping mechanisms and their correlates in the context of the distinctive stressors of the COVID-19 pandemic and the educational responses at the global level. It demonstrates how language teachers have faced a variety of challenges as a result of the global response to the COVID-19 outbreak. High levels of stress have been caused by the quick transition to online education, the blending of job and personal life, and the constant worry about personal and familial wellbeing. With the help of a variety of techniques, teachers were found to be dealing as effectively as they could. Coping strategies that are deemed to be more active and approach-oriented, namely ones that more directly addressed the problems brought on by the phenomenon including the emotions evoked, were found to be connected with more favorable outcomes in terms of psychological health and wellbeing. The greater use of avoidant coping mechanisms was linked to worse psychological outcomes. Increased use of avoidant coping, in particular, was linked to higher stress levels and a range of unpleasant feelings (anxiety, anger, sadness, and loneliness). MacIntyre et al. (2020) also found that a variety of particular techniques were employed by the participants within the approach and avoidant categories of coping, and the majority of them produced outcomes consistent with the category in which they appeared. The multidimensional nature of the stressors required multidimensional coping strategies, but it was obvious that some coping strategies were superior to others. This study by MacIntyre et al. (2020) offers insights into the effectiveness of coping strategies used by language teachers during the crisis and their implications for other stressful events and processes such as school transfers, educational reform, or demanding work periods like the end-of-year exam. MacIntyre et al. (2020) suggest that all pre-service and in-service teacher education programs should incorporate stress management as a fundamental professional competence. The second most cited article is written by Gacs et al. (2020), which compares the crisis-prompted online language teaching during the COVID-19 era with well-designed and carefully planned online language education. Due to the 2020 pandemic, many institutions were forced to transition away from face-to-face (F2F) teaching to online instruction. The crisis-prompted online language teaching is different from actual planned online language education. This is because in times of pandemic, war, crisis, natural disaster, or extreme weather, neither teachers nor students are prepared for switching over to online education without good technology literacy, access, and infrastructure. Gacs et al. (2020) describe the process of preparing, designing, implementing, and evaluating online language education when adequate time is available, and the concessions one has to make when adequate time is not a possibility in times of pandemic or in other emergent conditions. This article presents a roadmap for planning, implementing, and evaluating online education in ideal and crisis contexts. The third most cited article, by Gao and Zhang (2020), set up a qualitative inquiry to investigate how EFL teachers perceive online instruction in light of their disrupted lesson plans and how EFL teachers teaching during the early-stage COVID-19 outbreak developed their information technology literacy.
The findings from this study on teachers' perceptions of online instruction during COVID-19 have theoretical ramifications for studies on both teachers' cognitions and online EFL teaching.
It is evident that the three top-cited articles are on the theme of language education. Therefore, it can be concluded that remote online education during a pandemic crisis is the most studied area from the linguistic perspective.
Keyword co-occurrence
In a way, keywords serve as the central summary of articles, conveying their major idea and subject matter. The co-occurrence of keywords in an article indicates the degree of closeness between the keywords and the strength of this relationship. According to common perception, the more strongly related two or more terms are, the more often they are likely to appear together. CiteSpace provides a function called Betweenness Centrality to describe this strength. In other words, if a keyword consistently appears alongside other distinct keywords, it is likely that we will see it even when other related subjects are discussed. As a result, the greater the value of Betweenness Centrality a keyword displays, the more significant the keyword is.
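The notion of betweenness centrality described here can be illustrated with a small sketch (toy keyword lists standing in for the study's data, not the actual dataset); it builds a weighted keyword co-occurrence graph and computes betweenness centrality with networkx.

from itertools import combinations
import networkx as nx

# Hypothetical keyword lists, one per article.
article_keywords = [
    ["language", "student", "online learning"],
    ["language", "teacher", "online learning"],
    ["communication", "discourse", "social media"],
    ["language", "communication", "covid-19"],
]

G = nx.Graph()
for kws in article_keywords:
    for a, b in combinations(sorted(set(kws)), 2):
        # Increment the co-occurrence weight for keywords sharing an article.
        w = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

centrality = nx.betweenness_centrality(G)   # structural importance of each keyword
for kw, c in sorted(centrality.items(), key=lambda x: -x[1])[:5]:
    print(kw, round(c, 3))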
A keyword co-occurrence analysis was conducted in this study to identify the research fields and dominant topics. A term analysis of words extracted from keywords was conducted to identify the words or phrases co-occurring in at least two distinct articles. Terms with high frequency may be treated as indicators of hotspots in a certain research field (Chen, 2004). The top five high-frequency keywords were language, student, communication, discourse, and teacher. The keyword co-occurrence network is shown in Figure 3, and the keywords with frequencies of more than three are displayed in Table 5.
Cluster interpretations
Based on the analysis of the results of keyword co-occurrence, we used CiteSpace to conduct a cluster analysis. The 355 articles generated 20 clusters in total. Labeling clusters with indexing terms and showing clusters by log-likelihood ratio (LLR), Figure 4 shows the eight most important keyword clusters obtained by keyword co-occurrence analysis. Table 6 shows the keyword lists of the seven important clusters in linguistic research on COVID-19. It illustrates an aggregated distribution in which the most colorful areas overlapped, indicating that these clusters share some basic concepts or information (as suggested by Chen, 2004).
Cluster #0 is labeled as emergency online teaching
Cluster #6 (online learning) and Cluster #7 (distance learning) are closely related to Cluster #0 since both Cluster #6 (online learning) and Cluster #7 (distance learning) fall under the umbrella of online education during a crisis. Emergency online teaching and online/distance learning are clearly shown to be the focus of linguistic research related to COVID-19.
Due to the COVID-19 pandemic, teaching and learning experienced a shift from physical, in-person (or face-to-face) learning environments to virtual, online learning environments. Although online education is well-established, pandemic-initiated online teaching and learning differed from traditional, well-planned online teaching, thus leading to significant difficulties for both language teachers and students. The stakeholders had to quickly adapt to new environments and learning styles while dealing with the pandemic's personal and societal repercussions on their everyday lives and wellbeing (MacIntyre et al., 2020). The online teaching of foreign and second languages during COVID-19 is referred to as emergency remote teaching (ERT), a term used to describe education temporarily moved online due to unforeseeable events such as natural catastrophes or conflict (Hodges et al., 2020). The difficulties primary school ESOL teachers in the United States encountered as a result of the unexpected instructional adjustments brought on by the COVID-19 epidemic are described by Wong et al. (2022), along with how these difficulties appeared to have impacted the teachers' wellbeing. There are problems unique to language education, even if English language teachers and students have faced many of the same difficulties as their peers in other disciplines. For instance, many people view the interaction between students and teachers as a crucial component of language acquisition (Walsh, 2013), whereas interaction works very differently in the online mode (Payne, 2020). Therefore, to encourage and support engagement during online language lessons, teachers need to showcase certain competencies (Cheung, 2021). Understandably, the research community has developed a keen interest in how the COVID-19 pandemic has affected language teaching and learning. More attention is directed toward adapting to the COVID-19 pandemic-initiated online education due to the rapid and abrupt switch from classroom instruction to online learning. For instance, how the students, especially primary pupils, and the teachers adapt to online teaching is the main topic discussed in a special issue of System (2022, volume 105). The COVID-19 pandemic also changed the in-person and on-campus testing into placement testing. Ockey (2021) provides an overview of COVID-19's impact on English language university admissions and placement tests.
Cluster # is labeled as science communication
During the COVID-19 pandemic period, it became crucial for scientists and government politicians to communicate scientific knowledge to the public to limit the spread of COVID-19. Linguistic factors can play an important role in science communication. A study by Schnepf et al. (2021) inquired into whether complex (vs. simple) scientific statements on mask-wearing could lead audiences to distrust the information and its sources, thus obstructing compliance with behavioral measures grounded in evidence-based recommendations. The study found that text complexity affected audiences inclined toward conspiracy theories negatively. Schnepf et al. (2021) provided recommendations for persuading audiences with a high conspiracy mentality, a group known to be mistrustful of scientific evidence. Janssen et al. (2021) inquired into how the use of lexical hedges (LHs) impacted the trustworthiness ratings of communicators endeavoring to convey the efficacy of mandatory mask-wearing. The study found that scientists were perceived as being more competent and having greater integrity than politicians.
Cluster # is labeled as dysphagia
When a society faces a crisis like the COVID-19 pandemic, the impact of COVID-19 on special needs populations, such as people with dysphagia or aphasia or hearing impairments (Cheng and Cheng, 2022;Mathews et al., 2022), assumes greater importance for the linguistic community. A study by Jayes et al. (2022) described how UK Speech and language therapists (SLTs) supported differently abled individuals with communication disabilities to make decisions and participate in mental capacity assessments, best interest decision-making, and advance care planning during the COVID-19 pandemic. Govender et al. (2021) investigated how people with a total laryngectomy (PTL) were impacted by COVID-19. Feldhege et al. (2021) conducted an observational study on changes in language style and topics in an online Eating Disorder Community at the beginning of the COVID-19 pandemic. Owing to the severity of the pandemic, speech-language pathologists (SLPs) shifted quickly to virtual speech-language services. Thus, telepractice (cluster #4) also becomes one of the important keyword clusters. Telepractice has been used extensively to offer services to people with communication disorders since the global COVID-19 pandemic. Due to physical separation tactics used to contain the COVID-19 outbreak, many SLPs implemented a live, synchronous online distribution of clinical services. However, SLPs have received synchronous telepractice training to equip them for the shift from an in-person service delivery approach. Using synchronous modes of online clinical practice, Knickerbocker et al. (2021) provide an overview of potential causes of phonogenic voice issues among SLPs in telepractice and suggest prospective preventative techniques to maintain ideal vocal health and function.
Cluster #3 is labeled as social media, and it is closely related to Cluster #5 (multilingual crisis communication) since social media research is a way to analyze public communication, particularly during a health crisis. Given the physical restrictions during social distancing, people turned to social media to maintain contact and share ideas. Many studies have investigated the performances of various types of social media platforms during the pandemic, such as Twitter (Weidner et al., 2021), Weibo (Ho, 2022; Yao and Bik Ngai, 2022), WhatsApp (Pérez-Sabater, 2021), and YouTube (Breazu and Machin, 2022). Weidner et al. (2021) looked at the characteristics of tweets concerning telepractice via the prism of a well-known framework for using health technology. During the epidemic, there was a surge in telepractice-related tweets.
Although several tweets covered ground that is expected in the application of technology, some covered ground that might be particular to speech-language pathology. Yao and Bik Ngai (2022) investigated how People's Daily communicated COVID-19 messages on Weibo. Its findings contribute to the understanding of how public engagement on social media can be augmented via the use of attitudinal messages in health emergencies. Cluster #5 multilingual crisis communication is mostly studied from the perspective of sociolinguistics.
Contributing to the sociolinguistics of crisis communication, Ahmad and Hillman (2021) examined the communication strategies employed by Qatar's government in dealing with the COVID-19 pandemic. While a study by Gallardo-Pauls (2021) proposed a specifically linguistic/discursive model of risk communication, Tu et al. (2021) inquired into how the pronouns "we" and "you" affected the likelihood of staying at home differently. In another study, Tian et al. (2021) investigated the role of pronouns in crafting supportive messages and hope appeals and in helping people cope with COVID-19. When a society is faced with a crisis, its language can reflect, reveal, and reinforce societal anarchy and divides. A study by Nagar (2021) examined how minority groups, Muslims and migrant workers, experienced marginalization, oppression, and damage through linguistic mechanisms.
FIGURE: Cluster view of keyword co-occurrence.
Implications for future study
As a discipline, linguistics has contributed significantly to the literature on COVID-19. Based on the results obtained from the above descriptive statistics and visualizations via Citespace, the study found that linguistic research on COVID-19 hitherto has largely focused on the influences of COVID-19 on language education, speech-language pathology, and crisis communication. Language education is one particular strand of applied linguistics, while speech-language pathology and crisis communication, respectively, comprise interdisciplinary studies of language and pathology, and language and communication.
The present state of linguistic research on COVID-19 reveals that there is a dearth of studies deploying linguistic theories such as Conceptual Metaphor Theory, Critical Discourse Analysis, Pragmatics, and Corpus-based discourse analysis. These theories can serve as important heuristics for exploring COVID-19 discourses. A strand of research from the perspective of these theories has highlighted the problematic nature of COVID-19 discourses.
Following the onset of the COVID-19 pandemic, linguists were concerned about the language regarding COVID-19. The Conceptual Metaphor Theory (Lakoff and Johnson, 1980), as one of the primary theoretical constructs in Cognitive Linguistics, was employed by some scholars to explore the COVID-19 discourse. Through their analysis of the conceptual metaphors in different kinds of COVID-19 discourse, linguistic scholars found that the WAR metaphor dominated the COVID-19 discourse (Bates, 2020;Chapman and Miller, 2020;Isaacs and Priesz, 2021). However, other metaphors such as FIRE remained underexplored concerning the pandemic (Semino, 2021). Although a study by Abdel-Raheem (2021) has explored the multimodal COVID-19 metaphor by examining political cartoons, in general, the multimodal COVID-19 metaphor has not been studied extensively. Further, despite the fact that Preux and Blanco (2021) experimental study explored the influence of the WAR and SPORT domains on emotions and thoughts during the COVID-19 era, the impact of the COVID-19 metaphor on the emotions and mental health of the public has received limited attention.
Critical Discourse Analysis has been deployed by some linguistic researchers. For example, critical discourse analysis was used by Zhang et al. (2021) to compare the reports on COVID-19 and social responsibility expressions in Chinese .
Drawing on critical discourse analysis and textual analysis, Zhou (2021) conducted an interdisciplinary study of the semiotic work dedicated to legitimating Traditional Chinese Medicine (TCM) treatment of COVID-19 in the social media account of an official TCM institution. While CDA analyses of COVID-19 discourses have been undertaken, more CDA-led studies are needed, given the complex interweaving of power and inequities reflected in the texts and discourses pertaining to the pandemic.
Pragmatics research on COVID-19 is another underexplored area. Ogiermann and Bella (2021) analyze signs displayed on the doors of closed businesses in Athens and London during the first lockdown of the COVID-19 pandemic, providing new insights into the dual function of expressive speech acts discussed in pragmatic theory. Blitvich (2022) explores the connections between face-threat and identity construction in the on/offline nexus by focusing on a stigmatized social identity (Goffman, 1963), a local, ethnographically specific cultural position (Bucholtz and Hall, 2005) attributed to some American women, stereotypically middle-aged and white, who are positioned by others as Karens. Thus, a woman who is perceived to be acting inappropriately, harshly, or in an entitled manner is categorized as a Karen. This perceived misbehavior is frequently connected to alleged acts of racism toward minorities. The anti-masker Karens also attracted attention during the COVID-19 pandemic. This research offers a multimodal analysis of a sizable corpus of 256 videos of people whose actions, and the way those actions were perceived, caused them to be positioned as Karens, in order to advance our knowledge of the Karen identity. More theories of Pragmatics, such as Relevance Theory, can be employed in the study of COVID-19 discourse.
Corpus-based COVID-19 discourse analysis is also deserving of research attention. Mark Davies has built the Coronavirus Corpus (https://www.english-corpora.org/corona/), an online collection of news articles in English from around the world from January 2020 onwards. The corpus, first released in May 2020, contained about 1,500 million words at the cutoff point of this review (16 May 2022) and continues to grow by three to four million words each day. It can provide vast original discourse data for researchers. For example, based on a 12.3-million-word corpus, Jiang and Hyland (2022) explored keyword nouns and verbs, as well as frequent noun phrases, to understand the central concerns of the public as reflected in its news media. In the future, more research can be conducted based on the Coronavirus Corpus.
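Keyword analyses of the kind Jiang and Hyland (2022) describe can be approximated with standard corpus-processing code. The sketch below is illustrative only: the file name, stop-word list, and tokenization are assumptions and are not part of the Coronavirus Corpus interface, which is accessed through its own web front end.

```python
from collections import Counter
import re

def keyword_frequencies(text, stopwords=None, top_n=20):
    """Count the most frequent word tokens in a corpus sample."""
    stopwords = set(stopwords or [])
    tokens = re.findall(r"[a-z]+", text.lower())      # crude tokenization
    counts = Counter(t for t in tokens if t not in stopwords)
    return counts.most_common(top_n)

# Hypothetical usage with a locally saved sample of news articles.
if __name__ == "__main__":
    with open("corona_news_sample.txt", encoding="utf-8") as fh:
        sample = fh.read()
    common = {"the", "of", "and", "to", "a", "in", "is", "that"}
    for word, freq in keyword_frequencies(sample, stopwords=common):
        print(f"{word}\t{freq}")
```

A fuller keyword study would compare these frequencies against a reference corpus rather than simply ranking raw counts.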
Conclusion
Human life has been greatly affected and disrupted by the COVID-19 pandemic. Scientists and researchers have actively responded to this pandemic by investigating the phenomenon of COVID-19 through the lenses offered by their fields of research, and publications relevant to COVID-19 have proliferated rapidly across disciplines since the beginning of 2020. To investigate the contributions made by linguistic researchers to pandemic research, the current study carried out a bibliometric analysis of the relevant and available literature. Three hundred and fifty-five bibliographic records ranging from January 2020 to May 2022 were collected from WoS, and CiteSpace software was adopted to quantitatively and visually review these papers. The study found that there was continued growth in publications between January 2020 and May 2022. The USA was found to be the most productive country, contributing 111 publications pertaining to COVID-19, whereas System ranked as the top journal in the number of published articles related to COVID-19 (21 publications). Through the visualizations of keyword co-occurrence analysis and cluster interpretation via CiteSpace, the study also found that linguistic research on COVID-19 focused largely on the influences of COVID-19 on language education, speech-language pathology, and crisis communication. However, the present review flags the need for more investigations of COVID-19 texts and discourses deploying the explanatory lens of key linguistic theories such as Conceptual Metaphor Theory, Critical Discourse Analysis, Pragmatics, and Corpus-based Discourse Analysis.
Although the present study aspired to be as comprehensive as possible within its delineated scope, some limitations were unavoidable. For instance, the study searched documents in the Web of Science alone, not including other data sources such as Scopus, Google Scholar, Index Medicus, or Microsoft Academic Search. Further, only one scientometric tool was employed in this review. Future research may make use of larger databases and different analytical tools.
Nonetheless, this study comprises a pioneering review of linguistic research on COVID-19 and identifies and provides a clear overview of international linguistic research in relation to COVID-19. Hence, it can be used as a useful springboard by linguistic researchers interested in probing COVID-19 discourses and texts through the lens of leading theories in the field, thus not only expanding the topical breadth of linguistic research on the pandemic but also generating valuable insights in areas of pragmatics and metaphor as well as CDA and corpus research. These insights are likely to have theoretical as well as practical implications for the field of linguistics. | 2022-09-13T14:07:06.245Z | 2022-09-13T00:00:00.000 | {
"year": 2022,
"sha1": "610a076e2aa215822d85a154e0003edca646753f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "610a076e2aa215822d85a154e0003edca646753f",
"s2fieldsofstudy": [
"Linguistics"
],
"extfieldsofstudy": []
} |
249283691 | pes2o/s2orc | v3-fos-license | Identification of Golovinomyces artemisiae Causing Powdery Mildew, Changes in Chlorophyll Fluorescence Parameters, and Antioxidant Levels in Artemisia selengensis
Artemisia selengensis Turcz. is a valuable edible and medicinal vegetable crop widely cultivated in Northeast China. Powdery mildew (PM) disease occurs during field and greenhouse cultivation, resulting in production losses and quality deterioration. The pathogen in A. selengensis was identified as Golovinomyces artemisiae using optical microscopic and scanning electron microscopic observations, morphological identification, and molecular biological analyses. Chlorophyll fluorescence (ChlF) parameters, antioxidant system responses, and callose and lignin contents in A. selengensis were analyzed after inoculation with G. artemisiae. PM-infected leaves showed significantly lower values of electron transport rate (ETR), non-photochemical quenching (NPQ), photochemical quenching (qP), and actual photochemical efficiency [Y(II)], but higher values of non-adjusting energy dissipation yield [Y(NO)], suggesting that the maximal photosystem II quantum yield (Fv/Fm) value and its images could be used to monitor the degree of PM on infected A. selengensis. In addition, malondialdehyde (MDA), superoxide anion (O2–), callose, and lignin contents and peroxidase (POD) activity increased, while superoxide dismutase (SOD) activity, catalase (CAT) activity, and ascorbic acid (AsA) content decreased significantly in infected leaves compared to mock-inoculated leaves, indicating that lignin and protective enzymes are key indicators for detecting PM resistance in A. selengensis. These results suggest that PM caused by G. artemisiae disrupted the photosynthetic capacity and induced an imbalance of the antioxidant system in A. selengensis. The findings are of great significance for designing a feasible approach to effectively prevent and control PM disease in A. selengensis as well as in other vegetable crops.
INTRODUCTION
Artemisia selengensis Turcz. is a perennial plant belonging to the genus Artemisia of the Asteraceae family (Wen et al., 2016). Due to its high nutritional and medicinal value, A. selengensis has been favored as both a vegetable and a herbal medicine in Northeast China for thousands of years (Peng et al., 2009; Wen et al., 2016). However, the leaves, as the main edible parts of the plant, are extremely vulnerable to powdery mildew (PM) disease when the plant is cultivated in the field and/or in the greenhouse, especially under the low air flow and high relative humidity of summer and autumn. This has a negative economic impact on the plant's production and the overall agricultural industry. Even though PM symptoms can be easily recognized, it is challenging to determine the species assignment of the pathogen (Glawe, 2008). Morphological characterization and observation of the pathogen are crucial for its identification at the species level and for the prevention of PM. For example, Blumeria graminis (DC.) Speer is unique among the Erysiphales in the way it forms conidia (Glawe, 2008). Previous studies revealed that the main PM pathogens parasitizing Asteraceae are Golovinomyces cichoracearum, Golovinomyces chrysanthemi, and Golovinomyces artemisiae (Matsuda and Takamatsu, 2003; Lebeda et al., 2012; Bradshaw et al., 2017). G. artemisiae was described in Europe with Artemisia vulgaris as the type host, and a detailed description has been published by Braun (1995). G. artemisiae on Artemisia annua has also been reported and identified using a combination of morphological and internal transcribed spacer (ITS) methods in Korea (Choi et al., 2014). However, the species of pathogen causing PM in A. selengensis remains unclear, and the phenotypic and physiological changes induced by PM in A. selengensis plants are rarely reported in Northeast China (Lebeda et al., 2020).
When plants are infected with PM, photosynthesis is reduced through a lower supply of light energy because the leaf surface is covered by mycelium (Scott et al., 1996). On the other hand, CO2 influx is inhibited due to stomatal closure (Duniway, 1982; Berger et al., 2007). Previous studies have demonstrated that Erysiphe alphitoides leads to a reduction of foliage photosynthetic activity in pedunculate oak (Quercus robur) (Copolovici et al., 2014). Modern chlorophyll fluorescence (ChlF) technology allows the rapid and nondestructive detection of photosynthetic activity (Kuckenberg et al., 2008). Maximal photosystem II quantum yield (Fv/Fm) has been used to diagnose several diseases, including coffee (Coffea arabica L.) infected by Hemileia vastatrix and cedar (Cedrus deodara) infected by Pestalotiopsis spp. (Ning et al., 1995; Honorato Júnior et al., 2015). Meanwhile, the parameter Fv/Fm could distinguish resistant and susceptible lettuce (Lactuca sativa L.) lines against Bremia lactucae (Bauriegel et al., 2014). In terms of Fv/Fm and the effective quantum yield of PSII [Y(II)], leaves infected by Bipolaris sorokiniana were also dramatically impaired in the most susceptible wheat (Triticum aestivum L.) cultivar compared to a less susceptible cultivar (Rios et al., 2017). Reductions in the values of Fv/Fm, Y(II), the quantum yield of non-regulated energy dissipation [Y(NO)], and the photochemical quenching (qP) coefficient are noticeable in necrotic vein tissues induced by Colletotrichum truncatum in contrast to the surrounding leaf tissue in soybean (Glycine max L.) (Dias et al., 2018). Non-photochemical quenching (NPQ) processes increase in Podosphaera xanthii-infected melon leaves, which constitutes a major mechanism for the avoidance of photodamage (Polonio et al., 2019). Furthermore, different fungi have been shown to variably inhibit photosynthetic electron transfer reactions, which are a source of reactive oxygen species (ROS) (Duniway, 1982; Tang et al., 1996; Zhao et al., 2011). Lignin and callose activate the host defense system, giving the host plant time to initiate subsequent defense responses, such as the ROS burst and the regulation of antioxidant enzyme activity (Jacobs et al., 2003; Blumke et al., 2014). Callose accumulated in Arabidopsis (Arabidopsis thaliana L.) infected with PM, which enhanced host resistance (Ellinger et al., 2013). Meanwhile, lignin content increased in wheat against PM, preventing pathogen infection and spread by causing cell wall suberization (Bhuiyan et al., 2009). Moreover, increasing lignin content can significantly improve peroxidase (POD) activity (Lee et al., 2018). In response to Glomerella cingulata attack, POD activity was maintained at a higher level while superoxide dismutase (SOD) and catalase (CAT) were inhibited, reducing ROS scavenging capacity in a susceptible apple (Malus pumila) cultivar compared to a resistant cultivar. Excess ROS can cause serious damage to plant proteins and membrane systems. The scavenging of O2− depends on high activities of the SOD, POD, and CAT enzymes for rice (Oryza sativa L.) to resist Magnaporthe oryzae infection (Groß et al., 2013; Abdul et al., 2018). Malondialdehyde (MDA), which has long been used as a marker of lipid peroxidation under stress, increases twofold in wheat seedlings infected by Fusarium pseudograminearum (Boamah et al., 2021).
Ascorbic acid (AsA), as the most abundant antioxidant in plant, can directly mitigate the damaging effects of ROS or indirectly as a substrate for the ascorbate peroxidase enzyme (Macknight et al., 2017). AsA deficiency has been found to positively modulate plant's biotic defense cascades leading to better disease resistance response in Arabidopsis to Pseudomonas syringae (Pavet et al., 2005). In this scenario, the antioxidant systems exhibit an ever-increasing importance in the complex process of defense mechanisms against PM in A. selengensis. Nevertheless, detailed study is lacking on these indicators as regulatory mechanisms markers in A. selengensis infected by PM.
In this study, G. artemisiae was characterized using light microscopic and scanning electron microscopic (SEM) observations to investigate the responses of A. selengensis to PM. ITS and 28S ribosomal DNA (rDNA) regions were sequenced for supporting the identification of pathogen. We further determined the physiological and biochemical indicators such as ChlF, lignin, callose, and antioxidant enzymes in A. selengensis leaves infected by G. artemisiae. This study is a pilot study for providing basic knowledge and information for improving PM resistance of A. selengensis and also for other plant species.
Plant Materials and Powdery Mildew Isolation
Artemisia selengensis Turcz. was cultivated in the farm field of Northeast Agricultural University, China (45°43′55″ N, 126°43′21″ E). Leaves of A. selengensis with typical PM colonies were sampled in September 2021 and were further used for isolating the pathogen and for inoculation onto young seedlings. Seedlings were prepared by sowing seeds for pot culture in a greenhouse. Briefly, 10 seeds of A. selengensis were sown in PVC pots with sterile substrate soil, for a total of 10 pots, in early August. After the seedlings reached 15 cm in height (nearly 40 days of cultivation), pathogen inoculation was performed. The individual isolate, which was obtained from the field-sampled leaves, was purified by single-colony inoculation on healthy seedlings for five consecutive generations (Wen et al., 2011; Lebeda et al., 2012; Rallos et al., 2016). Controlled growth conditions in the greenhouse were set at 20/18 °C (day/night) and 12 h of light (125 µmol m⁻² s⁻¹).
Morphological Characterization of Golovinomyces artemisiae
Chasmothecia and conidia were removed from G. artemisiae-infected leaves with a dissecting needle, mounted in water, and observed under an optical microscope (Carl Zeiss Model Axioskop 40). Taxonomic characters were examined and recorded, including chasmothecial appendages, the number of asci and ascospores, and the lengths and widths of conidia and conidiophore foot cells. Fifty or more measurements were made for individual characters from each sample and compared to the pathogen species descriptions by Choi et al. (2014).
Scanning Electron Microscope Observation of Golovinomyces artemisiae
Leaves infected with G. artemisiae were cut into small squares of 5 mm in length around the veins, immediately placed in a vial containing 2.5% glutaraldehyde, and then washed with 2 ml of 0.1 mol l⁻¹ phosphate buffer (pH 6.8) three times, 10 min each time. The leaves were gradually dehydrated using 2 ml of 50, 70, and 90% ethanol solutions for 15 min each, respectively. Leaves were then transferred to a pure tert-butanol solution and left to stand for 20 min, washed once with an equal-volume mixture of anhydrous ethanol and tert-butanol, and then washed twice with pure tert-butanol, with submergence for 15 min each time. Finally, the samples were placed in a freezer at −20 °C for 30 min and transferred into an ES-2030 (HITACHI) freeze dryer for 4 h. Afterward, ice crystals were removed by sublimation, and the dried samples in vials were sputter-coated with a gold film in an ion coater and then observed and imaged by SEM (Hitachi SU-8010, Tokyo, Japan).
Molecular Identification and Phylogenetic Analyses of Golovinomyces artemisiae
Total genomic DNA was isolated from 100 mg of PM material (conidia and mycelia) using the cetyltrimethylammonium bromide (CTAB) method (Johanson et al., 1994). The ITS and 28S rDNA sequences were amplified using the ITS1/ITS4 (ITS1: 5′-TCCGTAGGTGAACCTGCGG-3′, ITS4: 5′-TCCTCCGCTTATTGATATGC-3′) and PM3/TW14 (PM3: 5′-GKGCTYTMCGCGTAGT-3′, TW14: 5′-GCTATCCTGAGGGAAACTTC-3′) primer pairs, respectively (White et al., 1990; Mori et al., 2000). The reaction procedure was 94 °C for 10 min; 32 cycles of 94 °C for 30 s, 57 °C for 30 s, and 72 °C for 90 s; 72 °C for 5 min; and termination at 4 °C. The PCR product was purified, ligated into the pEASY-Blunt Zero vector, and transformed into Escherichia coli, and a positive strain was sequenced. The sequences were uploaded to the National Center for Biotechnology Information (NCBI) database and used as queries in BLAST searches to identify the most similar sequences available in GenBank.
These sequences were collected and aligned using ClustalW (Thompson et al., 1994) for constructing the phylogenetic tree. The maximum likelihood (ML) method was used to generate phylogenetic trees based on the concatenated sequences of the ITS and 28S rDNA genes using MEGA version 7.0 software (Kumar et al., 2016). Bootstrap analysis was performed with 1,000 replications (Joseph, 1985).
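The bootstrap step (resampling alignment columns with replacement before re-estimating the tree) can be sketched in a few lines of code. The snippet below is a conceptual illustration only: the toy alignment and the `build_tree` placeholder are assumptions, and the trees in this study were actually produced with ClustalW and MEGA 7.0 rather than with this code.

```python
import random

def bootstrap_alignments(alignment, n_replicates=1000, seed=42):
    """Yield pseudo-alignments obtained by resampling columns with replacement.

    `alignment` maps sequence names to equal-length strings.
    """
    rng = random.Random(seed)
    names = list(alignment)
    length = len(alignment[names[0]])
    for _ in range(n_replicates):
        cols = [rng.randrange(length) for _ in range(length)]
        yield {name: "".join(alignment[name][c] for c in cols) for name in names}

def build_tree(aln):
    # Placeholder for an ML tree search on one pseudo-alignment; clade
    # frequencies across replicates give the bootstrap support values.
    raise NotImplementedError

# Hypothetical usage with a toy three-sequence alignment.
alignment = {"isolate_A": "ATGCTACG", "isolate_B": "ATGCTTCG", "isolate_C": "ATACTACG"}
replicates = list(bootstrap_alignments(alignment, n_replicates=10))
print(len(replicates), "bootstrap pseudo-alignments generated")
```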
Pathogenicity Assays of Golovinomyces artemisiae
Pathogenicity was verified by inoculating 10 healthy seedlings with the purified PM pathogen described above. Different paint brushes were used to dust conidia from one PM patch onto the leaves of other A. selengensis plants (Attanayake et al., 2010). Mock-inoculated (CK) leaves (i.e., leaves to which no conidia were attached) were used as controls to monitor and minimize potential contamination. Leaf symptoms were recorded every 1-2 days. Diseased leaves were collected for microscopic examination to observe the morphological characteristics of the inoculated pathogen. After 14 days, G. artemisiae-inoculated (GI) and CK leaves were used to measure ChlF and were then collected and immediately stored at −80 °C for the determination of antioxidant-related indexes.
Leaf Chlorophyll Fluorescence
Chlorophyll fluorescence parameters of GI and CK leaves were measured using the Imaging-PAM (MAXI) system (Walz, Germany). The fluorescence value (Ft) of the selected sample in the area of interest (AOI) was set within the range of 0.1-0.2, the saturation pulse frequency was set to one pulse per 20 s with an intensity of 4,000 µmol m⁻² s⁻¹, and the actinic light intensity was set to 86 µmol m⁻² s⁻¹ (Shi et al., 2020). The plant samples were dark-adapted for 20 min; the minimum fluorescence (Fo) and maximum fluorescence (Fm) of the samples were obtained using the measuring light and the saturating pulse light, respectively. The values and images of NPQ, actual photochemical efficiency [Y(II)], non-adjusting energy dissipation yield [Y(NO)], qP, and electron transport rate (ETR) were then obtained under actinic light. Fv/Fm was calculated as Fv/Fm = (Fm − Fo) / Fm (Maxwell and Johnson, 2000).
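These parameter definitions can be turned into a small helper for checking values exported from the imaging system. The sketch below only encodes the textbook relations for Fv/Fm, Y(II), and NPQ (Maxwell and Johnson, 2000); it is not part of the Imaging-PAM software, and the light-adapted inputs (Fm′, Fs) and the example readings are illustrative assumptions.

```python
def fluorescence_parameters(fo, fm, fm_prime, fs):
    """Compute basic chlorophyll fluorescence parameters.

    fo, fm       : minimum / maximum fluorescence after dark adaptation
    fm_prime, fs : maximum / steady-state fluorescence in the light
    """
    fv_fm = (fm - fo) / fm              # maximal PSII quantum yield, Fv/Fm
    y_ii = (fm_prime - fs) / fm_prime   # actual photochemical efficiency, Y(II)
    npq = (fm - fm_prime) / fm_prime    # non-photochemical quenching, NPQ
    return {"Fv/Fm": fv_fm, "Y(II)": y_ii, "NPQ": npq}

# Made-up readings for a healthy leaf and an infected leaf.
print(fluorescence_parameters(fo=0.20, fm=1.05, fm_prime=0.60, fs=0.35))
print(fluorescence_parameters(fo=0.25, fm=0.95, fm_prime=0.50, fs=0.38))
```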
Determination of Callose, Lignin, and Antioxidant-Related Indexes
For the assay of antioxidant-related indexes, 0.5 g of fresh leaves was homogenized in 2 ml of 50 mM phosphate extraction buffer [phosphate-buffered saline (PBS), pH 7.8] in an ice-cold mortar. The mixture was centrifuged at 12,000 g for 15 min at 4 °C to collect the supernatant. The supernatant was used to determine the contents of superoxide anion (O2−) and callose and the activities of CAT, POD, and SOD. Callose content was measured following the method of Köhle et al. (1985). A total of 0.2 ml of the supernatant was put into a 1.5-ml centrifuge tube; 0.4 ml of aniline blue (0.1%), 0.21 ml of HCl (1 mol·l⁻¹), and 0.59 ml of glycine/NaOH buffer (1 mol·l⁻¹, pH 9.5) were added in turn, and the mixture was reacted at 50 °C for 20 min. The mixture was cooled to room temperature and the fluorescence intensity was measured with a fluorescence spectrophotometer. The excitation wavelength was 400 nm, the emission wavelength was 500 nm, and the slit width was 5 nm.
Peroxidase was determined spectrophotometrically by monitoring the formation of tetraguaiacol from guaiacol (extinction coefficient at 470 nm) in the presence of hydrogen peroxide (H2O2) (Ranieri et al., 2000). The reaction mixtures consisted of 2.9 ml of 50 mM PBS (pH 7.0), 1 ml of 0.3 mM guaiacol, 1 ml of 0.1 mM hydrogen peroxide, and 0.1 ml of supernatant.
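POD activity is usually derived from the slope of the A470 increase via the Beer-Lambert law. The sketch below assumes an extinction coefficient of 26.6 mM⁻¹ cm⁻¹ for tetraguaiacol and a 1-cm light path; these values, and the scaling to fresh weight, are common conventions rather than figures stated by the authors.

```python
def pod_activity(delta_a470_per_min, aliquot_ml=0.1, assay_volume_ml=5.0,
                 extract_volume_ml=2.0, fresh_weight_g=0.5,
                 epsilon_mM=26.6, path_cm=1.0):
    """Guaiacol peroxidase activity, µmol tetraguaiacol min^-1 g^-1 fresh weight."""
    # Beer-Lambert: mM (= µmol/ml) of tetraguaiacol formed per minute in the cuvette
    delta_c = delta_a470_per_min / (epsilon_mM * path_cm)
    micromol_per_min = delta_c * assay_volume_ml
    # scale the 0.1-ml aliquot back to the full extract and to grams of tissue
    return micromol_per_min * (extract_volume_ml / aliquot_ml) / fresh_weight_g

# Hypothetical reading of 0.15 absorbance units per minute.
print(round(pod_activity(0.15), 1), "µmol min^-1 g^-1 FW")
```

The default arguments mirror the extraction described above (0.5 g of leaves in 2 ml of buffer, 0.1 ml of supernatant in a roughly 5-ml reaction mixture).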
Catalase was estimated by the rate of H2O2 decomposition at 240 nm (Havir and McHale, 1989). The reaction mixture contained 0.2 ml of supernatant, 1.5 ml of PBS (pH 7.8), 1 ml of distilled water, and 0.3 ml of 100 mM H2O2. The absorbance was recorded every 1 min for a total of 4 min.
Superoxide anion content was determined from the oxidation of hydroxylamine (Zhou et al., 2004). A total of 0.1 ml of supernatant was incubated at 25 °C for 20 min with a mixture of 0.9 ml of 65 mM phosphate buffer (pH 7.8) and 0.1 ml of 10 mM hydroxylammonium chloride; 0.2 ml of 17 mM sulfanilamide and 0.2 ml of 7 mM α-naphthylamine were then added to the mixture and incubated again at 25 °C for 20 min. An equal volume of chloroform was added. The mixture was centrifuged at 10,000 g for 3 min and absorbance was read at 530 nm.
Referring to the method of Morrison (1972), the lignin content was determined. A total of 0.5 g of fresh leaves was ground to a homogenate with 95% ethanol in a mortar, and the precipitate was collected after centrifugation at 4,500 rpm for 10 min. The pellet was washed three times with an equal volume of a 1:1 mixture of 95% ethanol and n-hexane, and the precipitate was collected and dried. The dried product was dissolved in 0.5 ml of 25% glacial acetic acid and then kept in a water bath at 70 °C for 30 min. Thereafter, 0.9 ml of 2 mol/l NaOH was added to terminate the reaction. A total of 5 ml of glacial acetic acid and 0.1 ml of 7.5 mol/l hydroxylamine hydrochloride were then added to the mixture. After mixing and centrifugation of the samples at 4,500 rpm for 5 min, 0.1 ml of the supernatant was aspirated and diluted with 3.0 ml of glacial acetic acid. Absorbance was measured at 280 nm using a spectrophotometer.
Ascorbic acid content was measured following the method of Kampfenkel et al. (1995). About 0.1 g of leaf sample was extracted with 0.5 ml of 6% trichloroacetic acid (TCA) and centrifuged at 12,000 g for 10 min at 4 °C. This assay is based on the reduction of ferric ion (Fe3+) to ferrous ion (Fe2+) by AsA in acid solution, followed by the formation of a red chelate between Fe2+ and 2,2′-dipyridyl. Samples were finally read for absorbance at 525 nm using a spectrophotometer.
Malondialdehyde content was determined using the thiobarbituric acid method (Heath and Packer, 1968). The supernatant (1 ml in volume) was mixed with 1 ml of thiobarbituric acid (0.6%) and then kept in a boiling water bath for 15 min. After cooling, the mixture was centrifuged at 4,000 g for 10 min. The absorbance of the supernatant was then determined at 450, 532, and 600 nm, respectively.
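MDA concentration is commonly estimated from these three absorbance readings with a corrected thiobarbituric-acid equation such as MDA (µmol L⁻¹) = 6.45 × (A532 − A600) − 0.56 × A450. The authors do not state which equation they used, so the coefficients in the sketch below are an assumption taken from that widely used formula, and the readings are placeholders.

```python
def mda_concentration(a450, a532, a600):
    """Estimate MDA concentration (µmol/L) from corrected TBA absorbances.

    The 6.45 and 0.56 coefficients come from a commonly used three-wavelength
    correction formula; they are an assumption, not values reported in this study.
    """
    return 6.45 * (a532 - a600) - 0.56 * a450

# Hypothetical absorbance readings for a control and an infected sample.
print(round(mda_concentration(0.120, 0.310, 0.050), 3), "µmol/L")
print(round(mda_concentration(0.150, 0.520, 0.060), 3), "µmol/L")
```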
Statistics and Analysis
All the data were analyzed using the Student's t-test with SPSS version 10.0 software (SPSS Incorporation, Chicago, IL, United States). Figures were plotted using GraphPad Prism version 9.00 (GraphPad Company, San Diego, CA, United States).
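For readers reproducing the GI-versus-CK comparisons outside SPSS or GraphPad, an unpaired Student's t-test can be run with a few lines of Python. The snippet is a minimal illustration; the numbers are placeholders, not data from this study.

```python
from scipy import stats

# Placeholder measurements for mock-inoculated (CK) and infected (GI) leaves.
ck = [0.805, 0.810, 0.802]
gi = [0.780, 0.772, 0.785]

t_stat, p_value = stats.ttest_ind(ck, gi)   # unpaired two-sample t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value <= 0.01:
    print("difference significant at P <= 0.01")
```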
Symptom of Powdery Mildew and Morphological Observation
Leaves were the major parts of A. selengensis infected by PM (Figure 1D). Whitish colonies with abundant spores were observed on both the adaxial and abaxial surfaces of the infected leaves (Figures 1A-E). Gradually, these infected leaves turned yellow and dark brown, with spherical chasmothecia formed on their surfaces (Figures 1F,G).
Molecular Phylogenetic Identification of Golovinomyces artemisiae
The determined ITS and 28S rDNA regions of this pathogen, 594 and 860 bp in length, were submitted to GenBank (ITS: MZ366322; 28S rDNA: MW989746). The phylogenetic tree constructed by the ML method showed that this pathogen and G. artemisiae belong to the same branch (95% bootstrap support), which confirmed the identification at the molecular level (Figure 4).
Pathogenicity Identification of Golovinomyces artemisiae
The mock-inoculated (CK) leaves remained free of symptoms during the entire period of the experiment in the greenhouse (Supplementary Figure 1A), whereas after 8-10 days the GI leaves showed typical symptoms, consistent with the diseased leaves in the field (Supplementary Figure 1B). The experiment was repeated several times, and all repetitions produced the same results. ITS and 28S rDNA sequences of conidia from the infected leaves further validated the results obtained with the purified G. artemisiae.
Leaf Chlorophyll Fluorescence Performances
Chlorophyll fluorescence data indicated that Fv/Fm in CK was significantly greater than that in GI. The images of the ChlF parameters showed the emergence of local necrosis in GI; at the same time, photochemical activity was inhibited and photodamage occurred (Figure 5A). The value of Fv/Fm for CK was between 0.80 and 0.81, whereas the value for GI was below 0.80 (Figure 5B). In terms of parameters related to light energy absorption and electron transfer, the values of qP, Y(II), and ETR in CK were 11.4, 10.0, and 8.8% higher than those in GI, respectively (Figures 5C,D,G). Evidently, the occurrence of PM inhibited the photosynthetic capacity of A. selengensis. The ChlF parameters associated with light energy dissipation, NPQ and Y(NO), showed opposite trends in the two groups: the value of Y(NO) was 4.8% higher in GI than in CK, while NPQ in CK was 53.5% higher than that in GI, a significant difference (Figures 5E,F).
Callose, Lignin, and Antioxidant System
The contents of callose and lignin were significantly increased, by 28.0 and 36.9%, respectively, in GI compared to CK (Figures 6A,B). MDA content in GI was higher (1.2-fold) than that in CK (Figure 6C); meanwhile, O2− content in GI was significantly higher (2.8-fold) (Figure 6D). In terms of changes in antioxidant enzyme activity, G. artemisiae infection resulted in reductions of 65.9 and 12.6% in CAT and SOD activities, respectively, in GI compared with CK (Figures 6G,H). The content of AsA, a non-enzymatic antioxidant, in GI was 84.8% of that in CK, a significantly lower level (Figure 6E). POD activity was 143.9% higher in GI relative to CK (Figure 6F). Values are means ± SE of three biological replicates; significant differences were calculated using the unpaired Student's t-test (**P ≤ 0.01).
DISCUSSION
Powdery mildew is one of the most frequently occurring fungal diseases of plants around the world. Considerable efforts and investments have been put into the control of the disease via the application of proper fungicides and/or the breeding of plant varieties tolerant/resistant to the disease. PM appears to be highly diverse, and the biology of its pathogens seems to be very complex (Glawe, 2008). A holistic approach combining studies of morphology with analyses of the ITS and 28S rDNA regions can accurately identify its causal fungi at the species level (Cunnington et al., 2003). To the best of our knowledge, the G. artemisiae cluster comprises sequences obtained from PM hosts of the genera Artemisia, Chrysanthemum, and Nipponanthemum (Bradshaw et al., 2017). In this study, we observed typical symptoms of PM on A. selengensis (Figure 1). These symptoms were identical to those previously reported on A. annua in Korea (Choi et al., 2014). However, due to the specific geographical and climatic environment of Northeast China, the physiological race(s) of G. artemisiae infecting A. selengensis appear to be quite different from those in other regions. Life cycles of PM pathogens can involve both a sexual state (teleomorph) and an asexual state (anamorph), or either can be lacking (Glawe, 2008). For example, chasmothecia of Erysiphe berberidis DC. were observed in Europe, but they were unknown in western Washington (Glawe, 2008). In this study, chasmothecia were observed, the conidiophores were shorter, and pathogenic development took longer than reported in Korea (Choi et al., 2014). Meanwhile, ITS sequence analysis revealed obvious base mutations (Choi et al., 2014; Chen et al., 2021). Based on the morphological identification and molecular phylogenetic analysis, this study suggests that the pathogen causing PM on A. selengensis in both the field and the glasshouse in Northeast China is G. artemisiae. As the most basic and important indicators of disease, a comprehensive analysis of antioxidant system and photosynthesis indicators is crucial to reveal the phenotypic and physiological changes of A. selengensis infected with PM.
As one of the most important physiological processes in plants, photosynthesis is inhibited by diseases and other stresses (Durian et al., 2016). The Fv/Fm parameter has been shown to be a sensitive indicator of photosynthetic performance, with optimal values close to 0.8 for most plant species (Krause and Weis, 1991). The Fv/Fm values obtained in GI were less than 0.8, indicating damage to the photosynthetic apparatus due to G. artemisiae infection (Figure 5B). Moreover, ETR was inhibited by PM in GI, leading to a further reduction in the degree of openness of the PSII reaction centers (Figure 5G). qP decreased in GI, which was consistent with the decreasing trend in leaves of Brassica juncea infected with a mosaic virus (Guo et al., 2005). In bean (Phaseolus vulgaris), the accumulation of reactive intermediates is prevented by increasing the NPQ level, which harmlessly dissipates excess light energy absorbed by the light-harvesting complex (Muller et al., 2001; Tietz et al., 2017). Therefore, the progressively increased Y(NO) values and decreased NPQ values indicated photooxidative damage in GI (Figures 5E,F). It can further be inferred from the Y(II) values that PM decreased the energy used for photochemical reactions in GI (Figure 5B), highlighting the reduction of the photosynthetic rate in A. selengensis following G. artemisiae infection. Early detection of PM infection in wheat leaves by means of fluorescence imaging was possible 2-3 days before visual symptoms became apparent (Kuckenberg et al., 2009). In this study, ChlF imaging showed that the parts of GI leaves infected by PM differed from the surrounding areas. The health status of A. selengensis can thus be assessed by monitoring the change in the Fv/Fm value. Collectively, ChlF is essential for detecting PM epidemics and examining plant health in a timely manner without causing damage.
Plants respond to pathogen invasion by activating a series of defense responses. The deposition of callose after Colletotrichum gloeosporioides inoculation of Stylosanthes guianensis was associated with cultivar resistance (Sharp et al., 1990). Our results showed that the damage caused by G. artemisiae infection may be mitigated by the increase of callose content in GI (Figure 6A). The increase of lignin content enhanced the activity of POD, which was consistent with the results in Arabidopsis (Lee et al., 2018). The synergistic effect of increased lignin content and enhanced POD activity strengthened the resistance of A. selengensis to PM (Figures 6B,F). However, in different mustard (B. juncea L.) cultivars, the lignin content at the pre-infection stage of Erysiphe polygoni DC. infection was higher than that at the diseased stage (Rathod and Chatrabhuji, 2010). Although numerous studies have shown that POD activity is positively correlated with plant disease resistance, in pumpkin kernel (Cucurbita pepo L.) POD activity in susceptible cultivars is higher than that in resistant cultivars. Thus, the markedly increased POD activity acted essentially in the hydrolysis of H2O2 in GI (Figure 6F). These results reveal large differences in the changes of the relevant indexes after disease occurrence in different plant species.
Reactive oxygen species production is one of the earliest cellular responses following successful pathogen recognition (Sharma et al., 2012; Camejo et al., 2019). O2− and H2O2 were generated in the apoplast of Arabidopsis infected by P. syringae (Grant et al., 2000). In this study, O2− content increased by about threefold in GI compared to CK, indicating serious damage in A. selengensis caused by G. artemisiae infection (Figure 6D). As another toxic byproduct of ROS metabolism, MDA increased significantly in GI, which was consistent with the findings in roots of brittle leaf disease-affected date palm (Phoenix dactylifera L.) (Saidi et al., 2012). Increased SOD activity has been pinpointed as the key ROS scavenger in the response of pear (Pyrus communis L.) to Erwinia amylovora (Azarabadi et al., 2017). Meanwhile, a higher CAT activity leads to lower H2O2 accumulation in rice infected with M. oryzae (Hou et al., 2015). Our results showed that the antioxidant capacity was limited due to the significantly decreased CAT and SOD activities in GI (Figures 6G,H). AsA accumulation triggers a defense system response in cacao (Theobroma cacao) tissues infected by Moniliophthora perniciosa (Dias et al., 2011). Moreover, the suppression of AsA synthesis affects photosynthetic electron transport in tomato infected with P. syringae (Yang et al., 2017). In this study, the decreasing AsA content impaired disease resistance and photosynthesis in GI (Figure 6E). A previous study showed that inhibition of photosynthetic electron transport inevitably led to the formation of O2− in wheat invaded by pathogens, and the levels of antioxidative systems and antioxidants were further increased (Yang and Luo, 2021). Combined with the decreased ETR and the significantly increased O2− in GI, we speculate that photosynthesis is affected by the fungus earlier than the antioxidant system. In conclusion, the pathogen with typical PM characteristics on A. selengensis leaves was purified. The conidia, conidiophores, and hyphae of the pathogen were observed under the light microscope and SEM. In light of the combined ITS and 28S rDNA sequence data, the PM pathogen of A. selengensis was identified as G. artemisiae. G. artemisiae inoculation resulted in damage to photosynthesis in A. selengensis: ETR, NPQ, qP, and Y(II) significantly decreased, but Y(NO) increased in infected leaves, further reflecting severe photodamage. The Fv/Fm value could be used as an indicator to monitor the health status of A. selengensis. In addition, severe stress was reflected by the significant increases in MDA and O2− contents in the infected leaves. SOD activity, CAT activity, and AsA content in GI decreased significantly, indicating an imbalanced antioxidant system and a decreased defense response capacity, while POD activity and lignin content increased significantly in GI; these are considered to be the key indicators of the response against G. artemisiae. The results may help to design PM control approaches for integrated disease control in A. selengensis and similar plants.
DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: https://www.ncbi.nlm. nih.gov/genbank/ (ITS: MZ366322, 28 rDNA: MW989746).
AUTHOR CONTRIBUTIONS
ZG and XS performed the experiment and data analysis and drafted the manuscript. LD, LX, and LQ helped in collection of data of the experiment. FX contributed to data interpretation and manuscript writing. DQ and YC designed and supervised the experiment. All authors agreed to submit the manuscript for publication. | 2022-06-03T13:34:22.594Z | 2022-05-26T00:00:00.000 | {
"year": 2022,
"sha1": "14cdeb48541aa16c5e2c8a4d1bb28e6dfd132df0",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "14cdeb48541aa16c5e2c8a4d1bb28e6dfd132df0",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
225458729 | pes2o/s2orc | v3-fos-license | Dragging of inertial frames in the composed black-hole-particle system and the weak cosmic censorship conjecture
We analyze a gedanken experiment in which a spinning particle that also possesses an extrinsic orbital angular momentum is captured by a spinning Kerr black hole. The gravitational spin-orbit interaction decreases the energy of the particle, thus allowing one to test the validity of the Penrose weak cosmic censorship conjecture in extreme situations that have not been analyzed thus far. It is explicitly shown that, to leading order in the black-hole-particle interactions, the linearized test particle can over-spin the black hole, thus exposing its inner spacetime singularity to external observers. However, we prove that the general relativistic effect of dragging of inertial frames by the orbiting particle contributes to the energy budget of the system a non-linear black-hole-particle interaction term that ultimately ensures the validity of the Penrose cosmic censorship conjecture in this type of gedanken experiments.
Introduction
Singularities in curved spacetimes represent extreme physical situations in which general relativity, Einstein's theory of gravity, loses its predictive power. In order to preserve the deterministic nature of classical general relativity in the presence of spacetime singularities, Penrose [1] has suggested that spacetime singularities that arise in gravitational collapse are always hidden inside of black holes. This intriguing idea, known as the weak cosmic censorship conjecture, has attracted the attention of physicists and mathematicians over the last five decades (see e.g., [2][3][4][5][6][7][8][9][10][11][12][13][14] and references therein).
The elegant singularity theorems of Hawking and Penrose [15,16] have revealed the physically interesting fact that generic solutions of the Einstein field equations may contain curvature singularities which, according to the cosmic censorship conjecture [1], should be hidden inside of black holes, invisible to distant (external) observers. Physical processes that threaten to remove the shielding horizon of a black hole and to expose its inner spacetime singularity to external observers are therefore forbidden by the Penrose weak cosmic censorship conjecture [1]. For the advocates of the cosmic censorship conjecture, the mathematically challenging (and physically interesting) task is to find out how such 'dangerous' physical processes, which threaten to violate the weak cosmic censorship conjecture, eventually fail to remove the black-hole horizon [2][3][4][5][6][7][8][9][10][11][12][13][14].
One may try to transform a near-extremal spinning Kerr black hole of mass M and angular momentum per unit mass a, which is characterized by the relation [17] M^2 − a^2 ≥ 0 (1), into a naked (horizonless) singularity by sending into the black hole particles that carry large amounts of angular momenta. In this context, it should be remembered that a Kerr spacetime with M^2 − a^2 < 0 does not contain an event horizon and therefore represents a naked singularity. Thus, the composed black-hole-particle gedanken experiment challenges the integrity of the black-hole shielding horizon. This type of gedanken experiment therefore allows one to test the validity of the celebrated Penrose cosmic censorship conjecture [1].
In the present paper we shall inquire into the physical mechanism that protects the horizon of a spinning Kerr black hole from being eliminated by the absorption of spinning particles that also possess extrinsic orbital angular momenta. Intriguingly, the gravitational spin-orbit interaction between the intrinsic and extrinsic angular momenta of the absorbed particle can decrease the energy which is delivered to the black hole for a given amount of the particle's total angular momentum [see Eq. (3) below]. Thus, as will become evident below, the gedanken experiment that we shall analyze in the present paper poses a physical challenge to the weak cosmic censorship conjecture which is greater than the ones studied in former gedanken experiments of this type [2][3][4][5][6][7][8][9][10][11][12][13][14].
In particular, former studies of the composed black-hole-particle system in the context of the Penrose weak cosmic censorship conjecture have not analyzed the influence of the gravitational spin-orbit interaction 3 on the final outcome of the physical absorption process.
Below we shall explicitly show that linearized test particles that carry orbital angular momenta and intrinsic (spin) angular momenta can over-spin 4 an absorbing Kerr black hole, thus exposing its inner spacetime singularity to external observers. However, we shall then prove that the general relativistic effect of dragging of inertial frames by the orbiting particle (the non-linear interaction between the particle's total angular momentum and the black-hole horizon generators) produces a non-linear black-hole-particle interaction term which, for a given value of the particle's total angular momentum, increases the energy of the absorbed particle. In particular, below we shall explicitly show that this intriguing general relativistic effect ultimately ensures the validity of the Penrose weak cosmic censorship conjecture [1] in the composed black-hole-particle gedanken experiment.
Description of the composed black-hole-particle system
We shall analyze a gedanken experiment in which a spinning particle, which also possesses an extrinsic orbital angular momentum, is captured by a near-extremal rotating Kerr black hole. As mentioned above, and as will become evident below, in this composed black-hole-particle system it is essential to take into account the non-linear interaction term between the absorbing black hole and the captured particle. In particular, the intriguing non-linear effect of dragging of inertial frames, which is caused by the orbital motion of the particle in the black-hole spacetime, makes the horizon generators rotate faster as the orbiting particle approaches the black-hole horizon [18]. Below we shall take into account the small non-linear correction to the angular velocity of the black hole [see Eq. (11) below], which reflects the dragging of inertial frames by the orbiting particle.
Footnote 3: It should be emphasized that, contrary to the gedanken experiment that we shall analyze in the present paper, former studies of the weak cosmic censorship conjecture [2][3][4][5][6][7][8][9][10][11][12][13][14] in the composed black-hole-particle system have not considered captured particles that possess both intrinsic and extrinsic angular momenta. Thus, the spin-orbit interaction, which will play a key role in our analysis [see Eq. (3) below], was irrelevant in these former studies of the weak cosmic censorship conjecture.
Footnote 4: That is, the final configuration, after the absorption of the test particle by the black hole, may violate the black-hole condition (1).
We consider a spinning particle of rest mass μ, total angular momentum J, intrinsic angular momentum (spin) s, and proper cylindrical radius R, which moves in the equatorial plane of a spinning Kerr black-hole spacetime. We shall assume that the intrinsic spin of the particle is orthogonal to the plane of motion. In addition, we shall assume that the physical parameters of the composed black-hole-particle system are characterized by strong inequalities which imply that the particle has negligible self-gravity and that it is much smaller than the geometric size of the central absorbing black hole. Our main goal is to challenge the validity of the Penrose weak cosmic censorship conjecture [1] in the most extreme physical situation. The greatest challenge to the conjecture is achieved when, for a given value J of the particle's total angular momentum, the energy E(J) delivered to the black hole by the captured particle is as small as possible. We shall therefore consider a particle which is released to fall freely into the central black hole from a radial turning point of its motion, a proper distance R outside the horizon [19][20][21]. The conserved energy E of the spinning particle in the rotating Kerr black-hole spacetime is given by the characteristic quadratic equation αE^2 − 2βE + γ = 0, where the mathematically cumbersome coefficients α, β, and γ are explicitly given in [22]. In particular, for a given set {μ, R, J, s} of the particle's physical parameters, the minimum energy (as measured by asymptotic observers) delivered to the rotating black hole by the captured particle is given by the expression (3) [22,23], which is written in terms of the radii r_+ and r_- of the Kerr black-hole horizons and the rationalized surface area of the black hole (4). The physical parameter Ω_c is the characteristic J-dependent angular velocity [18] of the black-hole horizon at the point of capture [see Eq. (11) below]. The last term on the right-hand side of the energy expression (3) represents the above-mentioned gravitational spin-orbit interaction between the extrinsic (orbital) angular momentum of the particle and its intrinsic (spin) angular momentum. It is important to stress the fact that, for J·s > 0, this spin-orbit interaction term decreases the total energy which is delivered to the black hole by the (spinning and orbiting) captured particle. In particular, in the J·s > 0 case, the energy (3) of the spinning particle in the black-hole spacetime is a decreasing function of the particle's spin s. Thus, the energy delivered to the black hole can be minimized by substituting into the energy expression (3) the maximally allowed value [24] s ≤ s_max = μR (5) of the particle's intrinsic spin. Interestingly, it has been explicitly proved by Møller [24] that a finite-size spinning particle which respects the weak (positive) energy condition must conform to the upper bound (5).
Taking cognizance of Eqs. (3) and (5), one obtains the expression (6) for the minimum energy of the spinning particle at the point of capture. Interestingly, in the regime of large angular momenta, one finds from (6) that, for a given value J of the particle's total angular momentum, the simple expression (7) provides a lower bound on the energy which is delivered to the spinning Kerr black hole by the captured particle.
Dragging of inertial frames and the Penrose weak cosmic censorship conjecture in the composed Kerr-black-hole-orbiting-particle system
In the present section we shall analyze the validity of the Penrose weak cosmic censorship conjecture [1] in the context of our gedanken experiment, in which a spinning black hole absorbs a spinning and orbiting particle which is characterized by the minimized energy expression (8).
Footnote 7: Note that, in the J/Mμ ≫ 1 regime, one finds the series of strong inequalities μ^2 R(r_+ − r_-)/J^2 ≪ R(r_+ − r_-)/r_+^2 ≪ 1. Here we have used the inequalities (r_+ − r_-)/r_+ ≤ 1 and R ≪ r_+.
It is important to stress the fact that, to leading order in the black-hole-particle interaction, the linearized particle moves on a fixed (unperturbed) background described by the Kerr spacetime. In particular, the zeroth-order angular velocity of the spinning Kerr black hole is given by [17] Ω^(0) = a/α, where α is the rationalized surface area of the black hole.
As we shall explicitly show below, in the context of our gedanken experiment, it is also important to take into account higher-order (that is, non-linear) interactions between the central black hole and the orbiting particle. In particular, as discussed above, the intriguing physical mechanism of dragging of inertial frames, which is caused by the angular momentum of the orbiting particle, makes the horizon generators rotate faster as the orbiting particle approaches the black-hole horizon. At the assimilation point of the particle, the black-hole angular velocity has changed from its linearized (unperturbed) value Ω^(0) to the corrected value Ω_c. In particular, in the regime of small angular momenta, one can use the perturbative expansion (11) [18] for the angular velocity of the black-hole horizon at the capture point of the particle, where the expansion coefficients are J-independent dimensionless functions of M and a. It is interesting to mention that Will [18] has proved that, in a physical system composed of a slowly rotating central black hole coupled to an axisymmetric ring of orbiting particles, the leading-order expansion coefficient in (11) is given by the simple value quoted in (12). In the present paper we shall go beyond the linearized test-particle approximation by taking into account the leading-order non-linear contribution to the energy of the captured particle in the composed black-hole-particle system. In particular, taking cognizance of Eqs. (8), (9), and (11), one finds the non-linear expression (13) for the minimum energy which is delivered to the spinning Kerr black hole by the captured particle. The assimilation of a particle with energy E_c^min [see Eq. (13)] and total angular momentum J by the black hole produces the changes M → M + E_c^min and Ma → Ma + J in the black-hole mass and angular momentum [Eqs. (14) and (15)]. Hence, the condition M_new^2 − a_new^2 ≥ 0 [see (1)] for the black hole to preserve its integrity after the absorption of the particle must be re-examined. Taking cognizance of Eqs. (13), (14), and (15), one finds that the black-hole condition can be expressed in the form (16). It is convenient to define the quantity (17), which yields the relations (18).
Footnote 9: It is worth mentioning that there are also non-linear self-interaction contributions of order O(μ^2/M) to the energy of the particle in the black-hole spacetime. Note, however, that in the regime J/Mμ ≫ 1 of large angular momenta [see (7)], this higher-order self-interaction contribution to the energy of the particle is negligible as compared to the non-linear interaction term O(J^2/M^3) which stems from the general relativistic effect of dragging of inertial frames by the angular momentum of the orbiting particle.
Footnote 10: Note that these characteristic changes in the black-hole physical parameters also imply a → (Ma + J)/(M + E_c^min).
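The bookkeeping behind these parameter updates can be checked numerically. The sketch below is only an illustration: it implements the updates stated above (M → M + E and black-hole angular momentum Ma → Ma + J, so that a_new = (Ma + J)/(M + E)), while the particle energy E is supplied by hand, since the explicit minimum-energy expressions (8) and (13) are not reproduced in this text.

```python
def overspin_check(M, a, J, E):
    """Return (M_new, a_new, M_new**2 - a_new**2) after absorbing a particle.

    M, a : black-hole mass and angular momentum per unit mass (G = c = 1 units)
    J, E : total angular momentum and energy delivered by the captured particle
    """
    M_new = M + E
    a_new = (M * a + J) / M_new       # a -> (Ma + J)/(M + E), as in footnote 10
    return M_new, a_new, M_new**2 - a_new**2

# Illustrative numbers only: a near-extremal hole and two hypothetical
# particle energies (neither is the paper's minimum energy).
M, a = 1.0, 1.0 - 1.0e-6
J = 1.0e-3
for E in (0.4 * J, 0.6 * J):
    _, _, condition = overspin_check(M, a, J, E)
    status = "(horizon preserved)" if condition >= 0 else "(over-spun)"
    print(f"E = {E:.1e}: M_new^2 - a_new^2 = {condition:+.3e} {status}")
```

The toy numbers merely show that, for a fixed J, a sufficiently small delivered energy would violate the black-hole condition, which is why the size of E_c^min is the crux of the argument that follows.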
Substituting (18) into (16), one can express the black-hole condition in the form (19). We shall henceforth assume that the absorbing Kerr black hole is a near-extremal one, whose deviation from extremality [see (17)] is much smaller than M. Keeping terms up to the appropriate second order in the small quantities of the problem, one obtains from (19) the black-hole condition (20). Inspecting the inequality (20), one immediately realizes that the black-hole condition can be violated by a linearized test particle (one for which the non-linear expansion coefficient in (11) is set to zero) [10,11]. We therefore learn from the black-hole condition (20) that one must take into account the non-linear interaction between the angular momentum of the captured particle and the black-hole generators in order to ensure the validity of the Penrose weak cosmic censorship conjecture [1] in the present gedanken experiment. In particular, taking cognizance of the dimensionless non-linear expansion coefficient (12), one can express the black-hole condition (20) in the form (21). It is clear that the black-hole condition (21) is respected. We therefore conclude that the non-linear interaction between the angular momentum of the particle and the black-hole generators (namely, the general relativistic effect of dragging of inertial frames by the orbiting particle) is essential for ensuring the validity of the cosmic censorship conjecture in this type of gedanken experiment.
It has been shown that the gravitational spin-orbit interaction experienced by the spinning and orbiting particle in the curved black-hole spacetime [an energy term proportional to J·s/M^3, see Eq. (3)] can decrease the energy delivered to the black hole by the captured particle. This important physical fact implies that the composed black-hole-particle system studied in the present paper poses a challenge to the weak cosmic censorship conjecture which is greater than the ones considered in former gedanken experiments of this type [2][3][4][5][6][7][8][9][10][11][12][13][14].
Interestingly, it has been demonstrated that, to leading order in the black-hole-particle physical interactions, the linearized test particle can over-spin the spinning Kerr black hole [see Eq. (20)], thus exposing its inner spacetime singularity to external observers.
However, we have pointed out that the intriguing physical mechanism of dragging of inertial frames by the orbiting particle contributes an additional (non-linear) positive term of order O(J^2/M^3) to the energy budget of the particle in the curved black-hole spacetime [see Eq. (13)]. In particular, we have explicitly shown that, by increasing the energy of the captured particle, this general relativistic effect (namely, the non-linear interaction between the angular momentum of the orbiting particle and the black-hole generators) ultimately ensures the validity of the Penrose weak cosmic censorship conjecture [1] in this type of gedanken experiment.
Data Availability Statement
This manuscript has no associated data or the data will not be deposited. [Authors' comment: I would like to emphasize that all relevant mathematical calculations and data are explicitly presented in this paper.] | 2020-08-06T09:09:17.803Z | 2020-07-01T00:00:00.000 | {
"year": 2020,
"sha1": "fa803cc0b4af04fd5190ff689b400104beab82f6",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1140/epjc/s10052-020-8285-z.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "cb3b6e2c2f4230e77250468cf20b6390576b13ae",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
151266658 | pes2o/s2orc | v3-fos-license | Effectiveness of Text Messaging for the Management of Psychological and Somatic Distress in Depressed and Anxious Outpatients
Abstract: Background: Cognitive behavioral group therapy has developed several techniques in order to make the treatment of depressive and anxiety disorders more effective. Particularly, the “homework” is a tool in order to practice therapeutic skills in ecological settings. When working with this aim, it is often necessary to support patient compliance. Researches have shown the efficacy of sending a text to the patients in order to support the patient compliance, but only a few data are available on the effectiveness of sending text in the treatment of depression and anxiety.
INTRODUCTION
The prevalence of anxiety and depressive disorders is high; at a global level, it is estimated that 4.4% of the world's population suffers from depression, whereas 3.6% suffer from anxiety [1]. In Italy, the 12-month prevalence of any affective disorder is equal to 3.5% (CI 2.9-4.0). Psychological therapies are often integrated with pharmacological therapy in the treatment of these conditions [8,9].
Cognitive Behavior Therapy (CBT) [9] is an example of an evidence-based individual psychological intervention for depression and anxiety [10-12], often used to supplement pharmacotherapy. Even though there is less documentation on the effectiveness of Cognitive Behavioral Group Treatments (CBGT), there are studies that indicate promising results [13-16]. Some relaxation-based cognitive behavioral approaches also have some efficacy across a range of clinical conditions, including anxiety [17] and depression [18,19]. CBT for depression and anxiety can easily be applied in group settings, which may prove more cost-effective [19,20].
CBT uses techniques whose effectiveness is verified experimentally. Over time, CBT, both individual and group-based, has developed several techniques to make treatment more effective. In particular, "homework" is a tool used to practice therapeutic skills in ecological settings. When working with this aim, it is often necessary to support the patient's compliance.
The use of the Short Message Service (SMS, or text messaging) is a relatively recent possibility to support compliance and, more generally, welfare and health promotion treatments: the first study on the effectiveness of text messages in the treatment of asthma dates back to 2002 [21], and the first randomized clinical trial on smoking cessation was carried out in 2005 [22].
Delivering strategies via mobile-phone technology is particularly interesting because the use of mobiles is extremely widespread, also among Italians [23].
Various studies have shown the effectiveness of sending texts to patients in order to support their compliance. For instance, Webb, Joseph, Yardley, and Michie [24] highlight that the effectiveness of Internet-based treatments has been enhanced by using additional methods of communication, among them SMS.
The available literature on the effectiveness of SMS is, however, mostly focused on medical settings, and only few data are available on its effectiveness in depression and anxiety disorders.
Although text messages can be used for different purposes [25], they have often been used for behavior modification; for instance, to promote smoking cessation [26,27], to support physical activity [28,29], to send motivational messages [30], to provide a cue to action [31], or to improve treatment adherence in schizophrenia [32,33]. Webb et al. [24] suggest that personal contact via text message could support behavior change and in this way influence health behavior at any time.
The aim of this study is to verify the effectiveness of SMS in supporting patients' compliance with relaxation and mindfulness exercises, comparing two outpatient groups who underwent the same treatment (CBGT), of which only one received a weekly motivational SMS.
Participants
After the psychological treatment, a subgroup of 39 participants out of 79 (Yes SMS Group) was reached by a weekly SMS. The mean age of this subgroup was 49.61 (± 13.43) years, with 14 males and 25 females; 16 participants had received a diagnosis of anxiety and 23 of depressive disorder. The second subgroup (No SMS Group) had a mean age of 50.0 (± 11.01) years, with 10 males and 30 females; 16 participants had received a diagnosis of anxiety and 24 of depressive disorder.
All the psychiatric diagnoses were made through psychiatric interviews conducted by senior psychiatrists unrelated to this study; no psychiatric tests were used.
Inclusion Criteria
Age between 18 and 65; an established diagnosis of anxiety or depression; signed informed consent; and partial response to pharmacological treatment (following the guidelines [34-36]). All patients had received at least two cycles of drug treatment of adequate duration and dosage for each cycle as indicated by the guidelines, for a mean period of 3 months prior to their referral to the psychological outpatient Unit.
Exclusion Criteria
Personality disorders, intellectual disability comorbidities, drug/alcohol addiction, schizophrenia and other psychotic disorders, and/or anxiety and depressive disorders due to medical condition.
Symptom Checklist 90 R (SCL-90 R [37])
A 90-item self-report instrument evaluating nine symptom dimensions: Somatization, Obsessive-Compulsive, Interpersonal Sensitivity, Depression, Anxiety, Hostility, Phobic Anxiety, Paranoid Ideation, and Psychoticism. The sum of the 90 items produces the Global Severity Index (GSI), a measure of overall psychological distress. The internal reliability (Cronbach's α) of the scales ranges from 0.74 for Hostility to 0.97 for the GSI [38]. However, factor analytic studies of the Italian version have suggested that the GSI is an optimal measure for the assessment of distress symptoms [39].
Beck Depression Inventory (BDI [40])
A 21-item self-report rating inventory that assesses the clinical symptoms of depression by asking about feelings over the past week. The score is the sum of the positive answers, ranging from 0 to 63; scores of 10 or greater reflect the presence of some level of depression. The internal reliability (Cronbach's α) of the scale is between 0.73 and 0.92, with a concurrent validity between 0.55 and 0.73 for non-psychiatric subjects [41].
Hamilton Depression Rating Scale (HAMD [42])
A 21-item clinician-administered questionnaire used to indicate depression and evaluate recovery in adults. Scores of 8 or higher indicate depression, and a non-clinical Italian sample has been found to have a mean score of 3.5 [43]. The scale has an internal reliability range of 0.46-0.97 [44].
Hamilton Anxiety Rating Scale (HAMA [45])
A 14-item clinician-administered questionnaire to indicate adult anxiety and recovery. Scores of 8 or higher indicate anxiety [46], and a non-clinical Italian sample has been found to have a mean score of 3.6 [43]. The scale has an internal reliability range of 0.74-0.96 [47].
Self-rating Anxiety Scale (SAS [48])
A 20-item self-report scale that assesses primarily somatic symptoms associated with anxiety. The respondent indicates how often (s)he has experienced each symptom on a 4-point Likert scale consisting of "none or a little of the time" (coded as 1), "some of the time" (coded as 2), "good part of the time" (coded as 3), and "most or all of the time" (coded as 4). The raw total score ranges from 20 to 80. In a clinical sample, the test-retest reliability ranges between 0.81 and 0.84 over a period of 1 to 16 weeks [49].
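Purely as an illustration of the scoring just described, the following minimal Python sketch sums 20 item responses coded 1-4 into a raw SAS total; the example responses are invented, and any reverse keying or index-score conversion used in practice is deliberately omitted.

def sas_raw_score(responses):
    """Return the raw SAS total for 20 item responses coded 1-4 (reverse keying omitted)."""
    if len(responses) != 20:
        raise ValueError("The SAS has 20 items.")
    if any(r not in (1, 2, 3, 4) for r in responses):
        raise ValueError("Each response must be coded 1-4.")
    return sum(responses)

# Example: a respondent answering "some of the time" (2) to every item scores 40.
print(sas_raw_score([2] * 20))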
The State-Trait Anxiety Inventory (STAI [50])
A 40-item self-report measure of anxiety. All items are rated on a 4-point scale (e.g., from "Almost Never" to "Almost Always"). Internal consistency coefficients for the scale have ranged from 0.86 to 0.95; test-retest reliability coefficients have ranged from 0.65 to 0.75 over a 2-month interval [50]. Evidence attests to the construct and concurrent validity of the scale [51].
Procedure
On referral to the Psychological Unit, all patients were given information on the treatment and the current study. All outpatients were enrolled in an 8-week group treatment (one 2-hour group session per week) while following their pharmacological TAU (Treatment As Usual). Patients were treated with conventional doses of medication, mainly those recommended for the treatment, and during the group treatment there were no major changes in the pharmacotherapy. Each session was run by two co-therapists: a psychotherapist and a psychologist.
All 79 participants were assessed for overall psychopathological symptoms, depression, and anxiety before and after the group treatment, and at the 3-month follow-up.
Treatment
The program was modeled after the clinical programs of the Benson-Henry Institute for Mind Body Medicine at the Massachusetts General Hospital [52,53]. The training was designed to provide tools for symptom management in outpatients. In the program, patients were taught a variety of techniques aimed at helping them with their psychological symptoms, as a self-regulatory integrated approach to stress reduction and emotion management, including: psychoeducation on different topics, from stress to lifestyle well-being; relaxation techniques; mindfulness techniques; and cognitive restructuring techniques.
The treatment is described in detail in a study by Truzoli et al. [16].
Upon completion of the treatment, at the end of the 8 scheduled weeks, a subgroup of 39 participants out of 79 (Yes SMS Group) was reached by a weekly SMS for the whole 3-month period between the end of the treatment and the scheduled follow-up session. The Yes SMS Group was not selected randomly, but according to the date of arrival at the Psychology Unit after being referred by psychiatrists.
The text sent by SMS was: "Vi incoraggiamo a continuare gli esercizi anche questa settimana. Praticare il rilassamento e la mindfulness migliora il vostro benessere" (translation: "We encourage you to continue the exercises also this week. Practicing relaxation and mindfulness improves your well-being."). The text message was sent at 5.00 pm every Tuesday.
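The paper does not describe how the messages were technically dispatched; the sketch below is only a hypothetical illustration of how a fixed weekly send time (Tuesday, 5.00 pm) could be computed, with send_sms as a placeholder function rather than any system actually used in the study.

from datetime import datetime, timedelta

def next_tuesday_at_five_pm(now):
    """Return the next Tuesday 17:00 send time after 'now' (Monday = 0, Tuesday = 1)."""
    days_ahead = (1 - now.weekday()) % 7
    candidate = (now + timedelta(days=days_ahead)).replace(hour=17, minute=0, second=0, microsecond=0)
    if candidate <= now:  # already past 17:00 this Tuesday
        candidate += timedelta(days=7)
    return candidate

def send_sms(phone_number, text):
    # Placeholder only: the study does not describe its dispatch system.
    print("to", phone_number, "->", text)

print(next_tuesday_at_five_pm(datetime(2019, 1, 31, 12, 0)))  # 2019-02-05 17:00:00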
Statistical Analysis
To compare the two subgroups, the Mann-Whitney test was used. The effect sizes (d [54]) for post-treatment differences between the phases were calculated in line with Cochrane recommendations [55]; in our case, pre-test and three-month follow-up data were used.
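The software used for the analysis is not reported; as a hedged illustration only, the group comparison and the effect size could be computed in Python as follows (the scores are invented, and the pooled-standard-deviation formula for Cohen's d is an assumption about how d was operationalised).

import numpy as np
from scipy.stats import mannwhitneyu

yes_sms = np.array([6, 8, 5, 9, 7, 6, 10, 4])     # invented follow-up scores, Yes SMS group
no_sms = np.array([11, 9, 12, 8, 13, 10, 9, 14])  # invented follow-up scores, No SMS group

u_stat, p_value = mannwhitneyu(yes_sms, no_sms, alternative="two-sided")
print("U =", u_stat, "p =", round(p_value, 3))

def cohens_d(pre, followup):
    """Effect size d between pre-test and follow-up scores, using a pooled standard deviation."""
    pre, followup = np.asarray(pre, float), np.asarray(followup, float)
    pooled_sd = np.sqrt((pre.std(ddof=1) ** 2 + followup.std(ddof=1) ** 2) / 2)
    return (pre.mean() - followup.mean()) / pooled_sd

pre = np.array([18, 22, 20, 25, 19, 23, 21, 24])  # invented pre-test scores
print("d =", round(cohens_d(pre, yes_sms), 2))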
RESULTS
All participants concluded the treatment and participated in the follow-up.
The pre-test comparison between the Yes SMS Anxiety and the No SMS Anxiety subgroups showed no significant differences in any test (Mann-Whitney test: all ps > .06); in the same way, comparing the Yes SMS Depression and the No SMS Depression subgroups, no significant differences emerged in any test (Mann-Whitney test: all ps > .55).
In addition, in the pre-test comparisons between the Yes SMS Anxiety and the Yes SMS Depression subgroups, and between the No SMS Anxiety and the No SMS Depression subgroups, there were no significant differences in any test (Mann-Whitney test: all ps > .10).
In the pre-test, the means of all the scales used fall within a clinical range in both the Yes SMS and the No SMS group. Table 1 shows, for the Yes SMS and No SMS groups, the means and standard deviations of all the tests used for the three phases (pre, post and follow-up), the percentages of improvement between pre-test and follow-up, and the 'd' values (effect size) comparing the pre-test and the follow-up.
As indicated in the legend of Table 1, comparing the Yes SMS with the No SMS group using the Mann-Whitney test, significant differences emerged for the HAMD test at follow-up (z = -2.35), and for the HAMA at post-test (z = -2.26) and at follow-up (z = -4.02).
As described in the Participants section, the Yes SMS group and the No SMS group consist of substantially the same number of participants diagnosed with depression or with anxiety; in addition, in the pre-test the means of all the scales used fall within a clinical range. Thus, the number of participants in each diagnostic class of the two groups and the initial clinical situation of the participants are unlikely to have a significant influence on the results of the comparison.
Regarding the percentage of improvement, both groups improved, with a greater improvement for the Yes SMS group on almost all the tests used. It should be noted that the improvement results from the comparison between pre-test and follow-up. As a consequence, the outcome reflects the improvement produced by the treatment combined with the SMS.
The 'd' values are medium for the SCL-90 R, around medium for the BDI, SAS and STAI, and large for the HAMD and HAMA, but larger for the Yes SMS group.
Table 2 shows, for the Yes SMS Depression and No SMS Depression subgroups, the means and standard deviations of all the tests used for the three phases.
Comparing the Yes SMS Depression with the No SMS Depression subgroup using the Mann-Whitney test, significant differences emerged for the HAMA test at follow-up (z = -2.46). Regarding the assessment of depression, it should be noted that each of the two groups improves from pre-test to follow-up, but no significant differences emerge between the two groups. This result will be analyzed in detail in the Discussion, with reference also to the type of text message sent and to the differences between self-report and clinician-rated scales.
Table 3 shows the means and standard deviations of all the tests used for the three phases for the subgroups Yes SMS Anxiety and No SMS Anxiety.
Comparing the Yes SMS Anxiety with the No SMS Anxiety subgroup using the Mann-Whitney test, significant differences emerged for the HAMD test at follow-up (z = -2.04), for the HAMA at post-test (z = -2.01), and for the HAMA at follow-up (z = -3.47).
DISCUSSION
The results demonstrate that the treatment had good patient acceptability, as no participant dropped out of the cognitive behavioral group treatment.
Both groups (Yes SMS and No SMS) improved from pre-treatment to follow-up in all the assessed dimensions (see improvement percentages and effect sizes). Such a result was expected in consideration of the previous evidence [15].
It is interesting to highlight the fact that the weekly SMS, used as a prompt, seemed to work as a simple and effective support for patients.
Comparing the two groups (Yes SMS and No SMS), regardless of diagnosis, the Yes SMS group showed significantly better outcomes in depression (HAMD) at follow-up, and in anxiety (HAMA) both at post-treatment and at follow-up.
Comparing the subgroups taking into account the diagnoses, the Yes SMS subgroup with a diagnosis of anxiety showed significantly better outcomes in anxiety (HAMA) at post-treatment and at follow-up and in depression (HAMD) at follow-up; the Yes SMS subgroup with a diagnosis of depression showed better outcomes in anxiety (HAMA) at follow-up.
The evidence that the Yes SMS Anxiety subgroup also improved depressive symptomatology at follow-up can be explained by the fact that, if anxiety is reduced, patients improve their overall well-being by recovering the hope of a better life, with beneficial effects on their mood. A similar mechanism may explain the evidence that the Yes SMS Depression group improved anxious symptomatology at follow-up.
It should not be excluded that the treatment and the practice of relaxation and mindfulness exercises have influenced the symptomatic area common to depression and anxiety (features that overlap in the two diagnostic classes, such as muscle tension, sleep disorders, asthenia, irritability, etc.).
Moreover, in relation to the comparison between the Yes SMS Depression and No SMS Depression subgroups, significant differences emerged for the HAMA test only. It should be noted that all participants with depression improved their depressive symptoms; so, we can hypothesize that this had a reassuring effect, with a greater impact on the anxious component when the treatment was associated with sending SMS. Furthermore, it could be assumed that sending targeted SMS could be more effective on mood. Indeed, Head, Noar, Iannarino, and Grant Harrington [56] indicate that the mean effect size of text messaging in health promotion interventions gets close to medium magnitude. In addition, they observed that the larger effect sizes have been found with tailored messages (messages aimed at a specific individual, or based on specific demographic and psychosocial variables) or targeted messages (messages aimed at a specific group, such as smokers or depressed people). The use of personalized strategies, such as using the name of the participants, also seems useful to improve the effectiveness of the SMS [57]. Finally, Head et al. [56] suggest taking into consideration the possibility of planning the timing of text messages in relation to the behavior to be supported (for example, at the end of the working day when a person has to decide whether or not to go to the gym).
Thus, considering the typology of participants in this study, it will be interesting to investigate the differential effectiveness of changes in the content of the SMS message and in the scheduled time for messaging. It could be possible to make the SMS more personalized (using the participant's name) and more tailored with respect to the diagnosis (for example, possible mood improvements could be highlighted for participants with depression and possible improvements in worries for participants with a diagnosis of anxiety). Finally, it could be possible to better choose the time at which text messages are sent based on demographic factors, such as the profile of housewife, unemployed or employed.
This study employed clinician-rated and patient self-report measures of depression and anxiety.
The differences between the scales, which do not completely overlap, may partially explain the inconsistent results between the scales in both depression and anxiety. A symptomatological improvement was detected on both types of scales, but there were some differences in the degree to which symptoms improved according to clinician-rated and patient-rated scales (outcomes on the clinician-rated scales are higher than the self-report outcomes). However, it is well known that in the area of depressive disorders the agreement between self-reported and clinician-rated measures is far from perfect, even though there is a moderate to strong correlation between clinician-rated scales and self-reported questionnaires [58-60]. It could be assumed that this is the case for anxiety disorders as well [61].
There could be many reasons for these differences, such as a) slightly different foci of the questionnaires used by clinicians and patients, and b) the degree to which particular symptoms may be regarded as important by the patients for their own functioning. In any case, it is reassuring that, even if at a lower level, a symptomatic improvement was also found on the patient-rated scales.
CONCLUSION
The clinical effect of the treatment can be assessed as positive overall. As expected, the brief multi-component program was successful with those patients who had previously shown little change in their symptomatology with pharmacotherapy, despite the fact that patients had been under the same drug treatment dosage for the previous three months and during the whole period of group treatment. Symptoms of anxiety and depression can be modulated and reduced by learning self-management and self-regulation skills. This short-term treatment offers a cost-effective tool for treating the most common psychiatric disorders seen in public health settings.
Compared with previous work, this study adds some evidence on the effectiveness of adding the use of SMS to motivate participants to perform relaxation and mindfulness exercises. This effect can also be traced back to the fact that people feel better cared for even after the end of the therapeutic process. This supports compliance with the therapeutic indications, which imply inserting the exercises into the daily routine.
A future hypothesis to be verified could be a change in the content and timing of sending SMS messages, as previously discussed.
Finally, it should be highlighted that, unlike several studies which have used only patient self-report measures of depression and anxiety, this study also employed clinician-rated measures. Uher, Perlis, Placentino, Dernovsek, Henigsberg, Mors, Maier, McGuffin, and Farmer [62] have highlighted that self-report and clinician-rated outcomes are not equivalent, each providing unique information that is relevant to the clinical analysis. In general, the most accurate prediction of outcomes can be achieved when both clinician and self-rating assessments are available [54].
STUDY LIMITATION
A first observation concerns the fact that the patients to whom the text messages were sent had not been chosen randomly. This reflects the observational nature of the study and the usual clinical practice in mental health services; in any case, the two groups were equivalent at pre-test on the variables studied.
A second limitation is that the sample size when patients are split into two diagnostic classes is small, and so the results should be interpreted with caution.
Another limitation is the lack of a control group; the gathering of data from a control group can therefore be a future goal.
In any case, the improvement after CBGT with respect to the baseline, when participants had undergone only the psychopharmacological treatment, suggests that it might be useful to integrate it with the pharmacological approach.The proposed treatment could, therefore, be considered as one of the tools available to the clinician to work in the perspective of integrated treatments.
ETHICS APPROVAL AND CONSENT TO PARTICIPATE
Not applicable.
HUMAN AND ANIMAL RIGHTS
No Animals/Humans were used for studies that are the basis of this research.
CONSENT FOR PUBLICATION
The manuscript contains no individual data, such as personal details or audio-video material. Standard informed consent was obtained for participation in this research.
Table 1. Mean (standard deviations) for overall symptoms (SCL-90 R), depression and anxiety in pre, post and follow-up treatment, as well as the percentage (%) of improvement between pre-test and follow-up, and effect size (d) comparing the pre-test and the follow-up, for the Yes SMS and No SMS groups.
Legend: Comparison of Yes SMS with No SMS groups using the Mann-Whitney test: * p < .05; ** p < .0001. Significant differences are shown in bold.
Table 2. Mean (standard deviations) for overall symptoms (SCL-90 R), depression, and anxiety for the Yes SMS Depression and No SMS Depression subgroups in pre, post and follow-up treatment.
Legend: Comparison of Yes SMS Depression with No SMS Depression subgroups using the Mann-Whitney test: * p < .05. Significant differences are shown in bold.
Table 3. Mean (standard deviations) for overall symptoms (SCL-90 R), depression, and anxiety for the Yes SMS Anxiety and No SMS Anxiety subgroups in pre, post and follow-up treatment.
Legend: Comparison of Yes SMS Anxiety with No SMS Anxiety subgroups using the Mann-Whitney test: * p < .05; ** p < .0001. Significant differences are shown in bold. | 2019-02-11T03:16:44.203Z | 2019-01-31T00:00:00.000 | {
"year": 2019,
"sha1": "37f7d92b54e342c9ffcda58f3c8ba814ad318e8c",
"oa_license": "CCBY",
"oa_url": "https://openpsychologyjournal.com/VOLUME/12/PAGE/12/PDF/",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "37f7d92b54e342c9ffcda58f3c8ba814ad318e8c",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology"
]
} |
231828237 | pes2o/s2orc | v3-fos-license | The cost disadvantage of steep slope viticulture and strategies for its preservation
The falling fallow of steep slope vineyards is caused by cost disadvantages that have not been analysed so far. This study quantified the production costs of different types of steep slopes, identified cost drivers within viticultural processes and assessed the impact of grape yield on the production cost for vertical shoot positioning (VSP) systems. It also examined under what conditions the reshaping of steep slope vineyards into transversal terraces (TTs) is economically viable. Costs were derived from a dataset of 2321 working time records for labour and machine hours from five German wine estates over three years. The costs for standard viticultural processes were compared across five site types with different mechanisation intensities by univariate analysis of variance with fixed and random effects. The net present value (NPV) of reshaping slopes into horizontal terraces was also assessed. Manual management of steep slopes was determined to be 2.6 times more costly than standard flat terrain viticulture. The cost disadvantage of steep slopes mainly stems from viticultural processes with limited mechanisability that require specialised equipment and many repetitions. Current subsidies fall short of covering the economic disadvantage of manual and rope-assisted steep slopes. Climate change-related drought and yield losses further increase the economic unsustainability of steep slopes. Under certain conditions, the transformation of manual steep slope sites into TTs can be a viable economic option. Strategies to reduce the cost disadvantage are outlined. The estimated cost benchmarks provide critical input for steep slope wine growers’ cost-based pricing policy. These benchmarks also give agricultural policy reliable indicators of the subsidies required for preserving steep slope landscapes and of the support needed to transform manual steep slope sites into TTs.
Keywords: steep slope viticulture, production costs, mechanisation, climate change, transversal terraces, economic sustainability, Germany
INTRODUCTION
Planting vines on steep slopes has permitted viticulture in climatically marginal suitable zones. The practice has a long tradition in Europe, with the famous steep slope valley along the river Mosel dating back 2000 years to Roman times. The slopes provided climatic advantages for viticulture through improved insolation in spring and autumn based on the inclination of the slopes towards the sun, which was required to bring the grapes to ripeness (Hoppmann et al., 2017). Historically, steep slope vineyards have made use of otherwise unsuitable agricultural land, as flat terrains have been reserved for the production of foodstuff.
Disadvantages from limited mechanisability and climate change
Nowadays, steep slope viticulture faces threats on two fronts: cost and climate. Viticulture on steep slopes has always been more burdensome than on flat terrain. While this extra effort was initially marginal when all viticulture involved manual work, its disadvantage increased sharply with the growing mechanisation of flat terrain sites starting in the 1950s (Schreieck, 2016;Strub et al., 2021a).
Climate change has transformed the former climatic advantage of steep slopes for viticulture into a disadvantage. Dependent on the soil setting, intensified solar radiation often leads to problematic conditions on steep slopes. Reduced water retention capacity and high evapotranspiration often induce water stress that results in reduced yields (Hofmann and Schultz, 2015). Because of water scarcity, new plantations generally take up to three years longer to establish and bear fruit compared to flat terrain sites. Intense solar radiation can cause sunburn on the berries, which in turn alters the phenolic structure of wines and can negatively affect the sensory structure of white wines (Pons et al., 2017;Ramos et al., 2007;van Leeuwen and Darriet, 2016). These effects on the quantity and quality of wines have economic consequences for steep slope wine growers.
The ongoing decrease in the acreage of steep slope vineyards has been attributed to these detriments. For example, in Germany's largest wine-growing state, Rhineland-Palatinate, steep slopes have declined by 28 % between 1999 and 2015 (Strub and Loose, 2016). So far, there has been a lack of economic research on steep slope viticulture, particularly on the effects of limited mechanisation and yield losses on production costs. Although viticulture has become increasingly mechanised in recent decades, the exact cost disadvantages for different types of steep slopes remain unknown. Reliable cost information is indispensable for wine producers' pricing decisions. Full costs must be covered if wine estates are to be economically sustainable (Strub et al., 2021a). The intense price competition in the wine market (Loose and Pabst, 2019) and low consumer awareness and appreciation of steep slope wines pose significant challenges for steep slope wine producers in covering their full cost. The wine sector will therefore benefit from reliable empirical information about cost drivers, the economic impact of yield losses and strategies to reduce costs in steep slope viticulture.
Transformation of steep slope sites into transversal terraces (TTs)
The installation of transversal terraces (TTs) along the contours of a hill instead of rows running in the direction of steepest slopes (DSS) has been performed for decades in areas such as Baden in Germany and Priorat and Penedès in Spain to enable mechanisation in steep slope viticulture (Ramos et al., 2007;Stanchi et al., 2012). As a second advantage, TTs prevent rainwater from easily flowing down the hill, which is particularly important for strong rain events (Oliveira, 2001). However, so far there is no agreement whether terraced vineyards have a generally higher water retention capacity (Oliveira, 2001;Ramos et al., 2007). The installation of TTs requires massive movements of soil, initiating a substantial intervention in the landscape (Cots-Folch et al., 2006). Particular soil and topographic conditions must be met for the installation of TTs (Huber, 2015); which then require constant maintenance to prevent soil erosion and mitigate the risk of landslides (Tarolli et al., 2014). Because a considerable share of the surface is used for the embankments carrying the terraces, the building of terraces leads to a decrease of the number of plants per hectare, depending on gradient and the embankment height (Huber, 2015).
Positive external effects of steep slopes for society
While steep slopes are no longer required to grow ripe grapes, they still provide positive external effects to society in the form of benefits for tourism and biodiversity (Cox and Underwood, 2011;Job and Murphy, 2006;Tafel and Szolnoki, 2020). European agricultural policy pays subsidies to steep slope wine growers to compensate for the benefits of the public goods provided. Part of those subsidies is dedicated to increasing mechanisation by, for instance, transforming vertical steep slopes into TTs. To date, it remains unclear to what extent these subsidies cover the actual cost disadvantage. To make an informed decision, society in general and agricultural policy, in particular, depend on reliable information about the cost of subsidies required for the preservation of otherwise economically unsustainable steep slope viticulture.
Research questions
This study aimed to analyse the cost structures in the management processes for steep slope viticulture with a focus on the cost effects of mechanisation and yields with respect to cost-saving potential. The study also examined the cost-saving potential of transforming vertical steep slopes into TTs.
The first set of research questions addressed the effects of mechanisability on cost differences between vineyard site types:
RQ1: What are the cost disadvantages of various steep slope viticultural systems compared to standard flat sites (assuming identical yields)?
RQ2: Which viticultural process increases costs most substantially on steep slope sites?
RQ3: How do differences in yields impact cost differences between viticultural systems?
The second set of research questions assessed whether transforming DSS steep slopes into TT sites is an economically viable option to overcome cost disadvantages and to sustain steep slope viticulture:
RQ4: To what degree can annual costs be reduced by reshaping steep slopes into TTs?
RQ5: When do annual cost savings pay off the cost of installing TTs?
MATERIALS AND METHODS
The process steps of viticulture are first presented with their degree of possible mechanisation on flat terrain sites. External factors of terrain and the orientation of rows towards the slope resulted in a total of five different vineyard site types that differed in the degree to which specific viticultural processes could be mechanised. Total and single process step costs of these five site types will be compared in the analysis from a data set of labour and machine time records.
Processes of viticultural management
Viticultural management consists of different processes that are performed in a specific order throughout the vegetation period. The management cycle starts with pruning in winter and ends with picking the grapes in autumn. The cycle can be subdivided into three main complexes: winter pruning, general viticultural management and harvesting (modified based on Müller et al., 2000;Strub et al., 2021a). The required process steps and their execution vary depending on the training system of the vines. This study was limited to vineyard sites trained in a trellis with vertical shoot positioning (VSP). The standard processes performed on an annual basis in this system are listed in Table 1, together with their maximum degree of mechanisation at flat terrain sites and their required frequency within one year.
For flat terrain, almost all processes can be fully mechanised except for Tying (200), Straw application (1700), and particular methods of Shoot positioning (400, 500) and Yield regulation (800). These processes are all performed only once during the vegetation period (see Table 1). All processes that must be performed frequently (Pest management and Soil management) can be almost fully mechanised at flat terrain sites.
Site types
In the last section, the maximum degree of mechanisation of viticultural processes related to flat terrain sites was described. Three main external factors can limit mechanisability: "slope and access to vineyard sites", "the orientation of rows towards the slope", and "the training system" (Strub et al., 2021a). As this paper is limited to vineyards with a VSP system, the factor training system is not relevant to the progress of the analysis. The "slope and access to vineyard sites" factor can be broken down into three levels: "no limitation", "limited access for machines" and "no access to machines" (Column 1 in Table 2). Rows can be oriented in the DSS or on TTs (Column 2 in Table 2). The combination of the levels of both factors results in five different, optimally mechanised site types (last column in Table 2) that differ in the degree of mechanisation of the viticultural processes of "general management" and "harvesting" (Columns 3 and 4 in Table 2). These types will be detailed in the following.
Site type 1-Standard: In flat terrain, standard narrow track tractors and standard (grape) harvesters (SHs) can be used for viticultural management and harvesting (as detailed in Table 1).
Site type 2a-SSHs: For slopes with a gradient above 35 % to 40 %, depending on soil conditions, SHs must be replaced by steep slope harvesters (SSHs) or manual labour for the harvesting process (Walg, 2007).
Site type 2b-Rope: For slopes with a gradient above 40 %, standard tractors can no longer operate (Grečenko, 1984; Walg, 2007; Yisa et al., 1998). Instead, for general viticultural management, crawler tractors, in combination with winch-and-rope systems, are used to prevent the tractor from sliding down the hill (Grečenko, 1984; Walg, 2007). These systems permit the mechanisation of most processes, which are also mechanised for flat terrain. However, the use of a crawler tractor critically depends on good soil structure. In the case of rainfall, it can be impossible to enter a steep slope vineyard with machinery. Moreover, once the crawler tractor is secured with a rope, every row needs to be passed twice (downwards and upwards in the same row), resulting in more working and machine hours compared to standard tractors (Schreieck, 2016).
Site type 2c-TTs: Rows are planted on TTs that permit the use of a standard narrow or crawler tractor for general viticultural management in combination with an SSH for harvesting. The TTs created today are usually wide enough for one row of vines, which are planted towards the edges of the terraces with enough space between the row and the embankment for a narrow track tractor or a crawler tractor to pass. The tractors do not need any additional securing by winch and rope because they drive on flat terrain. Thus, the disadvantage of winch-and-rope systems, double-passing rows, is eliminated (Leimbrock, 1984).
Site type 3-Manual: On the most challenging steep slope sites, no access for machinery is possible, due to either the gradient or the location. This restriction necessitates manual labour for most processes. Although requiring special permits and at a high cost, pest management at these extreme sites can be mechanised using helicopter spraying.
Database of work and machine time records
This study's data set consisted of 2321 working time records from 28 different vineyards that represented the five vineyard site types.
TABLE 1. Standard processes of viticultural management in VSP systems (modified based on Müller et al., 2000).
Notes: VSP-vertical shoot positioning; * maximum degree of mechanisation at flat terrain sites; ** referred to as "Greening management" in Strub et al. (2021a).
The codes in the fourth column in Table 1 were created to simplify and structure the recording of working times and will be referred to throughout the paper. Aligning process steps to main process complexes causes codes to not be ordered strictly numerically. This set was part of a more extensive data set (Strub et al., 2021a). The data were collected throughout 2017, 2018 and 2019 at five larger, management-led wine estates located in five different German wine-growing regions. Not all sites were sampled in all three years. For all number-coded viticultural activities (see Table 1 and Appendix I), workers recorded labour and machine hours in daily diaries. For comparability, the time records were standardised to per hectare values. Details of the sample are shown in Table 3.
The relative share of site types represents the typical distribution within the five wine estates as well as in Germany overall and therefore differs in the number of observations.
Detailed information on the sites analysed in this study is provided in Appendix II. Because of the limited number of observations, those vineyard characteristics could not be included in the cost models. All sites were managed according to integrated principles; no site was cultivated according to organic principles.
Notes (Table 3): DSS-direction of steepest slope; TT-transversal terrace; SH-standard harvester; SSH-steep slope harvester. * reference site type; ** in deviation from Table 2, for some observations the harvest was suboptimally mechanised, but this did not affect the cost because there was no significant cost difference between SSHs and manual harvesting (Strub et al., 2021a); hence, cases can be jointly analysed. Site type 2c is only analysed descriptively because of the small number of observations.
The sites were predominantly planted with white varieties, mainly Riesling. The planting patterns mostly reflected standard German row distances of about 2 metres and common plant distances of around 1.2 metres. Row distances deviated partially in steep slope sites: the previously common narrower distance of 1.6 metres was also observed, as well as very wide distances that resulted when the middle row was taken out to permit access for crawler tractors. The planting years were widely distributed between 1979 and 2014.
Selection of process steps for analysis
Out of the complete list of 21 viticultural activities recorded (see Appendix I), some steps were excluded from the comparative analysis, such as (2100) Irrigation.
Although the costs for these steps were not included in the total cost, they only represented about 1 % of the total cost in this sample. The process steps (1500) Under-vine cultivation and (1600) Chemical weed control are two substitutive options for removing weeds and are rarely performed jointly by one estate. Therefore, it is sensible to analyse them jointly as a single process: Weed removal (code 1500+).
All process steps included in the comparative cost analysis are listed with corresponding sample sizes per site type in Table 4. The number of observations differs across single process steps.
The majority of steps are imperative for viticulture and are performed at each vineyard site, while a few process steps are not compulsory and are thus performed in fewer cases (e.g., Lowering the wires and Yield regulation).
Valuing time with cost
The per hectare working time records were valued with cost estimates for labour and machine hours (for full details, see Strub et al., 2021a). The labour cost was based on union wage agreements as well as federal minimum wage provisions, depending on the type of process and the required workers' qualifications, and included non-wage labour costs (AGV Hessen e.V. and IG BAU, 2010; Federal Ministry of Labour and Social Affairs Germany, 2019). The machine cost included costs for depreciation, interest for tied-up capital, expenses for maintenance, repair and storage, as well as fuel consumption, insurance and taxes based on Walg (2016), Becker and Dietrich (2017) and ÖKL (2020). The costs for pest control by helicopter and harvesting by SSH were based on contractors' prices and also included costs for personnel and their profit margin.
Average cost shares of the total cost for all process steps were calculated across all observations of each site type, independent of whether the viticultural process step was actually conducted or not. Thereby, the relative share was small for processes that were not compulsory and were rarely performed, and the sum of shares added up to 100 %. Total cost shares would exceed 100 % if the average was only calculated across those vineyards performing the processes.
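As a hedged illustration of this valuation step, the short Python sketch below converts per-hectare labour and machine hours into costs and relative cost shares; the hourly rates and the three records are placeholders, not the wage agreements or machine cost rates actually used in the study.

LABOUR_RATE = 12.0    # €/h, placeholder for the union wage incl. non-wage labour costs
MACHINE_RATE = 25.0   # €/h, placeholder for the derived machine cost rate

records = [  # hypothetical per-hectare records: (process step, labour h/ha, machine h/ha)
    ("100 Winter pruning", 60.0, 2.0),
    ("1000 Pest control", 10.0, 8.0),
    ("900 Harvesting", 80.0, 0.0),
]

costs = {step: labour * LABOUR_RATE + machine * MACHINE_RATE for step, labour, machine in records}
total = sum(costs.values())
for step, cost in costs.items():
    print(f"{step}: {cost:7.0f} €/ha ({cost / total:5.1%} of total)")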
The costs included in the analysis of variance were limited to labour and machine costs that in the following are referred to as "total cost". For the full viticultural cost, the following cost components would have to be added:
a) costs for viticultural materials and consumables of around 1000 €/ha (Becker and Dietrich, 2017);
b) depreciation costs for the vineyard plantations (around 1000 € per year and hectare for DSS, higher for TTs);
c) interest for tied-up capital for land and vineyard plantations, which is highly variable depending on land value;
d) costs related to the transit time from the machine shop to the vineyards, which are highly specific to individual wine estates.
Analysis of variance with fixed and random effects (RQ 1-2)
The data include related observations from five wine estates across three vintages. To account for this interrelatedness, univariate analysis of variance with fixed and random effects was conducted. Site type served as a fixed effect, and Year and Estate served as random effects. A series of univariate models of variance with fixed and random effects were estimated in SPSS to test whether the dependent variables total viticultural and process-related costs differed significantly between site types. Post-hoc differences between the site types were estimated using Tukey-B. Because of the limited number of observations (n = 3), site type 2c TT could not be included in the analysis of variance.
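The models were estimated in SPSS; purely as an illustrative analogue, a linear mixed model with Site type as a fixed effect and crossed random effects for Estate and Year could be sketched in Python with statsmodels as follows (the file name and column names are hypothetical).

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("time_records.csv")  # hypothetical file with columns total_cost, site_type, estate, year
df["all"] = 1                          # single dummy group so estate and year act as crossed random effects

vc = {"estate": "0 + C(estate)", "year": "0 + C(year)"}
model = smf.mixedlm("total_cost ~ C(site_type)", data=df, groups=df["all"], vc_formula=vc)
result = model.fit(reml=True)
print(result.summary())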
The effect of yield on cost per litre (RQ 3)
Out of the processes analysed, only (900) Manual harvesting (positive correlation) and (800) Yield reduction (negative correlation) depend on yield.
The costs of all other process steps can be assumed to be independent of yield. To account for the yield effect on costs, based on practitioners' experience, the cost for manual harvesting was reduced by 20 % for yields below 50 hl/ha and increased by 20 % for yield levels above 100 hl/ha. Also, the cost for yield regulation was set to zero for yields below 30 hl/ha. Total machine and labour costs were divided by different yield levels to obtain the cost per litre. Material cost, cost of depreciation and interest, as well as the cost of transit time were not included.
The average German yield of 90 hl/ha (2014 to 2018, Federal Statistical Office Germany, 2015-2019) for the standard site type 1 was used as the reference value for the analysis. For ease of interpretation, the costs of the other types dependent on yield were expressed as relative factors, where a factor of 2 represented 100 % higher cost.
For the different site types, the yield levels analysed were chosen to represent ranges from 20 hl/ha to 150 hl/ha observed in German vineyards. There are no official data available for yields at different site types. Practitioners estimated 75 hl/ha as the average German yield level for site types 2b and 3, but considerably lower levels down to 20 hl/ha were observed for the driest sites without irrigation. The maximum yield level for quality wine of the German Mosel region was limited to 125 hl/ha, which was chosen as the maximum of the range analysed for site types 2b and 3. Because of higher water availability, fully mechanised sites can mostly produce larger yields, and 50 hl/ha was chosen as the minimum value for the yield range for site types 1 and 2a.
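To make the calculation concrete, the sketch below applies the yield adjustments described above and divides the adjusted total cost by the yield. The split of the 12,320 €/ha total cost of a manual site into yield-independent, harvesting and yield-regulation components is hypothetical, so the outputs only approximate the figures reported in the Results.

def cost_per_litre(base_cost, harvest_cost, regulation_cost, yield_hl_per_ha):
    """Adjust yield-dependent steps as described in the text and return €/litre (1 hl = 100 l)."""
    if yield_hl_per_ha < 50:
        harvest_cost *= 0.8
    elif yield_hl_per_ha > 100:
        harvest_cost *= 1.2
    if yield_hl_per_ha < 30:
        regulation_cost = 0.0
    return (base_cost + harvest_cost + regulation_cost) / (yield_hl_per_ha * 100)

# Hypothetical split of the 12,320 €/ha total cost of a manual (type 3) site:
for y in (20, 50, 75, 90, 125):
    print(y, "hl/ha:", round(cost_per_litre(8000.0, 3800.0, 520.0, y), 2), "€/litre")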
Cost comparison with transversal terraces (TTs) (RQ 4)
The mean costs for site types 2b and 3 were descriptively compared to the mean cost for site type 2c TT from n = 3 observations to obtain the cost differences. Because of the limited number of observations, inferential statistics could not be performed.
Profitability of conversion into transversal terraces (TTs) (RQ 5)
The advantage of the reduction of the viticultural cost of type 2c TT sites was compared to the cost of conversion, taking into account risk and time.
The net present value (NPV) and time of amortisation were calculated with formulas (1) and (2) to assess the profitability of the conversion of type 2b and 3 sites into type 2c TT sites:
NPV = -C0 + Σ_{i=1}^{T} Ci / (1 + r)^i   (1)
where NPV = net present value, C0 = initial investment, Ci = cash flow at year i (i = 1, …, 30), r = discount rate, and T = useful life of 30 years for the vines. The useful life of the terraces exceeds that of the vines. The initial investment C0 corresponds to the installation costs, which summarise the different cost positions for the installation of terraced sites detailed in Table 5. The amount of 74,228 € includes vineyard management for the young vines for the first two years (Table 5). The cash flow Ci is derived from the annual saving of vineyard management costs through improved mechanisation and is discounted at the interest rate.
Scenarios with two different interest rates were analysed. First, the standard cost of capital in the wine sector of r1 = 4 % was applied, reflecting the higher risk of private equity compared to debt capital. As a second scenario, a higher discount rate of r2 = 8 % was applied to reflect the high risk from climate change and limited experience with the transformation of DSS sites to TTs in Germany.
The amortisation period in years t* reflects the time at which the cost savings balance or exceed the investment (the NPV is zero):
t* = min { t : -C0 + Σ_{i=1}^{t} Ci / (1 + r)^i ≥ 0 }   (2)
where t* = amortisation period in years. At NPV = 0 €, the economic situation equals the (unprofitable) reference situation: the cost of the transformation is covered. Benefits will only arise after the time of amortisation.
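A minimal numerical sketch of formulas (1) and (2), using the installation cost from Table 5 and, as a simplifying assumption, a constant annual cash flow equal to the saving reported later for manual sites; the results therefore only approximate the NPV and amortisation figures given in the Results.

def npv(c0, cashflows, r):
    """Net present value of an initial outlay c0 and a list of yearly cash flows at discount rate r."""
    return -c0 + sum(c / (1 + r) ** i for i, c in enumerate(cashflows, start=1))

def amortisation_period(c0, annual_saving, r, max_years=60):
    """Smallest number of years after which the discounted savings cover the investment."""
    for t in range(1, max_years + 1):
        if npv(c0, [annual_saving] * t, r) >= 0:
            return t
    return None

C0 = 74228       # €/ha installation cost, including two years of young-vine management (Table 5)
saving = 6100    # €/ha assumed constant annual saving for a converted manual (type 3) site
print(round(npv(C0, [saving] * 30, 0.04)))   # NPV over the 30-year useful life at r1 = 4 %
print(amortisation_period(C0, saving, 0.04))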
Reference condition A: transforming unprofitable steep slopes
The reference condition is an unprofitable type 2b or 3 site, where the full cost, including depreciation, interest and material costs, exceeds revenues. It is assumed that these sites are fully depreciated; i.e. they have reached or exceeded the end of their useful life of 30 years. Because of this unprofitability, replanting steep slopes in DSS is economically irrational, and these sites risk falling fallow. The full cost of installation must be taken into account in the analysis of this condition. Profit losses for the first five years, when the vines do not produce any yield, are not to be included here, because these unprofitable sites did not produce any profit.
Reference situation B: transforming profitable steep slopes
The reference situation is different when a steep slope vineyard (such as a famous single vineyard site) is profitable and there are viable plans to replant it with, for instance, grape varieties that are more suitable from a climate and/or market perspective. Because it would be replanted in any case, the cost of the plantation of 30,000 €/ha cannot be counted towards the cost of transformation to TTs. Hence, the required investments are lower.
Similarly, if a vineyard is transformed into TTs but is not yet fully depreciated, the residual value must be added to the installation cost. Profit losses for the initial five years without yield would also occur when replanting in DSS and are therefore not to be included.
Subsidies for conversion
The effect of subsidies on the profitability of conversions was analysed. Agriculture and Consumer Protection, 2017). Therefore, only the marginally higher subsidy of 6000 €/ha applies in this condition.
Factors not included in the analysis
Two factors, the yield and the water availability of TT sites, were not included in the analysis. The relative advantage of the availability of water at TT sites over DSS sites has not yet been sufficiently examined and quantified. Similarly, it is still unknown to what degree the lower planting density of up to 50 % (Huber, 2015) caused by embankments taking up space affects yield.
Total cost differences between viticultural systems (RQ1)
The results of the statistical analysis are detailed in Table 6. There was a substantial and highly significant difference in total cost between site types (Column C, F = 26.6, p < 0.001, fixed effect). At a smaller effect size, total cost also differed significantly between wine estates (Column D, F = 4.7, p < 0.01, random effect), suggesting a smaller influence of winegrowers' managerial decisions on the total cost. The random factor Year did not affect total cost. The post-hoc test confirmed highly significant differences in total cost between vineyard site types, which increased the more mechanisation was limited. The total cost of manual type 3 sites was on average 2.6 times as high as the total cost of type 1 standard flat terrain sites (12,320 €/ha compared to 4,720 €/ha). With cost factors of 1.6 and 2.1, respectively, site types 2a SSH and 2b rope were positioned between both extremes. To answer RQ1: steep slope sites cause significantly higher total labour and machine costs than flat terrain sites, and the costs increase the more mechanisation is inhibited by slope and access.
TABLE 6. Univariate model of variance of total cost and cost per process step with fixed and random effects, post-hoc tests, as well as absolute and relative cost differences of manual steep slope type 3 versus standard type 1.
Notes: Columns C-E: Univariate model of variance with fixed effect (Site type) and random effects (Estate, Year). F-fixed effects; R-random effects. Columns F-I: Post-hoc test Tukey-B for dependent variable "total cost per process step" and fixed effect "Site type" as an independent variable. Different superscripts indicate significantly different values at p = 0.05. Columns J-K: Difference of mean value of manual type 3 (Column I) and standard type 1 (Column F) absolute in € and relative in % *** p ≤ 0.001; ** p ≤ 0.01; * p ≤ 0.05.
Cost differences for viticultural process steps between site types (RQ 2)
For 9 out of 13 viticultural processes, the costs differed significantly between the site types (F-statistics in Column C of Table 6). The factor Site type had the most substantial effect on the costs of the processes Yield regulation, Harvesting, Trimming, Cover crop management and Pest control. Of these five processes, Pest control showed the highest increase in the relative share of costs across all site types. Because of its high frequency, Pest control represents the most costly process step in relative terms for site type 3 (28 % cost share) and the third most costly process step for types 2a and 2b (14 % and 16 %, respectively). The Trimming and Cover crop management processes also had a higher frequency (Table 1), but their relative share of costs only doubled for the least mechanisable sites.
Differences in the mechanisability of single processes (Table 1 and Table 2) were reflected in significant cost differences between the individual site types compared to the standard type 1 and in relative cost-shares (Table 7). Type 2a only differed from the standard type 1 by using an SSH for the harvest, which resulted in a significantly higher cost (3.7 times higher) for Harvesting and a significantly higher total cost (60 % higher). For type 2a, Harvesting represented 30 % of the total cost compared to 13 % for type 1 (Table 7).
Type 2b was further limited by the rope system requirement for general viticulture (Table 2). Compared to the standard, this requirement resulted in significant cost increases for 5 of 13 processes: Trimming, Harvesting, Pest control, Cover crop management and Weed removal. Because of less time-efficient processes resulting from the double passing of rows and the higher machine cost for rope systems, the cost of these five processes was on average 3.9 times as high as that of standard type 1. Since the total cost more than doubled, with an increase of 110 %, the relative cost share of these five processes was about twice as high relative to type 1 (Table 7).
As expected, manual type 3 showed the largest number of processes for which costs were significantly higher than for standard type 1 (8 out of 13 processes analysed). Detailed absolute and relative differences are provided in Columns J and K of Table 6. The Pest control cost increased the strongest, by a factor of 10, due to expensive external helicopter service providers or extensive manual work. Costs for Cover crop management, Defoliation and Yield regulation increased by factors of 7.4, 6.8 and 2.5, respectively, and significantly differed from all other site types.
The cost for Harvesting, representing between 13 % and 30 % of the total cost, did not differ between the three steep slope site types but was about 3.6 times as high compared to type 1. Although SSHs enabled mechanical harvesting for types 2a and 2b, the higher machine cost currently still offsets the labour cost saved compared to type 3 manual steep slopes. As expected from Table 1, the costs of generally manual processes, such as Tying, Lowering the wires and Shoot positioning, did not differ significantly across site types.
Besides the three factors (1) higher investment cost for specialised machines, (2) less time-efficient processes and (3) the frequency of processes, the degree of necessity of viticultural processes is the fourth factor that explains cost differences. This factor expresses how imperative a process is for vineyard management. The frequency of process observations in Table 4 indicates that viticultural management does not necessarily have to include processes such as Yield regulation, Cover crop management, Lowering the wires and, partially, Defoliation. These processes show very high absolute cost differences in Table 6, but their relative share across all vineyards in Table 7 only increases marginally because many wine estates refrain from conducting these processes at all on steep slopes. For instance, Cover crop management is required for flat terrain sites to permit the trafficability of machines, but it is not often performed at type 3 sites where machines cannot be used in any case.
To summarise the results for RQ2, the factors mechanisability, frequency of repetition and necessity of the processes determined relative cost disadvantages. The Winter pruning, Harvesting, Pest control and Cover crop management processes showed the highest absolute differences and were thus the most potent cost drivers for site types 2b and 3. All these processes are mandatory except for Cover crop management, which can be omitted from very steep sites. Pest control and Cover crop management require several repetitions throughout the vegetation cycle, and potentially small cost differences add up across repetitions. For the Winter pruning and Harvesting processes, type 2b and type 3 sites were disadvantaged through time-inefficient and expensive mechanisation or manual labour.
The influence of random factors Estate and Year on variance
The random factor Estate had a significant effect on 6 out of 13 process steps. The effect was most substantial for Shoot positioning, Shoot thinning and Pest control. For both shoot-related processes of canopy management, the variance explained by the Estate factor was higher than that of Site type, suggesting that wine estates differ in their canopy management and can thereby influence and reduce the costs of processes that jointly represent between 9 % and 21 % of the total viticultural cost. The random factor Year only affected the Yield regulation process that was related to the plentiful 2018 harvest, when most wine estates reduced their yield significantly more than in other years. For all other processes, the random factor Year did not significantly explain the variance, suggesting generalizable results.
The influence of yield on cost per litre (RQ 3)
The per litre cost for different yield levels was calculated from labour and machine costs (total cost), not including the costs for materials, depreciation, interest and transport time. The results are presented in Figure 1 as factors relative to the cost of 0.52 €/litre for the average yield of 90 hl/ha for standard type 1. In Figure 1, there is a distinct convex shape, and cost per litre decreases less than proportionally with rising yields. For type 1, cost per litre decreases to 0.31 €/litre when the yield rises to 150 hl/ha. On the contrary, cost per litre increases more than proportionally when yields decline. For type 3 manual, cost per litre increases from 1.37 €/litre for a yield of 90 hl/ha to 5.32 €/litre when yields decline to 20 hl/ha, resulting in a cost 10 times higher than the reference.
At a given yield on the x-axis, the vertical cost differences between the vineyard types represent the cost disadvantages from limited mechanisation. The horizontal effect on costs from lower yields represents the uncontrollable effect of climate change and lower availability of water as well as the controllable effect of yield reductions. Considering the substantial increase of costs with lower yields, their leverage on per litre price is considerably stronger than the effect of mechanisability (vertical differences between the curves).
To answer RQ3: Yield levels play a more critical role in the profitability of steep slope sites than mechanisability.
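The convex shape of the cost curves follows from dividing largely area-based costs by yield. A minimal sketch (Python; the per-hectare cost used here is an illustrative placeholder, not a figure from the study, which uses yield-dependent cost functions) shows how per-litre cost rises disproportionately as yields fall:

```python
# Illustrative only: assumes a fixed area-based cost per hectare, a simplification
# of the study's yield-dependent cost functions.
def cost_per_litre(cost_per_ha_eur, yield_hl_per_ha):
    litres_per_ha = yield_hl_per_ha * 100  # 1 hl = 100 litres
    return cost_per_ha_eur / litres_per_ha

for y in (150, 90, 60, 30, 20):
    print(y, round(cost_per_litre(12000, y), 2))  # 12,000 euro/ha is a made-up example value
```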
Cost reduction from reshaping steep slopes into horizontal terraces
For type 2c TT, the average total cost and process costs are provided in Column A of Table 8. Compared to the standard type 1, the total cost is 34 % higher. Of all steep slope sites, the TT type has the lowest cost disadvantage.
Reshaping type 2b rope and 3 manual steep slopes into TTs can strongly reduce annual labour and machine costs by 3600 €/ha and 6100 €/ha, or 37 % and 49 %, respectively. The highest absolute cost-saving potential comes from Harvesting and Winter pruning for both types and Pest control and Yield regulation for type 3. The relative cost can be reduced the most for Yield regulation, Defoliation and Trimming.
Profitability of conversion to transversal terraces (TTs)
At the time of amortisation, the cost of conversion equals its benefit. Further benefits result in a profit that can be discounted to its NPV. For unprofitable vineyards that would fall fallow (reference condition 1), the transformation of type 3 starts to pay off after 17 years, accruing an NPV of about 31,132 €/ha during its useful life of 30 years (under standard discount rate r 1 = 4 %, first line in Table 9). The transformation of type 2b only pays off after 45 years, and its NPV after 30 years is negative. The NPV increases to 11,937 € (55,130 €) for site type 2b (3 manual) when maximum subsidies of 24,000 € are included in the analysis. Then, the investment starts to pay off after 21 and 10 years, respectively.
If the high risk of climate change and its impact on water availability and temperature are considered through a risk premium in the higher discount rate of r 2 = 8 %, the transformation is only economically viable for subsidised type 3 sites.
For profitable vineyards, the replanting cost does not count towards the transformation into TTs (reference condition 2), and the investment is hence reduced by the average planting cost of 30,000 €/ha to 44,228 €/ha. The transformation is paid off sooner, at 9 years for type 3 and 17 years for type 2b (Table 10). Subsidies are not required to motivate a transformation at the standard discount rate. Taking into account future risks by the higher discount rate, the investment will reach the break-even point with cost savings after 54 years. Note that the cost savings only relate to the main processes outlined in Materials and Methods and therefore differ marginally from the values in Strub et al. (2021a), where all processes were included in the analysis.
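The amortisation logic can be summarised as discounting the annual cost savings and finding the year in which they cover the one-off conversion investment. The sketch below is a simplified illustration; the investment and saving figures are rounded placeholders, not the exact inputs behind Tables 9 and 10.

```python
def npv_and_payback(investment_eur, annual_saving_eur, discount_rate, horizon_years=30):
    """Return (net present value after the horizon, first year the investment is recovered)."""
    npv = -investment_eur
    payback_year = None
    for t in range(1, horizon_years + 1):
        npv += annual_saving_eur / (1.0 + discount_rate) ** t
        if payback_year is None and npv >= 0:
            payback_year = t
    return npv, payback_year

# Placeholder figures roughly in the range of a type 3 conversion under reference condition 1:
print(npv_and_payback(investment_eur=74000, annual_saving_eur=6100, discount_rate=0.04))
```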
Economic sustainability of steep slope viticulture
Steep slope viticulture in Germany suffers from 1.6 to 2.6 times higher labour and machine costs. In a highly competitive market environment, wine estates have few options to compensate for this substantial disadvantage. Wine from steep slopes generally does not benefit from a higher reputation or price mark-ups (Strub and Loose, 2016). A few famous single vineyard sites, such as Bernkasteler Doctor (Mosel), Roter Hang (Nierstein, Rheinhessen) or Würzburger Stein (Franconia), profit from high reputation and price mark-ups. Generally, wine estates with steep slopes have to focus on profitable market channels, such as direct cellar door sales with their high margins or premium wine retailers with high average prices. However, both market channels are limited in size and have been declining in Germany (Loose and Pabst, 2018).
Over the short term, wine estates can cross-subsidise their steep slope vineyards by returns from cost-efficient flat sites. Cross-subsidisation poses difficulties in wine-growing areas such as the Mosel valley or the Middle Rhine valley, where flat sites rarely exist. Many family estate owners perceive their steep slopes as a personally imposed obligation and are willing to sacrifice part of their income to preserve the heritage of their families (Loose and Strub, 2017). While this might work in the short term, the insufficient economic sustainability of small steep slope wine estates poses a significant risk for their long-term survival (Loose et al., 2021). Required investments in equipment and marketing cannot be undertaken, further deteriorating long-term perspectives and opportunities to find successors for their businesses.
Strategies for cost reductions
Generally, mechanisability reduces manual labour and decreases cost disadvantages. Besides this overall relationship, the analysis identified four particular factors as cost drivers: (1) time-inefficiency of mechanisation solutions (double-passing of rows with rope) that require more labour and machine time, (2) higher costs from investment in specialised machinery (SSH, rope systems), (3) the number of repetitions of processes required, and (4) the degree of necessity of processes. Of these factors, the first two are related to the cost of mechanisation and the last two are associated with viticultural processes. Three particular strategies for cost reduction can be derived from these factors and can be applied on their own or in combination.
Cost-efficient mechanisation of costly processes
The mechanisation of steep slope viticulture should focus on the costliest compulsory processes of Harvesting, Pest control and Winter pruning and provide time-efficient solutions that do not require major investments which increase machine costs. For instance, Strub et al. (2021a) showed that the total time and machine costs of the SSH harvester are currently still on par with manual harvesting costs. Economies of scale and cooperation in the ownership and usage of machines are viable options for decreasing costs in the future. This also applies to the current development of spraying solutions with drones as an alternative to helicopters, which also permit a significant reduction of energy intake as well as treatment doses.
Change in viticultural management
Viticulture on steep slopes must take advantage of developments that make costly processes unnecessary or reduce their required frequency. Fungus-resistant grape varieties only require one or two spraying applications per year. So far, however, these varieties still suffer from limited market acceptance. Similarly, growing vines in low-input training systems, i.e., minimal pruning (MP; Clingeleffer, 1983) or semi-minimal pruned hedge (SMPH; Molitor et al., 2019), and to some degree in cordon training systems, replaces manual pruning and tying in mechanised sites (Strub et al., 2021b). Some of these changes only apply to newly planted vineyards, and this strategy cannot be implemented in the short term.
Weighing costs and benefits of optional processes
The analysis provided wine estates with cost benchmarks for processes that are not mandatory but very costly to conduct on steep slopes, such as Yield regulation, Cover crop management, Lowering the wires and Defoliation. Individual estates must weigh the costs of these optional processes against their benefits, which are mainly related to the quality of the grapes and potential price mark-ups. Producers must critically evaluate their product portfolios (quality differentiation), volumes and pricing strategies. They must assess for which products marginal turnover exceeds the extra cost for these processes to pay off. Similarly, relative cost and quality potential must be taken into account for product allocation. High-quality wines that require particular processes should be produced at mechanisable sites if their quality potential suffices. If they do not benefit from a famous reputation or superior quality, type 3 steep slopes should be left to qualities that require minimal processes.
Stabilisation of yields to improve profit situation
The current observable reduction in the availability of water on steep slopes will further increase with climate change (Hannah et al., 2013). The resulting yield losses will have an immense impact on cost per litre. Already today, yields on steep slopes as low as 20-30 hl/ha are increasing the cost per litre by a factor of 7 to 10 compared to flat terrain (see Figure 1). Water availability is crucial for the survival of viticulture at these sites, and future research must therefore extend the analysis of this study to the installation and operation costs of irrigation.
Irrigation can be a mid-term solution in areas where water is available at a low cost. Contrary to Australia, Germany and many other European wine-growing countries still lack a systematic water allocation system for agriculture. The principle of "first-come, first-served" will soon break down as more agricultural businesses wish to access declining water resources. Dams to store water from winter precipitation are costly to build in densely settled European areas. Like in Australia, German society has begun discussing the social license of crop production (Dumbrell et al., 2020), i.e., whether scarce water should be used for the production of alcoholic beverages or instead for essential grains and vegetables (Motoshita et al., 2020). Drought-resistant rootstocks could be a long-term option, for which experts hope for successful breeding and selection in 30 years or more. However, these developments might come too late for German steep slope viticulture.
Assessment of vineyard transformation into terraces
The transformation of unprofitable manual type 3 sites can be an economically viable option, even when positive external benefits to tourism, biodiversity, etc. are not accounted for.
Quantifying these positive externalities will help to provide an economic rationale for subsidising the transformation that shortens the time of amortisation and provides an incentive for wine producers to continue steep slope viticulture even under the high risk of climate change.
The transformation into TTs is an investment in a future dominated by the accelerating impact of climate change. Temperatures and extreme rain events will increase; the availability of water will further decline. Any new planting today must therefore anticipate these imminent changes (Santos et al., 2020; van Leeuwen et al., 2019). Such planting must include preventive measures, such as the use of heat-tolerant, fungus-resistant grape varieties or water stress-resistant rootstocks planted for low-input training systems, thereby securing available water resources. That said, it remains uncertain whether such measures will suffice considering the 34 % cost disadvantage TTs have against standard flat sites. Considering this climate risk economically through a higher risk premium strongly reduces the profitability of the transformation into TTs.
Consequences for agricultural policy
The rationale for subsidies for steep slope viticulture should be based on their positive benefits for biodiversity, touristic attractiveness of viticultural regions and wine producer business clusters as well as the public value of historic landscapes (Cox and Underwood, 2011;Job and Murphy, 2006;Tafel and Szolnoki, 2020). Unfortunately, those positive external effects are as of yet unassessed and therefore unavailable. From a pure cost perspective, subsidies should be aligned to differences in variable costs (here labour and machine costs) between steep slope and flat terrain sites. Current subsidy allocation that is based on slope alone must be revised. Instead, mechanisability and related cost disadvantages serve as a better basis for a fair and economical allocation of subsidies.
The results of this study indicate that steep slope viticulture with VSP systems suffers from a variable cost disadvantage of 1507 € (type 2c), 2726 € (type 2a), 5102 € (type 2b) and 7600 € (type 3) per hectare. The current German scheme of direct payments of up to 3000 €/ha only depends on the slope gradient and does not take mechanisability into account (Strub and Loose, 2016). It does not suffice to cover the cost disadvantages of types 2b (rope) and 3 (manual). If the full cost disadvantage were to be covered, this would require additional subsidies of 30.4 million € annually, assuming 4 % of German vineyard acreage to be type 3 and type 2b each. In the long term, required payments could be reduced for all mechanisable type 2 sites by low-input training systems and fungus-resistant grape varieties.
Finally, society must make a political decision on how it will allocate the available public funds (taxpayers' money). Besides the public benefits provided, the next best use of funds and land should be evaluated open-mindedly. Considering all these aspects, steep slope sites outside of tourist areas might possibly provide a higher overall benefit to society by being planted with trees instead of vines, thereby serving as a carbon sink (Pugh et al., 2019).
Limitations and future research
Data were limited to Germany and thus require replication in other wine-growing areas and climates. The number of observations of the different site types were limited, particularly for vineyards planted on TTs. Digital viticultural management applications, such as Vineyard Cloud®, will in the future provide larger data sets that allow more robust estimates. The economic analysis of steep slope viticulture will benefit from future research on the effect of planting density and water availability on the yield of type 2c TT sites compared to other sites. Research utilising the principles of true cost accounting will be crucial in the future, which considers positive external effects from biodiversity and attractiveness to tourism as well as the true costs of irrigation and water allocation systems. Future research into viticultural mechanisation solutions must consider their impact on viticultural costs. The economically sustainable transformation of steep slopes into TT sites depends on successful research into drought-resistant rootstocks and market-accepted, fungus-resistant grape varieties.
CONCLUSION
Through significantly higher labour and machine costs, steep slope viticulture poses a threat to the economic sustainability of viticulture that can only be partially reduced through mechanisation. The mechanisation of steep slopes comes at a cost that must be taken into account for the development of new technical solutions. The conversion of steep slopes into TTs only pays off in the future, when the climate change risk for steep slope viticulture will have been further aggravated. The time of amortisation can be shortened by subsidies. Already, the lower yields from limited water availability on steep slopes are significantly increasing costs and putting profitability at risk. The viability of steep slope viticulture in Central Europe risks being degraded further in the future. Decisions about its preservation through public subsidies depend on the implementation of true cost accounting and the valuation of public benefits provided by steep slope viticulture.
"year": 2021,
"sha1": "cc5446c885b5e1e318c310ac7ab215f1e9131710",
"oa_license": "CCBY",
"oa_url": "https://oeno-one.eu/article/download/4494/14127",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "cc5446c885b5e1e318c310ac7ab215f1e9131710",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
Secure and Privacy Enhanced Gait Authentication on Smart Phone
Smart environments established by the development of mobile technology have brought vast benefits to human being. However, authentication mechanisms on portable smart devices, particularly conventional biometric based approaches, still remain security and privacy concerns. These traditional systems are mostly based on pattern recognition and machine learning algorithms, wherein original biometric templates or extracted features are stored under unconcealed form for performing matching with a new biometric sample in the authentication phase. In this paper, we propose a novel gait based authentication using biometric cryptosystem to enhance the system security and user privacy on the smart phone. Extracted gait features are merely used to biometrically encrypt a cryptographic key which is acted as the authentication factor. Gait signals are acquired by using an inertial sensor named accelerometer in the mobile device and error correcting codes are adopted to deal with the natural variation of gait measurements. We evaluate our proposed system on a dataset consisting of gait samples of 34 volunteers. We achieved the lowest false acceptance rate (FAR) and false rejection rate (FRR) of 3.92% and 11.76%, respectively, in terms of key length of 50 bits.
Introduction
Smart environments established by the development of mobile technology have brought vast benefits to human being [1]. Nowadays, mobile devices could be utilized not only for communication and entertainment but also for transaction [2], personal healthcare [3], or even in emergency situations [4]. As a result, more and more personal data are collected and kept in the mobile device for analysis [5], which would lead to increasing system security and user privacy concerns. Basically, security techniques for authentication and identification are commonly based on password (e.g., OTP [2]), token (e.g., ID cards), or biometric recognition (e.g., iris [6], fingerprint [7], face [8], and gait [9] recognition). Biometric based authentication mechanisms are more convenient in terms of end-user usage viewpoint when comparing with the two remaining methods of password and token. However, using biometric authentication on mobile devices should be considered carefully. Due to the fact that biometrics is unique but fuzzy and revocable, most conventional biometric authentication systems are developed based on pattern recognition and machine learning (PR-ML) algorithms to deal with the natural variations of biometric measurement [6]. Enrollment biometric templates or extracted features are stored under unconcealed form for matching with a new biometric sample to authenticate/identify users. This kind of approaches could leave critical vulnerabilities in terms of system security and user privacy, especially when it is implemented on mobile devices. These devices are easily lost so that an adversary could illegally access the mobile repository to obtain original biometric templates. Since biometrics is tied to unique characteristics of an individual which are hardly changed, the user privacy leak means an adversary could partly or fully determine the user's biometrics. From the viewpoint of system security, a compromise of biometric templates results in everlasting forfeiture. An adversary could utilize compromised templates to thereafter always illegally grant access to sensitive services.
In this paper, we introduce an authentication system based on biometric cryptosystem (BCS) to enhance the system security and user privacy on mobile devices. The biometric modality used in our system is human gait, which is collected using an inertial sensor named accelerometer attached to the user's body. This type of sensor has been utilized to propose motivating applications in smart phones recently [3]. To the best of our knowledge, this is the first approach of a BCS using gait biometrics captured from the accelerometer. We utilize a fuzzy commitment scheme [10] whereby the key, acting as an authentication factor, is biometrically encrypted by the user's gait. The gait sample is merely employed to retrieve the cryptographic key and is then always discarded, so that the system security and user privacy are significantly enhanced. Moreover, the system has significant advantages in terms of small storage space and low computational requirements. Therefore, it is more applicable to be deployed directly on mobile devices with limited resources, compared with other PR-ML based systems [9].
The rest of this paper is organized as follows. Section 2 presents the related works. Our proposed system is described in Section 3. Experimental evaluations are presented in Section 4. Finally, Section 5 draws our conclusions.
Related Works
To preserve the security and user privacy of biometric authentication systems, various modern approaches have been proposed [11], wherein biometric cryptosystems (BCSs) have attracted much research in recent years. State-of-the-art BCSs which were previously proposed mostly utilize physiological modalities such as iris [12], face [13], and fingerprint [14]. There are some studies that use behavioral biometrics such as signature [15] and voice [16]. Generally, BCSs can be classified into 2 subsystems: key-binding and key-generation systems [11]. In key-binding systems, a random key string is generated and then bound with a biometric template, yielding helper data. Such data are stored for further utilization to retrieve the key in the authentication phase. For example, Hao et al. [17] proposed an iris based BCS using the fuzzy commitment scheme. They used 2048 bits of iris code combined with concatenated codes and achieved a false acceptance rate (FAR) and false rejection rate (FRR) of 0% and 0.47%, respectively, with a key length of 140 bits. In contrast to key-binding systems, in the key-generation scheme helper data is created directly and only from the biometric template. Such helper data will associate with a presented query which is sufficiently close to the original template to generate either the unique key string or the original template. Typical techniques of such a scheme are the fuzzy extractor [18] and secure sketches [19]. Applications of the key-generation scheme have already been implemented on iris [12] and voice [16]. Generally, approaches on physiological modalities achieved better results in terms of error rates and security level, compared with behavioral biometric factors. This is due to the fact that physiological modalities such as iris and fingerprint are more robust than behavioral factors, which are significantly affected by various conditions. For example, human voice depends on the state of health, gait of an individual changes over time, and so forth.

The Proposed Method

Figure 1 sketches the specification of our gait based BCS using a fuzzy commitment scheme [10]. In the enrollment phase, the gait signal of a user is acquired and preprocessed to reduce the influence of the acquisition environment. Feature vectors are extracted in both time and frequency domains and then binarized. After that, a reliable binary feature vector w is extracted based on determining reliable components. Concurrently, a cryptographic key K, which is generated randomly for each user, is encoded to a codeword c by using error correcting codes. The fuzzy commitment scheme computes the hash value of K and a secured value δ using a cryptographic hash algorithm h and a binding function, respectively. The helper data, which are used to extract reliable binary feature vectors, together with the values of h(K) and δ, are locally stored for later use in the authentication phase.
In the authentication phase, the user claiming to be u will provide a different gait sample. It is also preprocessed to extract a feature vector, and a reliable binary vector w' is extracted by using the helper data previously stored in the enrollment phase. The decoding function computes the corrupted codeword c' by binding δ with w' and then retrieves a cryptographic key K' from c' using the corresponding error correcting code decoding algorithm. Finally, the hash value of K' is matched with h(K) for the authentication decision.
Data Acquisition.
A Google Nexus One smart phone put inside a front trouser pocket is employed to collect the user gait signal (Figure 2). This discrete time signal is a sequence of combined values of gravity acceleration, ground reaction force, and inertial acceleration which are captured by a built-in 3-dimensional accelerometer during walking. We represent the output of this accelerometer as 3-component vectors (a_x, a_y, a_z), where a_x, a_y, a_z represent the magnitude of the acceleration values acting on the three directions, respectively.
Data Interpolation.
As the accelerometer integrated in mobile devices is power saving and designed to be simpler than standalone sensors, its sampling rate is not stable and entirely depends on the mobile OS. The time interval between two consecutive returned samples is not constant: the sensor only outputs a value when the acceleration on the 3 dimensions changes significantly. The sampling rate of the Google Nexus One used in our study is unstable and fluctuates around 27 ± 2 Hz. Therefore, the acquired signal is interpolated to 32 Hz using linear interpolation to ensure that the time interval between two sample points is fixed.
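As an illustration of this resampling step, the sketch below (Python with NumPy; function and variable names are ours, not from the paper) linearly interpolates an irregularly sampled three-axis signal onto a fixed 32 Hz grid:

```python
import numpy as np

def resample_to_fixed_rate(timestamps, samples, fs=32.0):
    # timestamps: 1-D array of arrival times in seconds (irregular, roughly 27 +/- 2 Hz)
    # samples: array of shape (N, 3) holding the x, y, z acceleration values
    t_uniform = np.arange(timestamps[0], timestamps[-1], 1.0 / fs)
    resampled = np.column_stack(
        [np.interp(t_uniform, timestamps, samples[:, axis]) for axis in range(3)]
    )
    return t_uniform, resampled
```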
Noise Filtering.
When the accelerometer samples movement data during walking, some noise will inevitably be collected. It could be caused by idle orientation shifts or bumps on the road during walking. Moreover, a mobile accelerometer produces more noise than standalone sensors, since its functionality is fully governed by the mobile OS layer. Hence, we adopt a multilevel wavelet decomposition and reconstruction method, specifically the Daubechies orthogonal wavelet (Db6) with level 2, to filter the gait signal. In the 1st level, the original gait signal is decomposed into two separate parts containing coarse and detail coefficients. The coarse coefficients acquired in the 1st level are then used as the input signal to be decomposed in the next level. This process continues until the desired level is achieved. To eliminate the impact of noise, in each level we set detail coefficients which are lower than a predefined threshold to 0. The noise-filtered signal is reconstructed conversely to the decomposition process, wherein coarse coefficients are associated with the new detail coefficients, starting from the lowest level until the zero level is achieved. Because walking is a cyclic activity, after eliminating noise we segment the sequence of gait signal into separate patterns which consist of consecutive gait cycles. A gait cycle is defined as the time interval between two successive occurrences of one of the repetitive events of walking.
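A compact way to reproduce this denoising step is shown below; it relies on the PyWavelets package, and the threshold value is an assumption, since the paper does not state the value it used:

```python
import pywt

def denoise_gait_signal(signal, wavelet="db6", level=2, threshold=0.1):
    # Multilevel decomposition into coarse (approximation) and detail coefficients.
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Hard-threshold the detail coefficients at every level: values below the
    # threshold are set to zero, the rest are kept unchanged.
    denoised = [coeffs[0]] + [pywt.threshold(c, threshold, mode="hard") for c in coeffs[1:]]
    # Reconstruct the signal from the modified coefficients.
    return pywt.waverec(denoised, wavelet)
```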
We observed that whenever the human foot which is on the same side as the device touches the ground, the acceleration value in the vertical dimension signal changes markedly, as illustrated by the red points in Figure 3. We determined these points by calculating the autocorrelation coefficients a_τ = Σ_{t=1}^{N−|τ|} x_t·x_{t+τ} on the vertical dimension signal and filtering the vivid peaks based on the mean and standard deviation. Then, based on these points, we segment gait signals into separate patterns, in which each pattern consists of n_gc (n_gc = 4 in our experiment) consecutive gait cycles of all 3 dimensions.
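A simplified version of this cycle-boundary detection can be written as follows; the peak-filtering rule (mean plus one standard deviation) is our assumption of a reasonable threshold, not the exact rule used by the authors:

```python
import numpy as np

def autocorrelation(x):
    # a_tau = sum over t of x_t * x_(t+tau), evaluated for tau = 0 .. N-1 (unnormalised)
    x = np.asarray(x, dtype=float)
    n = len(x)
    return np.array([np.dot(x[: n - tau], x[tau:]) for tau in range(n)])

def candidate_foot_strikes(vertical_signal, k=1.0):
    # Keep samples that rise above mean + k * standard deviation as candidate peaks.
    v = np.asarray(vertical_signal, dtype=float)
    threshold = v.mean() + k * v.std()
    return np.where(v > threshold)[0]
```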
(iii) Average absolute difference (iv) Root mean square (v) 10-bin histogram distribution (vi) Standard deviation (vii) Waveform length where () is the time length of a gait cycle.
Feature Vector Binarization.
We adopt a quantization method previously used in [13] for face template binarization. Assume the number of users is denoted by I and the number of feature vectors extracted from each user is J. Let F_{i,j} (i = 1, ..., I; j = 1, ..., J) be the jth feature vector of user i. The mean over the intraclass variability of user i is calculated as μ_i = (1/J) Σ_{j=1}^{J} F_{i,j}. The mean over all feature vectors in the enrollment phase is calculated as m = (1/(I·J)) Σ_{i=1}^{I} Σ_{j=1}^{J} F_{i,j}. The quantization method transforms the tth component of the user's features into {0, 1} by comparing the tth component of μ_i with a threshold defined by the corresponding tth component of m. For each user i, the binary feature vector w_i is determined by w_{i,t} = 1 if μ_{i,t} > m_t, and w_{i,t} = 0 otherwise. In the enrollment phase, we use the enrollment feature vectors to approximately estimate the value of m. This m is stored as helper data and used as the threshold for binarizing real-valued feature vectors in the authentication phase.
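The quantization step can be sketched as a component-wise comparison against the stored population mean m (the helper data); variable names below are ours:

```python
import numpy as np

def binarize_enrolment(user_vectors, population_mean):
    # user_vectors: array (J, d) of one user's enrolment feature vectors
    # population_mean: mean over all users' enrolment vectors (stored as helper data)
    user_mean = user_vectors.mean(axis=0)
    return (user_mean > population_mean).astype(np.uint8)

def binarize_query(query_vector, population_mean):
    # In the authentication phase a single fresh feature vector is thresholded directly.
    return (query_vector > population_mean).astype(np.uint8)
```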
Reliable Binary Feature Extraction.
As the authors pointed out in [13], when using the quantization method to transform real-valued vectors into binary form based on the statistical analysis of the previous section, some components of w_i are significantly unstable when μ_i and m are used to determine the output bit. For example, if the tth component of μ_i is close to m_t, the error probability for the next verification will be higher. Therefore, it is necessary to extract only highly robust and reliable bits of w_i. First, the variance σ²_{i,t} of each tth component for each user i is calculated as σ²_{i,t} = (1/J) Σ_{j=1}^{J} (F_{i,j,t} − μ_{i,t})². Assuming that the variability of each component is modeled as a Gaussian, the reliability of the tth bit of user i is estimated through the standard error function as rel_val_{i,t} = (1/2)(1 + erf(|μ_{i,t} − m_t| / (σ_{i,t}√2))). The indices of the most reliable components rel_val_{i,t} (called rel_idx_i) are also stored as helper data to extract the reliable bits in the authentication phase.
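One common formulation of this reliability estimate, under the per-component Gaussian assumption, is the probability that a fresh measurement falls on the same side of the threshold as the enrolment mean; the sketch below follows that formulation, which may differ in detail from equation (16) of the paper:

```python
import numpy as np
from scipy.special import erf

def bit_reliability(user_vectors, population_mean):
    mu = user_vectors.mean(axis=0)
    sigma = user_vectors.std(axis=0) + 1e-12          # avoid division by zero
    # Probability that a new sample of this user stays on the same side of the threshold.
    return 0.5 * (1.0 + erf(np.abs(mu - population_mean) / (sigma * np.sqrt(2.0))))

def select_reliable_indices(reliability, n):
    # Indices of the n most reliable components, stored as the helper data rel_idx.
    return np.argsort(reliability)[::-1][:n]
```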
Key Binding Scheme.
We adopt the BCH code [20] as an error correcting code to overcome the natural variations between biometric measurements. The advantage of the BCH code, compared with other codes, is that it can correct single errors which could occur randomly, as in our extracted binary feature vectors. Moreover, the decoding process of the BCH code is designed to be simple. Therefore, it requires less computational capability and lower power consumption, so that our system is lightweight enough to be deployed on mobile devices. Let BCH_2(n, k, t) be a binary BCH code, where n is the code length in bits, k is the key length in bits, and t is the error correction capability. The binary cryptographic key K of length k is generated randomly for each user and then encoded into the codeword c of length n using a BCH_2(n, k, t) encoding scheme [20]. After that, we conceal this c by binding it with the extracted binary feature vector w, yielding a secured δ, and then discard c. Since c and w are two binary strings, an exclusive-OR operator is adopted to bind them together: δ = c ⊕ w.
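The fuzzy commitment itself reduces to an XOR binding plus a hash comparison. The sketch below uses a simple repetition code as a stand-in for the BCH_2(n, k, t) codec, purely to keep the example self-contained; a real implementation would substitute a proper BCH encoder/decoder:

```python
import hashlib
import numpy as np

REP = 3  # stand-in code: every key bit is repeated REP times, so n = REP * k

def encode(key_bits):
    return np.repeat(key_bits, REP)

def decode(codeword_bits):
    # Majority vote per group of REP bits corrects isolated bit errors.
    return (codeword_bits.reshape(-1, REP).sum(axis=1) > REP // 2).astype(np.uint8)

def enroll(w, rng=np.random.default_rng()):
    # w: reliable binary feature vector whose length n is assumed to be a multiple of REP
    key_bits = rng.integers(0, 2, size=len(w) // REP, dtype=np.uint8)
    delta = np.bitwise_xor(encode(key_bits), w)                  # secured value: delta = c XOR w
    digest = hashlib.sha256(np.packbits(key_bits).tobytes()).hexdigest()
    return delta, digest                                         # stored; key and w are discarded

def authenticate(w_query, delta, digest):
    corrupted = np.bitwise_xor(delta, w_query)                   # c' = delta XOR w'
    recovered_key = decode(corrupted)
    return hashlib.sha256(np.packbits(recovered_key).tobytes()).hexdigest() == digest
```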
In summary, we represent all of the necessary steps in both enrollment and authentication phases in our system as follows.
Enrollment Phase.
(i) Select a BCH_2(n, k, t) code by predefining its parameters, including the length n of the codeword and the length k of the secret key.
(ii) For each user u, real-valued feature vectors F ∈ R^d are extracted.
(iii) Determine the mean m over all feature vectors and extract a binary vector w ∈ {0, 1}^d by using the quantization scheme. Then, discard F.
(iv) Determine the reliable bit indices rel_idx and reduce the length of w from d to n by selecting only the n most reliable bits of w based on rel_idx.
(v) Store m and rel_idx as helper data for further use to construct new feature vectors in the authentication phase.
(vi) Randomly generate a binary secret key K with the length of k.
(vii) Calculate the hash value of K by using a cryptographic hash function h (e.g., SHA) and store h(K).
(viii) Encode K into a codeword c of length n using the BCH_2(n, k, t) encoding scheme.
(ix) Bind c with w using the exclusive-OR operator, yielding δ = c ⊕ w. Then, discard c and store δ.
Authentication Phase.
(i) For each user u, feature vectors F' ∈ R^d are extracted from a new biometric sample.
(ii) Extract a binary feature vector w' with length of n with the help of m and rel_idx. Then, discard F'.
(iii) Bind w' with the stored δ using the exclusive-OR operator to obtain a corrupted codeword c' = δ ⊕ w'.
(iv) Decode c' using a BCH decoding scheme to obtain a key K' from c'.
(v) Calculate the hash value h(K') using the equivalent cryptographic hash function (e.g., SHA) as in the enrollment phase and then discard K'.
(vi) Match h(K') with h(K); if h(K') = h(K), the user is authenticated. Otherwise, he will be rejected.

Experimental Evaluation

The data acquisition application was implemented on the Android SDK. A total of 34 volunteers, including 24 males and 10 females with ages from 24 to 28, participated in our dataset construction. Each volunteer performed around 18 laps. To make the dataset more realistic, we collected gait signals regardless of footgear and clothes. Volunteers were asked to walk as naturally as possible and to change their footgear (e.g., sandal, shoe, or slipper) as well as clothes (e.g., short to long trousers, etc.) whenever they started a new lap. The only constraint was that, while the volunteers walk, the mobile phone put in the pocket must not change its position and orientation. To ensure that, we requested volunteers to wear trousers having a narrow pocket. In total, we accumulated the gait signals of 34 volunteers, each having at least 16 real-valued feature vectors which could be extracted using the method in Section 3.2. In our experiment, each volunteer has an equal number of extracted feature vectors, so we randomly selected 16 vectors for users having more than 16. Figure 4 represents the Euclidean distance distribution of the extracted real-valued feature vectors. Note that the operation of our BCS is likely to be similar to a threshold-based classification, in which the threshold is likely to be low according to an appropriate distance metric. We can see that the mixing area between intraclass and interclass real-valued feature vectors is large. Thus, applying threshold-based classification on these vectors would lead to high error rates in terms of FAR and FRR. Fortunately, when such vectors are binarized by using the proposed method in Section 3.3, the discrimination of binary feature vectors between users is likely to be higher and the Hamming distance of intraclass feature vectors is getting lower. Figure 5 illustrates the Hamming distances of binary feature vectors of lengths 127 and 255, respectively. These lengths are selected to be compatible with the design of the BCH code, which requires the codeword length to be equal to 2^m − 1, m ∈ N, m > 3, and with the maximum dimension d_max of the feature vector which could be extracted in this study (d_max = 289). As already stated, the length of the binary feature vector must be equal to the length of the BCH codeword for possible binding using an exclusive-OR operator. Hence, the reliable bit extraction process in Section 3.4 will only select a number of reliable components identical to the codeword length. Looking into Figure 5, we can see that the Hamming distance of intraclass feature vectors of length 127 is lower than in the case of length 255. We found that this is due to the fact that the actual number of highly reliable bits according to (16) is just approximately half of the original feature vector dimension. Hence, to obtain a binary feature vector of length 255, even low reliable bits are also selected. Figure 6 illustrates the error rates of our proposed gait based BCS using the fuzzy commitment scheme corresponding to the two codeword lengths of 127 and 255, respectively. In both cases, when the key length increases, which is equivalent to the number of errors allowed in the codeword decreasing, the FAR is reduced to 0 and the FRR increases exponentially. The best error rates of our proposed system are: (1) in the case of codeword length n = 127, the achieved FAR and FRR are approximately 3.92% and 11.76%, respectively, in terms of key length k = 50 bits.
(2) In the case of codeword length n = 255, we achieve FAR ≈ 1.4% and FRR ≈ 32.53% in terms of key length k = 55 bits. These keys are sufficiently long to be secured by a cryptographic hash algorithm. The FRR for codeword length n = 255 is significantly higher than in the case of n = 127 because, as already stated, selecting many low reliable bits makes the binary feature vectors of length 255 more dissimilar. However, the achieved FAR is slightly better (1.4% compared with 3.92%). In both cases, we can see that the FRRs are rather high, which could decrease the friendliness of the system. However, the user's gait can be captured continuously and implicitly by the accelerometer, which does not annoy the user as other biometric modalities do (e.g., iris, fingerprint, face, and signature). Therefore, this issue is not so considerable. Table 1 shows the performance of our proposed system compared to some other state-of-the-art BCSs using different behavioral modalities such as voice and signature. Note that all these works use different approaches and the datasets used are totally different, so the comparison is only relative. Therefore, through this study, we would merely like to illustrate that human gait captured from inertial sensors can be utilized to construct an effective BCS, as other behavioral modalities have been. Moreover, since we adopt a quantization scheme similar to [13], we also compare our system with this face based BCS. The authors achieved a key length of 58 bits, a FAR of approximately 0%, and FRRs of approximately 3.5% and 35% corresponding to two different datasets, CALTECH and FERET, respectively. We can see that face is a physiological biometric which is more robust than human gait, a behavioral modality. Hence, the performance of their system in terms of key length, FAR, and FRR is slightly better.
Conclusion
In this paper, we introduce an approach to a gait based biometric cryptosystem using the fuzzy commitment scheme.
The results show a good potential to construct an effective gait based BCS, especially on mobile devices. The drawback of our work is that the error rates in terms of FAR and FRR are still rather high. We expect to achieve a FAR of 0% to make the system more secure. Hence, our further work will focus on reducing the FAR and FRR by constructing more discriminant feature vectors using global feature transformations, as well as finding an optimal quantization scheme for binarization. Moreover, the system security should be analyzed in depth to ensure that a gait based biometric cryptosystem can fulfill the security requirements in order to be deployed in reality. Finally, validating the proposed system on a larger public dataset is also part of our further work.
"year": 2014,
"sha1": "a3964a77534edcff82ca490fccbfe4a1fe8e196a",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1155/2014/438254",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7af729d735b8328a981be9b22dd44aab369587df",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
Using Landscape Change Analysis and Stakeholder Perspective to Identify Driving Forces of Human–Wildlife Interactions
Human–wildlife interactions (HWI) were frequent in the post-socialist period in the mountain range of Central European countries where forest habitats suffered transitions into built-up areas. Such is the case of the Upper Prahova Valley from Romania. In our study, we hypothesized that the increasing number of HWI after 1990 could be a potential consequence of woodland loss. The goal of our study was to analyse the effects of landscape changes on HWI. The study consists of the next steps: (i) applying 450 questionnaires to local stakeholders (both citizens and tourists) in order to collect data regarding HWI temporal occurrences and potential triggering factors; (ii) investigating the relation between the two variables through the Canonical Correspondence Analysis (CCA); (iii) modelling the landscape spatial changes between 1990 and 2018 for identifying areas with forest loss; (iv) overlapping the distribution of both the households affected by HWI and areas with loss of forested ecosystems. The local stakeholders indicate that the problematic species are the brown bear (Ursus arctos), the wild boar (Sus scrofa), the red fox (Vulpes vulpes) and the grey wolf (Canis lupus). The number of animal–human interactions recorded an upward trend between 1990 and 2018, and the most significant driving factors were the regulation of hunting practices, the loss of habitats, and artificial feeding. The landscape change analysis reveals that between 1990 and 2018, the forest habitats were replaced by built-up areas primarily on the outskirts of settlements, these areas coinciding with frequent HWI. The results are valid for both forest ecosystems conservation in the region, wildlife management, and human infrastructures durable spatial planning.
Introduction
The potential impact of landscape spatial characteristics induced by human activities over interactions with wildlife (HWI) has been globally studied. In developed countries, a large amount of public land went under private ownership and the interest for urban areas and road and energy infrastructures have massively changed the ecological landscape and triggered numerous wildlife intrusions into human habitat, which led to numerous conflicts [1,2]. In underdeveloped countries where the wildlife population still thrives, human demographics growth and connected anthropogenic activities encroach on once-wild areas, sometimes resulting in fatal animal attacks [3]. Habitat loss due to the expansion of road and transport infrastructures is one of the main causes of vehicle collisions with large mammals and it is responsible for severe human and animal injuries and expensive property damage [4]. Similarly, grazing activities favoured increasing rates of livestock being preyed on by large felids [5][6][7]. Habitat loss induced by agricultural practices generated conflicts between farmers and wildlife thus producing crop damage to farmlands [8]. In underdeveloped rural regions, natural resources extraction (primary wood for fuel) increased conflicts in wildlife corridors which connected protected areas [9]. The impact of habitat loss, tourism activities over HWI, and the changes in animal behaviour are the causes of public insecurity and affect the economic incomes of leisure areas in North America [10].
The analysis of local stakeholders' perspectives regarding HWI characteristics represents a wide-spread approach and plays a crucial role in improving long-term conservation of biodiversity and reducing risks to human security and economic activities [11][12][13]. The local stakeholders' attitude towards the potential management approaches of HWI (conservation approach vs. economic and traditional hunting practice) is important in understanding if these interactions are perceived as problems, as potential benefits or as sources of income. In developed countries, the economic, social and scientific progress have offered possibilities for a higher standard of living and influenced how people behave and understand wildlife interactions by shifting local stakeholders' perceptions toward conservation and protection of wildlife, to the detriment of raw economic use and mass resource extraction [14]. Through local stakeholders' level of interest concerning the subject, the significance of HWI can be outlined. The deficient communication and the negative attitude of locals concerning the decision-making authorities sometimes materialized through lack of trust and rebellion against their low implication and response to the problem [15].
HWI have increased significantly in post-socialist European countries, as in Romania's case, favoured by the presence of some of the "last remaining pockets of wilderness" (temperate primaeval forests), rich biodiversity, numerous disturbances of forested habitats even inside protected areas due to high logging rates (generated by rapid ownership changes and shifts in institutional management), low-effectiveness wildlife management and unregulated tourism development, which exerted constant pressure on natural resources [16]. The Carpathian Mountains are famous for the high rate of HWI, where large predatory mammals, especially brown bears, are by far the most controversial [17]. Several studies have been conducted to better understand and manage brown bear conflicts from different perspectives, such as the typology of the relationship with humans in the protected areas of Harghita County [18], followed by the perception of locals regarding the coexistence with brown bears within settlements located in high brown bear density areas in Brașov and Covasna counties [19]. The importance of institutional collaboration for achieving coexistence between wildlife and humans has also been discussed [20]. Furthermore, Dorresteijn et al. [21] analysed the threats and opportunities concerning a potential peaceful coexistence between humans and brown bears in Central Romania. Human attitude towards interactions with grey wolves in Romania has been described by Chiriac et al. [22]. Other assessments were dedicated to identifying the behaviour of wild boars towards human activities within the rural landscapes of Covasna County [23]. Pătru-Stupariu et al. [24] highlighted the presence of numerous wildlife species within the touristic areas of Prahova County commonly involved in interactions with humans, such as brown bears, wild boars and red foxes, and sporadic ones, namely grey wolves, stone martens (Martes foina), European polecats (Mustela putorius), European roe deer (Capreolus capreolus) and common vipers (Vipera berus).
The South-Eastern Carpathians represent one of the areas with the most intense study of the HWI situation from Romania. Here, the most representative conflict hotspots are represented by the popular touristic resorts within the Upper Prahova Valley, located in the counties of Brasov and Prahova [25]. The valley offers proper conditions for a high intensity of HWI, based on the presence of favourable landscape characteristics: (a) protected wild areas which shelter old-growth forests and support high biodiversity habitats ( Figure 1A,B); (b) numerous human settlements characterised by a compact urban fabric in the central areas and a sprawled periphery where vacation houses are surrounded by degraded forests habitats ( Figure 1C), and (c) increasing pressure over the natural resources and wildlife habitats triggered by deforestations and intensive tourism practices ( Figure 1D). As a consequence of the complex aspects characterizing HWI and the acuteness of the phenomenon within the Upper Prahova Valley, we developed the next hypothesis: (i) "the upper Prahova valley suffered in the post-socialist period both a major loss of forest ecosystems and an increasing HWI conflict", and (ii) "local stakeholders could provide deep insights regarding the potential triggering factors of HWI". Therefore, the aim of the study is to identify the effects of landscape change on HWI. The objectives of our assessment are: (i) to quantify the local landscape spatial and temporal dynamics in the post-socialist period (after 1990), and (ii) to analyse the potential causes of HWI within the study area based on local stakeholders' perspective.
Study Area
The study was developed in three major settlements within the Upper Prahova Valley (Sinaia, Bus , teni and Predeal), popularly known as some most important winter tourism centres of Romania. The valley is located in the Southern Carpathians and it is bordered by mountain massifs: Bucegi (west), Baiului (east) and Clăbucetele Predealului (north) (Figure 2). The valley lies in the Alpine biogeographical region. Mixed forests composed by European beech (Fagus sylvatica), European silver fir (Albies alba), European spruce (Picea abies), European larch (Larix decidua), and common yew (Taxus baccata) dominate the landscape, with several large patches of intact old-growth forests still being preserved in areas where forest exploitation and management is difficult [26]. These habitats host one of the largest populations of large carnivores within Europe, the main species being the intensively studied brown bear, grey wolf, and Eurasian lynx (Lynx lynx). Other rare and protected wildlife species include the European wild cat (Felis silvestris), black goat (Rupicapra rupicapra) and several bird species. The Eurasian capercaillie (Tetrao urogallus) and common raven (Corvus corax) are the most representative. The area hosts a dense concentration of protected areas, such as the Bucegi Natural Park (designated in 2003), a homonymous Natura 2000 site of community importance (designated in 2007), and numerous scientific reserves, established in order to preserve both geological natural wonders and valuable botanical elements, such as the edelweiss (Leontopodium nivale) [27]. Despite the fact that in the post-socialist period the number of residential settlements decreased, the periphery expanded, and numerous vacation homes and accommodation units were built on areas occupied by forest until 1990. The phenomenon was driven by uncontrolled tourism expansion [28]. Due to these factors, the protected regions are facing unprecedented pressure on the natural environment and the increasing number of interactions with wildlife leads to frequent conflicts.
Landscape Change Analysis
In order to quantify local landscape spatial and temporal dynamics in the post socialist period, we have conducted a landscape change analyses through the Binary model [29] and the Markov model [30]. In our study, the Binary model is used to identify the areas where the landscape under study suffered changes within different periods of time, while the Markov chains model adopts a much more complex approach, aiming to highlight transitions of specific land cover classes within the respective time periods. We preferred these approaches for several reasons: (a) they allow for the quantification of landscape changes and even the development of evolutionary scenarios and can be implemented through any available GIS software; (b) they are discrete models in terms of time coordinate (by taking into consideration a finite number of maps of the same area with respect to different time periods); (c) they can be applied on both discrete or continuous spatial data (in our case we used discrete data represented by land cover classes types) and (d) they have a broad range of applications in numerous fields, primarily in natural sciences, geography, landscape ecology [31] and even biology [32].
For the application of the two models, we extracted the Corine Land Cover data from the European Environmental Agency website, for all available years, such as 1990, 2000, 2006, 2012 and 2018 [33]. Since the models are used to highlight changes between different years, we have selected 3 time periods for our assessment: 1990-2000, 2000-2006, and 2006-2018. We preferred a detailed time period approach at the expense of a general one (as in the case of 1990-2018) as our intention was to identify particular land cover class conversions that, although they took place after 1990, did not persist until 2018, yet could be a primal cause for wildlife disturbance.
The study area is characterised by the presence of 13 types of land cover classes with similar features. We chose the reclassification into three major categories, namely built-up, forests and other land cover classes (Table A1 in Appendix A). The reclassification system encompasses the prevailing land cover types within the study area based on the occupied area and the level of human intervention, from areas with high intensity (built-up), to land shared with wildlife (other classes-pastures, grasslands, shrubs etc.) and primary wildlife habitats (forests). Therefore, we were interested in analysing the spatial and temporal dynamics of the landscape between land cover classes which support permanent wildlife habitats and the ones with intensive or extensive human activity. The transitions between these categories are much more relevant as potential triggering factors for conflicts between people and wild animals [34].
We conducted a matrix encompassing all the possible transitions between the reclassified land cover for all the time periods mentioned above and calculated their surface expressed in hectares. Also, we quantified the areas for the unchanged land cover classes, followed by the total changed and unchanged land for the same periods. Nevertheless, because between 2012 and 2018, at the Corine Land Cover broad data scale (Minimum Mapping Unit of 25 hectares for areal phenomena and a minimum width of 100 m for linear), the models did not reflect any landscape changes, we amalgamated 2012-2018 with 2006-2012 into one single period, 2006-2018. Finally, the geographical coordinates of the households involved in HWI were overlapped with the general forest loss map for the entire time period assessed (1990-2018), in order to identify possible spatial correlation between the areas with HWI and the ones where wildlife habitats were removed in order to provide space for built-up areas or other land cover classes.
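The core of both models is a cross-tabulation of the two reclassified rasters for a given period; a minimal sketch (class coding and cell size are our assumptions, not values from the study) is:

```python
import numpy as np

CLASSES = ["built-up", "forest", "other"]

def transition_matrix_ha(raster_t1, raster_t2, cell_area_ha=6.25):
    # raster_t1, raster_t2: integer-coded land cover grids at the start and end of a period
    # (0 = built-up, 1 = forest, 2 = other); cell_area_ha is the area of one cell in hectares.
    k = len(CLASSES)
    counts = np.zeros((k, k))
    np.add.at(counts, (raster_t1.ravel(), raster_t2.ravel()), 1)
    return counts * cell_area_ha  # hectares moving from class (row) to class (column)
```

Diagonal entries give the unchanged area (the Binary model's "no change"), while off-diagonal entries give the Markov-style class-to-class transitions.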
Assessing Local Stakeholders' Perspective on HWI
In order to analyse the local stakeholders' perspective concerning the triggering factors of HWI within the Upper Prahova Valley, we conceived a questionnaire comprising two questions: (a) the first, developed for extracting factual information regarding the main wildlife implicated in HWI and the temporal dynamics of their descends into settlements: "What are the main species implicated in HWI and when do they descend more often?", and (b) the second, aiming to reveal the locals' perception concerning the causes of wildlife descends in settlements within the study area: "What are the potential triggering factors of HWI?". For the first question, the potential answer options were represented by several time periods, with an emphasis on the post-socialist period, when HWI was expected to be much more frequent (2015-present, 2010-2015, 2000-2010, 1990-2000, and before 1990). For the second question, we set potential answer options based on preliminary knowledge concerning the HWI problem within the study area, fundamental through discussions with the local stakeholders and our own personal field observations concerning the phenomenon, from previous years [24].
The sites we used for the survey were selected based on a couple of criteria: (a) the presence of landscape features which could potentially favour HWI (an abundance of households located at the outskirts of settlements, where the built-up area has increased after 1990 and has replaced initial forest habitats); (b) a long-term notoriety as conflict areas where HWI are common, a characteristic revealed by previous discussions with locals and mass media articles; and (c) the field presence of already applied measures regarding HWI management, such as reinforced fences or warning signs. Also, the selected sites were located on both sides of the Prahova Valley (Bucegi, Baiului and Clăbucetele Predealului Mountains). The two mountain massifs possess a different level of anthropization, and our interest was to highlight whether this aspect influences the manifestation of HWI. Therefore, we conducted our research at the following sites: (a) Sinaia City Centre, Furnica neighbourhood and Peleș Castle area (Sinaia), Valea Cerbului camping area, Kalinderu ski area and Cezar Petrescu neighbourhood (Bușteni)-Bucegi Mountains; (b) Cumpătu neighbourhood (Sinaia), Zamora neighbourhood (Bușteni), Cioplea neighbourhood and Clăbucet ski area (Predeal)-Baiului and Clăbucetele Predealului Mountains (Table A2, Figure 2).
We administered 449 questionnaires between September 2018 and August 2019 to three categories of local stakeholders: residents and owners of guest houses, employees of the local leisure industry, and seasonal or occasional tourists (Table A3). For all respondents, we collected only information concerning interactions with wild animals that took place in the proximity of their households, apartment blocks, tourist houses or caravans, so that we could extract the geographical coordinates of every type of dwelling where HWI had been witnessed. We excluded from the assessment the households where, for various reasons, we could not interact with the owners in order to administer questionnaires. In contrast, households where the respondents stated that they had never been involved in HWI were kept in the analysis as investigated but lacking HWI.
The information was collected after obtaining the respondents' verbal consent, and the questionnaires were administered only after the respondents agreed to provide information. The data were processed exactly as initially explained to the respondents, and the process of administering the questionnaires and processing the data complied with the GDPR provisions regarding the anonymity of the respondents.
Statistical Approach
We analysed the relation between the two variables (the time period when HWI were most common and the potential triggering factors) through Canonical Correspondence Analysis (CCA), a multivariate method commonly used in ecology and the social sciences [35]. The analysis was run in R, version 3.1.2, using the cca() function of the vegan package. If a respondent from a specific household left out an answer to at least one of the two questions, that household was excluded from the statistical analysis, resulting in a total of 368 interviews retained for the CCA. The data were binary coded (1 for the presence and 0 for the absence of a species) and divided into two categories: Site 1 (Bucegi Mountains) and Site 2 (Baiului and Clăbucetele Predealului Mountains). The two sites clearly differ: Site 1 is more populated and has a higher density of houses than Site 2. The CCA data are grouped into explanatory variables and response variables [36]. The explanatory variables are driving forces, whereas the response variables represent the presence or absence of species within a specific time period [24] (Table A4).
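The sketch below illustrates, under stated assumptions, how such a CCA and its permutation test can be run with the vegan package. The simulated 0/1 data frames `resp` and `expl` and their variable names are placeholders standing in for the study's 368 retained interviews, not the actual dataset.

```r
# Sketch of the CCA workflow described above, using the vegan package.
# `resp` holds toy 0/1 response variables (species presence per time period,
# V1..V20) and `expl` toy 0/1 explanatory variables (driving forces DF1..DF13).
library(vegan)

set.seed(42)
n <- 368  # households retained for the analysis
resp <- as.data.frame(matrix(rbinom(n * 20, 1, 0.3), nrow = n,
                             dimnames = list(NULL, paste0("V", 1:20))))
expl <- as.data.frame(matrix(rbinom(n * 13, 1, 0.4), nrow = n,
                             dimnames = list(NULL, paste0("DF", 1:13))))

# CCA requires at least one presence per row, so drop empty rows of the toy data
keep <- rowSums(resp) > 0
resp <- resp[keep, ]
expl <- expl[keep, ]

mod.cca <- cca(resp ~ ., data = expl)   # canonical correspondence analysis
summary(mod.cca)                        # axis scores and eigenvalues

# Permutation test of the overall model (999 permutations, alpha = 0.05)
anova(mod.cca, permutations = 999)

# Ordination plot: response variables as points, driving forces as arrows
plot(mod.cca, display = c("species", "bp"))
```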
Results
The main land cover transitions identified for the study area are summarized in Table 2. In both cases, the main causes are the expansion of the outskirts of the three major tourist resorts, meant to provide space for leisure facilities, followed by the development of a major high-altitude infrastructure for athlete training on the Bucegi Plateau (Figure 3). Finally, we identified several hotspots where households frequently involved in HWI (after 1990) were developed on areas previously occupied by forests (before 1990) (Figure 5). In the case of Bușteni city, the Valea Cerbului camping area, where brown bear descents are common, was initially a forested area converted into a local pasture (1990-2000) and then into a campsite (2000-2006). Several deforestations also took place after 1990 to expand the Zamora neighbourhood (Baiului Mountains, Romania). Likewise, in Predeal, the Cioplea neighbourhood, where HWI are frequent, mainly involving brown bears and red foxes, was visibly developed after 1990 on formerly forested land.
The data obtained from stakeholders highlighted three wildlife species which often descend into settlements: the brown bear, the wild boar and the red fox. In 75.2% of the investigated households, respondents indicated that brown bear interactions with humans had been much more frequent in the last five years (after 2015), and only 2.6% of cases reported a high frequency of brown bear HWI before 1990. The wild boar was much more common after 2015 (in 30% of cases), and the red fox was also most frequently involved in HWI after 2015 (19.5%). The grey wolf was usually observed between 2010 and 2015 (Table 3). The CCA offers a deeper analysis than clustering and is specifically useful for understanding the relation between the driving forces and the presence or absence of species in the context of landscape change from 1990 until the present (Figure 6).
The statistical significance of the CCA was tested through permutation tests (999 permutations, alpha = 0.05) [37]. The answers of the local people in Site 1 and Site 2 are mostly related to the explanatory variables (F = 10.01, p < 0.001); CCA1 (axis 1) is 2.9 and CCA2 (axis 2) is 1.09. The explanatory variables which contributed most to explaining the presence or absence of species were scored as follows: DF1 = 1.8; DF2 = 6.04; DF3 = 1.6; DF4 = 1.3; DF5 = 2.0; DF6 = 2.5; DF7 = 0.8; DF8 = 0.6; DF9 = 0.5; DF10 = 0.8; DF11 = 0.29; DF12 = 0.3; DF13 = 0.6. The most significant are: DF2-Banning of hunting; DF6-Humans have invaded their habitat due to the construction of houses, roads or touristic infrastructures; DF5-The animals are accustomed to artificial feeding; DF1-Poaching; DF3-The park rangers do not feed the animals; DF4-There are too many wild animals compared to how much the habitat can support. We concluded that there are no differences between the two sites in terms of the presence or absence of species, although the two sites differ in human population and household density. (Figure 6 legend, fragment: response variables combine time period and species, e.g. V16-1990-2000/grey wolf; V17-before 1990/brown bear; V18-before 1990/wild boar; V19-before 1990/red fox; V20-before 1990/grey wolf; the explanatory variables DF1-DF13, shown as arrows, are defined in Table A4.)
The Loss of Habitats Is Related to Human-Wildlife Interactions (HWI)
The landscape change models revealed that the Upper Prahova Valley suffered considerable forest loss, especially after 2000, when the outskirts of the major resorts sprawled into the forest and numerous vacation houses were constructed. Within the same period, significant forest conversions into meadows and camping areas were registered. Conversely, between 2000 and 2006, the models pinpointed forest transitions into grasslands on highly inaccessible slopes [38]. The landscape changes registered after 2000 materialized as persistent urban sprawl and were favoured by several political and economic factors. First came the expansion of residential areas, probably driven by planning policies and a weak role of state regulation [39]. Second, the mountain areas of Central Europe were affected by an economic trend of increasing tourist pressure and aggressive development of leisure facilities [40].
The Bucegi Mountains are a traditional tourist attraction with extensive infrastructure and a high flow of tourists, whereas the Baiului Mountains lack such popularity. Nevertheless, the statistical analyses indicated that the temporal patterns of HWI do not seem to be influenced by the level of anthropization. Instead, the chaotic expansion of medium and small accommodation units that developed rapidly after 1990 within both mountain massifs seems to be a plausible triggering factor for HWI. We therefore overlaid the locations of households where HWI took place, according to the respondents, on the areas where forests were replaced by built-up areas and other land cover classes. The results suggest that wildlife habitat loss and disturbance could influence the manifestation of HWI. In conclusion, the households affected by HWI at the outskirts of the three major settlements analysed, whether located on the slopes of the Bucegi or the Baiului Mountains, were built on land that was forested before 1990.
These hotspots are the result of several distinct spatial processes: suburbanization, the expansion of recreational buildings and the development of camping areas. The expansion of recreational buildings is by far the most common and widespread process of spatial change at the outskirts of the study area, and it is characteristic of the Furnica neighbourhood in Sinaia and the Cioplea neighbourhood in Predeal. Here, a wide variety of recreational buildings, such as pensions, vacation houses, hotels and tourist villas, have spread into a once natural habitat, replacing large forest areas and incorporating the smaller remaining patches into built-up areas. The Cumpătu neighbourhood in Sinaia and the Zamora neighbourhood in Bușteni are both located on the slopes of the Baiului Mountains and clearly reflect suburbanization. In these cases, former forest habitats were replaced by expanding residential areas and private households, whereas tourist facilities are scarcer. The new built-up areas also have a much more compact distribution compared with the recreational areas of the Bucegi Mountains, which show a more scattered pattern. Lastly, the development of camping areas can be found on the periphery of Bușteni, where a dense concentration of caravans occupies large pastures from late spring to the beginning of autumn. The area is completely surrounded by forests which sustained a continuous wildlife habitat until the development of the pasture.
Overall, our analyses indicate that HWI have increased at the outskirts of the settlements within the Upper Prahova Valley after 1990. Furthermore, after 1990, in the same areas, the expansion of accommodation units and camping sites has degraded forest habitats. The temporal and spatial correlation of the two variables (HWI and forest transition into built-up areas) suggests that HWI are a potential consequence of the continuous shrinking of natural habitats and of chaotic tourism activities. The results correspond with Dorresteijn et al. [21], who analysed the different ways in which local people perceive interactions with brown bears in Central Transylvania and concluded that deforestation and land-use change were perceived as major wildlife-disturbing factors with the potential to increase future conflicts. According to Rozylowicz et al. [34], between 1990 and 2006, in the Eastern Carpathians, 45% of the forest per mapping unit was clearcut without any landscape-scale management or ecologically oriented principles, and numerous wildlife habitats were consequently disturbed. Similarly, despite the increasing area of natural reserves, forest habitat disturbances inside protected areas and even within core reserve areas persisted after 1995 and 2005, primarily because of massive logging rates and abrupt ownership changes [16]. Conversely, a different perspective belongs to Chapron et al. [41], who consider that the decline of human land-use activities, materialized through the abandonment of agricultural land and the migration of people from rural to urban areas in search of a higher standard of living, has decreased the pressure on the environment and allowed wildlife habitats to recover. According to the management plan [27], the internal zoning system of the Bucegi Natural Park encompasses four functional areas, which allow the following activities: sustainable development, sustainable management, integral protection and strict protection. Based on our models, the post-socialist expansion of recreational buildings within the park boundaries is located in the sustainable management areas, which have been designated precisely to allow the development of tourist activities. Yet, at the same time, the sustainable management areas occupy large portions of forests that extend from the periphery of settlements to regions with wild habitats included in the integral or even strict protection zones. We argue that the areas of tourist development should be separated from those of strict protection by a buffer zone, in order to reduce the ecological dysfunctions generated by mass tourism and to prevent conflicts between humans and wild animals. Furthermore, we advocate redesigning the spatial arrangement of the tourism development areas specified in the management plan, with a stricter limitation of their extension, to minimise the pressure on adjacent natural ecosystems. Lastly, the areas of strict protection, which have the highest conservation value and scientific interest, should be mapped out once more, taking into account alternatives that avoid superimposing them on major leisure facilities and intensively used tourist trails [12].
The Perception of Local Stakeholders Could Help Us Understand the HWI Phenomenon
According to the local stakeholders, the species that most frequently interacted with humans (the brown bear and the red fox) were involved in an increasing number of descents into settlements after 1990, reaching a peak within the last five years; this trend characterises all the surveyed sites. The grey wolf is an exception, having been implicated only in sporadic interactions with humans between 2010 and 2015. The wild boar was almost absent until 2015, when the number of interactions with humans increased abruptly. Our study reveals that species with generalist feeding habits and a wide-ranging diet are much more involved in HWI within major tourist areas than pure carnivores, such as the wolf. The low number of households raising grazing livestock within the studied settlements could explain the insignificant number of grey wolf interactions, since this species enters into conflict with humans mainly over livestock depredation [22]. Besides diet, another explanation for our results could be body size: compared with grey wolves, brown bears are bigger and more powerful, do not always fear or avoid humans, and engage in conflicts much more frequently [19].
The statistical model highlighted that, after 1990, the most significant HWI driving forces perceived by locals fall into two types of management practice: (a) the conservation approach, which allows wildlife numbers to increase and is supported by active management and restrictive hunting legislation, and (b) the economic approach, characterised by a disturbing impact on natural wildlife behaviour, poor management practices of the forest administration authorities (lack of food supplied by forest rangers and illegal hunting) and rapidly increasing tourism, generating habitat loss and wildlife habituation induced by artificial feeding. The view that conservation practices explain the increase of HWI is supported by Stăncioiu et al. [19], who showed that conflicts are a negative side effect of wildlife conservation which affects the coexistence between humans and animals. Such conflicts could be prevented by controlling wildlife populations through sustainable hunting; however, this is not currently possible, because conflict-generating species such as large carnivores have been strictly protected since Romania joined the European Union in 2007. According to Chapron et al. [41], Romania shelters a large brown bear population of around 6000 individuals, characterised by high stability and, in the past, active management, owing to the avoidance of institutional collapse following the post-communist transition. These aspects led to proper wildlife conservation and allowed a massive increase of large predatory mammals after World War II (1945). Popescu et al. [17] state that, through an unprecedented move in 2016, the Romanian government temporarily restricted traditional hunting practices and offered the chance to reset wildlife conservation and to develop a science-based conservation approach. The large populations of wild ungulates in Europe could also contribute to the wellbeing of predatory mammals [41]. In the case of the wild boar, Geisser and Reyer [42] consider that European populations have increased, favoured by changes in crops, the reintroduction of specimens in areas where they had been exterminated, reduced numbers of natural enemies (primarily grey wolves) and restricted hunting practices. Conversely, Vetter et al. [43] suggest that one of the main factors behind the wild boar population increase in Europe over the last decade is changing climate conditions, namely less severe winters and higher temperatures, which allow a higher survival rate of individuals over the winter season. The economic view of increasing HWI holds that, in the case of the brown bear, the total number of animals is much lower than official data suggest, and that the supposedly very high numbers are used as a cover for authorizing hunting campaigns in which both Romanian and foreign citizens participate [44]. According to Linnell et al. [45], given the continuous loss and high fragmentation of habitat triggered by the economic and social development of post-socialist Romania, the Carpathian brown bear population was considered vulnerable and in need of strict protection. The impact of tourist activities on HWI was analysed by Fortin et al. [46], who noted that the habitat of brown bears increasingly intersects the rapidly expanding area of tourism infrastructure; the study concluded that a considerable proportion of peripheral specimens was influenced by tourists feeding the animals.
The overall importance of our study is that the results strengthen the scientific knowledge on several topics of interest within the field of wildlife conservation and management: the influence of landscape changes on the spatial and temporal pattern of HWI in mountain areas with major tourism resorts and high pressure on natural ecosystems, and their potential to develop into major sources of ecosystem disservices [47]. Landscape change analyses prove efficient in assessing the potential of human-induced spatial and structural dynamics of complex landscapes to trigger ecological dysfunctions, materialized through increasing conflict interactions between local citizens, tourists and wild animals [48]. Such maps are also relevant in highlighting the negative impact of sporadic and poorly regulated economic activities, especially tourism, on the natural environment. They may reveal critical areas in a timely manner, so that decision-making authorities can implement urgent measures, as in the case of built-up development in strictly protected areas [49]. If correlated with wildlife habitat favourability maps (maps of areas with high density, food resources or habitat connectivity), landscape change models could help improve the zoning system of protected areas by adapting the integral protection areas to regions with high-conservation-value ecosystems, and the areas destined for resource exploitation to sectors already suffering from disturbances [50,51]. Our maps can help local authorities enhance their wildlife management practices by identifying unaltered natural habitats suitable for locating wildlife feeding points, or improve sustainable tourism practices through systems of trails and wildlife observation towers [52]. In Romania, studies have focused on assessing wildlife habitat requirements in human-dominated mountain landscapes [53,54]. The increasing HWI problem was usually handled through studies dedicated to describing conflict characteristics and human attitudes towards large carnivores, especially brown bears [18,19], while few studies focused on revealing and explaining potential triggering factors of HWI [21,55]. The drivers of the HWI phenomenon were usually attributed to legislation and the connected management practices of forest administration institutions triggered by post-socialist changes [17,34], whereas the impact of spatial landscape changes driven by the capitalist economy and mass exploitation of natural resources on HWI magnitude and dynamics has been poorly explored. The usefulness of improving wildlife habitat connectivity through a system of protected areas in order to decrease livestock depredation by large carnivores and prevent conflicts with shepherds has been studied in the Western Carpathians [56]. Decreasing human pressure on wildlife habitats by restricting human activity in the proximity of protected areas has been proposed in the Rocky Mountains as an appropriate HWI management tool [57]. Furthermore, in order to effectively understand and reduce conflicts between black bears (Ursus americanus) and people in the USA, Atwood and Breck [58] developed a framework with an emphasis on data regarding both social and economic factors and wildlife habitat loss. In a similar manner, Koening et al.
[59] proposed a conceptual framework aiming to understand and manage various dimensions of HWI through an interdisciplinary and transdisciplinary approach, focusing on agricultural landscapes, where habitat loss represents one of the main triggers for problematic interactions.
In the Upper Prahova Valley of Romania, post-communist transformations have led to suburbanization and excessive tourism activities in the proximity of areas with high-conservation-value ecosystems and wildlife habitats. This phenomenon has favoured the degradation of the natural environment, fostered artificial feeding habits among wild animals, and intensified interactions between humans and wildlife, exposing both parties to conflicts. In addition to the expansion of built-up areas, other factors that have contributed to the increase of human-wildlife interactions are the recovery of wildlife populations due to hunting regulations and the lack of food supplied at wildlife feeding points by forest staff.
The results indicate that the development of a network of protected areas in Romania has yet to achieve all its objectives, especially improving the acceptance by locals and tourists of the protection of large predatory mammals, such as brown bears and grey wolves. Moreover, the internal zoning system of protected areas planned by the authorities is contested by locals, who are dissatisfied that protected areas extend to the vicinity of human settlements and that there is no form of fencing to hinder the entry of wild animals into inhabited areas. Lastly, the locals vehemently contest the tourist activities allowed by local authorities on protected-area territory. The most relevant example is that of the Bușteni City Hall administration, which permitted the construction of a large camping area in the proximity of the city after 1990. Residents consider that the tourists who camp there from spring to autumn are responsible for leaving large amounts of trash, all of which has led to a change in the feeding habits of wild animals.
These driving forces can be controlled by increasing the collaboration among the complex range of entities involved in managing interactions with wild animals, such as decision-making authorities, researchers, conservationists, environmental activists, tourists and locals. Authorities should adapt upper-level decisions and regulations both to the recommendations of researchers and environmental conservationists and to local stakeholders' needs [24]. To do so, a comprehensive management plan must be elaborated, focusing on a diverse palette of methods that take into account wildlife conservation, human welfare and economic development. These methods should include redesigning the internal zoning system of protected areas by regulating mass tourism development in the proximity of high-conservation-value natural habitats [57], enhancing landscape connectivity in areas where forest habitats have been fragmented by developing ecological corridors for wild animals [56], and fostering efficient waste management in order to minimize wildlife habituation induced by artificial feeding [24].
Conclusions
The conclusion of this study is that it is vital to investigate in depth the potential triggering factors and driving forces of negative HWI, in order to promote economically sustainable and environment-friendly wildlife management.
The perception of local stakeholders plays a crucial role in understanding and addressing the HWI problem. The attitude of the communities towards HWI, be it positive or negative, is essential in balancing the benefits of ecologically oriented wildlife management (which favours nature preservation, low-impact tourism activities and effective wildlife control through organized hunting).
Landscape change models could also represent an efficient and robust tool for revealing potential landscape dysfunctions in terms of wildlife habitat loss, degradation or other human-induced disturbances. Further studies could establish their linkage with the spatial and temporal pattern of HWI. The results could reveal hidden major HWI driving forces by correlating the spatial distribution of HWI with landscape change models consisting of land cover conversions between natural classes which could potentially shelter wildlife habitats (such as forests) and built-up areas.
We consider that close collaboration between decision-making authorities, environmental researchers, conservationists and local stakeholders is crucial in order to sustain the healthy ecological recovery of viable wildlife populations in human-dominated landscapes.
This approach could be strengthened by better educating locals and raising their awareness of the ecological and economic aspects of cohabitation with vulnerable and protected wildlife species, for example through awareness-raising events and science-based educational campaigns.
Conflicts of Interest:
The authors declare no conflict of interest.
Table A4. The explanatory variables (driving forces) and response variables (presence and absence of species within a specific time period) used for the CCA analysis.
Driving Forces-Explanatory Variables: DF1-Poaching; DF2-Banning of hunting; DF3-The park rangers do not feed the animals; DF4-There are too many wild animals compared to how much the habitat can support; DF5-The animals are accustomed to artificial feeding; DF6-Humans have invaded their habitat due to the construction of houses, roads or touristic infrastructures; DF7-Wildlife habitats offer less food due to recent deforestation actions; DF8-Wildlife are affected by the intensive exploitation of mushrooms and berries; DF9-Relocations; DF10-The presence or absence of sheepfolds; DF11-The removal of the local dumpsite; DF12-Forest privatization leading to higher management intensities/shorter rotation periods; DF13-Lack of herbivores or natural enemies. | 2021-05-10T00:03:18.530Z | 2021-02-02T00:00:00.000 | {
"year": 2021,
"sha1": "232dafed087ac64b7026d90f4fbb820296a036e3",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-445X/10/2/146/pdf?version=1612499140",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "de79754cc558ee002e652770c376fb46e49647ef",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Geography"
]
} |
249104234 | pes2o/s2orc | v3-fos-license | Analysis of the Market Response of the Shared Accommodation Industry
Nowadays, more and more tourists choose to use the A application and other travel software to book shared accommodation instead of traditional hotels, and in recent years shared accommodation has developed rapidly. This research focuses on the reasons for choosing the A application: why tourists choose it and how they use it. The researcher uses a qualitative method and the constant comparison technique to analyse the data. Secondary data, together with face-to-face interviews, the official website, journal articles and handbooks, were used in the analysis; three articles and the official website were reviewed. Based on the findings, the researcher identifies the advantages of the shared accommodation application and gives some advice to tourists.
INTRODUCTION
In the past few years, shared accommodation has grown very rapidly. Many tourists today do not choose to stay in traditional tourist accommodation but in shared private homes, where they can not only get the service they want but also experience local life. Therefore, more and more tourists want to use shared accommodation. Homeowners can rent out their idle houses to tourists, not only earning extra money but also making new friends, so they also like to use the A application. The A application may therefore become a vital part of the tourism industry.
At present, the shared accommodation application covers 6.5 countries and regions and 65,000 cities, and has more than 4 million listings. As the earliest and most widely used short-term rental platform, its worldwide influence cannot be matched by other short-term rental platforms in the short term. Abroad, home sharing has become one of the main accommodation choices for people when they travel. According to data released by the National Tourism Data Center, the number of foreign tourists entering Thailand continued to increase throughout 2017, and this increase creates more demand for domestic accommodation. Because the shared accommodation application has worldwide influence, it attracts more foreign tourists than other local short-term rental platforms in Thailand. Similarly, Thai tourists who travel abroad see the influence of the A application there and believe that it is more convenient and reliable to use, which will also promote the development of the platform in Thailand.
Cooperation between shared accommodation platforms and hotels may increase in the future as both strive to learn from each other's advantages. Shared accommodation will continue to develop its strengths to provide a more comfortable and distinctive experience.
RO1. To find out the target market for the A application in Bangkok.
RO2. To find out the A application's strategy to attract tourists compared with traditional hotels in Bangkok.
RO3. To find out the future impact of the A application.
RO4. To find a good accommodation through using the A application.
Factors such as interaction, family welfare, novelty and the spirit of the sharing economy have been associated with tourists' choice of the application. The existing studies select Airbnb as the platform through which tourists book accommodation; they offer many useful analyses, but they also have limitations. [1] Research on the Airbnb customer experience: evidence of convergence across three countries, by Ana Brochado (2017). Brochado (2017) studied Airbnb's customer experience, analysing Airbnb as a sharing platform that combines the Internet and rental housing; its guest reviews reflect customers' true responses to Airbnb and can increase transaction volume. [2] Research on "If nearly all Airbnb reviews are positive, does that make them meaningless?", by Judith Bridges, explores the impact and standardization of Airbnb's reviews; the results show that Airbnb reviews are useful for visitors and can help them choose suitable accommodation. [3] According to "Airbnb customer experience: evidence of convergence across three countries" (2017), the purpose of that research note is to examine the customer experience of Airbnb. Airbnb, a sharing platform that links suppliers of living space with those needing short-term accommodation, has remarkable customer satisfaction as evidenced by its user reviews (Ert, Fleischer, & Magen, 2016; [4] Zervas, Proserpio, & Byers, 2015). [5] While the aforementioned papers suggest that Airbnb's ratings may be inflated, other work indicates that the amount of reviewer bias is small and that the ratings reflect high transactional quality.
METHODOLOGY
This study used qualitative analysis to examine shared accommodation, supported by the literature on the topic, and drew simple inferences from interviews with international and local tourists in Bangkok. The target population is tourists who use the A application, with a sample size of about 10 interviewees. The first step in the sampling process was to interview visitors, based on check-list questions, in places that visitors often go. [6] The study also used email interviews: an email interview was conducted with a travel blogger, covering the discovery, selection and choice of shared accommodation and recommendations as to which shared accommodation is better.
This research used a qualitative method to collect data through face-to-face interviews, phone interviews and official websites. The plan was to start with interviews of local and international tourists. Face-to-face interviews were used with the local and international tourists and were recorded and photographed. Due to unavoidable circumstances, an email interview was conducted with the well-known travel blogger instead of a face-to-face interview. [7] The interviews covered how to choose a shared accommodation, the future development of shared accommodation, how to use the app, and recommendations.
RESULTS
According to the results of the interviews and open coding, many young people like to use shared accommodation software such as Airbnb to book accommodation. They prefer clean and comfortable accommodation with a good environment and location, and the A application can provide them with such accommodation.
Airbnb is software for renting shared accommodation; shared accommodation can be a shared room or a whole apartment. It allows users to choose among more types of accommodation and to have different experiences in the local area. [9] According to the data, the price of shared accommodation is cheaper than hotel prices, and it feels more like home. People choose the accommodation that suits them according to the comments of other tourists.
Airbnb has become one of the most well-known and most trusted shared housing apps because it was the earliest to create shared housing, has long-standing experience and has a good strategic plan. Therefore, a large number of landlords work with the platform, and it has a group of loyal customers. Airbnb has many advantages that traditional accommodation does not have, such as: 1. Good or unique scenery where there are few or no hotels nearby. Airbnb is definitely a good choice when there is no hotel nearby, and even where there are hotels nearby, they are often expensive or cannot offer a room that justifies such a price. So for tourists, the A application is the right choice for a cost-effective allocation of their budget.
2. It feels like home. Compared with traditional accommodation, the A application's housing is usually the landlord's own house. The house is very beautiful and warm, is displayed on the A application, and usually has a kitchen. If you are not used to the local food or miss your hometown cuisine during the trip, this comes in handy. Without the constraints of traditional accommodation, guests feel less like strangers in a new place, so they can better integrate into their destination and enjoy a more pleasant journey.
3. Booking in the A application is very fast and convenient. Compared with other housing apps, it has a simple and clean interface, and the location of the house, the price range and the type of house are clear and easy to search. Payment is also convenient and quick, and the simple steps mean that visitors are no longer troubled by complicated payment methods. Fast check-in and check-out are convenient for tourists. [10] The boutique accommodation recommendations for the destination city allow people to learn more about the destination. During the stay, guests can learn about interesting places to visit and local customs from their hosts, and discover delicious food that only locals can find.
A different local experience and the variety of accommodation types that can be chosen are also advantages of the A application. Because the A application has so many advantages, tourists like to choose it.
CONCLUSION
Even though Airbnb has many advantages, it also has certain safety problems. After all, private houses do not have the security arrangements of traditional accommodation, and the chance of getting help in an accident is lower. Since tourists are most eager to protect their personal safety, Airbnb should take some measures to enhance trust in the A application. Externally, landlords could be enabled to install cameras around the periphery of the house; internally, an alarm device connected directly to the police station could be installed, so that a guest in danger can press the alarm to get help from nearby residents or wait for the police. In this way, the A application, given its currently good development prospects, will keep improving and develop more sustainably.
How can tourists use it better? The researcher gives the following suggestions. Visitors must use their own judgment: when choosing accommodation, they should look at what services the host provides, negotiate with the host according to their needs, and then make a choice.
Finally, and most importantly, pay attention to other tourists' comments; they are very useful.
If the comments are bad, the accommodation is not worth choosing; if the comments are good, it should be chosen. Other comments can also give us some advice on protecting ourselves. | 2022-05-28T15:14:52.790Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "e57bd73562c0815ad58e28ecb8b24e9f656716cf",
"oa_license": "CCBYNC",
"oa_url": "https://www.atlantis-press.com/article/125974444.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "9f580cd194da8983eff26a239a95fd08ec4670ab",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
} |
260929797 | pes2o/s2orc | v3-fos-license | NG2-positive pericytes regulate homeostatic maintenance of slow-type skeletal muscle with rapid myonuclear turnover
Background: Skeletal muscle comprises almost 40% of the human body and is essential for movement, structural support and metabolic homeostasis. The size of multinucleated skeletal muscle is stably maintained under steady conditions, with the sporadic fusion of newly produced myocytes compensating for the muscular turnover caused by daily wear and tear. It is becoming clear that microvascular pericytes (PCs) exhibit myogenic activity. However, whether PCs act as myogenic stem cells for the homeostatic maintenance of skeletal muscles during adulthood remains uncertain.
Methods: We traced PC-fused myofibers using a PC-specific lineage-tracing mouse (NG2-CreERT/Rosa-tdTomato) to observe whether muscle-resident PCs have myogenic potential during daily life. A genetic PC-deletion mouse model (NG2-CreERT/DTA) was used to test whether PCs differentiate into myofibers for the maintenance of muscle structure and function under homeostatic conditions.
Results: Under steady breeding conditions, tdTomato-expressing PCs fused into myofibers, and subsequently, PC-derived nuclei were incorporated into myofibers. Especially in type-I slow-type myofibers such as the soleus, tdTomato+ myofibers were already observed 3 days after PC labeling; their ratio reached a peak (approximately 80%) within 1 month and was maintained for more than 1 year. Consistently, NG2+ PC-specific deletion induced muscular atrophy in a slow-type myofiber-specific manner under steady breeding conditions. The number of myonuclei per volume of each myofiber was constant during the observation period.
Conclusions: These findings demonstrate that the turnover of myonuclei in slow-type myofibers is relatively fast, with PCs acting as myogenic stem cells, the suppliers of new myonuclei under steady conditions, and playing a vital role in the homeostatic maintenance of slow-type muscles.
Supplementary Information: The online version contains supplementary material available at 10.1186/s13287-023-03433-1.
Introduction
Dynamic cellular equilibrium, a balance between gaining and losing cells, is an essential characteristic of multicellular organisms and is strictly regulated for adaptation to a variety of external and internal conditions, including activity, anabolism/catabolism, and disease. In contrast to most mononuclear cells, skeletal muscle cells are among the few syncytial cell types. According to the myonuclear domain theory, a given myonucleus has limited transcriptional capacity and controls a defined volume of the myocytoplasm. Based on this theory, a linear relationship exists between the total number of myonuclei and muscle fiber size [1,2]. The myonuclear number is adjusted to maintain the proper nuclear-to-cytoplasmic ratio: new nuclei are added during hypertrophy and lost with atrophy. Thus, the number of myonuclei is altered accordingly during muscle hypertrophy and atrophy [3]. However, the mechanisms by which the myonuclei are turned over, and skeletal muscles are maintained during adulthood, remain unclear.
Since skeletal muscle nuclei are post-mitotic, the remarkable regenerative and homeostatic maintenance capacities of muscle reside in the resident myogenic stem cell population, called satellite cells (SCs). These cells fuse into the syncytium for myonuclear addition or replacement [4-6]. Recent studies utilizing conditional SC-deletion mouse models have reported that effective myofiber hypertrophy occurs in the absence of SCs [7], and SCs are not globally required to maintain muscle mass throughout the lifespan [8,9]. Therefore, myonuclei are assumed to be supplied by SCs and other myogenic cells to sustain muscle mass under homeostatic conditions.
Pericytes (PCs) are mural cells embedded in the capillary basal lamina that regulate fundamental microvessel functions, such as blood flow and permeability [10,11]. Some PC populations exhibit multipotency, similar to mesenchymal stem cells (MSCs) [12-15]. Several reports indicate that PCs, which are distinct from SCs, can differentiate into skeletal muscle in vitro and in vivo [14,16,17]. Experiments using lineage tracing of alkaline phosphatase+ PCs revealed that PCs fuse with developing myofibers and become Pax7+ SCs, contributing to myogenic growth during postnatal muscular development [18]. Furthermore, Kostallari et al. [19] reported that PCs are indispensable for postnatal skeletal muscle growth, using a transgenic mouse model for the selective deletion of neuroglial 2 proteoglycan (NG2)+ PCs: PCs stimulate muscle growth through insulin-like growth factor 1 and regulate SC quiescence through angiopoietin, subsequently promoting muscle growth during the neonatal period. However, these myogenic effects of PCs, acting as myogenic stem cells and/or SC-associated cells, are restricted to the neonatal/juvenile developmental stage, and the role of PCs in muscular maintenance during adulthood is not well understood.
Besides PCs, fibro-adipogenic progenitors (FAPs) have various functions, such as the maintenance of somatic stem cells, including SCs [20,21], and their MSC-like multipotency contributes to ectopic fat formation in skeletal muscle tissues [22]. Platelet-derived growth factor receptor alpha (PDGFRα)+ FAPs are reportedly required for the homeostatic maintenance of adult skeletal muscle by providing an SC-sustaining microenvironment [23,24]. Additionally, the inducible depletion of FAPs in adult mice under normal breeding conditions for up to 9 months reduced the number of SCs and resulted in muscle atrophy [23]. Further, Uezumi et al. [24] reported that inducible FAP deletion produces phenotypes markedly similar to sarcopenia, including myofiber atrophy, alterations in fiber types, and denervation at neuromuscular junctions. However, the inability to genetically target FAPs in vivo has limited the accurate assessment of their roles in muscle regeneration and homeostasis. Ultimately, whether myogenic stem cells other than SCs contribute to the homeostatic maintenance of skeletal muscles during adulthood remains uncertain.
In this study, PC-specific lineage tracing and inducible PC-deletion mouse models were used to address the necessity of PCs in adult skeletal muscle maintenance. PC-specific lineage tracing experiments were performed to demonstrate the myonuclear turnover and the contribution of PCs to myonuclear supplementation. Collectively, we aimed to elucidate the role of microvascular PCs in maintaining long-term homeostasis in the skeletal muscles.
Animals
All mice used in this study had a C57BL/6J genetic background, were housed under specific pathogen-free (SPF) conditions at 22-26 °C under a 12 h:12 h light-dark cycle, and were provided regular chow and tap water ad libitum during the experiment. The NG2-specific fluorescent mice (NG2-DsRed mice) and NG2-specific Cre-inducible cell lineage tracing mice (NG2-CreERT/Rosa26-STOP-floxed tdTomato-Tg [NG2-CreERT/Rosa tdTomato] mice; The Jackson Laboratory) were generated as previously described [25,26]. The NG2-specific Cre-inducible cell deletion mice (NG2-CreERT/DTA mice) were generated by crossing NG2-CreER mice and Rosa26-STOP-floxed-DTA-Tg mice (The Jackson Laboratory) [27]. Male mice [12-16 weeks of age, 25 ± 5 g body weight (bw)] were used for all experiments. To induce Cre recombinase, mice were treated intraperitoneally with tamoxifen (Tam; Sigma-Aldrich, St. Louis, MO, USA) at a dose of 100 mg/kg bw for 5 days. For long-term observation (more than 1 month), additional Tam (100 mg/kg bw) was injected monthly to maintain PC deletion. Rosa26-STOP-floxed-DTA mice treated with Tam and NG2-CreERT/DTA mice treated with vehicle (corn oil) were regarded as controls A and B, respectively. All animal experiments were performed in accordance with the ethical guidelines approved by the Animal Care and Use Committee of Asahikawa Medical University.
Physiological performance tests
Muscle strength was assessed using a grip-strength device (MK-380M; Muromachi Kikai Co., Tokyo, Japan). Mice were held by the tail and allowed to grab the wire mesh of the grip-strength device with each limb. The mice were gently pulled away until the grip was released, and the maximal force was recorded. Measurements were performed 10 times for each mouse. Exercise tolerance was assessed using a treadmill test, with minor modifications to a published protocol [17]. Briefly, the mice were placed on the belt of a lane motorized treadmill (TMS-4B; MELQUEST, Toyama, Japan). After a warm-up period of 5 min (flat lane, belt speed of 10 m/min), the mice ran under the test conditions (+15° slope, 15 m/min), and the maximum running time was measured.
Histology and immunohistochemistry
To estimate functional vessels, mice were anesthetized with isoflurane (between 1.5 and 2.5%) to minimize suffering and injected with 300 µL of fluorescein isothiocyanate (FITC)- or rhodamine-labeled Griffonia simplicifolia lectin (500 µg/mL in PBS; Vector Laboratories, Burlingame, USA) before euthanasia, as described previously [25]. Euthanasia was performed by cardiac puncture under isoflurane anesthesia. Fresh muscle samples were embedded in a compound (Surgipath FSC 22 Blue; Leica Biosystems, Wetzlar, Germany), quickly frozen in liquid nitrogen, and stored at −80 °C until further use.
Measurement of myonuclear domain
Isolation of single myofibers was performed with modifications to a previous description [28]. Briefly, after the skeletal muscles were fixed with 4% PFA for 48 h, they were incubated in a 40% NaOH solution for 2 h at 24 °C to isolate single myofibers. The muscle samples were then neutralized by soaking in 1 M Tris-HCl solution (pH 6.0). The isolated myofibers were mounted on glass slides with 10% glycerol containing Hoechst 33342 (H3570; Invitrogen, Thermo Fisher Scientific, Waltham, MA, USA). Isolated myofibers were imaged using a fluorescence microscope (BZ-X710; Keyence), and the area of each myofiber and the number of nuclei within the fibers were measured.
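For illustration, a minimal sketch of this type of quantification is given below: each fiber segment is approximated as a cylinder, and the volume per nucleus (the myonuclear domain) and the number of nuclei per millimetre of fiber are computed. The numeric values are invented for the example and are not measurements from this study.

```r
# Sketch of a myonuclear-domain calculation: fiber segments are treated as
# cylinders and their volume is divided by the number of myonuclei counted.
# The example values below are illustrative placeholders, not study data.
fibers <- data.frame(
  segment_length_um = c(500, 520, 480),   # measured segment length (µm)
  diameter_um       = c(55, 60, 50),      # mean fiber diameter (µm)
  nuclei_count      = c(42, 50, 37)       # Hoechst-positive nuclei per segment
)

fibers$volume_um3             <- pi * (fibers$diameter_um / 2)^2 * fibers$segment_length_um
fibers$domain_um3_per_nucleus <- fibers$volume_um3 / fibers$nuclei_count
fibers$nuclei_per_mm          <- fibers$nuclei_count / (fibers$segment_length_um / 1000)

print(fibers)
```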
Fluorescence in situ hybridization (FISH)
To detect NG2+ PC-originated myonuclei, the nuclei in which genetic recombination had been induced by NG2-Cre were detected using PCR-FISH methods. To detect the Cre-specific genetic recombination site (tdTomato gene), the following primers were designed: forward primer, GGG CCC TAA GAA GTT CCT ATTC; reverse primer, GGG GAA GGA CAG CTT CTT GT. Myofibers were labeled by in situ PCR to synthesize digoxigenin (DIG)-labeled DNA using the PCR DIG Probe Synthesis Kit (Sigma-Aldrich, St. Louis, MO, USA) according to the manufacturer's protocol, with some modifications (denaturation at 95 °C for 40 s, annealing at 60 °C for 20 s, and elongation at 72 °C for 15 s). DIG-labeled probes were detected with an anti-DIG antibody (ab420, mouse monoclonal, 1:100, Abcam) and visualized with an Alexa 488-conjugated anti-mouse IgG antibody (A11029, goat polyclonal, 1:1000, Invitrogen).
In vitro myogenesis assay
After treatment with Tam for one week, soleus muscle fibers of NG2-lineage mice were isolated using collagenase I solution (100 mg/mL in Hank's PBS). Muscle fiber samples were incubated in complete Dulbecco's modified Eagle's medium (DMEM; Gibco, Thermo Fisher Scientific, Waltham, MA, USA) containing 20% fetal bovine serum (FBS; Corning, Corning, NY), 100 U/mL penicillin, and 100 μg/mL streptomycin. After cells had grown out from the fiber explants, the medium was switched to a differentiation medium, DMEM containing 2% horse serum (Gibco, Thermo Fisher Scientific, Waltham, MA, USA). The medium was changed every 4 days, and myogenic differentiation was confirmed by observing myotube formation or by immunostaining for MyHC, MYH2 and MYH7.
In vitro cell viability assay
Adipose stromal cells (ASCs) were prepared from the subcutaneous adipose tissue of NG2-CreERT/DTA mice, and NG2+ cells were isolated using a magnetic-activated cell sorting system as described previously [29]. Isolated cells (1.5 × 10^4 cells per well) were seeded on 12-well plates, incubated overnight, and then treated with 2 µM 4-hydroxytamoxifen (4-HT; Sigma-Aldrich) or vehicle (DMEM with 10% FBS, 100 U/ml penicillin, and 100 mg/ml streptomycin) for 6 days. Cell numbers were counted in four high-power fields for each well, and the averages were compared.
Gene microarray analysis
One month after treatment with Tam or the vehicle (corn oil), soleus muscles from each group of PC-deletion mice were dissected, frozen in liquid nitrogen, and stored at −80 °C. The mRNA of these samples was subjected to microarray analysis using the 3D-Gene Mouse Oligo Chip 24K (Toray Industries Inc., Tokyo, Japan), as described previously [15]. The signal corresponding to each gene was normalized using the global normalization method (Cy3/Cy5 ratio median = 1). Intensity values greater than two standard deviations above the background signal were considered valid. GO enrichment analysis was conducted using the metascape.org website.
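A minimal sketch of these two processing steps (background-based filtering and median-centred global normalization) is shown below on simulated two-channel intensities; the vectors and their values are illustrative placeholders, not the 3D-Gene chip output.

```r
# Sketch: filter probes against background and apply global (median) normalization.
# `cy3`, `cy5` and `background` are simulated placeholders, not real chip data.
set.seed(7)
n_genes    <- 1000
background <- rnorm(n_genes, mean = 100, sd = 10)
cy3        <- abs(rnorm(n_genes, mean = 400, sd = 150))   # control channel
cy5        <- abs(rnorm(n_genes, mean = 500, sd = 180))   # treated channel

# Keep only probes whose signal exceeds background mean + 2 SD
threshold <- mean(background) + 2 * sd(background)
valid     <- cy3 > threshold & cy5 > threshold

# Global normalization: scale the ratios so that the median Cy5/Cy3 ratio equals 1
ratio      <- cy5[valid] / cy3[valid]
ratio_norm <- ratio / median(ratio)
summary(ratio_norm)   # the median is 1 after normalization
```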
Statistical analysis
Experimental data are presented as means ± standard error of the mean (SEM) unless otherwise noted. The sample numbers (n) are given in the figure legends. Differences between two groups were evaluated using the unpaired Student's t-test; for comparisons of more than two groups with normal distributions, one-way ANOVA was used, followed by Tukey's post hoc test, using Prism software version 9.0 (GraphPad, San Diego, CA, USA). A P value < 0.05 was considered statistically significant.
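For illustration, the sketch below reproduces the same comparisons (unpaired t-test, and one-way ANOVA followed by Tukey's post hoc test) in base R rather than Prism, on simulated group measurements; the group names and values are placeholders, not data from the study.

```r
# Sketch of the statistical comparisons described above, in base R on toy data.
set.seed(3)
ctrl_a <- rnorm(8, mean = 9.5, sd = 0.8)   # hypothetical measurement, control A
ctrl_b <- rnorm(8, mean = 9.4, sd = 0.8)   # control B
pc_del <- rnorm(8, mean = 7.8, sd = 0.8)   # PC-deleted group

# Two groups: unpaired Student's t-test
t.test(ctrl_a, pc_del, var.equal = TRUE)

# More than two groups: one-way ANOVA followed by Tukey's post hoc test
dat <- data.frame(value = c(ctrl_a, ctrl_b, pc_del),
                  group = rep(c("ctrl_A", "ctrl_B", "PC_del"), each = 8))
fit <- aov(value ~ group, data = dat)
summary(fit)
TukeyHSD(fit)   # pairwise comparisons; P < 0.05 considered significant
```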
NG2 + PCs contribute to muscular regeneration in a muscle-type-dependent manner during adulthood
To examine whether PCs contribute to myogenesis under homeostatic conditions in vivo, we genetically labeled PCs and observed their long-term fate in uninjured normal adult muscles. We utilized NG2 (neuron-glial antigen 2) as a PC marker, since we previously confirmed that PCs are labeled specifically in peripheral tissues, including skeletal muscle (Additional file 1: Fig S1). We efficiently induced recombination in over 80% of NG2+ PCs after treatment with tamoxifen (Tam), using an NG2 promoter-driven gene recombination system, NG2-CreERT [17,25,29].
NG2-CreERT/Rosa-tdTomato mice were treated with Tam for one week, and muscles were harvested at the indicated time points (Additional file 1: Fig S2). After Tam treatment, tdTomato-expressing cells were observed specifically in PCs adjacent to microvessels in skeletal muscle (Additional file 1: Fig S2). After normal breeding for 1-12 months, tdTomato-expressing cells were observed in myofibers. Notably, the degree of PC contribution to myofibers differed among muscle types: the soleus and gastrocnemius (limb muscles) comprised 85% and 35% tdTomato+ myofibers, respectively, whereas the muscles of the diaphragm and abdominal wall contained 80% and 30%, respectively (Additional file 1: Fig S2). The soleus and diaphragm are rich in slow-type myofibers.
Skeletal muscle fibers are broadly classified into type I (slow-twitch/fatigue-resistant), type IIB (fast-twitch/fatigue-susceptible), and type IIA (intermediate) fibers based on their physiological properties and myosin heavy chain (MyHC) isoforms [30]. To confirm the muscle-type specificity of PC-contributing myofibers, myofibers were immunolabeled for MyHC isoforms. The soleus muscles contained mostly slow-type myofibers (45, 55, and 5% of type I, IIA, and IIB, respectively), and tdTomato-expressing cells were located in slow-type myofibers 6 months after Tam treatment (Fig. 1A, B). These data suggest that PCs contribute to the selective regeneration of a slow-type myofiber subset under static breeding conditions.
Rapid turnover of muscular nuclei with NG2 + PCs for maintenance of muscle mass
Next, we examined the time course of tdTomato labeling in myofibers. tdTomato-expressing myofibers were observed within one week of normal breeding, and their ratio to total myofibers increased and peaked around 1 month after Tam treatment (Fig. 1C, D). Notably, although NG2+ PCs were labeled with Tam only once at the beginning of the experiment, this peak ratio was maintained for up to 12 months (Fig. 1C, D). Rodent muscle mass increases dramatically during the postnatal growth period (0-10 weeks of age) but remains mostly constant after the growth period under normal breeding conditions. According to the myonuclear domain theory, the myonuclear number in myofibers is stable for a fixed volume of myofiber in adulthood, especially under normal breeding conditions [1]. Indeed, the number of myonuclei in each myofiber of the soleus was constant among 16-32-week-old mice (Fig. 2A, B).
Because skeletal muscle fibers are multinucleated cells, tdTomato-expressing myofibers can be detected if any myonuclei are replaced with nuclei from NG2+ PCs. To detect myonuclei originating from NG2+ PCs within myofibers, the recombinant tdTomato gene within the myonuclei of isolated myofibers was detected using fluorescence in situ hybridization (FISH). NG2+ PC-originated nuclei were detected in isolated myofibers of the soleus 2 months after the induction of tdTomato expression in NG2+ PCs (Fig. 2C). According to the time course of the ratio of tdTomato-labeled myofibers (Fig. 1C, D), PCs contributed to myonuclear replacement (at least one nucleus in each myofiber) at a rate of 5-7% and 1-2% of fibers per day in the soleus and gastrocnemius muscles, respectively. These data suggest that PCs contribute dynamically and persistently to myofiber regeneration through myonuclear replacement under homeostatic conditions.
Muscle-resident NG2 + PCs have myogenic potential to differentiate into muscle fibers
Other investigators and we have reported that PCs isolated from peripheral tissues, including skeletal muscle and adipose tissue, have myogenic potential in vitro and differentiate into functional skeletal myofibers when transplanted into dystrophy model mice [16,17]. To minimize artificial modification by cell subculture manipulation, we prepared myofiber explants from skeletal muscles and examined the myogenic potency of primary PCs in the microvessels attached to the myofiber explants (Additional file 1: Fig S3). After 6 days of myogenesis induction, tdTomato + PCs differentiated into myosin heavy chain (MyHC)-stained myofibers (Fig. 3A). In parallel with myogenesis, the expression of myogenesis-related genes, including myoD and myogenin, was increased (Fig. 3B). To test whether myogenic PCs differentiate into the fiber type of the muscle from which they originated, PCs isolated from soleus muscles were examined; they differentiated into MyHC-positive myotubes but not into advanced type-specific myofibers (Additional file 1: Fig S4).
Deletion of NG2 + PCs induced muscular atrophy, specifically in slow-type muscle fibers
Although lineage-tracing experiments suggest that PCs contribute to myogenesis through myonuclear replacement, it is possible that PCs fuse into myofibers nonspecifically and that PC-originated nuclei do not act as myonuclei.Thus, we examined the consequences of genetic deletion of PC on the structure and function of skeletal muscle under homeostatic conditions.NG2-CreERT/DTA mice were treated with Tam and subjected to muscular functional and histological analyses at the indicated normal breeding time following PC deletion.NG2CreERT -/-/DTA with Tam and NG2-CreERT/DTA without Tam were used as controls A and B, respectively.
We confirmed that the recombinant CreERT/DTA system appropriately induced deletion of NG2 + PCs with tamoxifen treatment in an in vitro system (Additional file 1: Fig S5). In vivo, however, general phenotypic changes, including circulatory disorders and loss of body weight, were not observed in the early stages of PC deletion, similar to a previous study [19]. Four months after PC deletion, body weight slightly decreased (Fig. 4A), and the general physiological performance (measured by the treadmill test) was significantly reduced (Fig. 4B). The relative pure muscle strength (tested by grip power) was not altered during the observation period of up to 4 months (Fig. 4C). Notably, no alterations in body weight, muscular atrophy, or reductions in performance attributable to non-specific side effects were observed in any of the control groups.
After 4 months of Tam treatment, the mass of the soleus, a typical slow-type red muscle, was selectively decreased.In contrast, muscular atrophy in fast-type muscles such as the triceps and gastrocnemius was not observed (Fig. 5A, B).Notably, aside from the changes in muscle volume, we also observed alterations in the external characteristics in the soleus, i.e., a whitish appearance; this was not present in other lower limb muscle types (Fig. 5A).Histological analyses indicated significant atrophy of the myofibers in the soleus, but not in the gastrocnemius, after 4 months of Tam treatment (Fig. 6A).Notably, atrophy of myofibers in the soleus muscle was already observed in the first month of Tam treatment.In agreement with the atrophy of the soleus by PC deletion, the cross-sectional area (CSA) of myofibers within the soleus muscle was significantly attenuated.Myofiber CSA distribution across soleus muscles showed a leftward shift owing to the higher abundance of smaller fibers at 1 and 4 months after PC deletion (Fig. 6B).
The soleus is a muscle that is rich in slow-type muscle fibers (MyHC type I and type IIA fibers) but poor in fast-type muscle fibers (type IIB) (Figs. 1A, 7A). At 4 months after the induction of PC deletion, the proportion of type IIB fibers (fast) was increased and, accordingly, the proportion of type IIA fibers was decreased in the atrophic soleus muscles (Fig. 7A). No differences were observed in lectin-stained functional microvessels within the muscles between the PC-deletion and control groups (Fig. 7B).
Genomic expression analyses in skeletal muscles under PC-deletion
To confirm the muscular condition under PC deletion, quantitative PCR and comprehensive gene analyses were performed using the soleus muscles of PC-deletion and control mice.PC deletion for 1 month induced a change in gene groups related to several pathways according to Gene Set Enrichment Analysis (GSEA) and Gene Ontology (GO), e.g., an increase in inflammation reaction, cell proliferation, and apoptosis, and a decrease in metabolic activity (Table 1, Suppl. Figure 5).
Although we confirmed that Tam treatment induced the deletion of NG2 + PCs using the NG2-CreERT/DTA system in in vitro experiments (Additional file 1: Fig S4 ), observing PC deletion in tissues in vivo was challenging.NG2 gene expression in the soleus muscles was not altered during early (one week) to extended periods (4 months) after the induction of PC deletion (Fig. 7C).The expression of Pax7 (an SC marker) was also not altered (Fig. 7C).Gene array analysis demonstrated that the expression of PC marker genes, including PDGFRβ and smooth muscle actin (acta2), or the SC marker Pax7 did not change, while the expression of NG2 tended to increase (Table 1).
The deletion of PCs leads to muscle atrophy; thus, the expression of the myogenesis-related genes (MyoD and Myogenin) was significantly increased in the PC-deletion group compared to the controls (Fig. 7D, Table 1).Furthermore, in line with the alterations in MyHC expression patterns, the expression levels of muscle-type specific myogenic genes, Myh7 (slow type) and Myh2 (intermediate type), decreased, while those of Myh4 (fast type) increased significantly (Fig. 7D).
Discussion
NG2 + PC-specific lineage tracing experiments demonstrated that PCs contribute to myogenesis in adult steady-state conditions, especially to slow-type myofibers such as those of the soleus. Fluorescent tdTomato-expressing PCs fused into multinuclear myofibers and, subsequently, PC-originated nuclei labeled the entire myofiber. In line with the myonuclear domain theory, the number of myonuclei in each myofiber was constant with a fixed muscle volume under homeostatic conditions. In addition, we demonstrated that NG2 + PC deletion induced muscular atrophy in a slow-type myofiber-specific manner. This evidence rules out the possibility of non-specific fusion of labeled PCs to myofibers in the lineage-tracing experiments. Collectively, these data indicate that the myonuclear turnover of slow-type myofibers is relatively fast, and that PCs act as myonuclear suppliers to maintain homeostasis in slow-type myofibers.
Discrepancy between in vivo and in vitro PC-deletion studies
Because PCs are a crucial component of microvessels that regulate and maintain their function and structure, their deletion may cause circulation disorders. It is well documented that genomic deletion or antibody-mediated blocking of PDGFβ, which is a marker for PCs/smooth muscle cells and mediates their function, causes chronic PC deletion and, subsequently, several abnormalities due to microvascular circulation disorder [31,32]. Moreover, PDGFRβ-deficient mice exhibit reduced body weight and fail to survive beyond the first postnatal week, presumably due to severe impairment of vessel wall integrity [33,34]. In contrast to PDGFRβ + PCs, NG2 + PC deletion induces a reduction in PC coverage of cortical capillaries at 3 days post-PC deletion and rapid neurovascular uncoupling in the brain circulation system only in the acute phase; the long-term effects of PC deletion have not been reported [35]. Similarly, in the present study, circulation disorders and related phenotypic abnormalities, including body weight loss and muscle atrophy, were not observed (Figs. 4, 7B). In addition, gene expression profiles did not indicate the prevalence of ischemic disorders in PC-deleted muscle tissues, i.e., angiogenesis-related genes such as VEGF and PECAM1 did not increase. However, the expression of some ischemia-related genes, such as HIF, was slightly elevated (Table 1).
In contrast to the in vivo system discussed above, in vitro experiments using the NG2-CreERT/DTA system demonstrated that Tam treatment induced NG2 + PC deletion and, accordingly, the gene expression of NG2 was decreased (Additional file 1: Fig S5). This discrepancy between in vivo and in vitro experiments may be explained by compensatory PC replacement. Berthiaume et al. [36] reported that an in vivo laser beam-mediated acute single PC deletion causes temporary loss of microvascular tone at the deletion site; however, PCs were promptly refilled in this area, although little is known about the "PC progenitor cells" responsible for PC replenishment. NG2 is a specific marker for PCs, whereas PDGFRβ is a relatively wide-range PC/smooth muscle cell marker, presumably including PC progenitor cells [10,37]. Thus, the different phenotypes in cell-deletion experiments targeting NG2 + PCs versus PDGFRβ + PCs might be owing to the broader range of targeted PC populations. Deletion of relatively broad PC populations, i.e., PDGFRβ + PCs, may abolish the PC compensatory function and induce severe microvascular dysfunction. Induction of NG2 + PC deletion in NG2-CreER/DTA mice paradoxically maintained the expression of NG2 genes. This may be due to the balance between the deletion and regeneration of NG2 + cells by PC renewal. The compensatory PC regeneration system could maintain the basic circulatory function and structure of microvessels. It is well documented that dysfunction of PCs, namely PC loss followed by microvascular disorder, is a fundamental pathological feature of advanced diabetes mellitus complications [38]. Hyperglycemia induces damage in vascular cells, including PCs, within hours or days through the excess production of reactive oxygen species (ROS) [39]. A crucial unanswered question is why, in humans in vivo, hyperglycemia requires a decade or more to provoke microvascular disorders followed by diabetes-related complications, such as retinopathy and cardiovascular diseases. This gap in the timing of the toxic effects of hyperglycemia in vivo may be explained by the presence of a PC-replenishment system. Further studies are required to clarify the mechanisms of the compensatory PC-replenishment system.
Table 1 Comprehensive gene expression profile in response to PC deletion
After induction of NG2 + PC deletion for 1 month, gene expression within the soleus of PC-deletion and control mice was estimated by microarray analysis. The signal for each gene was normalized using the global normalization method. A significant difference was defined as an up- or down-regulation of at least twofold (* 2-fold, ** 4-fold, *** 8-fold differences in the log2 ratio) among genes with a fluorescence value of > 100 (boldface). ND: not detected.
Rapid myonuclear turnover in slow-type fibers
In adulthood, skeletal muscle myonuclear number is maintained under steady conditions, with the sporadic fusion of myocytes to compensate for muscle turnover from daily wear and tear [1].PC-specific lineage tracing experiments have demonstrated that PCs contribute to myonuclear replacement, and myonuclear turnover is relatively fast, at least in slow-type myofibers.This is supported by previous studies demonstrating that the myonuclear turnover of the soleus is higher than that of other fibers, using labeled myonuclear tracing techniques [2].In mammalian skeletal muscle, multiple myofiber types are intermingled within a single muscle group.Each muscle group exhibits varying proportions of the different fiber types, and muscle fibers can remodel their phenotypes to adapt to environmental changes.
Reduced muscle usage from paralysis or prolonged bed rest, namely immobilization and disuse syndrome, causes significant atrophy of all muscles.It is accompanied by a decrease in myonuclear number, particularly in type I fibers compared to type II fibers [1,2,40].From the energy balance perspective, this is reasonable, as the energy consumption of type I fibers is high.Thus, rapid myonuclear turnover might contribute to a quick adaptive response to regulate fiber volume.
In our lineage tracing experiments, PCs were labeled by the expression of tdTomato only once before observation. Skeletal muscle fibers were labeled by tdTomato, and their ratio to total muscle fibers reached a peak (approximately 80-90% in the soleus) within 1 month. Since myonuclear turnover is rapid and labeled PCs are consumed by frequent myonuclear replacement, the proportion of labeled myofibers may decrease unless tdTomato-expressing myonuclei are continuously supplied. However, the proportion of labeled fibers was maintained for more than 1 year. In general, somatic stem cells, including myogenic stem cells, are induced to divide in response to tissue demands such as muscle damage or increased activation. During the division of cells for myogenic differentiation, at least one of the daughter cells appears to be maintained as a stem cell, namely by asymmetric cell division [41]. According to previous studies [14,16], PCs act as myogenic stem cells in vivo. Thus, when PCs are labeled as tdTomato-expressing cells at the start point, labeled PCs continuously supply myonuclei to the fiber syncytium, while some self-renew and are maintained as labeled PCs. Indeed, in addition to labeled myofibers, labeled PCs were well maintained even after a long observation period of more than 1 year (Figs. 1C, 7A). Myogenic stem cells (satellite cells) derived from fast- or slow-type muscles are heterogeneous cells with different differentiation potentials [42,43]. In our in vitro myogenesis system, PCs isolated from soleus muscles differentiated into MyHC-positive myofibers (Fig. 3) but not into advanced type-specific myofibers (Additional file 1: Fig S4). It remains to identify conditions that induce further myo-differentiation in order to confirm whether PCs have heterogeneous myogenic potentials. Alternatively, additional external stimuli may be required to induce advanced myofiber differentiation. It has been reported that the fate of SCs during muscle regeneration is primarily influenced by a complex network of intrinsic as well as extrinsic regulators, such as innervation [44,45].
Significance of slow-type muscle specificity
The myofibers switched from type I (slow) to type II (fast) in an adaptive response to reduced myonuclear supplementation from PCs.The percentage of SCs in the soleus muscle is generally higher than that in other muscles [46,47]; thus, under PC deletion, myogenesis might be mediated by other myogenic stem cells, such as SCs, which can differentiate into fast-type myofibers [48].However, this compensatory myogenesis is not sufficient to compensate for muscular atrophy that results from PC deletion.The niche surrounding SCs is crucial for regulating their functions and affecting muscle regeneration [6].PCs also act as SC-associated cells to form a niche for SCs during the neonatal period [19].Thus, PC deletion might attenuate SC functions for muscular homeostasis even in adulthood, although there was no change in the expression of SC marker genes such as Pax7 (Fig. 7C).
Skeletal muscle fibers vary in their metabolic characteristics; i.e., type I fibers have a highly oxidative metabolism with high capillary density, and type II fibers are further defined as type IIA (oxidative) and type IIB (having oxidative and glycolytic metabolic characteristics) [30]. In addition to oxidative metabolism, type I slow-twitch myofibers have a high lipid oxidative capacity and increased insulin-stimulated glucose uptake, with a high content of the insulin-regulated glucose transport protein (GLUT4) compared to type II fibers [49]. The proportion of type I myofibers correlates with insulin responsiveness and may be involved in the etiology of insulin resistance in obesity. Obesity and type 2 diabetes mellitus are associated with reduced proportions of type I fibers and, conversely, increased proportions of type IIB fibers in the skeletal muscle [50,51]. In our study, the expression profiles of genes related to the generation of metabolites and energy were significantly reduced in the soleus muscle under PC deletion (Additional file 1: Fig S6). Thus, dysfunction of PCs, which is a fundamental feature of diabetes mellitus [38], may contribute to the metabolic disorder in diabetes mellitus through slow-type-specific skeletal muscle atrophy, in addition to microvascular disorders. The PC-deletion mouse model could be utilized as a type I-muscle-fiber-specific atrophy model, and further studies are required to investigate the role of type I muscle fibers in the metabolism and pathogenesis of diabetes mellitus.
Conclusions
The myonuclear turnover of slow-type muscle fibers is relatively fast. PCs contribute to myonuclear supplementation, acting as myoprogenitor cells for the homeostatic maintenance of type I muscle fibers. The PC-deletion mouse model is valuable for investigating the role of slow-type muscle, especially in metabolism. Thus, understanding this fiber type-specific role may provide critical insights into the pathophysiology of metabolic syndrome, obesity, and type 2 diabetes mellitus and their potential treatments. The mechanisms by which PCs are renewed after their deletion and their interaction with SCs and muscular neurons for the homeostatic maintenance of muscle tissue require further investigation.
Fig. 1
Fig. 1 NG2 + cell-originated muscle fibers in a muscle-type dependent manner.A NG2 + PC-specific lineage tracing was performed using NG2-CreERT/Rosa-tdTomato mice.Immunostaining short axis view of lower leg muscle (gastrocnemius and soleus) demonstrates the distribution of each type of muscular fiber, types I, IIA, and IIB.A dashed line square area indicates the soleus.Scale bar = 500 µm.B High-power immunostaining view of the soleus area.Scale bar = 200 µm.C The representative fluorescence view of tdTomato + myofibers within the soleus and gastrocnemius is shown at the indicated time labeling of NG2 + PCs.D The time course of the proportion of labeled myofibers to total myofibers is shown.The values are presented as the means ± standard error of the mean (SEM); n = 3-4
Fig. 2
Fig. 2 Determination of myonuclei originated from NG2 + PCs.A Myofibers isolated from the soleus, measured area of each myofiber, and the number of myonuclei.B The calculated ratio of the myonuclear number to that of myofibers.The values are presented as mean ± SEM (n = 3).ns = not significant.C. NG2 + PC-originated nuclei within myofibers, determined by fluorescence in situ hybridization (FISH).A non-specific DNA probe was used as a negative control.Nuclei are stained by Hoechst 33,342.Scale bar = 50 µm
Fig. 4
Fig. 4 Physiological performance after deletion of NG2 + PCs.PC-deletion induced by Tam treatment of NG2-CreERT/Rosa-DTA mice.Rosa-DTA mice with Tam or NG2-CreERT/DTA without Tam were used as control A and B (Ct A, Ct B), respectively.After primary Tam treatment at the start point, mice were treated with Tam every month.A The time course of body weight of PC-deletion (PC-del) and control mice.B At 4 months after PC deletion, exercise tolerance was assessed using the treadmill test.C At indicated time after induction of PC-deletion, muscular power was estimated by the grip strength of four limbs.The value was normalized by body weight, and presented as the means ± SEM (n = 4-8); *P < 0.05, ns = not significant
Fig. 5 Fig. 6
Fig. 5 Deletion of NG2 + PCs induces muscular atrophy in soleus muscles.A Appearance of isolated soleus, gastrocnemius, and triceps surae muscles in each group 4 months after PC-deletion (PC-del).B Weight of isolated muscles 4 months after induction of PC-deletion.Rosa-DTA mice with Tam or NG2-CreERT/DTA without Tam were used as control A and B (Ct A, Ct B), respectively.Values are presented as the means ± SEM (n = 4-8); *P < 0.05, **P < 0.01, ns = not significant
Fig. 7
Fig. 7 Muscle fiber types and gene expression profiles in the soleus after NG2 + PC deletion. A At 4 months after PC deletion, the muscle types of each myofiber in the soleus were determined by immunostaining, and the proportion of each myofiber type was calculated (n = 4). Scale bar = 50 µm. B Functional vasculature within muscles was estimated as the area of rhodamine-lectin-stained microvessels per observation area of the soleus. Expression of myogenic stem cell marker genes (C) and myogenesis-related genes within the soleus muscle (D) estimated by quantitative RT-PCR. Closed bars = PC deletion (PC-del), open bars = control (Ct). The values are presented as the means ± SEM (n = 3-5); *P < 0.05, **P < 0.01, ns = not significant. Enrichment analysis of the soleus of PC-deletion and control mice was performed; the top 20 upregulated and downregulated pathway-related gene sets are listed.
"year": 2023,
"sha1": "b9279d4cb79bad6dedabc666d3e2bcfc098cf2ed",
"oa_license": "CCBY",
"oa_url": "https://stemcellres.biomedcentral.com/counter/pdf/10.1186/s13287-023-03433-1",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "363c12c9e7fe40985ce851f9341b511d3beb0a88",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
260998096 | pes2o/s2orc | v3-fos-license | An Approach to Solving Direct and Inverse Scattering Problems for Non-Selfadjoint Schrödinger Operators on a Half-Line
In this paper, an approach to solving direct and inverse scattering problems on the half-line for a one-dimensional Schrödinger equation with a complex-valued potential that is exponentially decreasing at infinity is developed. It is based on a power series representation of the Jost solution in a unit disk of a complex variable related to the spectral parameter by a Möbius transformation. This representation leads to an efficient method of solving the corresponding direct scattering problem for a given potential, while the solution to the inverse problem is reduced to the computation of the first coefficient of the power series from a system of linear algebraic equations. The approach to solving these direct and inverse scattering problems is illustrated by several explicit examples and numerical testing.
Studying a Zakharov-Shabat system, even with a real-valued potential, naturally leads to a couple of equations of the form (1) with complex-valued potentials; see [9]. Indeed, consider the Zakharov-Shabat system where ρ is a complex spectral parameter and u(x) is a real-valued potential.
This solution is called the Jost solution of (1). It admits the Levin integral representation [12] (see also [18,23,24])
e(ρ, x) = e^{iρx} + ∫_x^∞ A(x, t) e^{iρt} dt, (6)
where for every fixed x, the kernel A(x, t) belongs to L^2(x, ∞). In [25] (see also [26]) a Fourier-Laguerre series representation for A(x, t) was proposed in the form
A(x, t) = ∑_{n=0}^∞ a_n(x) L_n(t − x) e^{−(t−x)/2}, (7)
where L_n(τ) stands for the Laguerre polynomial of order n. A recurrent integration procedure was developed in [27] to calculate the coefficients a_n(x). The substitution of (7) into (6) was found to lead to a series representation for the Jost solution [25,26]
e(ρ, x) = e^{iρx} (1 + (z + 1) ∑_{n=0}^∞ (−1)^n z^n a_n(x)), x ≥ 0, ρ ∈ C+, (8)
where
z = (1/2 + iρ)/(1/2 − iρ) (9)
is the Möbius transformation mapping the upper half-plane of the variable ρ onto the unit disk of the variable z. In the present work, we consider the direct and inverse scattering problems for (1) subject to the homogeneous Dirichlet condition
y(0) = 0; (10)
however, the approach developed here is also applicable in the case of other boundary conditions, such as y′(0) − h y(0) = 0 with h ∈ C.
The problem (1) and (10) under Condition (2) possesses a continuous spectrum coinciding with the positive semi-axis λ > 0, and may have a point spectrum that coincides with the squares of the non-real roots of the Jost function e(ρ) := e(ρ, 0), if such roots exist. Let us denote them as ρ 1 , . . . , ρ α . Their multiplicity may be greater than one. In this case, instead of norming constants associated to the eigenvalues, the corresponding normalization polynomials X k (x) naturally arise (see Section 3.3 below).
As a component of the scattering data for (1), the scattering function is considered in the strip |Im(ρ)| < ε 0 where ε 0 is sufficiently small (see Section 3.2 below). The direct scattering problem for (1) and (10) consists of obtaining the set of the scattering data {ρ k , m k , X k (x)} α k=1 , s(ρ) .
The overall approach developed in the present work to solve this problem is based on the representation (8). Indeed, the calculation of {ρ k } α k=1 is easily realizable with the aid of the argument principle theorem applied to find zeros of (8) in the unit disc. To the best of our knowledge, there has been no practical way of calculating the normalization polynomials. We propose a simple procedure for computing their coefficients by solving a finite system of linear algebraic equations. For this, an auxiliary result for the derivatives ∂ m ∂z m e(ρ(z), x) is obtained.
The calculation of the scattering function s(p) requires an analytic extension of the Jost function e(ρ) obtained from (8), onto the strip −ε 0 < Im(ρ) < 0. We explore different possibilities for such an extension, including the Padé approximants (see [28,29]) and the power series analytic continuation [30] (p. 150), [31]. This results in an efficient numerical method for solving the direct scattering problem.
The inverse scattering problem consists of recovering the potential q(x) from the set of the scattering data. A general theory of this inverse problem can be found in [12,13,20] (p. 353), [24,[32][33][34][35]. Here, we use the representation (7) for the numerical solution of the problem, thus extending the approach developed in [25,26,[36][37][38] to the non-selfadjoint situation. The inverse Sturm-Liouville problem is reduced to an infinite system of linear algebraic equations. The potential q(x) is recovered from the first component of the solution vector, which coincides with a 0 (x) in (7).
The reduction to the infinite system of linear algebraic equations is based on the substitution of the series representation (7) for the kernel A(x, t) into the Gelfand-Levitan equation (see [39]), where the function f can be computed from the set of the scattering data (11). To approximate the complex-valued function a_0(x), we consider the truncated system of linear algebraic equations, for which the existence, uniqueness and stability of the solution are proved.
Finally, we illustrate the proposed approach by numerical calculations performed in Matlab2021a.
We discuss the details of the numerical implementation of the method: its convergence, stability and accuracy. In a couple of examples, we show the "in-out" performance of the approach, i.e., we solve the direct problem numerically and use the results of our computation as the input data to solve the inverse problem.
The approach based on the representations (7) and (8) leads to efficient numerical methods for solving both direct and inverse scattering problems.
In Section 2, we recall the series representations for the kernel A(x, t) and for the Jost solution, then prove additional results related to these representations. In Section 3, we recall the set of scattering data and put forward an algorithm for solving the direct scattering problem. Additionally, we present analytical examples. In Section 4, the approach for solving the inverse scattering problem is developed. Analytical examples from Section 2 are considered in order to illustrate the approach. In Section 5, we discuss the numerical implementation of the algorithms proposed for solving the direct and inverse scattering problems. Section 6 contains some concluding remarks.
Series Representations for the Transmutation Operator Kernel and Jost Solution
Consider the one-dimensional Schrödinger equation on the half-line (1) where λ = ρ 2 ∈ C is the spectral parameter. The potential q(x) is a complex-valued function satisfying Condition (2) for some ε > 0.
Remark 2. Under Condition
provided the existence of these derivatives; see [21].
Additionally, the kernel A(x, t) has first continuous derivatives that satisfy the inequalities [12] and the equality [12] (p. 328). As was pointed out in [25], since A(x, ·) ∈ L^2(x, ∞), the function a(x, t) := A(x, x + t) e^{t/2} belongs to L^2([0, ∞); e^{−t}) and hence admits the series representation
a(x, t) = ∑_{n=0}^∞ a_n(x) L_n(t), (22)
where L_n(t) stands for the Laguerre polynomial of order n and a_n(x) are complex-valued functions such that {a_n(x)}_{n=0}^∞ ∈ l_2 for any x ≥ 0. For all x ≥ 0, the series (22) converges in the norm of L^2([0, ∞); e^{−t}). Thus,
A(x, t) = ∑_{n=0}^∞ a_n(x) L_n(t − x) e^{−(t−x)/2}, (23)
and, by the orthonormality of the Laguerre polynomials in L^2([0, ∞); e^{−t}), a_n(x) = ∫_0^∞ A(x, x + t) e^{−t/2} L_n(t) dt. This series representation was obtained in [25] for real-valued q(x). However, (23) remains true in the non-selfadjoint case as well.
Proposition 2.
For any fixed x ≥ 0, the series converges pointwise.
Proof. We use [40] (Theorem 6.5), and thus need to verify that the following assertions are true.
The integrals
exist.
To prove the first assertion, it is enough to consider estimate (15). Indeed, Thus, The second assertion follows from the inclusion A(x, ·) ∈ C 1 (x, ∞). The existence of the first integral in (26) follows from the continuity of a(x, ·). Finally, for the second integral we have and thus, from the proof of the first assertion, we obtain Now, the application of Theorem 6.5 from [40] completes the proof.
The series (8) is convergent in the open unit disk of the complex z-plane, D := {z ∈ C : |z| < 1}, and for every x, the function e(ρ, x)e −iρx belongs to the Hardy space H 2 (D) as a function of z [26]. Proposition 3. Let q(x)(1 + x) ∈ L(0, ∞). Then, the kernel A(x, t) admits the representation (23), where for any x fixed the series converges in the norm of L 2 (x, ∞), and the complex-valued coefficients a n (x) satisfy the system of equations −l[a n ] − a n = −l[a n−1 ] + a n−1 , n = 1, 2, . . . , (29) as well as the inequality Proof. The proof of (28) and (29) from [26] (Theorem 10.1, p. 66) given for the case of a real-valued q remains valid in this more general situation as well.
Remark 3.
Under the assumption that functions a (ν) (x, t) are absolutely continuous with respect to t in [0, ∞) for ν = 0, 1, 2, the convergence of the power series in (8) for z ∈ D can be proved with the aid of a result from [42], which states that and Moreover, .
To ensure Condition (33) for j = 0, notice that from (16) we have For j = 1, Condition (33) holds due to (19). However, the fulfillment of (33) for j = 2 as well as that of (34) requires the additional regularity of q(x), ensuring the possibility of the differentiation of the integral equation for the kernel at least three times [12] (p. 296).
Remark 4.
Denote In [27], the following statements were proved in the case of a real-valued potential.
1. If Im ρ > 0, then These results remain valid in the case of a complex-valued potential. Moreover, under the assumptions of Remark 3, we obtain the inequality .
Remark 5.
The substitution of ρ = i/2 into (8) leads to the equality a_0(x) = e(i/2, x) e^{x/2} − 1. Moreover, note that we have
By ω(ρ, x), we denote the solution of (1) satisfying the initial conditions
We also need the solution
Definition 1.
We call the roots of e(ρ) that lie in C + \ {0} the singular numbers of the problem (1) and (10).
If they exist, their number is finite. Let us denote the non-real singular numbers by ρ 1 , . . . , ρ α . The numbers λ k = ρ 2 k constitute the point spectrum of the problem, and the multiplicities of the zeros ρ k (k = 1, . . . , α) are called the multiplicities of the singular numbers and denoted by m k , respectively.
Thus, we are interested in the zeros z k of the Jost function e(ρ) = 1 + (z + 1) ∞ ∑ n=0 (−1) n z n a n (0) (41) to obtain the eigenvalues from λ k = − z k −1 For an estimate of the number of the eigenvalues, we refer to [43].
A function satisfying properties 2-5 is said to be of S-type in the strip |Im ρ| < ε 0 . The following examples illustrate some of the above definitions.
Example 1 ([44,45]). Consider the potential (2). With the aid of Wolfram Mathematica v.12 the Jost solution can be obtained in a closed form, where J ν (z) stands for the Bessel function of the first kind of order ν. Hence, and the eigenvalues are the squares of the values ρ ∈ C + such that From here, we obtain the only singular number The scattering function has the form It is well-defined in the domain and is an S-type function in the strip |Im(ρ)| < 1 2 .
The corresponding Jost solution e 2 (ρ, x) is obtained from the Jost solution of a Zakharov-Shabat system (see [46]) with the potential u(x), Thus, the Jost function is It has one root, ρ * = −1, which corresponds to the spectral singularity λ * = ρ 2 * = 1. The scattering function is given by which is an S-type function in the strip |Im(ρ)| < 1 2 .
Example 3 ([21]
). Consider the potentials of the form satisfying Condition (2) for 0 < ε < 2a. The Jost solution has the form from which the Jost function is obtained The square of this ρ represents the discrete spectrum of the problem. The potential (49) is complex-valued when b is not purely imaginary. The scattering function has the form which is an S-type function in |Im(ρ)| < min{a, Im(−ia tanh(b))} in the case of a complexvalued potential. In the case of a real-valued potential, s(ρ) is an S-type function in |Im(ρ)| < a. To present an explicit example, we fix a = 1 and b = −1 − i in (49). Then, (2) and the Jost solution is Thus, the Jost function has the form and one eigenvalue exists: λ = − tanh 2 (1 + i). The scattering function is an S-type function in the strip |Im(ρ)| < 1.
Normalization Polynomials
The normalization polynomial X k (x) of degree m k − 1, associated with the eigenvalue ρ 2 k (m k is the algebraic multiplicity of ρ k as zero of e(ρ)), defined by the equation [18] i Res(Ω(ρ, x); ρ k ) = e iρ k x X k (x) where Ω(ρ, x) is defined by (40). Using the series representation (23) of the kernel A(x, t), we can obtain a method to compute the coefficients of X k (x).
Remark 6.
Note that the series (8) can be written as in terms of the Jacobi polynomials P (α,β) n (τ).
Let us write Equation (51) in terms of the Jost solution and Jacobi polynomials, as follows.
. . , α be an eigenvalue of problem (1) and (10) and m k be its multiplicity. For the normalization polynomial X k (x), the equality holds Proof. The substitution of (23) into (51) yields Here, we change the order of summation and integration due to Parseval's identity [47] (p. 16) and additionally use the equality Thus, The last integral can be explicitly evaluated [48] (Formula 7.414 (7)) where F(a, b, c; z) stands for the hypergeometric function [49] (p. 56). Thus, we have the equation and due to Remark 6, we obtain (53).
Hereinafter, C_n^k = n!/(k!(n − k)!) denotes the binomial coefficient.
Proof. We use the identity [50] where F z means the derivative with respect to z, and j, n are integers. Let us prove the lemma by induction. For m = 1, from (52), we have ∂ ∂z (e(ρ, x)) = e iρx x (z + 1) 2 1 + (z + 1) The application of (55) gives Consider Formula (54) as the induction hypothesis for m = k. The idea is to prove the equation and the equality Then, noting that the second terms on the left-hand side of (57) and (58) coincide up to the sign, the desired result is obtained by summing up both equations.
The proof of Equations (57) and (58) is presented in Appendix A, which completes the proof of the Lemma.
As long as there is no possible misunderstanding, we consider a fixed ρ = ρ k with a multiplicity m = m k and the corresponding normalization polynomial X(x) = X k (x). Thus, the index k is omitted along the following two statements.
satisfy the equation Proof. Comparing (53) with (60) we see that, in fact, we need to prove the equality Note that Then, upon comparison of (61) with (62), it can be observed that proving (61) is equivalent to proving the equality for some natural number s ≤ m − 1. Thus, we are going to prove (64). The substitution of the term with the derivative in (63) by Formula (54) for m = r + 1 is enough to obtain (64) as follows This completes the proof of the Lemma.
Equation (60) provides us with a simple method for computing the coefficients b j in (59), and consequently for calculating the normalization polynomials.
to a complex singular number ρ k satisfy the system of linear algebraic equations where A is an m × m k matrix with entries defined by Here, x j ≥ 0 are distinct points, j = 1, . . . , m (m ≥ m k ). B is an m k vector with its entries being the normalization polynomial coefficients B n = b n−1 , n = 1, . . . , m k , and D is an m vector defined by Proof. The proof consists of observing that each row in (65) is just Formula (60) corresponding to a point x j . The number of rows must be at least m k ; otherwise, the system (65) is underdetermined.
Thus, the coefficients of the normalization polynomial are obtained from the system (65).
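As a computational note, once the entries of the matrix and right-hand side in (65)-(67) have been evaluated at m ≥ m_k sample points, the coefficient vector B is obtained by a (least-squares) solve. A minimal MATLAB sketch is given below; it assumes Amat and Dvec have already been assembled according to (66)-(67), and that the normalization polynomial is expanded in powers of x as in (59) (all names are illustrative):

```matlab
% Minimal sketch: coefficients of the normalization polynomial from system (65).
% Amat is the m x mk matrix (66) and Dvec the m x 1 vector (67), assumed to be
% assembled beforehand at m >= mk distinct sample points x_j >= 0.
b = Amat \ Dvec;                       % least-squares solution of A*B = D

% Assuming X_k(x) = b_0 + b_1*x + ... + b_{mk-1}*x^(mk-1), evaluate at a point:
xpt = 0.5;                             % illustrative evaluation point
Xk  = polyval(flipud(b(:)), xpt);      % polyval expects descending powers
```

For m = m_k the backslash operator reduces to a direct solve; taking m > m_k gives a least-squares fit, which can mitigate noise in the computed entries.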
is called the scattering data set of problem (1) and (10).
Here, ρ k are the non-real singular numbers, m k their multiplicities, X k (x) the corresponding normalization polynomials, and s(ρ) is the scattering function (S-type function in the strip |Im ρ| < ε 0 ).
In order to recall a result on the characterization of the scattering data, we need the following definition [39]. Definition 3. Let s(ρ) be an S-type function in the strip |Im ρ| < ε 0 and let L be a curve lying in the strip and running from −∞ to +∞, such that all roots (poles) of s(ρ) are situated above (below) L. The increment divided by 2π of a continuous branch of Arg s(ρ), when ρ runs along L from −∞ to +∞, is called the index of s(ρ) and denoted by Ind s.
Let us assume that a set J as in Definition 2 is given. A necessary and sufficient condition (obtained in [18]) to ensure that this set represents the scattering data for a problem (1) and (10) with Condition (2) is the following relation
In the case when m_k = 1, the notion of the Birkhoff solution is useful for computing the corresponding norming constants.
Remark 7. Let E(ρ, x) denote the Birkhoff solution of Equation (1) (see [24] (p. 113)), i.e., a solution satisfying the asymptotic relation
is also a Birkhoff solution of (1) for any constant c ∈ C. Note that for ρ = ρ k (a singular number of the problem), the values of all Birkhoff solutions at the origin coincide. We have E( Note that ρ k is a pole of Ω(ρ, x) in the upper half-plane of the complex variable ρ if and only if it is a root of the Jost function e(ρ) (see (40)). Thus, in case of a simple pole ρ k in Equation (67), the residue can be computed as follows whereė(ρ) := d dρ e(ρ), and the corresponding normalization polynomial (in fact normalization constant) is given by Moreover, due to (70), we have Similarly to the case of a real-valued potential [51] (p. 95), one can see that is an entire function of ρ (see [51] (p. 95)). In this case, as a Birkhoff solution E(ρ, x), one can consider the Jost solution e(−ρ, x), Im(ρ) > 0, and hence from (72) we obtain Example 4. According to Remark 8, the normalization constant associated with the unique eigenvalue of the operator from Example 3 is and, in particular, for q 3 (x), we have Example 5. With the aid of Remark 8, an approximate value of the normalization constant for the eigenvalue λ 0 from Example 1 is obtained
Numerical Algorithm
The approximate solution of the direct problem can be performed with the following steps.
4.
To locate the eigenvalues, find the non-real poles of the function Ω(ρ, x), which is equivalent to finding zeroes of the function e(ρ) in the unit disk in terms of z. This can be achieved with the aid of the argument principle theorem. In particular, in the present work, we compute the change in the argument along rectangular contours γ.
If the change in the argument along γ is zero, consider another contour. Otherwise, subdivide the region within the contour until the desired accuracy is attained (a schematic MATLAB implementation of this zero-counting step is sketched after this list). Note that for a sufficiently large N, zeros of e_N(ρ) approximate the square roots of the eigenvalues of the problem arbitrarily closely. The proof is analogous to that in [54] and is based on the Rouché theorem from complex analysis.
5.1
For simple poles, use Remark 8 to obtain the normalization constants.
5.2
Otherwise, for higher multiplicities, solve the linear system of Equation (65) for the coefficients b n k , n k = 0, 1, . . . , m k − 1 computing A j,n and D j defined in Equations (66) and (67) for several values of x j .
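The zero counting in step 4 can be organized as follows. A minimal MATLAB sketch is given below; eN is assumed to be a function handle for the truncated Jost function composed with the change of variable, so that its zeros are sought in the unit disk of z, and the contour parameters and discretization are illustrative:

```matlab
% Count zeros of eN inside a rectangle in the z-plane with centre zc,
% half-width hw and half-height hh, via the change of the argument.
function nz = count_zeros(eN, zc, hw, hh, npts)
    % Corners of the rectangle (counterclockwise), closed back to the start.
    corners = [zc-hw-1i*hh, zc+hw-1i*hh, zc+hw+1i*hh, zc-hw+1i*hh, zc-hw-1i*hh];
    t = linspace(0, 1, npts);
    gamma = [];
    for k = 1:4
        gamma = [gamma, corners(k) + t*(corners(k+1) - corners(k))]; %#ok<AGROW>
    end
    % Total change of arg eN along the contour, divided by 2*pi (winding number).
    args = unwrap(angle(arrayfun(eN, gamma)));
    nz = round((args(end) - args(1)) / (2*pi));
end
```

Rectangles with a nonzero count are subdivided and the count is repeated on the sub-rectangles until the zeros are localized; the refinement can be finished by a few Newton iterations, as mentioned in Example 10 below.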
Inverse Problem
In order to reconstruct the potential in (1) from the scattering data, it is convenient to introduce the function [39]
where η is a number satisfying the inequalities 0 < η < ε_0 (ε_0 as defined in Section 3.2), and the function
Remark 9. Hereinafter, we use the notation
for 0 < η < ε_0, where L_η represents a line parallel to the real axis passing through iη.
The kernel A(x, t) and the function f(x) satisfy the following Gel'fand-Levitan (G-L) equation [39] (Theorem 10.1)
4.1. Infinite Linear Algebraic System for Coefficients a_n(x)
Following [38], from the G-L Equation (78), we deduce the following system of linear algebraic equations for the coefficients a_n(x) from the series representation (23).
Theorem 2.
The complex-valued functions a_n(x) satisfy the equations
Proof. Substitution of the series representation (23) into (78) leads to the equalities
where the change in the order of summation and integration is justified by the general Parseval identity [47] (p. 16).
We have A(x, x + u), e −u/2 L n (u) (81) is equivalent to Multiplying the last equation by L m (s)e − s 2 and integrating this, we obtain Note that and ∞ 0 f (s + 2x + y)L n (y)e − y 2 dy = f n (2x + s).
Expressions for f m (x) and A mn (x)
It is convenient to regard the functions f_m(x) and A_mn(x) as sums of the components corresponding to the continuous spectrum, f_m,c(x) and A_mn,c(x), and to the discrete spectrum, f_m,d(x) and A_mn,d(x), and to simplify these expressions with the aid of the formula ([48], Formula 7.414 (6)).
The continuous and discrete components of the function f_m(x) have the form
and
For the function A_mn,c(x), we have
and for A_mn,d(x), we use (84) to obtain
Remark 10. When an eigenvalue ρ_k^2 is simple and the corresponding normalization polynomial X_k(x) is just a normalization constant c_k, expressions (86) and (88) can be written in the form
We illustrate the calculation of the functions (85)-(88) with some examples.
Example 6.
Consider the scattering function obtained in Example 2: in the strip 0 ≤ Im(ρ) < 1, with no discrete spectrum and thus no normalization polynomials. Let us compute the function φ s (x) defined by (75), where the line L η lies in the strip 0 < Im(ρ) < 1.
Since the function s 2 (ρ) is analytic in the strip 0 < Im(ρ) < 1, the value of the integral is independent of the choice of 0 < η < 1. Using Jordan's lemma to calculate the integral in (75), we obtain Now, computing the functions f m (x) and A mn (x) from Formula (85) and (87) and using the residue theorem, we obtain Thus, in the case of the potential q 2 (x), the system of Equation (79) can be written explicitly.
Example 7.
Consider the scattering function s 3 (ρ) from Example 3. It has two poles in the upper half-plane: at i and i tanh(1 + i). Hence, using the residue theorem, we find that Again, the corresponding system of Equation (79) can be written explicitly.
Example 8. Consider s 1 (ρ) from Example 1. To compute φ s (x), we consider the singularities of s 1 (ρ) in the upper half-plane. From the set D(s 1 ) (see Example 1), we have that s 1 (ρ) has an infinite number of isolated singularities at the points ρ k = ik 2 with k ∈ N \ {0} and a singular number ρ 0 ; see (46). Using properties of the gamma function, we obtain where c 0 is the normalization constant obtained in Example 5. Therefore, for x > 0, we have Hence, the function f (x) has the form and we obtain the functions f m,c (x) and f m,d (x) in terms of z 0 = 1 2 +iρ 0 1 2 −iρ 0 (see (9)) as follows and f n,d (x) = (−1) n+1 c 0 e ixρ 0 (z 0 + 1)z n 0 .
Thus, from Equations (92) and (93) we obtain Likewise, applying the residue theorem, we have Thus, as in the previous two examples, the system of Equation (79) can be written explicitly.
The cancellation of terms when summing up (92) with (93) is not incidental and is generalized below in Remark 12.
To calculate the integrals in functions f m and A mn in the case when the scattering function is given explicitly, we implement Jordan's lemma and the residue theorem considering the asymptotics (44). However, often the function s(ρ) is not given in a closed form but as a table of data-then, the following techniques can be useful to compute the integrals. First, we recall a widely used technique for the quadrature of highly oscillatory integrals through approximations of the Fourier sine and cosine transform. This is illustrated below in Example 16. A second option is a transformation of integrals in f m and A mn into integrals over a finite interval providing a certain advantage for its numerical implementation. This is illustrated below in Example 21.
Remark 11.
We mainly discuss the calculation of the functions f m . The calculation of A mn is analogous.
provided the series on the right-hand side is convergent; see [55], for some 0 < η < ε_0.
2. Following the approach from [56] (p. 236), denote
The integrals on the right-hand side (the Fourier cosine and sine transforms) are approximated by the corresponding sums
where h and N are chosen to be sufficiently small and large, respectively (a schematic quadrature of this kind is sketched after this remark).
3. Transform the line L_η into a circle centered at −2η/(1 + 2η) of radius 1/(1 + 2η) with the aid of the formulas
This enables us to consider the integral in (85) in the form
reducing the integration to a finite interval.
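The quadrature referred to in the second procedure can be sketched as follows. In the MATLAB fragment below, g stands for the (tabulated or interpolated) integrand on the line Im(ρ) = η, and the nodes σ_k = (k + 1/2)h follow the choice used in Example 16 (up to the sign of the nodes); this is a plain midpoint-type rule, the actual sums of [56] may differ in details not reproduced here, and all names and parameters are illustrative:

```matlab
% Midpoint-type approximation of the Fourier cosine and sine transforms of g.
function [Fc, Fs] = fourier_cos_sin(g, x, h, N)
    sigma = ((0:N-1) + 0.5) * h;          % quadrature nodes (cf. Example 16)
    gv = arrayfun(g, sigma);
    Fc = h * sum(gv .* cos(x * sigma));   % approximate Fourier cosine transform
    Fs = h * sum(gv .* sin(x * sigma));   % approximate Fourier sine transform
end
```

In practice h and N are tuned per value of x, as is done with the x-dependent choices N(x) and h in Example 16.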
Stability of the System and Its Solution
Consider the truncated system (79):
Denote its solution as U_M = {a_m^M}_{m=0}^M. In the following two theorems, we prove the unique solvability of (100) and the convergence of its solution to the exact one, as well as its stability.
Proof. Since {f_m(2x)}_{m=0}^∞ ∈ l_2 and {A_{m,n}(x)}_{m,n=0}^∞ ∈ l_2 ⊗ l_2, and we look for {a_m(x)}_{m=0}^∞ ∈ l_2, the assertion of the theorem for the truncated system follows directly from the general theory presented in [57] (Chapter 14, §3).
Proof. Note that the truncated system (100) coincides with that obtained by applying the Bubnov-Galerkin procedure to the G-L Equation (78) with the orthonormal system of Laguerre polynomials in L^2([0, ∞); e^{−x}); see [58] (§14). Let I_M denote the (M + 1) × (M + 1) identity matrix, L_M = {A_{m,n}(x)}_{m,n=0}^M be the coefficient matrix of the truncated system and R_M = {f_m(2x)}_{m=0}^M the right-hand side of (100). Following [58] (§9), consider a system called inexact
where Γ_M is an (M + 1) × (M + 1) matrix representing errors in the coefficients A_{m,n}, and δ_M is the column vector representing errors in the coefficients f_m. Let V_M be a solution of the non-exact system. The solution of the Bubnov-Galerkin procedure is said to be stable if there exist constants c_1, c_2 > 0, such that for ‖Γ_M‖ ≤ r and arbitrary δ_M the non-exact system is solvable, and the following inequality holds
Now, since in the case under consideration the inequality (102) is true (see [58] (Theorems 14.1 and 14.2)), the approximate solution is stable.
Algorithm to Recover the Potential
Given a scattering data set J as in Definition 2, the algorithm to recover q(x) consists of the following steps.
1.
Compute the functions f m (x) and A mn (x) with the aid of (85)-(88).
2.
Solve the truncated system of linear algebraic Equations (100) to obtain the coefficient a_0(x); a schematic solution loop is sketched after this list.
3.
Recover the potential q(x) from a_0(x) with the aid of (38), which requires differentiating a_0(x) twice (see Remark 17).
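Step 2 can be organized as a loop over a grid of x-values. The MATLAB sketch below is schematic: the routines build_A and build_f, which assemble the coefficient matrix {A_mn(x)} and the right-hand side {f_m(2x)} of the truncated system (100) from (85)-(88) for a given x and truncation parameter M, are assumed to be available, and all names and parameters are illustrative.

```matlab
M  = 6;                         % truncation parameter in (100)
xs = linspace(0, 5, 201);       % grid on which the potential is to be recovered
a0 = complex(zeros(size(xs)));
for j = 1:numel(xs)
    A = build_A(xs(j), M);      % (M+1) x (M+1) coefficient matrix {A_mn(x)}
    f = build_f(xs(j), M);      % (M+1) x 1 right-hand side {f_m(2x)}
    u = A \ f;                  % solution vector (a_0(x), a_1(x), ..., a_M(x))
    a0(j) = u(1);               % the first component approximates a_0(x)
end
% a0 is then differentiated twice (step 3) to recover q(x) via (38);
% see Remark 17 and the sketch after it.
```

Since the matrices involved are small ((M + 1) × (M + 1) with M of the order of several units), the cost of this step is dominated by the evaluation of the entries, not by the linear solves.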
Numerical Examples
We implemented the algorithms proposed in Sections 3.4 and 4.4 to solve the direct and inverse problems, respectively, with machine precision and with the aid of Matlab2021. Several examples are discussed, some of which have been introduced in previous sections.
Direct Problem
In this subsection, we discuss the computation of the scattering data, based on the series representation of the Jost solution (8). We deal with the approximate solution obtained by truncating the series (36).
The computation of the coefficients a n (x) is performed with the aid of the recurrent integration procedure from [27].
First of all, we discuss the choice of the number N in (36). Below, we show that a satisfactory accuracy is attained for a relatively small N (from several units to several dozens), and a reliable indicator can be used to choose an appropriate N.
In the case of simple singular numbers ρ_k, the norming constants can be computed with the aid of (73):
Another possibility consists of using (74) in the form
where [m, n]_{e_N(ρ)}(−ρ_k) stands for the Padé approximant of e_N(ρ) evaluated at ρ = −ρ_k. This can be done when the accuracy of this rational approximation in the upper half-plane is satisfactory, i.e., when one has a suitably small value of max |[m, n]_{e_N(ρ)}(ρ) − e_N(ρ)| in a sufficiently large region in the upper half-plane of the complex variable ρ.
A reliable algorithm to compute derivatives of (36) in (104) is proposed in [27].
To obtain the scattering function (42) in the strip |Im(ρ)| < ε 0 we consider two options depending on how the computation of the Jost function is performed for ρ in the lower half-plane. The first one uses provided [m, n] e N (ρ) (ρ) extends e N (ρ) analytically onto a certain strip in the lower half ρ-plane. A second option for the computation of s(ρ) is where the expression (36) is calculated at points ρ of a parallel line sufficiently close to the real axis and contained in the lower half ρ-plane. Remark 13. The notation for the approximate Jost solution (Jost function) may contain two indices, k and N: e k,N (ρ, x) (e k,N (ρ)), where k denotes the solution associated with the Schrödinger equation with the potential q k (x) and N is the parameter from (36).
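As a small illustration of the second option, the scattering function can be tabulated on a line L_η inside the strip from the values of the truncated Jost function; in the examples of Section 5.1, s(ρ) is approximated by the ratio e_N(−ρ)/e_N(ρ), so that evaluating at −ρ requires the extension of e_N to the lower half-plane (either directly from (36) on a line close to the real axis or through a Padé approximant). In the MATLAB fragment below, eN is an assumed function handle for such an (extended) truncated Jost function, and η and the grid are illustrative:

```matlab
eta = 0.1;                                   % 0 < eta < eps_0
rho = linspace(-20, 20, 2001) + 1i*eta;      % points of the line L_eta
sN  = arrayfun(@(r) eN(-r) ./ eN(r), rho);   % approximate scattering function
```

The resulting table of values of sN is exactly the kind of input used by the integration procedures of Remark 11 when s(ρ) is not available in a closed form.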
Example 9.
Consider the potential q 2 (x) from Example 2. We present the indicator ε N in Table 1 for different values of N in (103). Table 2 presents the maximum absolute and relative errors of the approximate Jost function e 2,N (ρ(z)) for z ∈ D for different values of N. The distribution of the absolute and relative errors of the approximate Jost function is presented in Figure 3 and Figure 4 (respectively), where the maximum absolute error is 1.98 × 10 −14 and the maximum relative error is 3.17 × 10 −13 . Furthermore, a good approximation of the derivative of the Jost function becomes essential for the argument principle algorithm performance. This is necessary to obtain the eigenvalues as the squares of non-real zeros of the approximate Jost function. In Figure 5, we illustrate de 2,30 (ρ(z)) dz , and Figures 6 and 7 depict the distribution of the absolute and relative errors, respectively. The maximum absolute error is 9.6 × 10 −13 and the maximum relative error is 7.21 × 10 −13 . To find the singular numbers, we consider the circle {z ∈ C : |z| = 1} (real axis in ρ) and a cubic spline interpolation of the approximate Jost function (N = 30). For the spline interpolation, we use the Matlab routine csapi. To locate the zeros of the spline, we use slmsolve from the Shape Language Modeling (SLM) toolbox, version 1.14 by John D'Errico [59], available for Matlab2021a. The value ρ 1 = −1.000000000000003 was obtained with an absolute error of 3.11 × 10 −15 . Additionally, the argument principle algorithm applied to e 2,30 (ρ(z)) in D discarded any eigenvalue of the problem (non-real zero ρ).
In Table 3, we computed the maximum absolute and relative errors of the Padé approximant [1,1] Table 4, we confirm that this approximant satisfactorily extends the Jost function to a desirable strip in the lower half-plane (the strip is related to the one needed for the calculation of the scattering function s 2 (ρ)). To obtain s 2 (ρ) numerically on the strip 0 < Im(ρ) < ε 0 = 1, we use the truncated series e 2,30 (ρ) and the Padé approximant [1,1] as the most suitable option to avoid the appearance of Froissart doublets. Indeed, the use of the Padé approximants when there is no available information about the smoothness of the function to be approximated is challenging. Some publications propose modified algorithms [60], even using the Toeplitz matrix theory with many numerical implementations in Maple, Wolfram Mathematica (see [61]) or Matlab (see [62]). For the purposes of this paper, it is sufficient to use only the information obtained from the truncated series e N (ρ) and the argument principle algorithm to construct the approximant. Consider the number K of zeros counting multiplicities of the approximate Jost function e N (ρ) (singular numbers being calculated using the argument principle algorithm) located inside D as the degree of the polynomial in the numerator in the Padé approximant. Recalling that, in most cases, an accurate Padé's approximation is obtained on the diagonal approximant types for analytical functions, it is reasonable to choose the Padé approximant as [K, K] e N (ρ) .
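The construction of a diagonal approximant [K, K] from the first 2K + 1 coefficients of a power series is a standard linear-algebra exercise; a minimal MATLAB sketch is given below. The coefficient vector c (c(1) = c_0, ..., c(2K+1) = c_{2K}) about the chosen expansion point is assumed to be available, e.g., from the truncated series e_N and the procedure of [27]; the routine is illustrative and is not necessarily the implementation used in the computations reported here.

```matlab
% Diagonal Pade approximant [K,K]: returns numerator p and denominator q
% coefficients (ascending powers) such that (sum p_j w^j)/(sum q_j w^j)
% matches the series sum c_j w^j through order 2K.
function [p, q] = pade_diag(c, K)
    c = c(:).';                               % row vector c_0, ..., c_{2K}
    C = zeros(K, K);
    for r = 1:K
        for j = 1:K
            C(r, j) = c(K + r - j + 1);       % entry c_{K+r-j}
        end
    end
    qtail = C \ (-c(K+2 : 2*K+1).');          % q_1, ..., q_K (with q_0 = 1)
    q = [1; qtail];
    p = zeros(K+1, 1);
    for k = 0:K
        jmax = min(k, K);
        p(k+1) = sum(q(1:jmax+1) .* c(k - (0:jmax) + 1).');   % p_k
    end
end
```

In line with the heuristic described above, K is chosen as the number of zeros of e_N detected inside D by the argument principle algorithm, which yields the diagonal type [K, K].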
Example 10.
Consider the potential q 3 (x) from Example 3. The approximate Jost function e 3,N (ρ) is computed in the strip 0 ≤ Im(ρ) < ε 2 = 1 for several values of N. In Table 5, the maximum absolute error of the approximate Jost function is presented. Similarly to the previous example, a search for real singular numbers was performed; however, none were detected. Subsequently, the argument principle algorithm located a non-real singular number in D, with the value z 1 ≈ −0.386709149322063 − 0.105221869864471i (ρ 1 ≈ −0.271752585319512 + 1.083923327338694i). Its absolute error is 8 × 10 −15 . The contour refinement is not a concern, since the performed algorithm from [54] is based on the argument principle algorithm followed by several Newton iterations.
Additionally, the Jost function was extended to the strip |Im(ρ)| < ε_0 = 1 through the Padé approximant [1,1]. Next, an approximate value of the normalization constant corresponding to ρ_1 was computed with an absolute error of 2.8 × 10^−9. Finally, we calculate the scattering function by
The maximum absolute error of the approximation of s_3(ρ) in R_1 is 1 × 10^−9 (see Figure 8).
The eigenvalue is computed numerically as a zero of the exact Jost function with the aid of Wolfram Mathematica v.12 (Wolfram Research, Inc., Champaign, IL, USA) λ 0 ≈ λ * 0 := 2.8122672899483 + 2.1722381890043i. This "exact" eigenvalue is compared with the approximation 2.812267289948449 + 2.172238189004328i obtained as the square of the approximate ρ 0 . The absolute error is 1.52 × 10 −13 .
For the numerical calculation of the analytic extension of e 1,98 (ρ) onto the strip − 1 2 < Im(ρ) < 0, it is not possible to consider the Padé approximant [1, 1] e 1,98 (ρ) . This does not approximate e 1,98 (ρ) accurately even in the upper half-plane of the complex variable ρ. Using the Padé approximant [7,7] To compute the scattering function s 1 (ρ) on a line parallel to the real axis contained in the strip |Im(ρ)| < ε 0 = |Im(ρ 0 )| ≈ 0.608788673578742i, Formula (106) was used. The function e 1 (ρ) is represented by (36) for ρ on a line in the lower half ρ-plane parallel and sufficiently close to the real axis. Having calculated these series representations for the functions involved in s 1 (ρ), we compute with a maximum absolute error 1.45 × 10 −7 along the line L η=0.1 (see (77)).
In this example, we obtained satisfactory accuracy in the calculation of the scattering data set using the expression (36) alone and the derivatives required by (104).
The Jost solution is not available in a closed form. In order to check the validity of the numerical calculation of the coefficients a_n(x) for e_{4,N}(ρ, x), we consider the indicator ε_N (Table 7). Figure 11 depicts the Jost function computed with N = 137 and the approximation of the singular number ρ_1 ≈ 1.416695330664399 + 0.634534798062634i, with its square belonging to the box B. Additionally, Figure 12 shows the fulfillment of the asymptotic relation (108). The normalization constant c_1 is calculated using (104).
Finally, the scattering function is approximated by e_{4,137}(−ρ)/e_{4,137}(ρ). Now, take R = 30 in the potential q_4(x). In this case, two boxes localizing the only two eigenvalues λ_1 and λ_2 were obtained in [44]. Table 8 provides the values of the indicator ε_N for several values of N. Note that λ_1 ∈ B_1 and λ_2 ∈ B_2 for N = 200. Finally, the normalization constants are calculated using e_{4,200}(ρ) in (104). Although in this example more powers in the series representation of the Jost function were used, the method proved to be applicable for obtaining the scattering data set without any additional information. The good accuracy achieved is confirmed by the ability to use the scattering data obtained as input data to solve the inverse scattering problem and recover the potential q_4(x) with R = 30 below in Example 22.
Inverse Problem
In the present section, we discuss the accuracy, convergence and stability of the proposed method for solving the inverse scattering problem.
Remark 15.
By q k,M (x), we denote the approximation of the potential q k (x) (k = 1, 2, 3, 4, 5) obtained by solving the truncated system (100) with the sum up to M, i.e., with M + 1 equations.
Example 13.
We shall recover the potential q_3(x) = -2 sech^2(x - 1 - i). The system (100) of linear algebraic equations for this example is obtained in a closed form (see Example 7). For a different number of equations in the truncated system, we obtain a solution symbolically by using the Matlab routine solve. The potential q_3(x) is recovered from (38). Figure 13 presents the recovered potential in each case. The corresponding absolute and relative errors are presented in Figures 14 and 15, respectively. Note that high accuracy is attained even in the case of a very reduced number of equations in the truncated system. Moreover, a very fast convergence of the method can be appreciated.
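A minimal Matlab sketch of this symbolic step is shown below. Since the entries of the system (100) are not reproduced in this section, the matrix entries Amn and right-hand sides fm used here are hypothetical placeholders, and the algebraic form of the system is likewise only schematic; only the use of symbolic variables and of the routine solve reflects the procedure described above.

% Hedged sketch (Symbolic Math Toolbox): solve a truncated (M+1)x(M+1) linear
% system symbolically in the variable x. The entries below are illustrative only.
syms x positive
M = 3;                                   % truncation: M + 1 equations
a = sym('a', [M+1, 1]);                  % unknown coefficients a_0(x), ..., a_M(x)
Amn = @(m, n) exp(-(m+n+1)*x)/(m+n+1);   % placeholder matrix entries
fm  = @(m) exp(-(m+1)*x);                % placeholder right-hand sides
A = sym(zeros(M+1));  f = sym(zeros(M+1, 1));
for m = 0:M
    for n = 0:M, A(m+1, n+1) = Amn(m, n); end
    f(m+1) = fm(m);
end
S    = solve(a + A*a == f, a);           % symbolic solution via the routine solve
a0   = simplify(S.a1);                   % first coefficient a_0(x)
d2a0 = diff(a0, x, 2);                   % a_0'' enters the recovery of q via (38)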
Example 14.
Consider the scattering data J = {s_2(ρ)} from Example 2. As was shown above (Example 6), the system (100) for this example can be written explicitly. Again, when solving the corresponding truncated system for different values of M, we observe a fast convergence and remarkable accuracy even for small values of M (see Table 11 and Figure 16).
Figure 16. Exact and computed potential q_{2,6}(x).
Example 15.
Consider the closed form of the scattering function s 1 (ρ) from Example 1. We compute functions f m,c (x) and A mn,c (x) using the first option from Remark 11. Some poles and residues are given in Table 12 (computed with the aid of the package Numerical Calculus of Mathematica v.12). Note that the absolute value of the residues decreases considerably as the poles move away from the origin on the imaginary axis. This allows us to use a small number of poles for the calculation of the functions f m,c (x) and A mn,c (x).
The convergence of the method in this case turns out to be slower (see Figure 17), although a satisfactory accuracy is attained for M = 9.
Stability of the System
Since the stability of the method was proved in Theorem 4, we are able to work efficiently with noisy scattering data. First, we consider the natural noise arising from the numerical implementations of the last two procedures in Remark 11, i.e., calculation of the approximate matrix in (100) from the scattering function s(ρ) given in a closed form. Another situation considered in this subsection is the recovery of the potential from a uniformly noisy scattering function.
Remark 17.
In the last step of the algorithm from Section 4.4, for recovering q with the aid of (38), the coefficient a_0 needs to be differentiated twice. This was performed by interpolating a_0(x) with a quintic spline through the Matlab routine spapi and subsequent differentiation with the Matlab command fnder.
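A minimal sketch of this differentiation step (Curve Fitting Toolbox) is given below; the sample grid and the values standing in for a_0(x) are assumptions made only for illustration, and a complex-valued a_0 can be treated by applying the same steps to its real and imaginary parts.

% Sketch of the spline differentiation described above.
xs    = linspace(0, 15, 601);            % sample grid (assumed)
a0val = exp(-xs) .* cos(xs);             % placeholder data standing in for a_0(x)
sp    = spapi(6, xs, a0val);             % quintic spline interpolant (order 6 = degree 5)
d2sp  = fnder(sp, 2);                    % second derivative of the spline
a0dd  = fnval(d2sp, xs);                 % a_0''(x) on the grid, as needed for (38)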
Example 16.
Let us consider the scattering data from Example 2. The recovery of the potential q_2(x) from the exact scattering function s_2(ρ), obtained by using approximate functions f̃_m(x) and Ã_{mn}(x) in the truncated system (100), is presented. The computation of the functions f_m(x) and A_{mn}(x) requires numerical integration along the line L_{η=0.5} (see (77)). For this purpose, the last two procedures in Remark 11 were applied.

Method 1. The second option in Remark 11 is implemented. With the scattering function (91) at the points ρ = σ + 0.5i and σ = -(k + 1/2)h for k = 0, 1, . . . , N(x), where N(x) = 55000/x and h = 0.145454545, the calculation of the Fourier transforms in (98) is carried out. In Table 13, the maximum absolute error of f̃_m(x) is presented for 4 values of the parameter m. Now, we compute Ã_{mn,c}(x) using the same numerical integration method with parameters N(x) = 5500/x and h = 0.127272727. Table 14 shows the maximum absolute error of Ã_{mn}(x) for the parameters m, n = 0, 1, 2, 3. The system (100) constructed with f̃_m(x) and Ã_{mn}(x) is solved numerically in Matlab for several values of M. Maximum absolute and relative errors of the approximation of the potential q_{2,M} are shown in Table 15. Figure 18 presents the absolute value of the q_2 potential recovered from 4 equations in (100).

Method 2. Now, we compute the approximate functions f̃_m(x) (see Table 16) and Ã_{mn}(x) (see Table 17) following the third procedure in Remark 11. In Table 18, the absolute error of the recovered potential for some values of M in (100) is presented.

Both methods (procedures 2 and 3 from Remark 11) illustrated in the above example have proven to be suitable for calculating the functions f_m and A_{mn} from a table of values of s_2(ρ). Nevertheless, it is worth mentioning that although the first method (procedure 2) produced slightly more accurate results, this approach might be sensitive to the choice of the parameters N(x) and h, whereas the second method (procedure 3) only requires the application of trapz, the Matlab integration routine, on a dense set of points defined in the interval (0, 2π). Hence, for the purposes of this paper, it is sufficient to consider procedure 3 from Remark 11 in the following examples, so as to obtain satisfactory approximations of f_m and A_{mn}.
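To make the last remark concrete, a generic Matlab sketch of procedure 3 is shown below. The change of variables and the integrand defining f_m(x) and A_{mn}(x) come from formulas not reproduced in this section, so the integrand g below is a placeholder; only the use of a dense grid on (0, 2π) and of trapz reflects the description above.

% Generic sketch: approximate an integral over (0, 2*pi) with trapz.
theta = linspace(1e-6, 2*pi - 1e-6, 20001);               % dense grid on (0, 2*pi)
x     = 1.3;                                              % evaluation point (example)
g     = exp(1i*cos(theta)) .* exp(-x*sin(theta/2).^2);    % placeholder integrand
fm_at_x = trapz(theta, g);                                % trapezoidal approximation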
As expected from the results of Example 14 for this potential, the numerical method for recovering the potential q_2(x) converges very fast. Indeed, an acceptable approximation of q_2(x) is achieved with only four equations in this case, where an inexact matrix in the linear system (100) is considered. In fact, the approximate and the exact potential presented in Figure 18 are indistinguishable.
In the following examples, a noisy scattering function with a uniformly distributed noise ε(ρ), generated with the rand routine of Matlab, is considered.
Example 17.
Consider the scattering function s_2(ρ) and denote the noisy scattering function by ŝ_2(ρ) := s_2(ρ) + ε(ρ). Here, ε(ρ) is ±5% uniformly distributed complex-valued noise (the percentage of the noise is applied pointwise to the modulus and argument of the value of s_2(ρ)). The maximum absolute error of ŝ_2 on the line L_{η=0.5} is 2.46 × 10^{-1}. The potential was recovered using five equations with a maximum absolute error of 5.2 × 10^{-1}. The real and imaginary parts of the potential and the absolute error of its recovery are shown in Figure 19. Despite the noise that ŝ_2(ρ) produces in the matrix of the system (100), the method recovers the shape of the potential q_2 with reasonable accuracy.
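A short Matlab sketch of this noise model is given below; the vector s2 of samples of s_2(ρ) along the line is a placeholder, since the closed form (91) is not reproduced in this section.

% +/-5% uniformly distributed noise applied pointwise to modulus and argument.
sigma = linspace(-10, 10, 500);
s2    = exp(1i*sigma) ./ (1 + sigma.^2);                  % placeholder samples of s_2 on the line
p     = 0.05;                                             % 5% noise level
r     = abs(s2)   .* (1 + p*(2*rand(size(s2)) - 1));      % perturbed modulus
phi   = angle(s2) .* (1 + p*(2*rand(size(s2)) - 1));      % perturbed argument
s2hat = r .* exp(1i*phi);                                 % noisy scattering function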
Example 18.
Consider now a noisy version ŝ_3(ρ) of the scattering function s_3(ρ). The maximum absolute error of ŝ_3 on the line L_{η=0.5} is 1.75. The potential was recovered using eight equations with a maximum absolute error of 8.6 × 10^{-1}. The real and imaginary parts of the potential as well as the absolute error of its recovery are shown in Figure 20. Although, in this case, the absolute error of ŝ_3(ρ) is larger, the shape of the recovered potential is still quite close to that of the exact one.
In-Out
In this subsection, we consider the results obtained in Section 5.1 as input data for the inverse problem.
Example 19.
We use the approximate scattering function s_3(ρ) from Example 10 calculated by (107). In particular, the form in which it is given allows us to approximate the functions f̃_{m,c}(x) and Ã_{mn,c}(x) with the aid of the numerical calculus of residues, i.e., the first procedure in Remark 11 (see Tables 19 and 20). The potential q_3(x) was recovered with an absolute error of 1.8 × 10^{-5} in the interval (0, 15) using 8 equations.
Example 20.
Consider the approximate scattering function s_1(ρ) from Example 11. The coefficient a_0(x) was recovered using 14 equations with a maximum absolute error of 4.29 × 10^{-3}, from which the potential was recovered with a maximum absolute error of 0.23 (Figure 21).

Example 21.
Consider the approximate scattering data obtained in Example 9. The approximate functions f̃_m(x) (see Table 21) and Ã_{mn}(x) (see Table 22) were obtained accurately enough to recover the potential (see Table 23).
Example 22.
Consider the potential q_4(x) = 30i sin(x) exp(-x) introduced in Example 12. Using the results of the solution of the direct scattering problem from Example 12, we recover q_4(x) using 20 equations with a maximum absolute error of 8.67 × 10^{-1} (Figure 23). It is worth mentioning that the coefficient a_0(x) is recovered with an absolute error of 2.28 × 10^{-2} (Figure 24). The error is calculated by comparison with the solution of the Cauchy problem a_0''(x) - a_0(x) = q_4(x)(a_0(x) + 1), posed for a sufficiently large value of b > 0 and obtained using the ode45 routine of Matlab R2021a. This is a case where closed formulas for the scattering data set are unavailable. Therefore, the In-Out procedure confirms a satisfactory accuracy in the solution of both the direct and inverse scattering problems.
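A rough Matlab verification sketch is given below, assuming that the Cauchy problem for a_0 takes the form a_0'' - a_0 = q_4(x)(a_0 + 1) with a_0(b) = a_0'(b) = 0 at a sufficiently large b; this reading of the problem, as well as the value of b, are assumptions. Real and imaginary parts are split so that ode45 works on a real-valued system.

% Verification sketch for a_0 via backward integration from a large endpoint b.
q4  = @(x) 30*1i*sin(x).*exp(-x);                 % potential of Examples 12 and 22
rhs = @(x, y) [ y(2);
                y(1) + real(q4(x))*(y(1)+1) - imag(q4(x))*y(3);
                y(4);
                y(3) + imag(q4(x))*(y(1)+1) + real(q4(x))*y(3) ];
% state y = [Re a0; (Re a0)'; Im a0; (Im a0)']
b = 40;                                           % "sufficiently large" endpoint (assumed)
opts = odeset('RelTol', 1e-10, 'AbsTol', 1e-12);
[xg, Y] = ode45(rhs, [b 0], zeros(4,1), opts);    % integrate backwards to x = 0
a0ref = Y(:,1) + 1i*Y(:,3);                       % reference a_0(x) for comparison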
Example 23.
Consider the singular potential q_5(x) = exp(-2.5x)/(x - π/2)^{1/3}. In Table 24, we present the parameter ε_N for different values of N in (103). Using data from Table 24, we computed the scattering data with N = 45. No eigenvalue was detected, so the scattering data set consists of the scattering function approximated by the expression s_5(ρ) ≈ e_{5,45}(-ρ)/e_{5,45}(ρ), ρ ∈ R.
Using this scattering data set to solve the inverse problem, we obtained the coefficient a_0(x) shown in Figure 25. The maximum absolute error was 1.9 × 10^{-4}. The potential is recovered as shown in Figure 26, and the corresponding absolute error is presented in Figure 27; the maximum absolute error is 9.82 × 10^{-2}. This example shows the applicability of the proposed algorithms to both the direct and the inverse scattering problem in the case of non-smooth potentials.
Conclusions
An approach to solving the direct and inverse scattering problems on the half-line for the one-dimensional Schrödinger equation with an exponentially decreasing complex-valued potential is developed. It is based on a series representation of the Jost solution from [25], which is shown in the present work to remain valid in the non-selfadjoint case.
When solving the direct problem, this representation is used to calculate the scattering data set through a simple and efficient procedure, which includes a proposed algorithm for computing the normalization polynomials (which are part of the scattering data set) by solving a finite system of linear algebraic equations for their coefficients.
When solving the inverse problem, the use of the series representation combined with the Gel'fand-Levitan equation reduces the problem to a system of linear algebraic equations for the series coefficients, and the knowledge of the first coefficient is sufficient to recover the potential.
The numerical results illustrate the remarkable accuracy of the proposed algorithms in solving both the direct and inverse scattering problems. | 2023-08-19T15:31:23.169Z | 2023-08-16T00:00:00.000 | {
"year": 2023,
"sha1": "c44ab7a0a227f0314ef79ca8f93fb0453373f07b",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-7390/11/16/3544/pdf?version=1692191922",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "745d0dd8c33e570f56471efb0264c5f0e6d4aae6",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": []
} |
266344620 | pes2o/s2orc | v3-fos-license | p16 and p53 can Serve as Prognostic Markers for Head and Neck Squamous Cell Carcinoma
Objective The present study aimed to explore the expression and clinical significance of human papilloma virus-related pathogenic factors (p16, cyclin D1, p53) in patients with head and neck squamous cell carcinoma (HNSCC) and construct a predictive model. Methods The Cancer Genome Atlas was used to obtain clinical data for 112 patients with HNSCC. Expression of p16, p53, and cyclin D1 was quantified. We used the survival package of the R program to set the cut-off value. Values above the cut-off were considered positive, while values below the cut-off were negative. Kaplan–Meier analysis and univariate and multivariate Cox regression analyses were performed to investigate prognostic clinicopathological indicators and the expression of p16, p53, and cyclin D1. A predictive model was constructed based on the results of multifactor Cox regression analysis, and the accuracy of the predictive model was verified through final calibration analysis. Follow-up of patients with HNSCC at the Affiliated Hospital of Binzhou Medical University was conducted from 2015 to 2017, and reliability of the predictive model was validated based on follow-up data and molecular expression levels. Results According to the results, expression of p16 and p53 was significantly associated with prognosis (P < .05). The predictive model constructed based on the expression levels of p16 and p53 was useful for evaluating the prognosis of patients with HNSCC. The predictive model was validated using follow-up data obtained from the hospital, and the trend of the follow-up results was consistent with the predictive model. Conclusion p16 and p53 can be used as key indicators to predict the prognosis of HNSCC patients and as critical immunohistochemical indicators in clinical practice. The survival model constructed based on p16 and p53 expression levels reliably predicts patient prognosis.
Introduction
Head and neck squamous cell carcinoma (HNSCC) is a cancer that originates in the squamous cells of the head and neck. 3,4 Smoking and drinking are primary risk factors associated with HNSCC. 1 The prevalence of head and neck squamous cell carcinoma should have decreased with societal progress and the reduction of these detrimental behaviours. However, this anticipated decrease has not materialised. 5 According to previous research, human papillomavirus (HPV) is an independent factor influencing the occurrence and development of HNSCC, especially oropharyngeal squamous cell carcinoma (OPSCC). 6,7 The rise in the prevalence of HPV-positive HNSCC can be attributed to shifts in sexual behaviour patterns. This phenomenon provides an explanatory factor for the observed increase in HNSCC incidence, rather than the decrease that was expected. 5 HPV-positive HNSCC often has a better prognosis than HPV-negative HNSCC, which has attracted the attention of many researchers. 8 The primary carcinogenic mechanism of HPV-related HNSCC is the deletion of E1/E2 from the viral genome after the integration of viral and cellular DNA. 9 The protein E2 can inhibit the expression of proteins E6 and E7, while E6 binds to p53 and degrades it, which does not affect the structure of p53. 10,11 Currently available treatments activate p53 and inhibit the occurrence of tumours, while the E7 protein binds the Rb protein and inactivates it, leading to upregulation of p16 expression and downregulation of cyclin D1 expression, and thus promoting tumour formation. 9,10 Both p16 and p53 are tumour suppressor molecules, and cyclin D1 is a positive regulator of the cell cycle that also regulates the occurrence and development of tumours. 10,12,14,15 Therefore, the combination of p16 immunohistochemistry and HPV detection is often used to assess HPV infection status. 16 Immunohistochemical results for p16 are the key indicators used in the primary screening of HPV-positive OPSCC. 17 p53 is encoded by the TP53 gene and regulates radiation-induced DNA repair. 18 Alcohol and alcohol abuse often lead to TP53 sequence variants. 19 In HNSCC, high expression of p53 often predicts good prognosis and is associated with the occurrence and development of HPV-positive HNSCC. 20 Previous studies have suggested the combined use of p16 and p53 as routine immunohistochemical indicators for HPV detection. 21,22 Cyclin D1 plays an essential pathogenic role in HPV-positive HNSCC. 23 High expression of cyclin D1 can shorten the G1 phase of the cell cycle, reduce dependence on growth factors, cause the mitotic phase to occur before DNA repair, and lead to expansion in the absence of growth factors. 24 This abnormal proliferation often leads to the development of cancer. 24 Many studies have shown that HPV-positive HNSCC patients often have good prognosis. 4,8,25 However, according to previous studies, p16, p53, and cyclin D1 do not fundamentally define HPV infection status, and insufficient evidence that they are important indicators of good prognosis is available. 12,26,27 Investigation of the manifestation of HPV-positive HNSCC and the necessity of its detection have emerged recently as prominent areas of research. In this study, p16, p53, and cyclin D1 are employed as distinct pathological markers and their clinical relevance is assessed.
The Cancer Genome Atlas (TCGA), established by the United States National Cancer Institute and National Human Genome Research Institute, provides cancer-related clinical data, in addition to genomic, epigenomic, transcriptomic, and proteomic data. 28,29 The TCGA database employs high-throughput gene sequencing technology to acquire comprehensive omics data for 33 distinct cancer types. This database is currently widely used and is in the public domain. 30 In this study, the TCGA database was used to screen HNSCC clinical data and p16, p53, and cyclin D1 transcriptomic data for evaluation of whether p16, p53, and cyclin D1 could be used as prognostic genes in HNSCC. This study provides a foundation for the regular identification of these markers in clinical practice.
TCGA data acquisition and cleaning
HNSCC phenotype data (604 samples; data were updated on 6 December 2019) and survival data (604 samples; data were updated on 13 September 2018) were downloaded from the University of California, Santa Cruz Xena platform (https://xenabrowser.net), which is the official website for TCGA database downloads.
The following data were extracted: sample ID, age, gender, lymphatic invasion, pathologic T staging (T1-T2, T3-T4), pathologic lymph node metastasis, clinical stage, drinking history, radiotherapy history, primary tumour site (oral cavity, laryngopharyngeal, or oropharyngeal region), and disease-free survival. HPV infection status and p16, p53, and cyclin D1 mRNA expression levels were downloaded from the cBioPortal (http://cbioportal.org), and the merge function was used to merge data for 112 cases. The primary approach employed in data cleaning involved the removal of empty values to acquire data suitable for later study, thereby ensuring the integrity and scientific rigour of the analysis.
Collection of patient data
Informed consent was obtained from the patients and their families, and the Ethics Committee of the Affiliated Hospital of Binzhou Medical University approved all experimental procedures and protocols. The samples used in this study were collected from 22 patients with HNSCC who underwent surgical treatment in the Affiliated Hospital of Binzhou Medical College from 2015 to 2017. The patients have been monitored continuously, and the statistical outcomes assessed here are contingent upon the survival of patients over a period of 5 years. Among the 22 patients, 7 were lost to follow-up, while 15 were successfully contacted for follow-up. The inclusion criteria were as follows: (1) HNSCC was diagnosed by a pathologist at the Department of Pathology, Affiliated Hospital of Binzhou Medical University. This diagnosis was based on pathological changes: squamous differentiated cancer cells exhibit both intracellular and extracellular keratinisation; these cells are organised into compact nests, strands, or clusters, and lamellar keratin may be present within the central region of the cancer cell clusters. (2) Patients underwent surgery; (3) patients did not receive radiotherapy and chemotherapy before surgery; and (4) patients provided informed consent. The exclusion criteria were as follows: (1) patients without HNSCC; (2) patients who refused surgery; (3) patients who had received radiotherapy and chemotherapy before surgery. This study protocol was approved by the Ethics Committee of the Affiliated Hospital of Binzhou Medical College (Approval Number 2018-G010-01).
Immunohistochemistry
Paraffin sections 3 to 5 μm thick were obtained from the Department of Pathology of the Affiliated Hospital of Binzhou Medical University and stained using a typical SP staining kit (Zhongshan Golden Bridge Biotechnology Co., Ltd). The tissue sections were incubated at 65 °C, dewaxed 3 times in xylene for 10 minutes, and then dehydrated 6 times in a gradient of alcohol concentrations for 3 minutes. After antigen retrieval in ethylenediaminetetraacetic acid, the tissue sections were mixed with catalase and goat serum and then incubated at room temperature for 10 minutes. In addition, tissue sections were incubated with a primary antibody at 37 °C in an incubator for 1 hour, followed by incubation with biotin-labelled goat antimouse/antirabbit IgG for 15 minutes at room temperature, and further incubation with horseradish peroxidase-labelled streptavidin working solution for 10 minutes. Colour development was achieved using 3,3′-diaminobenzidine (DAB), and the process was observed under a microscope. The colour development time was approximately 15 to 30 seconds, and haematoxylin counterstaining was performed for 1 minute. Differentiation was achieved using hydrochloric acid and alcohol. Tissue samples were rinsed with water until stained blue and mounted with neutral gum. We defined diffusely positive p16 and p53 signals as positive, and the remaining samples were defined as negative, representing the final immune group. Two experienced pathologists interpreted the immunohistochemistry results. The antibodies used were human P16 Ab-BF0580 (Affinity Biosciences), p53 Ab-Af0879 (Affinity Biosciences), and Cyclin D1 Rabbit Polyclonal antibody (26939-1-AP) (Proteintech).
Prognostic analysis
Kaplan-Meier (KM) analysis in the R language (survival package) was used to screen cut-off values for p16, p53, and cyclin D1 to define the expression status of the related molecules. The data obtained for p16, p53, and cyclin D1 are quantitative. However, for analysis, we often used categories of high expression and low expression to define positive status. We used the KM analysis package to determine the cut-off value in an outcome-oriented manner for defining positive status.
The Cox proportional-hazards model is a statistical model that combines parametric and nonparametric elements to examine the associations among several risk variables and the timing and occurrence of event outcomes. This model addresses the limitations of single-factor constraints in basic survival analysis. Following the screening of prognosis-related indicators through single-factor Cox regression analysis, we examined the impact of each component on prognosis using multifactor Cox regression analysis. 31 Univariate Cox analysis and log-rank analysis were used for univariate prognostic analysis. The former 2 statistical analyses were conducted using the statistical software SPSS (version 20.0), and the latter was conducted using the survival package and then graphed. After careful consideration, statistically significant indicators were selected for multivariate Cox analysis (SPSS version 20.0).
During construction of the multifactor regression model, scores were assigned to each value bin of the influencing factors. These scores were determined based on the contribution of each influencing factor, as indicated by the magnitude of the corresponding regression coefficient, to the outcome variable in the model. The individual scores were then aggregated to obtain a total score. Subsequently, the total score was converted into a probability of occurrence for the outcome event using a functional relationship. 32 Finally, the predicted value of the individual outcome event was calculated based on this probability. Based on the hazard ratio (HR) and 95% confidence interval (CI) obtained from multivariate Cox analysis, a nomogram was drawn in the R language (R forest plot package) to construct the prognostic model. Model accuracy was assessed through calibration degree analysis (R survival package).
Patient characteristics
After cleaning the HNSCC data extracted from the TCGA database, 112 patients (83 males and 29 females) were included in the analysis, including 53 patients aged ≤60 years and 59 patients aged >60 years. In total, 27 deaths (24.1% of patients) occurred due to HNSCC during the follow-up period. The predominant sites of the primary tumour were the oral, laryngeal and hypopharyngeal regions (90.2%), accounting for a total of 102 patients, while only 10 patients had tumours located in the oropharyngeal region (9.8%). Thirty-three patients exhibited lymphatic invasion, while 79 patients had no lymphatic invasion. Ten cases of HPV infection and 102 cases without HPV infection were included. Based on pathologic TNM (tumour, lymph node, metastasis) staging, 39 patients were in the T1 to T2 stage, while 73 patients were in the T3 to T4 stage. Lymph node metastases were observed in 56 patients, while 56 had no lymph node metastases. A total of 52 patients were in clinical stages I to II, whereas 60 patients were in clinical stages III to IV. In addition, 67 patients received postoperative radiotherapy, while 45 did not receive radiotherapy. Expression levels of p16, p53, and cyclin D1 were assessed using quantitative data. Cut-off values of 433.15, 295.79, and 8792.4 were obtained from KM analysis. Values above the molecular cut-off value were defined as the positive expression state, and those below the cut-off were defined as low expression (Figure 1). Analysis revealed that 50 (44.6%) patients were p16-positive, 99 (88.4%) were p53-positive, and 25 (22.3%) were cyclin D1-positive.
Clinicopathological characteristics and prognosis
Single-factor and multi-factor analyses were conducted to assess the factors associated with prognosis. The HR, 95% CI, and P-values obtained from univariate Cox regression analysis are presented in Table 1. Candidate variables with P < .2 in univariate analysis were included in the multivariable model. Lymph node metastasis, T stage, clinical stage, p53, p16, and cyclin D1 had significant prognostic value (P < .2), with HR (95% CI) values of 0.362 (0.158-0.828), 2.752 (0.949-7.984), 2.454 (1.038-5.805), 4.728 (2.054-10.882), 2.671 (1.128-6.324), and 0.334 (0.156-0.715), respectively. The multivariate Cox regression analysis included lymph node metastasis, T stage, clinical stage, p53, p16, and cyclin D1. The results demonstrated that p16 and p53 were statistically significant (Figure 2A). p16 and p53 were closely associated with prognosis, and the relationship was statistically significant. A nomogram and predictive model for HNSCC were developed based on the HR and 95% CI values obtained through multivariate Cox regression analysis (Figure 2B). The results showed that the scores for p16(+), p16(−), p53(+), and p53(−) were 0, 62, 0, and 100, respectively. Analysis of 50 randomly selected samples revealed that the calibration analysis aligned with the curve fitting-based correction, indicating that the results were consistent (Figure 2C).
External validation
We collected data from 22 patients with HNSCC in the Affiliated Hospital of Binzhou Medical College from 2015 to 2017, including laryngeal and oral cancer patients (Table 2). Oral cancer occurs in the floor of the mouth, palate, lips, gums, and jaws. Of the 22 patients, 7 were lost to follow-up. Immunohistochemical analysis was performed to ascertain the p16 (Figure 3A) and p53 (Figure 3B) status of the remaining 15 patients, of which 7 (46.7%) and 10 (66.7%) were positive for p16 and p53, respectively. According to the results of the predictive model, we scored the immunohistochemical indicators and found that as the score increased, the survival rate showed a downward trend (Figure 3C). We conducted a survival analysis based on the scoring results and found that the trend was not statistically significant, which may be due to insufficient sample size. However, we found that the observations and model results have similar trends; therefore, we preliminarily considered the predictive model to be reliable and proposed that p16 and p53 play vital roles in the prognosis of HNSCC and can be used as factors to guide treatment.
Discussion
HPV is considered an independent factor affecting the occurrence and development of HNSCC, especially OPSCC. 33 HPV(+) often indicates a good prognosis. 34 In 2009, the National Comprehensive Cancer Network identified HPV as an independent pathogenic factor for OPSCC. 35 The pathogenesis of HPV-positive HNSCC involves the participation of p16, p53, and cyclin D1. 36 HPV is involved in carcinogenesis through the following pathways: E7 protein expression is involved in the pRB-p16-cyclin D1 pathway, causing abnormal cell growth; and E6 protein binds p53, reduces p53 expression, and causes cancer. 37 However, these processes do not affect the TP53 structure, which can reactivate p53 expression after radiotherapy or other treatment. 11,38 In HPV(−) HNSCC patients, TP53 is closely related to smoking and drinking, which often lead to mutant TP53. 39 At present, many HPV detection methods exist, including in-situ hybridisation and polymerase chain reaction technologies. 40 In clinical practice, p16, p53, and cyclin D1 are often used in combination as HPV surrogate immunohistochemical indicators. 36 However, these markers are not perfect surrogates and cannot be used to accurately evaluate prognosis; therefore, more accurate detection of HPV is needed. Whether detection of p16, p53, and cyclin D1 is necessary, as well as whether p16, p53 and cyclin D1 can be used as potential therapeutic targets, have become current research hotspots. Many studies have shown that expression of p16 and p53 affects the prognosis of oral squamous cell carcinoma, OPSCC, and laryngeal carcinoma. Plath assessed 313 OPSCC patients via immunohistochemical analysis of p16, p53, and cyclin D1 and found that high expression of p16, low expression of cyclin D1, and low expression of p53 were associated with better prognosis. 41 However, when cases occurring in the same local hospitals or units are investigated, the conclusions are bound to have regional impacts. The TCGA database is a public database that includes clinical data and genomic information from around the world, and its data is sufficiently comprehensive and objective. 28 For these reasons, we performed statistical analysis using clinical and transcriptomic data in the TCGA database to improve the objectivity and reliability of the results.
In this study, we used various statistical methods to verify that HPV-related factors affect the prognosis of HNSCC. However, in the data obtained in this study, no deaths of HPV(+) patients were recorded. In that analysis, we unexpectedly found that p16 and p53 play important roles in HNSCC, with greater influence than T stage and lymph node metastasis. At present, the world's advanced hospitals use p16 and p53 as routine immunohistochemical indicators of HNSCC. However, quite a few prefecture-level city hospitals have not adopted such screening, and the view that the pathological diagnosis of HNSCC is clear and that no immunohistochemical indicators are needed has been expressed in such settings. We hope to further confirm the critical roles of p16 and p53 in HNSCC through this study and promote their prognostic application. We collected data from laryngeal cancer and oral cancer patients in our hospital from 2015 to 2017 to analyse the correlations of p16 and p53 expression with prognosis. Although the results were not statistically significant, the trend was consistent with the predictive model.
In summary, we propose that p16 and p53 can be used as important prognostic indicators in HNSCC to guide treatment methods, and are essential molecular indicators that should be promoted and applied in clinical practice. A few researchers have noted that the expression of p53 and p16 at the surgical margin is closely associated with tumour recurrence. 42,43 Based on the theory of field carcinogenesis, that observation may be a manifestation of proto-oncogene activity at the edge of the tumour. 44 The expression levels of p16, p53, and cyclin D1 in the models constructed in this study were determined using cut-off values derived from sequencing data. In clinical practice, according to the percentage of positive cells, the results of immunohistochemistry may be strongly positive, weakly positive, or negative. Therefore, some errors may affect this prognostic model. Likewise, acknowledging the limitations associated with the data gathered from the TCGA database is important. The occurrence of HNSCC and determinants of patient incidence vary among locations, resulting in disparities among demographic groups. Further research is needed to refine our prognostic model. At present, we have collected case information for 871 cases at West China Hospital from 2019 to 2021, of which 471 cases have had successful follow-up, 515 cases were tested for p16, and 502 cases were tested for p53. However, model predictions for these cases cannot be made due to the model's age limit. After follow-up analysis is over, we will make model predictions based on the results and compare the 2 models through receiver operating characteristic analysis to further demonstrate the accuracy of the model introduced in this study. At the same time, based on the expression of p53 and p16 at the surgical margin, stratified analysis can be performed to construct a more accurate prognostic model.
Conclusion
p16 and p53 have been identified as significant prognostic markers for HNSCC. A favourable prognosis is frequently associated with elevated levels of p16 and p53 expression. A prognostic model developed using p16 and p53 exhibits a reasonable level of dependability. These data could serve as a valuable resource for informing and guiding future research efforts focused on de-intensification of HNSCC treatment.
The English in this document has been checked by at least 2 professional editors, both native speakers of English. For a certificate, please see: http://www.textcheck.com/certificate/KWAzUf
Fig. 2 - (A) Multivariate Cox regression analyses of selected clinicopathological characteristics and the corresponding forest plot. (B) Nomogram and prediction model for head and neck squamous cell carcinoma (HNSCC) based on the results of the multivariate Cox regression analyses. (C) Calibration analysis of the prediction model.
Fig. 3 - The immunohistochemical analysis of p16 (A) and p53 (B) expression and the survival analysis of the score (C) based on the results of the prediction model.
Table 1 -
Univariate analyses on factors associated with HNSCC prognosis. | 2023-12-18T16:03:53.407Z | 2023-12-01T00:00:00.000 | {
"year": 2023,
"sha1": "30d642d5c5d95b7e1d5cb519d0ddf1e8dad63970",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "9ea3ef59e6386d0166221bb6cc5177d4edfa6af1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
155600200 | pes2o/s2orc | v3-fos-license | Growth: A Discussion of the Margins of Economic and Ecological Thought
In the late 1960s a debate about the long-term feasibility and desirability of economic growth as a one-size-fits-all economic policy emerged. It was argued that economic growth was one of the underlying causes of the ecological and social problems faced by humanity. The issue remained strongly disputed until the inception of the Sustainable Development discourse, by which the debate was politically settled. Nevertheless, given that many ecological and social problems remain unsolved and some have become even more severe, there are renewed calls for the abandoning of the economic growth commitment, particularly in already affluent countries. This chapter summarises the growth debate hitherto and examines two alternatives, the steady-state economy proposed by Herman Daly and economic de-growth proposed by Serge Latouche. In spite of recent disputes between the Anglo-Saxon steady-state school and the emerging continental de-growth school, it is argued, consistent with recent contributions on the issue, that steady-state and de-growth are not mutually exclusive but rather complements. The steady-state has the advantage of comprehensive theoretical elaboration, while de-growth has the advantage of an attractive political slogan which has re-opened the debate on the issue. Latouche is also a social thinker who gives a voice to critiques, articulated outside Europe and the United States, of economic growth as contained in the notion of development. The steady-state economy and de-growth are held by some analysts to be beyond what is politically feasible. Although this argument is valid, it fails to recognise that past desirable societal changes were made possible through reflexive societal processes conducive to collective action and institutional change. It is concluded that the debate must ultimately rest on the physical quantities that a given economy needs for the 'good life' in the long run, how to decide on these quantities, how to achieve them, and how to maintain an approximate global steady-state. Finally, some recommendations for further research along with some reflections on the potential role of scholars are provided.
Introduction
The aims of this chapter are (1) to provide a summary and analysis of the growth debate hitherto and (2) to scrutinise and compare alternative policy proposals. The structure of the chapter is the following: a summary and analysis of the growth debate from the late 1960s until the present is provided in the next section. In Sect. 3.3, I describe the theoretical underpinnings, the basic model and some policy recommendations for institutional change in order to achieve and eventually to manage a steady-state economy. The steady-state economy was conceived by one of the founding fathers of Ecological Economics, Herman Daly. In Daly's conception, the optimal scale of the economy replaces economic growth as the overall goal of macro-economic policy. In Sect. 3.4, I explore the ideas of the principal intellectual figure behind the emerging de-growth movement, Serge Latouche. He argues for a cultural change that would, physically speaking, de-grow the economies of rich countries in order to 'make room' for development in poor countries, while at the same time severely criticising the very notion of 'development'. In Sect. 3.5 a comparative analysis of Daly's and Latouche's ideas is provided. Conclusions and prospects for the social sciences are dealt with in Sect. 3.6.
The Growth Debate: Its Sources and Contours
The discussions in this section will be set against two backgrounds: (1) the prevailing economic doctrine along with some relevant events, and (2) the global ecological footprint metric (see Fig. 3.1). Two prevailing economic doctrines can be distinguished in this period. First, Keynesianism, which was adopted and largely implemented after the great depression of 1930 as well as during the post-war period in the West. It lasted until the early 1970s. The application of the ideas of J. M. Keynes incidentally constituted the beginning of an active pro-growth policy after the great depression and the split of economics between macro- and microeconomics. 1 The 1970s saw the end of the convertibility of the dollar to gold (1971), high oil prices (1973-1986), a stock market crash, and an economic crisis (1973-1975) in two core countries, the United States (US) and Britain. Following this, a political window of opportunity was seized by a revitalised laissez-faire or neo-liberal intellectual movement prominently represented by F. A. von Hayek and M. Friedman. Neo-liberal doctrines were partially implemented in the West, but even more in its zones of influence and later worldwide after the collapse of the Soviet Union (1991). This phenomenon was later labelled as 'globalisation'. 2 After the preceding economic crisis (2008-2009), there was a temporary renaissance of Keynesianism, including its 'greened' version that had been proposed to come to grips with the ecological predicament. In recent times, however, Keynesianism seems to have been reduced to a minor option given the current multi-crisis of high oil and commodity prices, US fiscal problems, and the Eurozone debt crisis, in which austerity forces seem to have gained the upper hand. This information is placed on the x-axis of Fig. 3.1.
The global ecological footprint is an aggregate index which measures the ability of the biosphere to produce crops, livestock (pasture), timber products (forest) and fish, to host built-up land, as well as to uptake carbon dioxide in forests. 3 Carbon dioxide emissions are the largest portion of humanity's current footprint. The ecological footprint is less controversial than other ecological metrics. 4 Figure 3.1 depicts the ever rising global ecological footprint. Humanity started to overshoot the world carrying capacity, or 'biocapacity', roughly in 1975. By 2007, humanity's ecological footprint was equivalent to roughly one and a half planets. 5 In that year, the last one in which the metric was estimated, half of the ecological footprint was attributable to just 10 countries, whereby the US and China alone were using almost half of the earth's biocapacity, with 21% and 24% of the ecological footprint respectively (Ewing et al. 2010: 18). 6 Figure 3.1 also shows eight publications which are milestones in the economic growth debate. It is around the message of these publications that the discussion will be centred, whereby Steady-State Economics (1977) and Farewell to Growth (2009) will be dealt with in greater detail in two separate sections.

Fig. 3.1 Global ecological footprint, eight relevant publications and two macroeconomic doctrines (Ewing et al. 2010. Modified by the author)

3 The ecological footprint is a metric developed in the early 1990s by William Rees and Mathis Wackernagel. For an extensive explanation of the metric see Wackernagel and Rees (1996).
4 It is less controversial in the official sense since it has been endorsed by the United Nations Development Programme (UNDP 2010) and the United Nations Environment Programme (UNEP 2011).
5 It is of course impossible to use planets that do not exist. Excessive carbon dioxide emissions are in reality accumulating in the atmosphere.
Scarcity, Pollution and Overpopulation
The origins of the economic growth debate lie in the late 1960s and the early 1970s, when a bundle of ecological concerns articulated primarily by natural scientists converged in rich countries. 7 Setting the political backdrop of the 'environmental revolution' and fuelling a general public preoccupation with pollution was the book Silent Spring (1962) authored by Rachel Carson. Concerns about scarcity emerged with the dramatic increase in world population. This concern was epitomised by Paul Ehrlich's book The Population Bomb (1968). While the environmental discussion was primarily framed by natural scientists and the emerging political activism of the late 1960s, the most important social discipline was also taking position in that debate: economics. 8 In the US the think-tank Resources for the Future was established in 1952, which, in line with governmental concerns on potential shortages of raw materials, published Scarcity and Growth, authored by Barnett and Morse (1963). This study turned into economic orthodoxy (Daly 1991: 40, Dryzek 1997). The emphasis of the study was to show that resource scarcities do not impair economic growth. The authors revised the classical economic doctrines of resource scarcity and compared them with what they called the contemporary 'progressive world' (Barnett and Morse 1963: 234). They concluded that technological innovation, resource substitution, recovery and discovery of new resources made Malthus' and Ricardo's doctrines basically obsolete. These mechanisms would function not only better within the free-market system, but also more rapidly, so as to broaden the availability of resources, even making the definition of 'resource' uncertain over time. Therefore: 'A limit may exist, but it can be neither defined nor specified in economic terms [. . .]. Nature imposes particular scarcities, not an inescapable general scarcity' (Barnett and Morse 1963: 11). With respect to pollution, economists were borrowing from the thought of their welfare-economist precursors.

6 It must be also mentioned that these countries have within their borders a great portion of the global biocapacity, namely 10% in the US and 11% in China.
7 For reasons of convenience, I will split the world into 'rich' and 'poor' countries in the conventional terms of per-capita income. In other cases, I will also use the notions 'core' and 'peripheral' in the sense of the material and discursive bargaining power of the latter with respect to the former. Rich or core countries are those located in North America and Western Europe, plus Australia and Japan. Poor or peripheral countries are the rest. When necessary, I will mention countries, regions or more recent categories grouping countries such as 'emerging countries'.
8 An economic system means, stated in its simplest terms, how a given human group attempts to stay alive, that is, how it acquires food (energy), builds housing, organises labour, and how what is produced is distributed among the members of the group. Given the overwhelming importance of economic systems for human affairs, it follows that the social discipline that studies them must be equally of overwhelming importance, in this case economics. A step further is the distinction between the dominant school in economics, that is, neo-classical economics, which is also sometimes labelled 'economic orthodoxy', and the less influential schools, such as Ecological Economics, Economic Anthropology, Old-Institutional Economics, and so on.
The concept of externality, already familiar from the writings of Cecil Pigou in the 1920s and Ronald Coase in the 1960s, fitted nicely into pollution issues (Pearce 2002). The economist's mission became the design of allocation mechanisms capable of realising foregone costs and benefits. As the leading environmental economists Baumol and Oates observed in their textbook: 'When the 'environmental revolution' arrived in the 1960s, economists were ready and waiting' (1988: 1). Beyond economic orthodoxy, human ecologists were further drawing attention to the world's population increase, mainly territorially restricted to poor countries, 9 while the expanding environmental movement was concluding that the mounting ecological problems were rather caused by 'consumerism', and more broadly by wasteful lifestyles. As wasteful lifestyles became synonymous with the pursuit of economic growth, the 'antigrowth' movement was born (Pearce 2002: 60).
However, it was not only the emergent environmental movement which perceived economic growth as the problem. The position of economists concerning the link between economic growth and the natural environment also began to show fissures. The discussion did not focus only on the concepts and relationships within a given set of assumptions, but also on the assumptions which themselves sustain the superstructure of macro-economic theories and which made possible the belief in perpetual economic growth. In the list of economic assumptions, nature was missing. 'Land' had long since been reduced to merely an input factor, deprived of all environmental functions and any traditional social meaning; and the newly re-emphasised 'externality' was seen rather as an exceptional case, therefore constituting a half-hearted ad hoc recognition of the sink function of nature in the economic process. As historian McNeill (2000: 335) put it: 'if Judeo-Christian monotheism took nature out of religion, Anglo-American economists (after about 1880) took nature out of economics'. The expansion of ecological problems was caused by the fact that economists were living in the 'cowboy economy' of the 'illimitable plains and also associated with reckless, exploitative, romantic, and violent behavior', while humanity was rather approaching the 'spaceman economy' in which the earth was a 'single spaceship, without unlimited reservoirs of anything, either for extraction or pollution' (Boulding 1966: 6). 10 In the spaceship economy, perpetual economic growth was physically unfeasible and, given the ensuing social and ecological costs of post-war growth, not even desirable, as British economist Mishan (1967) reasoned.
This fertile intellectual activity and debate between 1966 and 1971 took place mainly in the limited arena of academia. Projecting the discussion beyond this was the achievement of a team of natural scientists at the Massachusetts Institute of Technology who published in March 1972 a small report entitled The Limits to Growth.
Understanding the Whole
According to the scientist team, the failure of adequate political responses to tackle environmental and resource problems was due to a lack of understanding of the human system as a whole: 'we continue to examine single items in the problematique without understanding that the whole is more than the sum of its parts' (Meadows et al. 1972: 11, italics in original text). Using the new system dynamics methodology developed by Jay Wright Forrester and the computer model World3, the authors of Limits to Growth (LtG) examined the interaction of five key subsystems of the global system: population, industrial production, food production, pollution and natural resources. They assumed that population and industrial production were growing exponentially, in a world with absolutely fixed available resources. The time scale of the modelling ranged from 1900 until 2100. As the team abundantly emphasised, the world model was not intended to make exact predictions (Meadows et al. 1972: 93, 94, 122) given the extreme complexity and uncertainties involved in the real world. Their aim was rather to understand the global system's behavioural tendencies and to offer a plausible answer to the question: are our current growth policies leading to a sustainable future or to a collapse? Figure 3.2a, b show two scenarios of the world model from a total of seven. Figure 3.2a plots historical values from 1900 to 1970 and simulated values until 2100. It assumes no major changes in historical socio-economic relationships. It is the 'standard run', which illustrates that the world is 'running out of resources' in the first decade of the twenty-first century, while population collapse occurs in the middle of it. As industrial output increases exponentially, it requires an enormous input of resources. Resources becoming scarce led to a rise in prices which conversely left less financial capital to be re-invested for future growth. Ultimately, investment did not keep up with depreciation and the industrial base fell, along with agricultural systems which became dependent on industrial outputs such as fertilisers, pesticides, and especially energy sources for mechanised agriculture. Population continued to increase for approximately two decades and finally started to decline when the death rates were driven upward by a lack of food and health services. The Meadows et al. (1972) team ran five more scenarios in which the initial assumptions made in the standard run were additively relaxed. Nonetheless, in each case the population inevitably collapsed during the twenty-first century due to ever rising pollution, food shortages, and so on. Figure 3.2b plots an aggregate scenario of several technological and political responses to shortages. Technology is being implemented in every sector: nuclear power, recycling, mining the most remote reserves, withholding as many pollutants as possible, pushing further yields from the land and having 'perfect birth control'. 11 Population collapse has simply been delayed by several decades. In this scenario, three crises hit simultaneously: food production drops because of land erosion, resources are depleted by a prosperous population holding an average income per capita close to the US level, and pollution rises, drops and then rises again dramatically, causing a further decrease in food production.

10 It is widely recognised that the picture of the earth taken by Apollo 8 in 1968, the 'earthrise', gave a massive boost to the environmental movement. The 'earthrise' made it possible to conceptualise the earth as a beautiful, fragile, floating in the middle of nowhere, and especially, finite planet. It is remarkable that Kenneth Boulding introduced the spaceship analogy 2 years before the earthrise picture shaped the public imaginary.
The study was presented at a perfect time, as the first United Nations Conference on the Human Environment was held in June of that year, 3 months after the study was released. Nevertheless, the policy goal of stabilisation which the team proposed, and which happened to resemble Daly's idea of the stationary-state economy (zero-growth) advanced 1 year before, was largely dismissed. According to Beckerman (1972), delegates of poor countries made it profusely clear that they were not going to accept any policy arising from the study of some uncertain planetary limits that would hamper their future development. Henceforth, international relations could continue to operate under the frame of development set out by the US president Truman in his inaugural address of 1949, that is, actively reducing trade barriers and making the benefits of industrial progress available 'for the improvement and growth of underdeveloped areas' (Truman 1949). Additionally, LtG was unanimously rejected by leading economists (Beckerman 1972; Kaysen 1972; Solow 1973, 1974; Beckerman 1974). The common argumentative line was that technological progress and the market mechanism could prevent scarcity and pollution from constituting a substantial limitation on long-term economic growth. In essence, their way of looking at the problem was identical to that established by Barnett and Morse a decade before. Cole et al. (1973) re-ran the world model, yet they eliminated absolute limits of resources and let them increase pari passu with population and consumption, assuming additionally total control of pollution. They claimed that if 'the rates of (technological) progress are increased to 2% per annum collapse is postponed indefinitely' (Cole et al. 1973: 118).
The emerging economic heresy also contributed to the LtG debate. Its proponents were particularly emphatic about the incongruences and fallacies committed by their orthodox colleagues (Daly 1972: 949-950, Georgescu-Roegen 1975: 363-366, Mishan 1977). Georgescu-Roegen, for example, was impressed by the fact that many of the critiques made by economists of the methodology employed in LtG were the very same ones which they themselves routinely used. They condemned LtG for having used the assumption of exponential growth; nonetheless, economists themselves have always suffered from 'growthmania'. Economic plans have been designed with the explicit aim of obtaining the highest rate of growth possible, and the very theory of economic development is firmly anchored in exponential growth models. Furthermore, some of them used the very same argument of exponential growth -but applied it to the 'increase' in technological progress- in order to criticise LtG. This argument, besides being circular, is fallacious on other grounds. Technology is a non-physical entity -unless it is embodied in capital- that as such cannot (exponentially) grow as a population does. Georgescu-Roegen concluded that economists proceeded according to the Latin adage: quod licet Iovi non licet bovi -what is permitted to Jupiter is not permitted to an ox (1975: 365).
In 1977, 5 years after LtG's release, Daly published his Steady-State Economics. The book was a collection of essays which dealt with logical inconsistencies committed by pro-growth proponents, and expanded on the physical and economic motives for a stationary but developing economy. Chapter 4, 'A Catechism of Growth Fallacies', dealt with 16 fallacious arguments. Four of them are worth reproducing here given their endurance: (1) Becoming rich through economic growth is the only way to afford the costs of cleaning up pollution: as Daly noted, this statement skips the relevant question of when economic growth will start to make a nation poorer rather than richer. The problem is that economists do not attempt to compare the costs and benefits of growth, apparently because it is tacitly implied that growth is always 'economic'. (2) Growth is necessary to combat poverty: Daly argued that in spite of the growth of the preceding years in the US, there was still poverty. The benefits of the reinvested surplus which generates growth go preponderantly to the owners of the surplus, who are not poor, and only some of the growth dividends 'trickle down'. For growth economists, Daly further reasoned, growth has become a substitute for concerns about inequality. Yet, with less inequality, less growth and consequently less ecological pressure would be required. (3) Growth can be maintained by further shifting the economy to the service sector: Daly argued that after adding the indirect aspects of service activities (inputs to inputs to inputs, that is, Leontief's input-output analysis), we will likely find that they do not pollute or deplete significantly less than industrial activities. Casual observation shows that universities, hospitals, insurance companies, and so on, require a substantial physical base. The reason why employment in the service sectors has grown relative to total employment is the vast increase in the productivity and total output of industry and agriculture, which in turn has required more throughput given the increased scale. (4) Oil is not recycled because it is still uneconomic to do so; humankind is less worried about the environment because it is currently not totally dependent on it; and nature imposes no inescapable scarcities: according to Daly these arguments can only be made given economists' illiteracy in basic natural sciences.
Notwithstanding these arguments, which were largely ignored, orthodox economists contributed to producing the general impression that LtG was simply a pessimistic prediction, along with the reaffirmation that technological progress would cope with all sorts of ecological problems. In contrast, LtG did contribute to popularising the sustainability debate which was emerging at that time, selling millions of copies and being translated into 30 languages (Meadows et al. 2004: x), and even influencing the opinion of leading politicians in Europe. Sicco Mansholt, the president of the European Commission (1972-1973), read LtG and concluded that growth in Europe should not only be stopped but even reduced, and replaced with another 'growth', that is, the growth of culture, happiness and well-being (Mansholt 1972).
In the late 1970s, the US was re-entering another economic crisis and successive efforts were focused on monetary policy in order to fight inflation at the cost of employment creation, thus risking a deeper recession. Almost simultaneously humankind was entering a global era of planetary overshoot (Fig. 3.1). The oil embargo imposed by the Organisation of Petroleum Exporting Countries (OPEC) upon rich countries in 1973 helped to trigger not only economic stagflation but also a debate on energy dependency. Subsequently, an energy policy embracing (1) nuclear power and (2) energy efficiency measures was discussed and partially implemented. As growing industrial economies need correspondingly increasing amounts of energy, and a part of that energy must be produced at home instead of being imported from countries located thousands of kilometres away, the vital but visible nuclear reactors rapidly produced a social response which had been in gestation for years: rejection. In 1969 the physicist Starr had already proposed a risk-benefit analysis by means of 'historically revealed social preferences' (Starr 1969: 1232) with favourable results for nuclear power, and speculated on the causes of the irrational risk perception by the lay public which was generating the opposition. 13 Later on, the social conflict was renamed the not-in-my-backyard syndrome (NIMBY), elevated into an analytical concept, and extended to all kinds of facility siting conflicts. Nonetheless, after the Three Mile Island incident of 1979, it was evident that the risk aversion and the nimbyism of the lay public could not be entirely dismissed as irrational. On the issue of energy efficiency and conservation policies, two energy economists were raising doubts about the effectiveness of such policies. They were resuscitating Jevons' conclusion, made more than 100 years earlier, that, contrary to common expectations, energy efficiency improvements would lead to more energy consumption, that is, such policies would 'backfire' (Brookes 1979, Khazzoom 1980). Hence, along with the revival of the pessimism of the so-called Neo-Malthusians, the pessimism of the Neo-Jevonians also came about. By 1980, another pessimistic report was released in the US, the Global 2000 Report to the President which, as the title implies, did not look as far ahead as LtG. The major finding was that: 'If present trends continue, the world in 2000 will be more crowded, more polluted, less stable ecologically, and more vulnerable to disruption [. . .]. Despite greater material output, the world's people will be poorer in many ways than they are today' (quoted by Dryzek 1997: 28). Georgescu-Roegen would certainly have said: because 'of greater material output'. Nevertheless, the timing for pessimistic antigrowth positions could not have been worse, for an era of exuberance would begin which could not handle the pessimism of the preceding years. 13 As Otway (1987) explained, risk perception studies appeared as the public entered decision-making over technological risks, therefore turning upside down the fiduciary trust in the public servants issuing the licenses and, even more, antagonising the deep-grained notion of technological progress. As risk perception studies did not bring the expected results, communicative risk studies emerged in an attempt to bring public opinion in line with experts' assessments. It must be mentioned, however, that communicative risk studies turned out to be useful in dealing with, for example, occupational and natural risks.
In the core countries of the West, the US and Britain, a new formula for economic growth, (allegedly) away from state interventionism and strong labour unions, would be put in place: neo-liberalism. The optimism of the new era found its place in the ecological debate concerning economic growth through what would later be called 'cornucopianism'.
The Sustainable Development Discourse
In congruence with the rising optimistic era of neo-liberalism, but acknowledging that there were real ecological issues at stake, the United Nations (UN) created the World Commission on Environment and Development in 1983, the same year in which the newly formed Green Party in West Germany managed to win enough votes to cross the election threshold for the federal parliament. The commission was established in order to investigate the links between the deterioration of ecological systems and economic growth. The world commission was the follow-up to the conference held in 1972, and it is better known by the name of its chairwoman, Mrs. Brundtland. The commission delivered the report Our Common Future in 1987, roughly a year after the optimism of infinite energy supply was shattered anew by the disaster of Chernobyl.
Enough attention has already been drawn to the political consequences of the report's conceptual ambiguities and its strong anthropocentrism. 14 For the aims of this chapter it is useful to highlight the origins of these ambiguities, specifically in relation to growth. If Sustainable Development (SD) was to have a chance of future implementation, it had to be politically acceptable in order to bring different interests to the negotiating table in the first place. Nevertheless, and according to political scientist Dryzek (1997: 124), as it was recognised that sustainable development would become the globally dominant discourse, powerful actors, mainly big businesses, made sure to cast it in terms which were favourable to them. Ultimately, sustainable development was politically successful, but it achieved this by sacrificing substance: 'lots of lobbyists coming together, lots of blurring going on - inevitably, lots of shallow thinking resulting' was the judgment of historian Donald Worster (1993: 143). To be sure, the difficulties lay in putting together the relatively well-framed 'sustainability' and 'development'. Sustainability was at bottom an ecological concept traceable to the German enlightenment. What is to be sustained is the environment, although mainly for human purposes. 15 On the other hand, the notion of 'development', as previously noted, was established by the emerging leader of the West in 1949. 16 Given the ecological debate of the preceding years in the US, and the increasing appeal of the notion of 'qualitative growth', that is, more leisure for family and hobbies, during the 1970s and 1980s in Germany and France among others, 17 it was evident that the general economic policy goal of growth was at stake. The question to be solved was then: how to maintain the perpetual economic growth policy if the planet has ecological limits? 15 The concept appeared in Germany in the late eighteenth and early nineteenth century. As Germany's economy depended in essential ways on its forests, which were rapidly declining, scientists were consulted to give advice. They started to talk about managing forests so as to attain a sustained yield, so that periodic harvests would match the rate of biological growth (Worster 1993: 144). Southern notions of sustainability, however, had given forests a less anthropocentric meaning. 16 However, by this time the official meaning of development had undergone several changes. Development meant practically projecting the US model of society onto the rest of the world, but in the late 1960s and at the beginning of the 1970s too few advances in this direction could be attested. As Sachs (1999: 6) explained: 'Poverty increased precisely in the shadow of wealth, unemployment proved resistant to growth, and the food situation could not be helped through building steel works'. Hence, in the 1970s and 1980s the meaning of development was broadened so as to include justice, poverty eradication, basic needs, women's issues, and of course, ecological problems. 17 During the 1980s, the Green Party and the Social-Democratic Party of Germany had been advancing a change in the stability act enacted in 1967, which basically reflected Keynesian doctrines of high employment through steady growth and balanced terms of trade. The reform of the stability act should aim rather at 'qualitative growth' in the sense explained above and at ecological balance. In France, during the 1970s, the demand for more leisure was famously made by philosopher André Gorz (for the former insight I thank Dr. Angelika Zahrnt, and for the latter, Dr. Giorgos Kallis).
Although, as noted before, the report was (inevitably) a product of political bargaining, it is necessary to understand how it coped with the dilemma, especially the arguments pertaining to needs and ecological limits, so central to the growth debate. The emphasis was first placed on poor countries, who were after all the ones to be aided with their development. Here, essential needs were defined in conventional terms: food, clothing, shelter, and jobs. It was also accepted that beyond them, the poor have legitimate aspirations for an improved quality of life (WCED 1987: 43). When the report switches to the realm of the rich, needs become perceived needs, socially and culturally determined, which possibly drives up levels of consumption. Therefore, the report reminds us that in the context of sustainability, values encouraging 'consumption standards within the bounds of the ecological possible and to which all can reasonably aspire' (ibid.: 44) are required. Although reaching ecological limits can be slowed through technological progress, 'ultimate limits there are' (ibid.: 45). Since sustainable development also involves equity, equitable access to the constrained resources ought to be granted before the 'ultimate limits' are reached. From these premises relating to frugality, equity and time-bounded growth because of ultimate limits, the conclusion was, however: 'The Commission's overall assessment is that the international economy must speed up world growth while respecting the environmental constraints' (ibid.: 89). How to speed up world growth, that is, economic growth for both rich and poor countries, while respecting ecological limits? The solution advanced was a change in the quality of economic growth, but not in the sense advanced in Europe years before.
Qualitative growth meant rather that growth must become less energy/matter intensive and more equitable in its impact (ibid.: 52). On this general recommendation some comments are needed, for the official environmental discourse has remained locked into sustainable development until the present. 18 First, the report was advising something that one of the main drivers of global economic growth, the manufacturing sector, had been doing since the industrial revolution, namely becoming less energy/matter intensive. In Canada, the US and Germany, energy intensity (the ratio of energy use to GDP) declined after about 1918, in Japan after 1970, in China around 1980, and in Brazil in 1985. The US used half as much energy and emitted less than half as much carbon per constant dollar of industrial output in 1988 as in 1958. For the world as a whole, energy intensity peaked around 1925 and by 1990 had fallen by nearly half (McNeill 2000: 316). However, these global happy trends of 'dematerialisation' and 'decarbonisation' obscured the trends in industrial expansion. In fact, industry had been too successful in this domain, inasmuch as when consumers were not able to cope with what manufacturing industries were putting on the market, industry started to produce consumers at home and to lobby for free trade abroad, a foreign policy already practiced by the first industrial nation, Britain. What was happening entered the intellectual radar of economist John K. Galbraith (1958), who resuscitated the forgotten Say's law: a growing supply creates its own growing demand. Yet his arguments found little response from his colleagues, who two decades before had restricted the boundaries of the study of economics so as to exclude inquiry into the origins of preferences. 19 The socially engineered cultural change partially accomplished by advertising techniques was investigated in the US by Vance Packard (1960). He described the birth of easy credit and the general inculcation of self-indulgence in the management of money, as well as the commercialisation of virtually every aspect of life, and the technique of built-in 'progressive obsolescence'. Progressive obsolescence was introduced both by lowering standards of quality by design and by psychologically outmoding products after a given time. 20 Growth became de facto a self-contained policy rather than a means to achieve a societal goal, since the 'private economy is faced with the tough problem of selling what it can produce' (Packard 1960: 17). What is important to highlight from this process is what troubled Packard and Galbraith at that time, namely that the consequences for social welfare were neglected, let alone the political and ecological consequences, of which Packard was not unmindful. The topic would be discussed years later by Erich Fromm (2007) and Fred Hirsch (1977), yet all of these growth caveats had little influence on the Brundtland report.
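A back-of-the-envelope sketch, using hypothetical round numbers rather than the historical figures cited above, shows why falling energy intensity ('dematerialisation') is compatible with, and can obscure, rising total energy use:

# Hypothetical round numbers, not historical data: intensity halves while scale quadruples.
gdp_1925, energy_1925 = 100.0, 100.0   # index values for the starting year
gdp_1990, energy_1990 = 400.0, 200.0   # GDP quadruples, energy use "only" doubles

intensity_1925 = energy_1925 / gdp_1925   # 1.0
intensity_1990 = energy_1990 / gdp_1990   # 0.5 -> intensity has fallen by half

print(intensity_1990 / intensity_1925)    # 0.5: the "happy trend" of dematerialisation
print(energy_1990 / energy_1925)          # 2.0: total throughput has nevertheless doubled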
Second, the fact that becoming even more efficient leads to an increase in throughput (input + output) went rather unnoticed. This was presumably because the revival of the Jevons' paradox a couple of years before had become irrelevant at the political level once oil prices returned to their customary level by the mid-1980s. Third, the rationale for already rich countries having to pursue further economic growth by consuming even more was that of helping poor countries with their economic growth, as they are 'a part of an interdependent world economy' (WCED 1987: 51). The alternative that poor countries could create their own markets by selling necessities to each other instead of selling 'even more extravagant luxuries to the jaded and harried rich' (Daly 1991: 151), or allowing for import substitution as had been put forward by Latin American economists in the 1970s and practiced with some success in the region, was entirely neglected. The mainstream doctrines of economic development that prevailed at the time, in which the Brundtland report was embedded, did not permit this. The policy of perpetual economic growth for the entire planet remained virtually intact in spite of the discussions regarding the issue in the preceding years. Indeed, with SD the intellectual debate was politically settled (Du Pisani 2006: 93), with one single exception: population growth. The report mentioned as a 'strategic imperative' the realisation of a 'sustainable level of population' (WCED 1987: 49). The combination of free trade and population control policies in poor countries was indeed, mildly put, suspicious.
After the UN Conference on Environment and Development (UNCED) held in Rio de Janeiro in 1992, sustainable development became gradually operationalised. The firmly established 'qualitative growth' has made it possible to talk ever since about 'patterns' of consumption and production, and to carefully avoid talk of less consumption and production. This is despite the fact that during the Earth Summit, which endorsed Agenda 21, it was argued that global ecological problems had arisen as a result of profligate consumption and production in rich countries. 21 When the report was launched, the global economy required roughly 1.1 planets; hence, humanity had started to live off natural capital, and not off its income. By the publication date of the report there was of course no ecological footprint metric, but LtG had been around for 15 years. Additionally, just 1 year before the report's publication, a group of natural scientists had published another study showing that humans were already appropriating 25% of the global potential product of photosynthesis (terrestrial and aquatic), and that when only terrestrial photosynthesis was considered, the fraction increased to 40% (Vitousek et al. 1986). 21 Recently, the nineteenth session of the UN Commission on Sustainable Development concluded in disappointment as governments were unable to establish a consensus to produce a final outcome text. Apparently, one of the main reasons for the lack of consensus was the failure to agree on the 10-Year Framework Programme on Sustainable Consumption and Production. On this shortcoming, the UN secretary general Ban Ki-Moon stated: 'Without changing consumption production patterns - from squandering natural resources to the excessive life-style of the rich - there can be no meaningful realization of the 'green economy' concept'.
By 1989 the Washington consensus had been formulated and the recipe was applied to poor countries which had previously become over-indebted, partially as a result of the pressures to reinvest the so-called 'petro-dollars' gained from the OPEC embargo in the 1970s, which had flooded development banks. The Washington consensus contained items such as the redirection of public spending from subsidies into pro-growth services, namely primary education, health care and infrastructure; trade liberalisation; and the privatisation of state enterprises; in short, the well-known Structural Adjustment Programs of the International Monetary Fund (IMF). In the same year, the Berlin wall fell and the process of German reunification began, thus shifting attention away from the previous discussions of reforming the Keynesian stability act (1967) for the purposes of 'qualitative growth' in the West German sense. After the Soviet Union collapsed in 1991 and the 'end of history' was proclaimed, neo-liberal doctrines conquered not only the former Soviet Union but also its former zones of influence, so as to transform them into more efficient growth machines than they had been previously (McNeill 2000: 334). The world entered the era of globalisation, institutionally rounded off in 1995 when the World Trade Organization (WTO) emerged out of the culmination of the Uruguay Round of negotiations of the General Agreement on Tariffs and Trade.
In 1992 the World Bank (WB) published its World Development Report entitled Development and the Environment, embracing the sustainable development discourse without conceptual difficulties, as the following anecdote shows: during a session in which the schematic representation of the economy was being discussed, the WB's chief economist Lawrence H. Summers refused to draw a larger box around the smaller box representing the economy. 22 The larger box would represent the natural environment, as suggested by Herman Daly, who was serving as senior economist at the WB's environment department. Why refuse something so simple and evidently true? As Daly explains, it was because of the subversive iconographic suggestion that the economy could not grow in perpetuity given the limits that the environment imposes. Moreover, 'a preanalytic vision of the economy as a box floating in infinite space allows people to speak of 'sustainable growth' - a clear oxymoron to those who see the economy as a subsystem' (Daly 1996: 7, italics in original text). 22 In the same year as the WB's publication, Summers attracted international attention through an internal memo that was leaked to the public. Impeccably applying the doctrine of comparative advantage, he suggested that many poor countries were 'underpolluted' and that dirty industries should be encouraged to move to them (for a retrospective analysis see Johnson et al. 2007).
Between 'Cornucopians' and Cautious Optimists
According to Dryzek (1997: 30-31), the fact that an economist of Kenneth Arrow's intellectual calibre and reputation co-authored a paper stating that the resource base is finite and that there are 'limits to the carrying capacity' (Arrow et al. 1995: 108) is an effect of the field of Ecological Economics pioneered by Kenneth Boulding, Nicholas Georgescu-Roegen and Herman Daly. The authors focused on unravelling the fashionable claim that economic growth and free trade (export-led growth) in poor countries (development 23) are in the long run beneficial for the environment, a claim that, as noted before, had already been made in the 1970s. During the 1990s it came to be known as the Kuznets' curve hypothesis. It postulated an inverted U-shaped curve describing the relationship between per capita income and indicators of environmental and resource quality, that is, only when a poor country becomes rich through export-led growth will its population start to become preoccupied with environmental quality. As Arrow et al. explained, the Kuznets' curve hypothesis had been shown just for a selected set of pollutants, yet orthodox economists have conjectured that the curve applies to environmental quality in general. Moreover, they were neglecting the export of pollutants from rich to poor countries effectively done by offshoring highly polluting industries, the purposeful implementation of policy to reduce environmental impacts in rich countries and, finally, that sometimes environmental concerns are not only about increased demands for environmental 'quality', as the resilience of ecosystems upon which communities depend can be irreversibly damaged. 24 Two years after the article appeared, and 10 years after the launch of the Brundtland report, the influential British magazine The Economist published in its Christmas special edition an article with the title Plenty of Gloom (Anon 1997). The article attempted to show its readers, by means of time-series graphs, the predictive errors made in the past by Malthus and his followers, concluding that there was no reason to believe their modern proponents. The article was important as it epitomised reasonably well another persuasive position, going beyond the trend set by Barnett and Morse in 1963: 25 the so-called 'cornucopians', famously represented by economist Julian Simon. The cornucopian rationale is the following: minerals and food have been made plentiful in the past, standards of living and life expectancy have risen, and technological substitution has taken place many times. By extrapolating these past trends into the future, with market prices as the basic metric of scarcity, it is concluded that growth pessimism is without substance. For example, on the issue of oil, the 'master resource', Simon stated that we will never run out of it (Simon 1996: 179). 23 The differentiation of economic growth and development gained support in some sectors of the development community during the 1990s (see for example Sen 1999), while other sectors were rejecting the notion outright (see for example Escobar 1992, and Sachs 1992). 24 The argument that only rich countries are preoccupied with the environment was also refuted by Martínez-Alier. He coined the term 'environmentalism of the poor' (Martínez-Alier 1995). 25 In a subsequent study called Scarcity and Growth Reconsidered (1979), Barnett reaffirmed his position. Nevertheless, many other authors, including Georgescu-Roegen and Daly, commented on the issue.
His argument was, however, subtler and the phrase misleading. In his view, it is not the oil that is important, but its service: energy. Indeed, the service of energy can be delivered by sources other than oil (substitution). As we will never run out of oil (energy), and energy will become increasingly cheap as in the past, it '. . . would enable people to create enormous quantities of useful land. The cost of energy is the prime reason that water desalination now is too expensive for general use [. . .]. If energy costs were low enough, all kinds of raw materials could be mined from the sea' (Simon 1996: 162). All of this is possible because the 'ultimate resource' is after all human inventiveness (technology), which is 'unlimited'. Prominent orthodox economists such as Beckerman never went so far as Simon, but Beckerman had also been using time series in order to show that there is little reason to attend to the warnings of natural scientists and derailed economists, for the former have been wrong too many times (Beckerman 1974, 1995). Beckerman additionally disdained the sustainability discourse for being 'morally repugnant' (Beckerman 1995: 125). He argued that needs are subjective, and that poverty is the contemporary world malady to be tackled, certainly through economic growth, for the entire world, and using the standard instruments of neo-classical economics to tackle scarcity and ecological problems.
The Economist's article presented a set of figures taken from the Food and Agriculture Organisation and the WB, showing declining prices of metals and food. It was argued that despite the fact that the world population almost doubled from 1961 to 1995, food production had more than doubled, even resulting in falling food prices. Other tragedies which had been predicted but which, according to the magazine, turned out to be wrong were rising cancer rates because of pollution, forest decline in Germany in the 1980s caused by acid rain, and famines due to population increase. Later, the journal Environment and Development Economics called for a response to The Economist's article. The call was answered by 12 scientists: 9 environmental economists, 2 ecologists and a climate scientist. They responded in the Policy Forum section of the journal and argued about the absence of markets and property rights for environmental services, but also about the complexity and uncertainty of socio-ecological systems and the non-linearity of numerous ecological processes. I will go into some detail regarding two arguments which reflect, in my view, the influence gained by Ecological Economics and Industrial Ecology upon Environmental Economics. The arguments are: (1) the problem with time series statistics versus processes and (2) the 'Heisenberg Principle' (Portney and Oates 1998: 531) which is at work when a prediction is made.
Time series statistics versus dynamic processes
Using time series to show that natural scientists were wrong is a weak argument because it does not take into account the natural resource base upon which production depends (Dasgupta and Mäler 1998). In agriculture, for example, increased food production (the green revolution) had been achieved by monocultures, pesticides, fungicides, soil depletion, and so on (Krebs 1998). Hence, the question to be asked is not only whether we can produce more food, but what the long-term ecological and social consequences of doing so in the way it is done are. On scarcity, Dasgupta and Mäler (1998) pointed out that price can be a very bad indicator. In fact, prices can decrease while the resource in question becomes scarcer. 26 Krebs (1998) argued that for predictive purposes, the understanding and modelling of underlying dynamic processes are more promising than simple time series statistics.

Heisenberg principle

Portney and Oates (1998) and Polin (1998) stated that the act of observing and forecasting social events is likely to affect the outcome. Hence, the previous predictions made by natural scientists raised awareness of looming problems, namely exponential population growth, ozone layer depletion, the effects of acid rain on German forests, and so on. The raised awareness was conducive to political action which prevented the predictions from coming true and which stopped damaging activities. Levin (1998: 527) affirmed that 'the greatest reward for one predicting catastrophe is to stimulate the implementation of measures that invalidate the predictions'.

These answers were very significant, and as far as I know, The Economist did not refute them, although it might have shaped opinion more effectively than the responses of a scientific journal with a specific and limited audience. As one of the main targets of ridicule was LtG, several of the scientists' responses sadly repeated the distortions made years before, for example, regarding the alleged predictions that LtG made (Hammitt 1998: 511, Perrings 1998) and the supposed failure to take technical change into account (Portney and Oates 1998: 530). On predictions, the following is one of the many phrases written by the LtG authors: 'This process of determining behaviour modes is "prediction" only in the most limited sense of the word . . . these graphs are not exact prediction [. . .] They are only indications of the system's behavioral tendencies' (Meadows et al. 1972: 92-93, italics in original text). With regard to the supposed failure to take technical change into account, it suffices to recall the technology scenario reproduced in Fig. 3.2b. Finally, and as previously mentioned, Krebs (1998) maintained that the understanding and modelling of underlying dynamic processes is superior to simple time series statistics. Nevertheless, Krebs failed to give proper recognition to, or to defend, the LtG team who inaugurated these types of studies. 27 The attention on LtG also raised the central question concerning economic growth, since after all, LtG's central tenet is that economic growth (and population growth) is in the long run simply impossible, and that a failure to recognise this would be calamitous. The only comment in this direction was made by environmental economists Dasgupta and Mäler (1998: 505) who expressed that: 'By concentrating on welfare measures, such as GNP and life expectancy at birth, journalists, political leaders and, frequently, even economists, bypass the links that exist between population growth, increased material output, and the state of the natural-resource base.' 27 I bring LtG back at the end of the 1990s because the widespread idea that LtG had been 'refuted' contributed to several issues being left unattended for many years. Presumably, this widespread perception also meant that the two later updates, published in 1992 and 2004, were largely ignored. More recently, Turner (2008) published an analysis of 30 years of historical data and concluded that they compared favourably with the key features of the 'standard run' reproduced in Fig. 3.2a.
They argued later that environmental problems are sometimes correlated by 'some people' with the wrong sorts of economic growth. On the other hand, Kneese (1998) expressed gratitude to the magazine for reminding readers that the impacts of economic growth on natural resources can be, and have been, cancelled by technological progress. He explained that, according to endogenous growth theory, national economies do not grow like balloons, for efficiency in the use of energy/matter prevents them from doing so. Similarly, Kriström and Löfgren (1998: 525) asserted that endogenous growth theory 'promises us permanent growth, due to constant returns to capital'. It may be worth recalling that endogenous growth theories simply attempt to account for the origins of technological progress, which was previously treated as given, that is, 'exogenous' to the neo-classical growth models. However, exogenous or not, this does not handle the issue of scale or the Jevons' paradox already mentioned, with the resulting impact on the natural environment and related social conflict. When this discussion was taking place, the global economy was already necessitating 1.2 planets, of which the largest share was due to what The Economist's author dismissed as the 'mother of all environmental scares': global warming.
Climate Change
From the 1990s on, the focus of the debate on ecological problems shifted progressively from depletion to pollution, more specifically to greenhouse gas (GHG) emissions causing an increase in global average temperature. 28 Climate change was put on the international political agenda at the Earth Summit in 1992 when the United Nations Framework Convention on Climate Change (UNFCCC) was created. The ultimate objective of the convention (article 2) was the 'stabilization of greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system', and in line with sustainable development it re-affirmed the objective of 'sustainable economic growth' within the context of the 'open international economic system' (UNFCCC 1992). The convention acknowledged several principles, such as the precautionary principle, the protection of the climate system on the basis of equity, the necessity that rich countries take the lead in combating climate change, and a consideration of the circumstances of developing countries. After 5 years of negotiations, the Kyoto Protocol with legally binding commitments was agreed in 1997. Thereafter, a political process of ratification began. The protocol included three international mechanisms in order to facilitate its implementation: International Emissions Trading, the Joint Implementation Mechanism and the Clean Development Mechanism. According to Munasinghe and Stewart (2005: 2) these mechanisms were developed specifically to satisfy the conditions required by the US, yet the progress initially made suffered a reverse when the US government refused to sign the Bonn agreement (an extension of the Kyoto protocol) in July 2001. Two months later the US suffered a terrorist attack and the attention of the entire West shifted away from the climate change issue. 28 Emphasis was also placed on the state of ecosystems and on development/poverty. It resulted in the release of the Millennium Ecosystem Assessment in 2005 and in the Millennium Development Goals in 2000. It is, however, my belief that climate change has been more at the centre of public attention in rich countries, and of the disposition for real action at the international level, than the bad shape of ecosystems and global poverty. The reason might be that climate change is logically related to the most sensitive geostrategic concern of rich and emerging countries: energy.
The visibility of the subject was again given a massive boost in 2006, when the British economist Lord Nicholas Stern published his Stern Review. That attention to climate change was brought back to the forefront by an economist indicated once again the extraordinary power of the profession. 29 As Jackson (2009: 11) put it: 'it's telling that it took an economist commissioned by a government treasury to alert the world to things climate scientist [. . .] had been saying for years', namely that humanity is at a crossroads. Climate change is a global and serious threat, and there is no doubt that it is anthropogenic. Climate studies have been compiled by the Intergovernmental Panel on Climate Change (IPCC), created in 1988. It has delivered four comprehensive reports thus far: 1990, 1995, 2001 and 2007. The following information is taken from the synthesis of the last IPCC report (IPCC 2007a). 29 Climate change was arguably not the only factor in the revived preoccupation with the topic. Since 2003 oil prices had been on the rise.
Global atmospheric concentrations of greenhouse gases (GHGs) such as carbon dioxide (CO2), methane (CH4), nitrous oxide (N2O) and halocarbons have clearly increased since 1750 (pre-industrial times) as a result of an expansion in 'human activities', whereby halocarbons did not even exist in pre-industrial times. For example, the global atmospheric concentration of carbon dioxide, the most important anthropogenic GHG, increased from 280 parts per million (ppm) to 379 ppm in 2005. The major growth in GHG emissions between 1970 and 2004 has come from energy supply (fossil fuels), transport and industry. There is convincing evidence that the rising levels of GHG emissions have a warming effect on the climate because of the increasing amount of heat energy (infrared radiation) trapped in the atmosphere: the greenhouse effect. In fact, the earth has become warmer since around 1900 by 0.7°C and it will continue to warm over the next two decades at a rate of 0.2°C per decade for a range of emission scenarios, and 0.1°C per decade even if the concentration of GHGs is kept constant at 2000 levels.
Increases in temperature estimates depend on specific emission trajectories for stabilisation, which have been provided by the IPCC since 2001. They show, for example, that a doubling of the pre-industrial level of greenhouse gases is likely to raise global average temperature by between 2°C and 4.5°C, with a best estimate of approximately 3°C, and that it is very unlikely to be less than 1.5°C. 30 The IPCC used three different approaches to deal with uncertainty, depending on the availability of data and experts' judgment. Estimations of uncertainty concerning the causal link between the increased concentration of GHGs in the atmosphere and the rise in temperature consisted of expert judgments and statistical analysis. Likelihood ranges were then constructed to express the assessed probability of occurrence, from exceptionally unlikely (<1%) to virtually certain (>99%). In this paragraph: likely >66%, and very unlikely <10%.
Presently, neither adaptation nor mitigation alone can avoid all climate change and its expected impacts. Adaptation is necessary in both the short and the long term to cope with the warming which will occur even under the lowest estimated stabilisation scenario: 445-490 ppm CO2e. 31 Indeed, this would still increase global average temperature by between 2.0°C and 2.4°C. Stabilising the concentration of GHGs in the atmosphere would require emissions to peak and decline thereafter, and the lower the stabilisation level chosen, the sooner the peak and decline would have to occur. By now, humanity has years rather than decades to stabilise emissions of GHGs. 32 The expected impacts of global warming are unevenly distributed across sectors and regions. In the following paragraphs a summary of expected effects taken from Working Group II (IPCC 2007b) is provided. 31 The totality of GHGs is usually converted into CO2 equivalents (CO2e). 32 A '2-degree goal' was agreed by G8 leaders in Italy in July 2009. They committed to cutting their GHG emissions by 80% by 2050. Nevertheless, they left the baseline year vague. In December 2009, the fifteenth conference of the parties (COP15) took place in Copenhagen, resulting in a non-binding agreement (the Copenhagen Accord). Later, Annex I countries, roughly speaking the rich countries, submitted their quantified emission targets for 2020 with baselines which ranged from 1990 (EU) to 2000 (Australia) and 2005 (US and Canada). One year later, at COP16 in Cancun, rich countries agreed on a Green Climate Fund worth USD 100 billion a year by 2020. The declared purpose of the fund was that rich countries assist poor countries in financing the mitigation of GHG emissions and adaptation. How the Green Climate Fund will be raised is still an open question. The overall assessment of COP16's achievements depends of course on whether analysts use political criteria or rather criteria oriented to the mitigation and solution of the climate problem.
Ecosystems: The resilience of many ecosystems is likely to be exceeded this century. Climate change will lead to increased flooding, drought, wildfire, pest outbreaks, ocean acidification, land use change, pollution, and overexploitation of natural resources. With an increase in global temperature which exceeds 1.5-2.5°C, 20-30% of the plant and animal species assessed thus far are likely to be at risk of extinction. In Latin America, increases in temperature and associated decreases in soil quality and water availability are projected to lead to the gradual replacement of tropical forest by savannah in Eastern Amazonia. In Asia, climate change will compound the pressures on natural resources associated with rapid urbanisation and industrialisation. In both Polar Regions, specific ecosystems and habitats are projected to be vulnerable as climatic barriers to species invasions are lowered.
Food: Globally, the potential for food production is projected to increase in some regions with an increase in local average temperature in the 1-3°C range. Above this range food production will decrease. In seasonally dry and tropical regions, crop productivity will decrease with even small local temperature increases (1-2°C). This will augment the risk of malnutrition and weaken political efforts to attain food security, with Africa especially affected. By 2030, production from agriculture and forestry is projected to decline in Southern and Eastern Australia, and over parts of eastern New Zealand, because of increased drought and fire. Similar projections are made for Southern Europe.
Coasts: Settlements located in coastal and river flood plains will be severely affected as sea level is expected to rise, due mainly to the thawing of the Greenland ice sheet. In the meantime, gradual sea level rise is expected to exacerbate inundation, storm surges and erosion, thereby threatening vital infrastructure and facilities which support the livelihoods of island communities. Coastal areas, especially the heavily populated regions in South, East and South-East Asia, will be at the greatest risk due to increased flooding from the sea and rivers.
Health: The health of millions of people is projected to be affected because of increased malnutrition, deaths, diseases and injury driven by extreme weather events such as floods, and because of higher concentrations of ground-level ozone in urban areas. Some health benefits from climate change are projected in temperate areas, such as fewer deaths from cold exposure. However, it is anticipated that these benefits will be outweighed by the negative health effects of rising temperatures. In Europe and North America climate change is also projected to increase health risks due to heat waves and the frequency of wildfires.
Water: Climate change will exacerbate current pressures on water resources from population growth and land use change such as urbanisation. Many semi-arid areas such as the Mediterranean Basin, the Western US, Southern Africa and North-eastern Brazil will suffer a decrease in water resources. Runoff from changes in precipitation and temperature will increase by 10-40% by mid-century at higher latitudes. Drought-affected areas are projected to increase in extent, with the potential for adverse impacts on multiple sectors such as agriculture, water supply, energy production and health. In Southern Europe, climate change is projected to worsen conditions due to high temperatures and drought in a region already vulnerable to climate variability.
It is worthwhile to mention that many causal chains are not completely understood by climate scientists. For example, the understanding of important factors driving sea level rise is limited; hence, the IPCC does not provide a best estimate for sea level rise, in part because sea level projections do not include uncertainties arising from carbon cycle feedbacks which can amplify the warming effect. Such warming-amplifying effects include, for example, the further weakening of natural carbon absorption, while severe increases in global temperature could be caused by the liberation of methane from peat deposits, wetlands and thawing permafrost. This means that the likelihood and magnitude of some effects may be underestimated. An increase in the global average temperature of more than 5°C would lead to major disruption and large-scale movements of population. Catastrophic events of this magnitude are difficult to capture with current models, as temperatures would be so far outside human experience. What is already well understood is that past and future anthropogenic GHG emissions will continue to contribute to warming and sea level rise for more than a millennium, because of the time scales required for the natural removal of the gases from the atmosphere. Although the prospects of climate change are appalling, to say nothing of the limited capacity of the relevant political actors in the international arena to deal with it, even more appalling is that the warming of the atmosphere is not the only acute ecological problem which humanity is facing. Indeed, other problems are plentiful and include ecosystem liquidation, unprecedented biodiversity loss, the collapse of fish stocks, water scarcity, loss of productive soil and impoverished communities. These ecological and social problems will simply become more acute through climate change.
The magnitude and urgency of the problem are evident, a notion which was conveyed by Stern. Nonetheless, his message was one of hope. Taking as the target the stabilisation of carbon concentrations in the atmosphere at 550 ppm CO2e, it would cost approximately 1-2.5% of annual GDP (Stern 2007: 227). The cost is modest ($1 trillion by 2050) with respect to the level and expansion of economic output expected over the next 50 years, which is likely to be over 100 times this amount (Stern 2007: 265). He argued that in order to achieve that target, strong policy would be required so as to redirect research and investment towards green technologies and away from carbon-intensive technologies, especially in the area of energy provision. Unfortunately, Stern took as the target the stabilisation of carbon concentrations in the atmosphere at 550 ppm CO2e, yet the IPCC's Fourth Assessment Report showed 1 year later that 450 ppm CO2e would be needed if climate change is to be restricted to an average global temperature increase of 2°C. In fact, the target may be even more demanding. Jackson (2009: 83-84) explained, drawing on two articles published in the journal Nature, that a 350 ppm target offers the best hope of preventing dangerous climate change. Stern could not have known this, writing 3 years before and using largely the IPCC's information published in 2001, even though there was already an international 350 ppm movement and the European Union (EU) had already proposed the 450 ppm goal.
When Stern published his review in 2006, the global economy already required almost 1.5 planets, yet a discussion on the direction of causality between economic growth and ecological obliteration, so fervently debated prior to the Brundtland report, was completely absent from Stern's work. Economic growth was Stern's default assumption for the entire globe. Finally, some of Stern's ideas would eventually be brought to the international political arena after a global shock which, unlike slowly worsening environmental conditions, expeditiously and decisively set political forces in motion.
Greening the Economy
The financial turmoil caused by the bursting of the housing bubble in the US, which almost resulted in a fully-fledged global economic recession between 2008 and 2009 and which largely overshadowed the food crisis of the preceding months, opened a political window of opportunity for a greened version of neo-Keynesianism worldwide. In September 2008, the Political Economy Research Institute (PERI) at the University of Massachusetts proposed a fiscal expansion of USD 100 billion (bn) which would create two million green jobs in key areas such as building retrofitting to improve energy efficiency, the expansion of mass transit/freight rail, the building of a 'smart' electrical grid, wind power, solar power and biofuels (Pollin et al. 2008). A month later, the executive director of the UNEP, Achim Steiner, argued for a 'Global Green New Deal' so as to redirect a substantial portion of the stimulus packages and bank bailouts prepared at the time to the green sector. The green sectors were the same areas already proposed by the PERI, with the addition of ecosystem 'infrastructure' and sustainable agriculture (Nuttall 2008). A month later, a group of investment advisors of the Deutsche Bank revealed the 'green sweet spot' for green investment formed at the junction of three factors: climate change, energy security and the financial crisis (DB 2008). Finally, in January 2009, the US president raised the development of a 'green economy' to the top of the US political agenda (Goldenberg 2009). Since the EU had for a long time been making active use of fiscal policy to 'decarbonise' its economies so as to meet its emission targets, 33 a green consensus among rich countries was achieved. Of the global stimulus plans worth nearly USD 3 trillion, over USD 430 billion went to the green sector (almost 16% of the total), primarily for energy efficiency (buildings, rail, and so on), water infrastructure and renewables (Robins 2009). In absolute terms, the green stimuli in China and the US took the lead, with USD 221 billion and USD 112 billion respectively. Yet the real green new deal took place in South Korea, with more than 80% of the total stimulus package (USD 38 billion) allocated to the green fund (ibid.). 33 Germany has been the forerunner with the enactment of the Renewable Energy Act in the year 2000. The government introduced feed-in tariffs encouraging the deployment of onshore and offshore wind, biomass, hydropower, geothermal and solar facilities.
In the following years, as the dust of the economic crisis temporarily settled, the idea of the green economy turned into a firmly established notion in the official environmental discourse through the Green Economy Report: Towards a Green Economy (UNEP 2011). In this report, the UNEP broadened the focus on green investments in energy efficiency so as to include the main raison d'être of SD: development and poverty. It also added many important elements of Ecological Economics to all the green-investment scenarios, such as investment in natural capital, eco-taxation, shifting subsidies away from harmful industries, and so on. The topic played a central role during the United Nations Conference on Sustainable Development (Rio+20) in June 2012. Despite the fact that the definition of the green economy is as broad as the definition of SD, 34 the authors of the report made a concise statement about why so little has been achieved in the years since the inception of the sustainability discourse. Their answer was: 'there is a growing recognition that achieving sustainability rests almost entirely on getting the economy right' (UNEP 2011: 16), and getting the economy right means, in this new context of Keynesianism, active state intervention in order to achieve sustainable or, by now, green growth. 34 The 'green economy [is] one that results in improved human well-being and social equity, while significantly reducing environmental risks and ecological scarcities' (UNEP 2011: 16).
Although laissez-faire proponents condemn this shift to green neo-Keynesianism, the authors of the report explain that market instruments alone cannot deal with pervasive externalities such as climate change in order to globally achieve an economy less dependent on fossil fuels. On the other hand, green technologies also need public procurement so as to protect them against the brutal competition of the market. Many technologies and public facilities which are taken for granted today have, contrary to neoliberal beliefs, been created and built under the tutelage of the state, such as aviation, the internet, roads and schools. It also seems clear that poor countries, especially the largest and most rapidly growing ones such as China and India, must be locked into an energy path different from fossil fuels so as to meet their energy requirements. Indeed, this is vital if humanity is to have a chance to tackle at least global climate change; whether this is doable given the gigantic and increasing energy requirements, price uncertainties and the changing geo-strategic game remains an open question.
By and large the report has historical relevance. It captures the changes in the direction of environmental policy which had been taking place within the borders of global players such as members of the EU and China, later joined by the US in the wake of a financial crisis and with a president less hostile to spending taxpayer money on green investment. These factors might explain the swiftness with which the green economy became the mainstream environmental discourse. To climb to this status took sustainable development almost 20 years, while the green economy made it in just 3 years. 35 The question that arises, and which will shortly be examined, is whether this response is adequate in view of the truly civilisational shift needed to cope with a worsened ecological and social crisis. 35 The authors of the report assert that the green economy is not meant to replace SD (UNEP 2011: 2).
First of all, the report maintains the growth commitment for the globe; after all, growth is also the goal of Keynesianism. 36 Keynes made the stimulation of public or private demand-driven growth a policy objective in the past century after 1945 (or before, in Roosevelt's New Deal) as a means to overcome the vicissitudes of the Great Depression. However, Keynes himself saw it as a time-limited policy and not as a perpetual endeavour, as has been implied since the Harrod-Domar growth models of the 1950s. 37 Second, the authors of the report maintain that the 'fundamental' reason for the social and ecological crisis is 'the gross misallocation of capital' in the last two decades (UNEP 2011: 14). Certainly, subsidising heavily polluting industries or failing to respect the regenerative capacity of ecosystems has been a grave mistake. However, it hardly follows that the fundamental reason for the ecological and social crisis is the misallocation of capital in the recent past. The general preoccupation with ecological problems, let alone with poverty, did not start with the inauguration of sustainable development, for this was a response to the joint effects of these problems within the constraints of the politically possible. An alternative fundamental reason would be that ecological and related social problems exist because of the metabolism of the industrial economy, and because of the economic policy of perpetual economic growth largely driven by the search for profits and rents on a non-growing planet. Third, the projections of the report reach as far as 2050. Assuming that through green investments (which are absolutely necessary) and further improvements in energy/matter efficiency we maintain global growth until 2050, what will happen thereafter? It is highly probable that humanity will end up simply doing the same or even more of the things which became cheaper because of the very same improvements in energy/matter efficiency. This is the Jevons' paradox, which has been mentioned several times in the preceding sections and which now requires more elaboration. 36 The rationale of Keynesianism is that fiscal stimulus funded by deficit spending will create employment, employment will generate income, income will spur consumption and savings, savings will fund investment, and investment will in turn generate further employment. With the revenues raised from a reinvigorated economy the government will pay off the debt. The whole purpose of the mechanism is economic growth. 37 See in particular his essay Economic Possibilities for our Grandchildren, written in 1930 (Keynes 2009).
William S. Jevons in his The Coal Question (1865) was concerned about Britain losing her economic dynamism and worldwide position because of a foreseeable depletion of coal reserves. On the one hand, while other countries were living on the annual regular income from harvest, Britain was living on capital which would not yield interest as it was being turned into heat, light and power, that is, that capital was disappearing forever (Martínez-Alier, 1987: 161). On the other hand, he doubted that gains in technical efficiency with regards to the use of coal would lead in the future to less coal consumption as was argued at that time: It is wholly a confusion of ideas to suppose that the economical use of fuel is equivalent to a diminished consumption. The very contrary is the truth [. . .] new modes of economy will lead to an increase in consumption. (Jevons 1865, quoted by Polimeni et al. 2008) The topic re-emerged almost 100 years later, after industrial economies had largely switched from coal to oil and later on to nuclear power for electricity as a result of the partial oil-demand destruction caused by the OPEC embargo during the 1970s and early 1980s. The article of Khazzoom (1980) elicited a renewed interest in the issue as he explained that some mandated standards for energy saving would even 'backfire' (Khazzoom 1980: 35). From then on an enlargement of the Jevons' paradox, which has been renamed the rebound effect, has been taking place. Theoretical and empirical studies have attempted to trace, for example, micro- to macro-economic effects. Nonetheless, the results of these studies remained unconvincing. For example, increased energy/matter efficiency would make a given commodity cheaper, which in turn would free household income to be spent either on more consumption of the same product or on other products in the case of low demand elasticity. Eventually it will pull up economic growth, and economic growth will mean, ceteris paribus, more resource extraction (inputs) and waste/pollution (output). The unconvincing part of this argumentative line is related to the insurmountable empirical task of following income effects up to the macro-economy, also aggravated by the different theoretical growth-approaches and the terminology used (see the reviews by Herring (1999), Biswanger (2001), Alcott (2005) and Jenkins et al. (2011)). However, and as already shown when discussing SD, the Jevons' paradox seems not to be a paradox at all. It was after all a major component in the pattern of development of the West, at least in its own terms.
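To make the mechanism concrete, the rebound effect can be stated with a small, purely illustrative calculation (the numbers below are assumptions for exposition, not figures from Jevons, Khazzoom or the report):

\[
\text{rebound} = 1 - \frac{\text{actual energy saving}}{\text{expected (engineering) saving}}.
\]

If a 20% efficiency gain is expected to cut energy use by 20%, but demand for the now-cheaper energy service grows by 15%, actual use becomes 0.80 x 1.15 = 0.92 of the initial level, a saving of only 8% and hence a rebound of (20 - 8)/20 = 60%. Should demand for the service grow by more than 25%, total energy use rises above its initial level, which is Khazzoom's 'backfire' and Jevons' original worry.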
The authors of the report fully recognised the Jevons' paradox in the green investment scenarios for the manufacturing sector (UNEP, 2011: 257-258), energy efficiency in buildings (ibid.: 357-361) and green cities (ibid.: 461, 479). Nevertheless, the policy implications which have followed from its recognition are by and large inconclusive. In the context of increasing energy efficiency in buildings the report could only simulate power demand and not overall energy use due to a lack of data. Power demand accounts only, according to the report, for roughly 30% of total energy used in buildings. In spite of the partial but highly positive results of the simulation, it is stated that 'economic growth in the green investment scenarios, approximately offsets the savings in power demand ' (ibid.: 357). This is the Jevons' paradox. However, policy implications are left rather inconclusive. It is simply stated that it 'highlights the importance of accompanying new technologies with appropriate behavioral and institutional change' (ibid.: 357), without specifically mentioning what kinds of behavioural and institutional changes are needed.
In the context of green cities, an example of a current green community in Britain is given in which households have achieved an 84% reduction in energy use and a 36% reduction in their ecological footprints. Nevertheless, it is specified (in a footnote) that although the residents of the community have reduced their footprint on site: A lot of their ecological impact is made outside of it, in schools, at work, and on holiday . . . [they also] fly slightly more frequently than the local average, presumably due to their higher average income. (ibid.: 461) This is the Jevons' paradox. The authors argued that these limitations do not undermine the achievements of the local development, which is utterly correct. They finally suggested the need for 'scaling up energy efficiency measures in wider urban settlement systems' (ibid.: 461). The problem is that scaling up efficiency measures will necessarily culminate in efficiency measures for the entire world, that is, from what is called relative decoupling (energy/matter efficiency gains) to absolute decoupling. That is precisely what is proposed for the manufacturing sector. In the context of manufacturing, or green investment scenarios, the report states that overall emissions, energy and material use have been growing in spite of efficiency gains. Figure 3.3 depicts a global trend of increasing resource extraction, population and GDP, while material use per unit of GDP has markedly declined (increased efficiency) in the period 1980-2007. The dilemma is settled by stating that 'what economies world-wide need is absolute decoupling of the environmental pressure with resource consumption from economic growth' (ibid.: 257). Absolute decoupling will imply that worldwide total resource extraction is held constant, while GDP still increases, as the report maintains the growth commitment. This conclusion may have the following problems. First, resource extraction as depicted in Fig. 3.3 is an aggregate of metal ores, industrial and construction minerals, fossil fuels and biomass. Resource extraction could be limited in one of these sectors because of substitution effects caused by scarcity. However, this would increase resource extraction in other sectors, which may conversely still increase overall resource extraction. This is at least the pattern which the historical evidence has shown so far. Second, and provisionally setting aside the increasing political and ecological conflicts associated with extractive industries, the problem seems not to be that the earth's crust does not contain enough minerals to maintain customary growth levels in the long run, but in waste/pollution. In other words, currently and only physically speaking, the problem does not lie in the input-side but in the output-side of the global economy. This observation does not disclose any recondite truth. Georgescu-Roegen stated, or rather prophesised, 40 years before we became so concerned with issues such as climate change, that because: Pollution is a surface phenomenon which also strikes the generation which produces it, we may rest assured that it will receive much more official attention than its inseparable companion, resource depletion. (1975: 377) Thus, it can be argued that once absolute decoupling is achieved, then waste/pollution problems will be gradually solved, but before this can be concluded, policy instruments facilitating absolute decoupling should be discussed and proposed. This is what is largely left inconclusive in the report.
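The distinction between relative and absolute decoupling invoked here can be written compactly. With R for worldwide resource extraction, Y for GDP and i = R/Y for resource intensity, a standard way of putting it (a sketch of the definitions, not a formula taken from the report) is:

\[
\frac{\dot R}{R} = \frac{\dot \imath}{i} + \frac{\dot Y}{Y}.
\]

Relative decoupling only requires intensity to fall; absolute decoupling requires that R itself does not grow, which means intensity must fall at least as fast as GDP grows. The trend described for 1980-2007, falling intensity together with rising extraction, is relative decoupling without absolute decoupling.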
A proposal would be to restrict the quantities of the resources according to the more stringent ecological or social necessity, and to let market prices fulfil their function. This proposal will be examined in detail in Sect. 3.3.5. However, it can be stated in advance that the chances for its implementation are rather low -as any other alternative whose implementation necessarily requires international governance structures dealing with constraints.
Foreseeable political difficulties at this level are perhaps an approximate explanation of why the report left largely unresolved the Jevons' paradox and even included as a major finding that the 'trade-off between economic progress and environmental sustainability is a myth' (UNEP 2011: 622). Industrial ecologist Robert Ayres, who was one of the chapter coordinators of the report, stated a couple of years ago that: None of the important economic actors, whether government leaders or private sector executives, has an incentive compatible with a 'no growth' policy. No economic growth is evidently not a politically viable proposition for a democracy, at least in a world with enormous gaps between poverty and wealth. But 'no growth' is an imperative as regards extractive materials, energy and pollution emissions because economic activity is based on a material function. (Ayres 2008: 290) And yet, unviable policy proposals do not transform theory and evidence into a myth.
Wither Economic Growth?
Over the last 40 years economic growth has not only been assiduously cherished, but it has been elevated from time to time to a true panacea: unemployment, development/poverty, overpopulation ('demographic transition'), and even ecological degradation ('environmental quality') have been claimed to be solved by economic growth, nay, by export-led growth.
Of course the problems of unemployment could be at least partially tackled in rich countries by working-less/work-sharing. Poverty in rich countries could also be overcome by using other instruments such as a basic citizen income, and by effectively tackling the gap between rich and poor which is increasing even in Western Europe (Jackson 2009). The citizens of the poorest countries in the world could also be relieved from this malady by a global minimum wage, or, if this is held to be an illusion, opposed not only because of ideological concoctions but also because of foreseeable implementation problems, then at least by a better distribution of the gains of economic growth, which hardly anyone claims they do not need. As economist Andrew Simms (2008: 49) observed: During the 1980s, for every $100 added to the value of the global economy, around $2.20 found its way to those living below the World Bank's absolute poverty line. During the 1990s, that share shrank to just 60 cents. This inequity in income distribution -more like a flood up than a trickle down -means that for the poor to get slightly less poor, the rich have to get very much richer. It would take around $166 worth of global growth to generate $1 extra for people living on below $1 a day.
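The quoted figure can be checked against the shares Simms reports: if only 60 cents of every $100 of global growth reaches those below the poverty line, then generating one extra dollar for them requires

\[
\frac{100}{0.60} \approx 167
\]

dollars of global growth, which matches the roughly $166 cited; at the 1980s share of $2.20 per $100 the figure would have been closer to $45.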
From this perspective, calling for more export-led growth as a means of development and poverty alleviation is misguided, and it has long since been recognised as such.
On the problem of population, China for instance did not wait for the effects of a 'demographic transition' which should automatically happen once she becomes rich through growth; instead she preferred an active top-down population policy. Contrastingly, the poor and working class in Western Europe and the US, and in some countries of Latin America, were practicing a century ago what Martínez-Alier and Masjuan (2008) called 'bottom-up neo-Malthusianism'. This was a popular movement which helped to bring down fertility rates in Western Europe against the pro-population growth policy of the state. 38 Respected demographer Carl Haub explained that 'well organized family planning campaigns are much more important than economic growth' (Hickman 2011). On the other hand, and although population growth still constitutes a problem for development in some poor countries, the truly global ecological problem is overconsumption in rich countries and its increasing emulation in emerging ones, as Paul Ehrlich, the very author of The Population Explosion, maintains nowadays. 39 Frugality or sufficiency (less consumption) is still a necessary condition for environmental sustainability as it was acknowledged 40 years before. Indeed, it is increasingly accepted today by social scientists who in the recent past have focused primarily on technological progress (Weizsäcker et al. 2009: 346). Based on the same rationale, there is a call to draft the 'Millennium Consumption Goals' (Assadourian 2011) and to implement, in line with democratic traditions and environmental justice, the 'One Man -One Vote -One Carbon Footprint' (Töpfer and Bachmann 2009).
However, since 'growthmania' is still in place and ecological problems continue to rise as expected in a world subjected to the laws of thermodynamics and ecological limits, the afore-mentioned scattered proposals are barely taken seriously by the social agents who matter: decision-makers in rich and by now emerging countries. The only way to maintain the growth commitment is to forcefully presuppose that only technological progress will drastically reduce the impact of growth on the biosphere. Technology is still 'the rock upon which the growthmen built their church' (Daly 1972: 949) in spite of recent historical evidence showing that technological progress can bring severe risks (EEA 2001), that it makes societies prone to fall into 'progress traps' (Wright 2005), 40 and that therefore, technological faith encompasses a great deal of utopianism which must be denounced as such (Jonas 1979: 9). As will be shown later, these caveats do not involve a rejection of technological progress altogether -the problem is (still) 'growthmania' and growth. 38 The arguments for voluntary population control were women's freedom, relieving pressure on wages ('womb strike'), anti-militarism, impeding migration overseas and the natural environment. Not surprisingly governments at that time harshly repressed the movement on grounds of religion and national interests (See Martínez-Alier and Masjuan 2008). 39 Ibid. 40 The notion of 'progress trap' coined by anthropologist Ronald Wright means that the problems created by technology can usually be solved only by more technology, and the new problems created by the latter must be solved by even more technology, and so ad infinitum. He also explained how 'too much progress' can be made. For instance since the Chinese invented gunpowder, there has been great progress in the making of bangs, but 'when the bang we can make can blow up the world, we have made rather too much progress' (2005: 5).
Although it is probable that the green economy will dominate official environmental discourse in the coming years, it is worth examining proposals which are less politically realistic but 'imperative', and which could replace economic growth.
Intellectual Foundations: Mill and Georgescu-Roegen
Classical economists were growth economists. 41 Material progress 42 was not only the source of national power -the interests of kings and merchants -but also a source of prosperity to the population at large (Arndt 1978: 7). Nonetheless, they all expected with pessimism an economic stationary-state. For Adam Smith the 'stationary [state] is dull; the declining melancholy' (Smith 1991 [1776]: 86). In the hands of Malthus the stationary-state is not only melancholic but dreadful given the propensity of humans to increase in numbers faster than the ability to produce food. Hence, population checks would inevitably arrive either by the 'vices of mankind' such as wars, and where these fail, then by 'sickly seasons, epidemics, pestilence [. . .] plague [and] famine [. . .]' (Malthus 1998 [1798]: 139-40). The Ricardian stationary state was not attractive but at least it did not have the horror portrayed by Malthus, for it can be postponed through laissez faire policy, developing free trade and the exploitation of the resources in the new world (Hicks 1966: 260). In general, however, the normal expectation of the individual was to live on the brink of starvation, and material progress would improve the conditions of those who were already wealthy. Political economy was indeed, as Thomas Carlyle once judged it: 'the dismal science'.
It was Mill who introduced a radically different view of the stationary-state. In his view the stationary state is highly desirable and as such, it deserves to be put as an overall policy objective. His line of reasoning anticipated many of the ecological and social arguments made against the perpetual growth policy from the late 1960s up to now. He saw no reason why the natural environment should be sacrificed through the combined forces of affluence and population growth. His arguments are worth quoting at length: Nor is there much satisfaction in contemplating the world with nothing left to the spontaneous activity of nature; with every rood of land brought into cultivation, which is capable of growing food for human beings; every flowery waste or natural pasture ploughed up, all quadrupeds or birds which are not domesticated for man's use exterminated as his rivals for food, every hedgerow or superfluous tree rooted out, and scarcely a place left where a wild shrub or flower could grow without being eradicated as a weed in the name of improved agriculture. If the earth must lose that great portion of its pleasantness which it owes to things that the unlimited increase of wealth and population would extirpate from it, for the mere purpose of enabling it to support a larger, but not a better or happier population, I sincerely hope, for the sake of posterity, that they will be content to be stationary, long before necessity compels them to it. (Mill 2004 [1848]: 692) Although his advocacy for conservation was specially directed at his home country, Britain, his vision can be enlarged so as to encompass today's rich countries for: It is only in the backward countries of the world that increased production is still an important object; in those most advanced, what is needed is a better distribution, of which one indispensable means is the stricter restraint of population. (ibid.: 691) Mill, differing from Ricardo, viewed birth control measures as the most important public policy, so that population becomes the fixed factor of production, and in so doing, ensuring that a large portion of the production surplus flows to wages. With regard to how to attain distribution, Mill stated that: [. . .] Mill also addressed what Fred Hirsch 120 years later would call the Social Limits to Growth (1977), whose ideas Daly integrated into his model. Mill could not conceive as the most desirable state of social life the one in which the norm is: 'struggling to get on; that the trampling, crushing, elbowing and treading on each other's heels' (ibid.: 690). 41 Reducing Daly's intellectual foundations to Mill and Georgescu-Roegen is an arbitrary choice for his views were also shaped by the works of John Ruskin, Frederick Soddy, Kenneth Boulding, and Irving Fisher among others. Nevertheless, as it will be shown, Mill's and Georgescu-Roegen's ideas constitute Daly's strongest foundations. 42 Progress ceased to be an issue of metaphysics as understood in the middle ages, and came to be a material issue in the early eighteenth century. Material progress or 'raising standards of living' became the means to achieve the greatest happiness for the greatest number, as the utilitarian principle proclaimed (Pollard 1968).
The second main intellectual source of Daly's thought was the work of the mathematician and economist Nicholas Georgescu-Roegen, 43 who rigorously treated the implications of thermodynamics in the economic process. He disclosed the fallacy of misplaced concreteness which the marginalists, and later neoclassical economists, have incurred by forgetting the resource base of the economy and in viewing the economic process through the lenses of Newtonian mechanics. 44 For the authors of the marginalist revolution, 45 the problem of land -until recently the economic term encompassing all natural resources -was abandoned, and economic growth ceased to be the central topic. They became rather concerned with the allocation of given resources (Screpanti and Zamagni 2005: 165), in spite of Jevons' energy analysis. Neglecting the role of resources in the economy was so intriguing that, as Georgescu-Roegen observed: 'Not even wars [. . .] for the control of the world's natural resources awoke economists from their slumber' (Georgescu-Roegen 1971: 2).
On the other hand, the ambitions of the marginalists in making out of economics a scientific discipline led them to adopt the Newtonian mechanistic worldview into their modelling. Nonetheless, while the marginalist revolution was taking place in economics through the adoption of Newtonian mechanics from physics, a revolution was taking place in physics which was abandoning Newtonian mechanics. The revolutionaries were Rudolf Clausius, Robert Mayer, and Herman Helmholtz, who grounded the new branch of physics, thermodynamics (Georgescu-Roegen 1971: 141-195, Martínez-Alier 1987), from which the law of conservation of energy and the entropy law were postulated. They are correspondingly the first and the second law of thermodynamics. 46 For Georgescu-Roegen the entropy law was the most relevant physical law in economics, which leaves no room for the mechanistic view of modern neo-classical economics so clearly implied in macro-economic books' charts depicting the economic process as a circular flow of national product and income in a perfectly competitive market. Entropy means that in an isolated system, energy would move towards a thermodynamic equilibrium in which energy is equally diffused throughout the closed space. The relation of the two thermodynamic laws and the economic process can be exemplified as follows: in the combustion chamber of the modern car engine the fuel is burnt. The resulting heat and the pressure of the gases apply force to the components of the car engine such as the pistons and the wheels. The evident result of the combustion process is locomotion: the car moves from A to B. According to the first thermodynamic law, the quantity of energy has not changed, yet a qualitative change has taken place. Before the fuel entered the combustion chamber, its chemical energy was available for producing mechanical work. After the fuel leaves the combustion chamber the chemical energy loses its quality and dissipates into the atmosphere where it becomes non-available energy, that is to say, it can no longer be used for the same purpose. This strict linearity and irrevocability from order to disorder represents the entropy law. The entropy law has enormous relevance, from the human perspective, to non-renewable resources. 47 If uranium, petroleum or coal could be re-used ad infinitum, scarcity would cease to be an economic problem and the resource pressures arising from a growing population and affluence could simply be solved by more frequently using the flows of the existing stocks. As much as we might believe in human inventiveness with respect to technological progress and semantics, it cannot reverse this linearity. 43 For a review of Georgescu-Roegen's thought see Maneschi and Zamagni (1997) and Daly (1996: 191-198). 44 Georgescu-Roegen maintained that the fallacy of misplaced concreteness was the cardinal 'sin' of orthodox economics from which only Marx, Veblen and Schumpeter offered substantial ways to transcend it (1971: 231). The fallacy, formulated by philosopher Alfred Whitehead, consisted of 'neglecting the degree of abstraction involved when an actual entity is considered merely so far as it exemplifies certain categories of thought. There are aspects of actualities which are simply ignored so long as we restrict thought to these categories' (Whitehead 1978: 8). 45 The figures were mainly William Stanley Jevons, León Walras and Carl Menger. For a detailed account see Screpanti and Zamagni (2005: 163-195). 46 The third law of thermodynamics is less relevant for economics. It states that the entropy of any pure, perfect crystalline element or compound at absolute zero is equal to zero.
Georgescu-Roegen was also very clear in stating that the dictates of the entropy law happen whether or not humans are around, for the economic role of humans is simply that of 'pushing or pulling' (Georgescu-Roegen 1971: 141). In other words, the economic process consists of accelerating the transformation from low entropy energy/matter into high entropy energy/matter, 48 that is, from speeding up depletion to speeding up waste/pollution. It also follows that, ceteris paribus, the greater the size and intensity of economic activity, the more depletion/pollution occurs. From this perspective it is not surprising that the greatest ecological problems have been caused by industrial economies based on fossil fuels in spite of continued efforts in 'ecological modernisation'. It is worthwhile to emphasise again that Georgescu-Roegen's central point is that these physical facts are not accounted for in economics: Had economics recognized the entropic nature of the economic process, it might have been able to warn its co-workers for the betterment of mankind -the technological sciences -that 'bigger and better' washing machines, automobiles, and superjets must lead to 'bigger and better' pollution. (Georgescu-Roegen 1971: 19)
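For readers who prefer the two laws invoked in this subsection in compact form, a standard textbook rendering (not a formulation taken from Georgescu-Roegen) is:

\[
\Delta U = Q - W \quad \text{(first law: energy is conserved)},
\]
\[
\Delta S \ge 0 \ \text{for an isolated system} \quad \text{(second law: entropy does not decrease)}.
\]

In the car-engine example, the first law says the joules are all still there after combustion; the second says that the fraction of them available for mechanical work has irreversibly fallen, which is exactly the qualitative degradation on which Georgescu-Roegen's argument rests.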
Unravelling Fallacies of Misplaced Concreteness
Drawing upon the ideas of Mill and Georgescu-Roegen, Daly further pursued the revision of economic theory disclosing and correcting further fallacies of misplaced concreteness (FMC). In the next paragraphs, I will discuss two of these fallacies which are central to understanding the theoretical tenets of steady-state economy: markets and technology. 49
The Market
Daly fully recognised the superiority of the market-economy in allocating scarce resources among alternative uses compared to a planned economy; nonetheless there are some negative features which require correction. They are (1) the tendency for competition to be self-eliminating, (2) the corrosiveness of self-interest on the moral context of the community that is presupposed by the market, (3) the existence of externalities which can be localised or pervasive, (4) an implicit amoral position on the issue of distribution, and (5) the lack of defining the optimal scale of the economy relative to the natural system. 47 As the earth is not an isolated system (it receives and reflects solar radiation) but a closed system (it does not exchange relevant amounts of matter with outer space), nonrenewable resources (fossil fuels and minerals) are in absolute terms finite. 48 Georgescu-Roegen later extended the entropy law so as to include matter and proposed the fourth law of thermodynamics. It has been disputed whether a 'fourth law' can be formally enunciated. It is, however, not disputed that matter inherently tends toward disorder too (see Daly and Farley 2011: 66). 49 The following paragraphs rely heavily on Daly (1991: 281-287), Daly and Cobb (1994: 25-117) and Daly (1996: 38-44).
1. The tendency for competition to be self-eliminating.
Competition is cherished by orthodox economists on the grounds that it improves allocative efficiency, keeps profits at the normal level and avoids, at least theoretically, the emergence of monopoly which can negatively influence market prices. The slogan is 'the more buyers and sellers the better'. Nevertheless, in the middle run many firms become few firms and monopoly power increases. In addition, in the long run giant conglomerates appear with their correspondingly giant corporate bureaucracies making the market economy hardly indistinguishable from a planned economy. Within a single country this development is economically and politically damaging, and even more so within the relentless pursuit of a global integrated economy.
As explained in the last section, as the laissez-faire intellectual movement gradually gained strength, free trade and capital mobility doctrines were (selectively) re-adopted and re-implemented. In this context, the enforcement of antitrust laws of individual nations became more costly, if not impossible. One of the reasons is that the accumulation of wealth tends to increase pari passu with political power. Agri-business, energy provision, media-entertainment organised as transnational corporations along with financial institutions are today in a position to influence polities and politics at different levels through many direct or indirect means. It ranges from structurally having become 'too big to fail', effectively lobbying for favourable legislation, to simple unspoken and direct threats of offshoring production or capital flight. Under these circumstances, not only the credibility but even the actual functioning of representative democracy erodes.
The theoretical foundation of free trade draws from the theory of comparative advantage as formulated by David Ricardo. However, one of the many assumptions upon which comparative advantage was formulated was capital immobility, an assumption which was taken for granted by Adam Smith prior to David Ricardo, 50 in spite of his famous invisible hand thesis. 51 The capitalist would not invest abroad even in view of larger profit margins, since according to Smith and Ricardo, the capitalist is primarily a member of the national community which forms his very identity. She/he would consequently avoid living under customs alien to her/him. This assumption clearly does not hold in today's globalised world of cosmopolitan money managers and global corporations. As Daly and Cobb observed: 'it is clear that Smith and Ricardo were considering a world in which capitalists were fundamentally good Englishmen [and] Frenchmen' (1994: 215). 50 Capital immobility is certainly not the only assumption that does not hold today. Understandably Ricardo could not think of environmental costs (pollution). On the other hand, he also did not consider transport costs, the costs of specialisation, and more fundamentally, the loss of the freedom not to trade. For a detailed review and analysis see Daly and Farley (2011: 355-363). 51 The often-quoted passage of the invisible hand of Adam Smith portraying the capitalist as a simple egoist who through his actions indirectly increased total wealth sometimes overlooks the very beginning of the quote: 'By preferring the support of domestic to that of foreign industry, he intends only his own security [. . .] he is in this, as in many other cases, led by an invisible hand [. . .]' (Smith 1991 [1776]: 351. Emphasis supplied).
2. The corrosiveness of self-interest on the moral context of the community which is presupposed by the market.
During the LtG-debate, Fred Hirsch authored Social Limits to Growth (1977). He believed that the growth discussion emphasising distant and uncertain physical limits was inappropriate, as it was overlooking closer and more certain limits, namely social limits. Social limits are a dual social phenomenon caused by economic growth. They are (a) the increasing importance of positional goods and services, and (b) the decreasing morality of individuals. As economic growth increases, affluence also increases, and with increasing affluence, individuals tend to value goods and services in relation to the valuations made by other individuals. In this process individuals are trapped in a spiral of social competition ('keeping-up-with-the-Joneses') which conversely makes the social position attached to those goods and services 'scarce'. From this process a 'paradox of affluence' results (Hirsch 1977: 175). When the growth process is sustained and generalised the outcome is frustration instead of happiness. The other social limit is the weakening of social values. Hirsch argued that the social foundations upon which the contractual economy works, such as truth, trust, acceptance, restraint and obligation, are undermined by the individualistic and competitive ethos nurtured by economic growth. Both arguments are taken up by Daly and put into the box of FMC cases. It is the fallacy of homo economicus. Orthodox economists abstracting from community forgot that there are also a homo ethicus, homo politicus, and more broadly the 'person-in-community' (Daly and Cobb 1994: 159).
3. The existence of externalities that can be localised or pervasive.
The standard market argument runs as follows: in a perfectly competitive market self-interest seeking individuals voluntarily exchanged goods and services. However, as some of the elements neglected in reality became evident to economists' experience, their existence had to be somehow acknowledged. It was noticed that many transactions between self-interest seeking individuals unintentionally affected other parties which were not involved in the exchange. This acknowledgement was integrated through the concept of externality. While Alfred Marshall was the first to draw attention to externalities, it was his pupil Arthur Cecil Pigou who developed a rigorous treatment of the issue in his The Economics of Welfare published in 1920. As previously mentioned, the concept gained relevance in the 1960s when concerns with environmental degradation emerged, especially those captured with the label 'pollution'. Pollution was then integrated in economic theory with the formerly introduced concept of externality. The concept externality primarily suggests that the phenomenon is external to the market, and therefore, measures to internalise them are proposed, namely Pigovian taxes/subsidies and Coasian property rights and markets. What is more important is that the phenomenon is also external to the theoretical edifice that builds on the market as an economic concept. Hence, the ad hoc introduction of the externality served to circumvent the revision of the entire theory, just as the ad hoc introduction of epicycles permitted Ptolemy to not reconsider his astronomy. However, and as Daly reasoned, when externalities are exceeding the absorption capacity of the biosphere, and threatening human life support-systems, it is time to rethink the whole theory and re-start with different abstractions.
4. An amoral position on the issue of distribution.
The market's criterion in the distribution of, for example, income is allocative efficiency rather than justice. People have no rights except the ones which they can buy according to what they can sell in the labour market. It can be seen as a sort of morality which was regarded as inevitable by Malthus and Ricardo ('iron law of wages'), when they, among many other intellectuals at that time, were intellectually overwhelmed in trying to explain why Britain was becoming so wealthy while at the same time generating so many poor people. This sort of morality is, however, hardly tenable within the humanistic tradition inherited from and preached by Adam Smith. For that reason, and as in the case of antitrust laws, societies have crafted institutions such as minimum wages and income tax progressivity as a societal mechanism of self-protection (Polanyi 2001). However, as in the case of antitrust laws, such social institutions have been gradually eroding in the second wave of globalisation.
5. The lack of defining the optimal scale of the economy relative to the natural system.
Markets do not have an 'organ' which tells us when to stop the demands made from the biosphere. This is the organ that Daly introduced. It is the notion of a macro-economic optimal scale of the economy, relative to the natural environment. The optimal scale is at the heart of the steady-state economy, and is what ultimately gives a sense to any concept of environmental and economic sustainability.
Technological Progress
Daly is not a neo-luddite, but equally not a believer in promethean gifts. He claims that the standard practice of attributing to technology all sorts of mystical faculties has its origins in 'growthmania'. The issue of technology is itself broad, so that only the relationship between scarcity, substitution and technology will be addressed.
Scarcity is the raison d'etre of economic thought. In production, scarcity of a given input factor is relative to the scarcity of other input factors, such as the fact that oil has largely substituted coal, aluminium has largely substituted iron and copper, and perhaps uranium will be substituted on a larger scale in the future by thorium. Nevertheless, in Daly's conceptualisation, this line of thinking is only the half-truth, and is what makes it a FMC. Resources were and are indeed substituted; however, substitution occurs within the strictly limited total of low-entropy stock. In the context of SD, orthodox economists advanced the idea of maintaining aggregate capital constant, that is, natural, man-made, human and social capital (Pearce 2002: 63-66). It implies that these forms of capital are substitutable, specifically, that natural resources can be substituted by reproducible man-made capital. The strongest position on this issue was once formulated by Nobel-prize winner growth-economist Robert Solow (1974: 11): If it is very easy to substitute other factors for natural resources, then there is in principle no 'problem'. The world can, in effect, get along without natural resources, so exhaustion is just an event, not a catastrophe.
In the hands of Daly, man-made and natural capital are complements and only marginal substitutes (Daly 1996: 76). The reason is plainly obvious: there are no other 'factors' apart from natural resources. Producing more of the allegedly substitute (man-made capital) requires more of what it is substituted for (natural capital). On the other hand, and as already noted, the overemphasis sometimes placed on the input-side fails to recognise that abiotic resources (fossil fuels and in general minerals) do not disappear when they are used up, they return to the biosphere as waste/pollution causing acid rain, global warming, oil spills, discarded plastics and e-waste. By now it seems that 'the sink will be full before the source is empty' (Daly and Farley 2011: 81) -as Georgescu-Roegen explained in 1971, and one of the LtG scenarios suggested in 1972.
Daly saw technological progress as necessary pertaining to what we can get out of the entropic direction of the flows arising from stocks, that is, energy/matter efficiency, but not within the paradigm of economic growth. Within the economic growth paradigm, technological progress will necessarily aggravate ecological and social vicissitudes.
From Social and Physical Limits to Growth Toward a Steady-State Economy
Daly departed from the pre-analytic vision that the economy is a sub-system of the larger environmental system. This pre-analytic vision implies, first, that there are physical limits to the smaller system with respect to the larger system. Since the latter does not grow, then the former cannot possibly grow beyond the physical limits imposed by the larger system. Second, since such physical limits exist, albeit not always straightforwardly knowable, it is also possible to derive a desirable (economic) limit of the smaller sub-system. 52 Therefore the question is: what is the optimal scale of the economy? Concerning physical limits, and as previously mentioned in Sect. 3.1, natural scientists have been working for a long time on indexes which measure both the relative and absolute impact of economic activities on the biosphere, such as LtG, the percentage of human appropriation of the total world products of photosynthesis, the footprint aggregate metric, IPCC estimations, and more recently, the planetary boundaries (Rockström et al. 2009). The rationale concerning the optimal scale of the economy is illustrated in Fig. 3, which plots the marginal costs and benefits of growth. At point D, the marginal costs of growth tend to be infinite, so even in the case that marginal benefits are still great, economic growth will cease. On the whole, a sensible policy recommendation would be to stop economic growth at point A. Beyond point A, economic growth ceases to be 'economic' and starts to be 'uneconomic', that is, it starts making a country poorer, not richer. Note that this argumentative line is far from radical or even novel; the principle that economic agents should expand the scale of a given activity up to the point where marginal costs equal marginal benefits is the principle around which microeconomic theory gravitates. In macro-economics the principle of optimality is dropped, which is what Daly called the 'glittering anomaly' (1996: 60). Given the physical and economic limits to growth, Daly proposed a simple overall policy objective: the steady-state economy (SSE). The SSE is the intellectual response to a world which is no longer empty but full, 53 which strongly resembles the cowboy/spaceship analogy of Boulding.
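The logic of points A and D can be written in the familiar marginal notation (a sketch using generic symbols, not Daly's own notation). Let B(S) and C(S) be the total benefits and total costs of operating the economy at physical scale S:

\[
\max_{S}\; NB(S) = B(S) - C(S) \quad\Longrightarrow\quad B'(S^{*}) = C'(S^{*}).
\]

Point A corresponds to the optimal scale, where marginal benefits equal marginal costs; growing beyond it makes marginal costs exceed marginal benefits and is therefore 'uneconomic' growth, while point D is where marginal costs tend to infinity and further growth is physically ruled out regardless of the remaining benefits.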
The SSE has three important components: (1) the stock of capital composed of people and artefacts (consumer and producer goods), (2) the flow of energy/matter throughput and (3) the service. The economy, just as animals, lives from its metabolic flow, beginning with extractions from the biosphere, and ending with the return of waste/pollution back to the biosphere. Input and output are conflated into the term 'throughput' coined by Boulding, and as already explained, throughput is entropic (linear, irrevocable and irreversible). The stock of capital needs throughput because capital is also entropic. The stock of capital is composed by dissipative structures, that is, structures which decay, rot, die and fall apart. Although waste materials can be recycled by biochemical processes powered by solar energy, such recycling is external to the animal or economy whose life depends on the services provided by the natural environment. Even though the SSE is primarily a physical concept, Daly acknowledged that the purpose of the economy is the satisfaction of human needs/wants (Daly 1991: 16), or as Georgescu-Roegen called it the 'immaterial flux, the enjoyment of life ' (1971: 18). This is conceptualised as the service. The SSE is defined as: 'an economy with constant stocks of people and artifacts, maintained at some desired, sufficient levels by low rates of maintenance 'throughput' (Daly 1991: 17). Hence, the service is the final benefit of the economic activity, while the entropic throughput is the final cost. The quality and quantity of services are strictly provided by the stocks and not by the flows. The relationships of the three components are depicted in the following definitional equation taken from Daly (1991: 36): The ratio (3) represents the maintenance efficiency of the throughput and the ratio (2) the service efficiency of the stock. Stocks cancels out as in real life they exhaust, hence the ultimate benefit is the service efficiency of the sacrificed ecosystem caused by throughput (1). Each component requires a mode of behaviour: regarding stocks, a level must be chosen which is sufficient for a good life and is sustainable in the long run. Throughput is to be minimised, while service must be maximised. Both throughput and service are subject to the maintenance of the chosen levels of stock. If the SSE's goal is to maintain constant the stock of people and artefacts, what is the part which should not be held constant? Daly's answer was straightforward: culture, morals, knowledge (technology), distribution, mix of capital, and so on, that is, qualitative change. Here, Daly differentiated between economic growth and economic development. Economic growth is quantitative change, whereas development is qualitative change. A SSE 'develops but does not grow' (Daly 1991: 17), just as the planet does. Daly in line with Mill maintained that humankind, especially rich countries, should be more concerned with being better (development) than with being bigger (economic growth).
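The definitional equation referred to above can be written out from the three ratios as they are described in this paragraph (a reconstruction following Daly's definitions of service, stock and throughput, with the numbering used in the text):

\[
\underbrace{\frac{\text{service}}{\text{throughput}}}_{(1)} \;=\; \underbrace{\frac{\text{service}}{\text{stock}}}_{(2)} \;\times\; \underbrace{\frac{\text{stock}}{\text{throughput}}}_{(3)}.
\]

Ratio (2) is the service efficiency of the stock, ratio (3) the maintenance efficiency of the throughput, and stock cancels out, leaving ratio (1): the final benefit (service) obtained per unit of the final cost (entropic throughput).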
ISEW/GPI Instead of GDP
The conception of the SSE necessarily led to a proposal which would replace the most important national account used to measure economic growth: GDP. The new metric would attempt to measure human welfare, and not simply unqualified market activity. Daly and Cobb developed the Index of Sustainable Economic Welfare (ISEW) in 1989, which was improved 5 years later (Daly and Cobb 1994: 62-83, 443-507). It originated an extensive range of similar studies from the 1990s up to the present. The ISEW was first tested for the US in the period 1950-1990. It was shown that from 1975 until 1985, the ISEW started to decline even when GNP 54 was rising. From 1985 until 1990 the ISEW rose slightly but much more slowly than GNP (Daly and Cobb 1994: 464). Instead of showing numbers and figures, I will discuss the conceptual differences between GDP and ISEW.
GDP is the total monetary value of the goods and services produced annually with the factors of production located in a particular region, usually the country. GDP is held to measure only market activity and not human welfare -although it is widely believed and acted upon the premise that it does. 55 This is the idea which was disputed by Daly and Cobb on the following grounds: (1) GDP considers defensive expenditures and other social costs as contributions to welfare and (2) GDP is a poor measure of income and wealth. Therefore, Daly and Cobb deduct defensive expenditures and other social costs from the ISEW (Table 3.1, items I-P). 54 At that time, Daly and Cobb (1994) were using Gross National Product (GNP). GNP measures the same as GDP, with the difference that what counts is not the location of the factors of production but their ownership (the residents of the country). GNP became outdated in the beginning of the 1990s. 55 On the issue the United Nations System of National Accounts (UNSNA) states the following: 'GDP is often taken as a measure of welfare, but the SNA makes no claim that this is so and indeed there are several conventions in the SNA that argue against the welfare interpretation of the accounts' (UNSNA 2009: 70).

Table 3.1 Components of the ISEW, US (1950-1990), with the sign of each item's contribution to the ISEW:
Personal consumption expenditures - A
Distributional inequality - B
Weighted personal consumption (A/B) - C
Services: household labour - D (+)
Services: consumer durables - E (+)
Services: highways and streets - F (+)
Improvement health and education public expenditures - G (+)
Expenditures on consumer durables - H (-)
Defensive private expenditures/health and education - I (-)
Cost of commuting - J (-)
Cost of personal pollution control - K (-)
Cost of auto accidents - L (-)
Costs of water pollution - M (-)
Costs of air pollution - O (-)
Costs of noise pollution - P (-)
Loss of farmland - Q (-)
Depletion of non-renewable resources - R (-)
Long term environmental damage - S (-)
Cost of ozone depletion - T (-)
Net capital growth - U (+)
Change in net international position - V (+)
Index of Sustainable Economic Welfare - ISEW (sum)
Per capita ISEW
Gross National Product - GNP
Per capita GNP

Regarding (2), the prime aim of Daly and Cobb was to produce a metric that tells us something about human welfare. Since in constructing the components of human welfare many controversial issues arise, the concept of income is preferred as it has a stronger theoretical foundation. Additionally, as it is supposed that income positively relates to human welfare, the ISEW departs from it. Two complementary conceptualisations of income are used for the ISEW: the first one is from the British economist John Hicks, who explained the purposes of income and offered a workable definition. The second one is from the US economist Irving Fisher, who mentioned another dimension of the income concept. For Hicks the 'purpose of income calculations in practical affairs is to give people an indication of the amount which they can consume without impoverishing themselves' and the practical purpose is 'to serve as a guide for prudent conduct'. Income is then defined as 'the maximum value which he can consume during a week, and still expect to be as well off at the end of the week as he was at the beginning' (Hicks 1948, quoted by Daly and Cobb 1994: 70). The same practical purposes of income, prudence and economic sustainability, should be applied to GDP. Yet, GDP does not measure them, as it excludes capital depreciation while capital depreciation impoverishes a country. Hence, GDP does not offer a prudent guide for avoiding impoverishment. In this sense, Net Domestic Product (NDP) would be superior to GDP (NDP = GDP - capital depreciation). 56 On the other hand, NDP is also not sufficient, for it includes only man-made capital, and ignores natural capital.
The reason is that orthodox economists, as previously shown, have taught that human made capital is a near-perfect substitute for natural resources, when in fact they are complementary. Therefore, resource depletion and environmental losses are included in the ISEW (Table 3.1, items Q-T).
The notions of capital and income of Irving Fisher are of greatest importance for the SSE, and consequently for the ISEW. For Fisher, capital or wealth is the stock of physical objects owned by human beings in a period of time, and income is the flow of service in its psychic magnitudes yielded by the capital owned (Daly 1991: 32). For example, an LCD television purchased this year is not part of this year's income, but an addition to man-made capital from which psychic income flows. It implies that a proper accounting of income will only reflect the flow of services of man-made capital enjoyed in the subjective stream of people's consciousness. As previously explained, the SSE requires that man-made capital accumulation is minimised, hence expenditures on consumer durables are accounted as costs, while their services are accounted as benefits (Table 3.1, items E,F,H).
Finally, since GDP does not include the value of household labour, performed mainly by women, and the welfare effects of income inequality, they are also included in the ISEW (Table 3.1, items B, C, D). The value of some public expenditures is also imputed (Table 3.1, item G). Net capital growth (increases in fixed reproducible capital minus the capital requirement, see item U) means that for economic welfare to be sustained over time, the supply of capital must grow to meet the demands of a growing population. However, it is expected, in line with the SSE, that at some point the population will stabilise. 57 Change in net international position (Table 3.1, item V) is national investment overseas minus foreign investment in the nation. If the change is positive, the nation has increased its capital assets. The final ISEW value is then divided by the population, yielding ISEW per capita, and the same operation is conducted with GNP. Finally, both are compared. 56 It must be mentioned that the UNSNA recognises the inferiority of GDP to NDP. The problem is that not all countries make such calculations, and when they do, it does not meet the requirements of the UNSNA. Nonetheless, it is acknowledged that NDP should be calculated (UNSNA 2009: 34). Whether the same considerations were made when Daly and Cobb were working on the issue is beyond my knowledge. 57 The assumption of a growing population is made in the context of the US.
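To make the bookkeeping of Table 3.1 concrete, the sketch below walks through the arithmetic just described with invented numbers; the variable names mirror the item letters of Table 3.1, but the figures and the helper function are illustrative assumptions, not values from Daly and Cobb.

def isew_per_capita(weighted_consumption_c, additions, deductions,
                    net_capital_growth_u, net_intl_position_v, population):
    """Illustrative ISEW arithmetic following the structure of Table 3.1.

    weighted_consumption_c : personal consumption divided by an inequality
                             index (item C = A / B)
    additions              : sum of items D-G (household labour, services of
                             consumer durables, streets, public health/education)
    deductions             : sum of items H-T (defensive, social and
                             environmental costs)
    """
    isew_total = (weighted_consumption_c + additions - deductions
                  + net_capital_growth_u + net_intl_position_v)
    return isew_total / population

# Invented figures (billions of dollars, population in millions of people):
a_consumption = 3000.0
b_inequality_index = 1.2                       # >1 means more inequality, shrinking item C
c_weighted = a_consumption / b_inequality_index
additions = 400.0 + 250.0 + 60.0 + 30.0        # items D, E, F, G
deductions = sum([350.0, 120.0, 80.0, 40.0, 50.0,   # items H-L
                  30.0, 35.0, 10.0, 25.0, 90.0,     # items M-Q
                  200.0, 150.0, 20.0])              # items R-T
print(round(isew_per_capita(c_weighted, additions, deductions,
                            net_capital_growth_u=100.0,
                            net_intl_position_v=-20.0,
                            population=250.0), 2))

The same operation would then be repeated for GNP per capita so that the two series can be compared year by year, as Daly and Cobb did for 1950-1990.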
The ISEW, with some variations in content, and later called the Genuine Progress Indicator (GPI), has been calculated for the majority of Western European countries, Canada, Australia, Chile (for a review of the studies see Lawn, 2003) and more recently in countries of the Asia Pacific region such as New Zealand, Japan, India, China, Thailand and Vietnam (Clarke and Sardar 2005, Zongguo et al. 2007; Lawn and Clarke 2010). The frequent result of these studies has been that increasing GDP stops being economic at a certain point. The index remains either constant in spite of an increasing GDP, or begins to decline. When the index starts to decline, it simply shows that additional growth is uneconomic. When GDP is growing, while the ISEW or the GPI remains constant, it is not only economically irrational but ecologically irresponsible to continue GDP-growth.
Institutional Change for the Steady-State Economy
An economic crisis is today understood as any threat to economic growth. If an economy fails the perpetual growth promise, it will produce social instability. It is fairly clear and well-documented: no increase in GDP means no jobs, no revenues, the collapse of the pension system, hence the rise of radical ideologies and social conflict. Since the SSE would presumably maintain GDP constant, is the SSE then a threat to the social fabric? The answer offered by Daly is the following: The fact that an airplane falls to the ground if it tries to remain stationary in the air simply reflects the fact that airplanes are designed for forward motion. It certainly does not imply that a helicopter cannot remain stationary. A growth economy and a SSE are as different as an airplane and a helicopter. (Daly 1991: 126) In this section, ten broad policy recommendations for institutional change required to achieve and eventually manage a SSE are discussed. 58 They are shown in Box 3.1.
Box 3.1: Institutions and the Steady-State Economy
1. Re-regulate international commerce.
2. Downgrade the IMF, WB, and the WTO.
3. Move to 100% reserve requirements.
4. Free up the length of the working day, week, and year.
5. Limit the range of inequality regarding income distribution.
6. Reform national accounts.
7. Enclose the remaining commons of rival natural capital in public trusts.
8. Use cap-auction-trade systems for basic resources.
9. Use ecological tax reform.
10. Stabilise population.

Policy recommendations one and two are intended to restore the autonomy of the 'community of communities' (nation-states). Re-regulating international commerce means that we should move away from the ideology of global economic integration: free trade, free capital mobility (financial globalisation) and export-led growth, in short, the core constituents of what is called globalisation. Daly is not against international trade, international treaties, international alliances, and so on. However, as the word suggests, international relations are between nations, and they should remain the basic unit. Global economic integration implies national economic dis-integration, the progressive erasure of national boundaries, in order to be reintegrated into the new whole: the globalised economy. Apart from the theoretical flaws upon which this policy is based, globalisation makes nation-states too dependent even for their basic survival, especially poor countries. It also pre-programmes international tensions and conflict. Poor countries should re-direct their efforts to build their agricultural capabilities for their domestic food demands rather than growing cash crops for unstable and highly speculative international markets. By and large, the development of domestic production for internal markets deserves priority. According to this policy of development, poor countries should use, for example, protective tariffs against subsidised agricultural products from rich countries. 59 Conversely, rich countries should also adopt protective tariffs in order to remain able to enforce rational national policies in the environmental and labour realm, that is, to shield them from the standard-lowering competition of poor countries with laxer environmental laws and lower wages. In organisational terms it means the downgrading of the IMF, WB and the newer WTO, perhaps reconsidering the original idea of Keynes at Bretton Woods. Keynes' original plan at Bretton Woods was to create an International Clearing Union, which would charge penalty rates on trade surpluses as well as on deficits in order to avoid imbalances of trade among its members. 60
58 Some of the policy recommendations discussed here can be fairly understood through Daly's theoretical tenets explained in the previous sections. There are other policy recommendations which would require extensive explanation. As extensive explanations are impossible in the limited scope of this chapter, the reader may consider consulting Daly and Farley (2011). 59 The position of Daly is also supported by a new generation of so-called 'post-autistic' development economists such as Ha-Joon Chang. He showed that today's developed countries, beginning with Britain, promoted their industrial basis and became rich through all sorts of protectionist measures, for example tariffs and subsidies, and later on 'kicked-away the ladder' for development in poor countries.
Chang attempted to show the little empirical basis of the claim that development was achieved through free trade embodied in IMF and WB policies. Interestingly enough, the same argument, based on historical evidence, was also formulated by Karl Polanyi (2001) in his critique of the classical liberal economists. He speculated on the dire consequences for Britain, had she ever followed the doctrines of Ricardo. I will come to Polanyi later. Chang's policy recommendation for development is roughly to repeat this pattern followed by rich countries in the past and maintained in many respects in the present (see Chang 2003, 2008). 60 Keynes blamed impoverishment, wars and revolutions on trade imbalances. The International Clearing Union would be a similar institutional arrangement to that which governs payments within nations, and would manage an international monetary unit (bancor). Clearance of balances between countries would be carried out by central banks through the accounts at the ICU. See for a recent discussion on the issue Piffaretti (2009). 61 For a rigorous treatment of the issue see Biswanger (2009). Biswanger also proposes an interesting set of policy reforms to cut off modern societies' dependency on growth. His monetary explanation of growth led him to propose a change in the stock and bond markets along with further changes in the institutional setting of joint-stock companies (corporations). Corporations are in his view the main drivers of economic growth nowadays. The support for corporations, which for economic, political and ecological reasons should be directly challenged, ought to be transferred to other legal forms of entrepreneurship less subject to growth that are typical of small- and medium-sized firms. He also proposes to encourage the formation of cooperatives and foundations.
Policy recommendation 3 is primarily concerned with putting an end to the fractional reserve banking system (money creation) and implementing 100% reserve requirements. The reasons are the following: first, and most evident, because money creation is one of the many institutional arrangements which fuels economic expansion and increases cyclical instability. 61 Money and debt can expand exponentially ('the magic of compound interest') while man-made capital cannot do so. According to Daly there also exists a conceptual confusion between capital and money: 'money fetishism'. The abstract symbol (money) came to dominate the concrete reality being symbolised (man-made capital). Daly treats it as a FMC (Daly 1996: 38). Second, we came to accept the idea of money creation as normal, yet 'the leading economists of the early twentieth century, Irving Fisher and Frank Knight, thought it was an abomination' (Daly and Farley 2011: 290). If money fetishism cannot be avoided, Daly prefers to conceptualise money as a public good (a non-rival 'resource'). It follows that seigniorage would be public revenue, instead of the money supply being privately loaned into existence at interest. Third, allowing private banks to become too big to fail has always been ill-advised on the same grounds as allowing industrial monopolies to emerge. Banks and other private organisations which are too big to fail are simply too big to exist.
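The expansionary mechanism Daly objects to can be illustrated with the textbook money-multiplier arithmetic (a standard illustration, not Daly's own calculation): with a reserve requirement r and a monetary base B, the banking system can expand deposits up to

\[
D_{\max} = \frac{B}{r},
\]

so that with r = 10% every unit of base money supports up to ten units of deposit money created as interest-bearing loans, whereas under the 100% reserve requirement of policy recommendation 3 (r = 1) deposits can never exceed the base and new money enters circulation only as public seigniorage.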
Policy recommendation 4 is of overriding importance with reference to the intentions behind the growth policy, namely, to combat unemployment. The consumption-driven growth policy was the cure which Keynes proposed to tackle the disastrous consequences of mass unemployment after the Great Depression. Hence, the current disproportionate reliance on economic growth certainly has historical reasons which explain the 'glittering anomaly' noted previously. Nevertheless, other feasible economic policies also exist to combat unemployment in a SSE, the most obvious one being the shortening/sharing of working hours. In rich countries, implementing this policy should be understood more as a great benefit than as a cost. It would allow for more options arising for leisure, such as hobbies, family, friendship, and community - in short, time for all those other activities which make a human life worth living. I will return to this issue in Sect. 3.5. It is worthwhile to underline that this policy is probably the most amenable to gradual implementation and testing. It is a reminder for those social thinkers and politicians who are genuinely concerned with the possibility that people would dedicate their increased leisure time to socially damaging activities. This policy also offers a possible solution for rich countries facing the problem of ageing populations, instead of the highly doubtful 'productive ageing'. 62 Policy recommendation 5 is believed to have direct consequences for the general welfare of the community and is complementary to the former one. A minimum wage has popular support and already exists in many countries. What is missing in Daly's view is a debate and eventually an agreement upon a maximum wage. Recall that growth is celebrated as the main means with which to eliminate poverty, and yet rich countries, categorised as such many decades ago, are experiencing increasing levels of poverty. Growth is by no means an economic policy which replaces policies fostering equality, such as tax progression. True, complete equality would be unfair, but unlimited inequality is also unfair, even if a country could approximate the normative purpose of 'equality of opportunity'. Furthermore, in the medium run gross inequality is politically damaging for any society. We might lack a clear-cut scientific standard which tells us how much inequality is 'gross', yet the same clear-cut scientific standard is missing regarding how much equality of opportunity really exists in a given society.
Reforming national accounts is the sixth policy recommendation. The main message of this policy is to separate GDP into a cost and a benefit account, and then to compare both accounts at the margins. This is what Daly and Cobb did with the ISEW, which also operationalised the central tenet that capital drawdown should not be counted as income. The remaining global commons of rival natural capital, such as the Amazon basin, should be priced and enclosed in public trusts. This is policy recommendation 7. At the same time, the non-rival commonwealth such as knowledge and information should be freed from patent monopolies. The guiding principle of this policy is to stop the treatment of the scarce as if it were non-scarce, and of the non-scarce as if it were scarce. Intellectual progress is customarily a collective process. In academia and the arts people have freely shared and built upon the ideas of others for centuries. Great thinkers and artists have been driven by the habitual need of 'making a living', but also by curiosity, intellectual satisfaction and glory rather than by the profit-motive. 63 Copyrights and patents, which were initially awarded for 14 years, have been extended under corporate lobbying for up to 95 years, in so doing hampering further intellectual progress. On the other hand, since technologies change so fast, the over-extension of patents keeps technologies out of the public domain until they are obsolete: The irony is that patent rights are protected in the name of the free market, yet patents simply create a type of monopoly - the antithesis of the free market. (Daly and Farley 2011: 177) Policy recommendations 8 and 9 are closely related; however, Daly strongly prefers cap-auction-trade systems over ecological taxes. He gives two reasons: first, it gives the correct order for institutional design: (1) environmental sustainability, (2) social justice and (3) market efficiency. This order is superior to making environmental sustainability and social justice dependent on market efficiency, which is too often considered to be an end in itself. The cap (or quota) effectively limits the scale of economic activity according to resource limitations or natural-sink constraints, the auction captures scarcity rents for equitable redistribution, and trade allows for efficient allocation.
62 This specific argument is advanced by Höpflinger (2010).
63 In 2001, 41 pharmaceutical companies took the South African government to court for importing cheaper 'copy' drugs from countries like India and Thailand to deal with its severe HIV/AIDS problem, which could not be properly tackled given the high costs of these drugs. After an international social uproar that showed the companies in a bad light, they withdrew the lawsuit. The companies argued that, without enforceable patents, there would be no more incentive for innovation. The argument, which seems compelling, is in reality only a half-truth. Many researchers all over the world come up with new ideas all the time; many government research institutes and universities even explicitly refuse to take out patents on their inventions. At the height of the HIV/AIDS debate, 13 fellows of the highest scientific society of Britain, the Royal Society, stated the following: 'Patents are only one means for promoting discovery and invention. Scientific curiosity, coupled with the desire to benefit humanity, has been of far greater importance throughout history' (The Financial Times 2001, quoted by Chang 2008).
This greatly resembles the concept of 'embeddedness' coined by Karl Polanyi in 1944, which he expected to be operative in industrial societies. I will address Polanyi's ideas in Sect. 3.4.1. In addition, it is worth noting that cap-auction-trade systems cut off the Jevons' paradox by starting with a quantitative limit, which would raise relative resource prices but not quantities. Second, caps or quotas are effective in other contexts, that is, in protecting ecosystems or, in general, renewable resources from liquidation, especially as long as the global financial system remains unstable and speculative. For instance, if a country depending on wood exports becomes the victim of an economic crisis, it might have to devalue its currency, and consequently it may be forced to overexploit the forest beyond its sustained yield.
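To see how the three elements of a cap-auction-trade system fit together, a stylised numerical sketch may help; the figures are purely illustrative assumptions and are not drawn from Daly. Suppose the cap on a resource is set at Q = 100 units per year and three firms bid in a uniform-price auction: 60 units at 12, 50 units at 9 and 40 units at 5 (in monetary units per unit of resource). The cap is exhausted by the two highest bids, the lowest accepted bid sets the clearing price p* = 9, and the scarcity rent captured as public revenue is p* x Q = 900. Permit holders may afterwards trade among themselves, so that the permits gravitate towards the users who value them most, while the physical scale of resource use stays fixed at 100 units however the permits are reallocated - which is also why the arrangement does not fall prey to Jevons' paradox.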
Whenever a cap-auction-trade system is unnecessary or difficult to enforce, an ecological tax reform could perform better. The principle underlying this proposal is to shift the tax base away from value added (labour and capital) and onto that to which value is added (entropic throughput). This procedure would internalise negative externalities and raise revenue. Population control is a central measure for poor countries where the problem still exists. For the US, the exception among rich countries in that its population is still growing, and the country on which Daly focuses, the policy is to achieve a balanced population, so that births plus immigrants equal deaths plus out-migrants. Daly, following Mill's doctrines, asserted long ago that the reason for the pro-population attitude of commercial elites is the effect of population growth on wages (Daly 1970). When a given population does not grow, commercial elites will tend to favour laxer migration policy or the moving of production to where labour is abundant and therefore cheaper. In present circumstances, this moves well-paid jobs and high environmental standards from the North to badly-paid jobs and lower environmental standards in the South. It is a transaction which makes the air cleaner in the North and dirtier in the South, but which effectively warms the atmosphere for both.
This short list of policy recommendations for institutional change leaves aside physical and political complexities of which Daly is aware. 64 It is also worth noting that some of these policy recommendations have been gradually implemented over the last few years, at different levels and in different regions, namely cap-auction-trade systems for GHG such as the EU Emission Trading Scheme and ecological tax reforms in Europe, both certainly not without dispute. 65 Other policy proposals have been made in the past, and partially implemented. Freeing up the length of working time in Western Europe was articulated by French philosopher André Gorz in the 1970s and was a central condition for attaining the German 'qualitative growth' demanded by labour unions during the 1980s, until they became weakened during the triumphant march of global laissez-faire policies (Loske 2011: 25, 27-28). Equally, the proposal of putting a part of the Amazon in an international public trust was launched by the Ecuadorian president in 2007: the Yasuní-ITT initiative. Yasuní is a biosphere reserve with oil reserves in the ground. Enclosing the national park would tackle three policy goals at the same time: (1) the reduction of 407 million tons of carbon dioxide (Gobierno Nacional de la República de Ecuador 2009), (2) the protection of biodiversity and the Amazonian forest, and (3) the protection of indigenous communities' rights to live at Yasuní in 'voluntary isolation'. The initiative, which was gradually gaining international support, now seems to be deadlocked after the German federal minister for Economic Cooperation and Development withdrew his initial commitment to co-finance the trust fund in September 2010.
There have also been similar policy proposals which are fairly old and highly controversial, not only due to the theoretical background which supports them, but because of the scale of vested interests involved. For instance, the move to a 100% reserve requirement is one of the main proposals made by ex-congressman Ron Paul in the US, who ran for president in 2008 and who is also in the presidential race of 2012. He wants to bring the gold standard back, which would put an end to what he sees as the inflationary monetary policy of the Federal Reserve. Limiting the range of inequality in distribution and reversing, or at least slowing down, the pace of globalisation are popular demands which took shape almost immediately when globalisation was pursued in the early 1990s by the IMF and the WB (Stiglitz 2002). The reform of national accounts, especially GDP, has been a frequently discussed topic for almost 40 years, if one sets the seminal paper Is growth obsolete? by Nordhaus and Tobin (1972) as the starting point of the debate and, as its end point, the report of the Commission on the Measurement of Economic Performance and Social Progress in France, published in 2009 (Stiglitz et al. 2009). All in all, there have been similar policy proposals which have been partially implemented and which have been the subject of on-going political disputes. Nevertheless, with regard to all of these policy proposals the overall policy objective has stayed undisputed: economic growth. The general intellectual contribution of Daly is to have coherently subsumed these policy proposals, from the standpoint of Ecological Economics, under an overall policy objective, the SSE.
Intellectual Foundations: Illich, Bookchin and Polanyi
Serge Latouche is the most visible French anthropological economist behind the contemporary promotion of de-growth in Western Europe. 66 Economically and politically speaking, his theories reach back to the French utopian socialists and the libertarian socialist views that followed - an ecumenical body of anti-authoritarian ideas - inspired by the writings of Jean Jacques Rousseau and Pierre Joseph Proudhon in its individualist version, and later by Michail Bakunin and Pjotr Kropotkin in its collectivist version. The latter social thinkers differ from popular Marxists by favouring and theorising on economic decentralisation and cooperation rather than the planned economy and a centralised state. Latouche takes his libertarian socialist ideas from Murray Bookchin's libertarian municipalism and Social Ecology, both conflated into ecomunicipalism.
Libertarian municipalism challenges parliamentary democracy as a means for public representation and policy formulation. In its place, the citizens of the municipality (the town or the village) should formulate policy through direct democracy; therefore, decision-making is not a hierarchical activity left to professional politicians, bankers, or, in general, technocrats, but to the municipal assemblies. This does not mean that experts' knowledge is discarded; it simply means that experts do not take the decisions which have overarching impacts on the community. Furthermore, Bookchin claims that an ethics based on the values of sharing, cooperation, and solidarity can only be pursued through direct democracy and politics as practiced in classical Athens: Direct democracy, the formulation of policies by directly democratic popular assemblies, and the administration of those policies by mandated coordinators who can easily be recalled if they fail to abide by the decision of the assembly's citizens. (Bookchin 2007: 48-49) On a more aggregate level he sees a confederation of eco-communities instead of the state. For Bookchin the ecological crisis has its roots in the hierarchical mode in which society currently functions, and he extends this insight to the relationship between humans and nature. In his view, the ecological crisis can neither be understood, let alone solved, without this understanding.
Another important part of Latouche's thought is the European cultural critique of modernity. Four years before Schumacher's Small is Beautiful was published, French philosopher Bernard Charbonneau published his Le Jardin de Babylone (1969), in which he deplored the 'gigantism' and the power of the 'technique' in the industrial world (Martínez-Alier et al. 2010: 1742). His reflections on the technique were further developed by French philosopher Jacques Ellul, who pointed out its alienating effects, whereby humans became the instruments of their own instruments. According to both philosophers, escaping the dark side of modernity requires cultural change, in which the values of productivity and individualism are replaced by quality of life, solidarity, frugality and voluntary simplicity (Martínez-Alier et al. 2010: 1742-1743). Another highly influential author of the cultural critique of modernity is the Austrian philosopher Ivan Illich. In his assessments of the notion of development, Latouche would prefer to see 'convivial societies' in rich and poor countries, rather than the rich 'developing' the poor (Latouche 2001, 2003a). The notion of convivial societies is taken from Illich. Illich argued, along the lines of Ellul, that machines were created under the hypothesis that they would replace slaves. As this hypothesis proved wrong it must be discarded: 'neither a dictatorial proletariat nor a leisure mass can escape the dominion of constantly expanding industrial tools' (Illich 1973: 10). One of the effects of the hegemony of the machines was the degrading of humans into mere consumers. From this perspective, it follows that this expansion must be limited and the positions of dominance inverted if the values of survival, justice and self-defined work are to be fostered and protected. Conviviality is the opposite of industrial productivity, and it means: Autonomous and creative intercourse among persons, and the intercourse of persons with their environment; and this is in contrast with the conditioned response of persons to the demands made upon them by others, and by a man-made environment. I consider conviviality to be individual freedom realized in personal interdependence and, as such, an intrinsic ethical value. (Illich 1973: 11) Illich believed that reversing the direction of dominance between machines and humans would set in motion an evolution of new lifestyles and political systems. Illich then moved on to outline his programme for a convivial reconstruction.
Although Latouche drew on both intellectual traditions, which are of apparently similar lineage, there exist irreconcilable tensions between them. Bookchin is a believer in reason and Hegelian dialectics; therefore he disdains the anti-rational bias of post-modernism, its anti-technological attitudes, and the anti-civilisational tendencies of the central European cultural critique which emerged in the 1960s. For instance, commenting on the best-known book of Ellul, The Technological Society, published in 1964, he stated: Ellul advanced the dour thesis that the world and our ways of thinking about it are patterned on tools and machines (la technique). Lacking any social explanation of how this 'technological society' came about, Ellul's book concluded by offering no hope, still less any approach for redeeming humanity from its total absorption by la technique. (Bookchin 1995: 30. Italics in original text) Although Illich later corrected this fatalism, he ended up with innocuous recommendations for lifestyle changes which were rather conducive to inwardness, narcissism and individual mysticism, thereby nipping in the bud any social cooperation needed to produce real social change. Bookchin disliked the widespread anti-technological attitude of the time for two additional reasons: first, it veils the causes of social dislocation and ecological destruction, which are in his view the hierarchical social relations of capitalism; and second, those thinkers forget that the same technology which extraordinarily raised productivity could be harnessed in a more 'rational' society so as to meet unsatisfied needs and to free humans from mindless toil for more creative and rewarding activities (Bookchin 1995: 29-30). Bookchin also explained that the minor changes which the cultural critique of the late 1960s and 1970s produced were easily absorbed and channelled into the economic and political market, whilst the structures they attempted to change remained intact.
The economic and environmental vision of Latouche was further enlarged by his experience as an anthropological economist in Africa, under the intellectual influence of Polanyi's Great Transformation, written in 1944. Polanyi's book offered a vivid description of Britain's social dislocation during her early path towards industrialisation, and examined the idea of the self-regulating market envisaged by political economists and developed to its fullest by David Ricardo. Ricardo conceptualised humans ('labour') and the environment ('land') as commodities to be exchanged in an ideal self-regulated market. Polanyi insisted that, had Britain and later other European powers ever followed Ricardo's doctrines, they would have destroyed themselves - literally speaking. Thus, prior to the Great War they did so only for limited time spans, given the emergence of mechanisms of self-protection such as mandated improvements in working conditions, pension systems, embryonic environmental legislation, and the like, which laissez-faire ideologues condemned as market distortions. Strikingly, these mechanisms emerged in an uncoordinated way in countries with exceptionally different cultural and political outlooks, such as liberal Victorian England and the strong Prussian state of Bismarck. Another uncoordinated self-protection mechanism which emerged was the export of social conflict. Polanyi explained the renaissance of colonialism, outmoded between 1770 and 1880, as an additional societal mechanism of self-protection against the attempts to forcefully implement free trade doctrines: The difference was merely that while the tropical population of the wretched colony was thrown into utter misery and degradation, often to the point of physical extinction, the Western country's refusal to trade was induced by a lesser peril but still sufficiently real to be avoided at almost all cost [. . .] to expect that a community would remain indifferent to the scourge of unemployment, the shifting of industries and occupations and to the moral and psychological torture accompanying them, merely because of economic effects, in the long run, might be negligible, was to assume absurdity. (Polanyi 2001: 224) It is worth emphasising that Polanyi was neither vilifying Ricardo nor arguing against markets. Ricardo (and Malthus for that matter) honestly believed that he was discovering the 'laws' which British society should respect for her own long-term benefit. The market, Polanyi explains, is an institution that has existed virtually since the Stone Age. He was merely warning against renewed attempts to subordinate the substance of society (humans and nature) to market 'laws', for they would necessarily culminate in catastrophe once again. In the 1930s, the laissez-faire movement and its counter-movement found themselves in political stalemate, until fascism seized power and broke with laissez-faire, democracy and peace. The self-regulated market was a strong utopia which Polanyi hoped to see transcended after the Second World War - as indeed was largely accomplished in central and northern Europe in the years thereafter. Given Polanyi's insights, Latouche saw the attempt to replicate Britain's pattern of industrialisation in poor countries under the heading of 'development' and 'progress' as socially and environmentally ill-advised, including the persistent practice of exporting social conflict.
Latouche was also aware of Georgescu-Roegen's work (Latouche 2004a: 63). It was Jacques Grinevald and Ivo Res who introduced Georgescu-Roegen's writings into the Francophone world. The title of the 1979 French translation of some of Georgescu-Roegen's writings was 'Demain la décroissance'. 67 They translated the English verb 'decline' into the French substantive la décroissance, and that word was translated back into the English language as 'de-growth' (Grinevald 2008: 15). The back-and-forth translations, which Georgescu-Roegen himself agreed upon, given his literacy in the French language and personal relations with the French philosopher Grinevald, 68 fully reflected his opinion that 'the necessary conclusion [. . .] is that the most desirable state is not a stationary, but a declining one [. . .] Undoubtedly, the current growth must cease, nay be reversed' (Georgescu-Roegen 1975: 368-369. Italics in original text). In this passage he was arguing against the SSE proposed by his former pupil Daly. This debate will be further examined in the next section. Latouche also embraced LtG reasoning, especially the immense destructive forces of exponential growth expressed in the ever-expanding ecological footprint and carbon emissions. For Latouche the major problem of rich countries was that of overconsumption, and for emerging and poor countries it was that of aspiring to the overconsumption of the rich, encouraged by the policies and cultural dominance of the North and by the corruption of the elites of both.
67 De-growth for tomorrow.
68 I am indebted to Prof. Martínez-Alier for this biographical note.
From 'Developmentalism' to the Virtuous Cycles of Rs
As previously noted, Latouche belongs to the short but growing list of social scientists and practitioners who have criticised the so-called 'developmentalist' project. Indeed, in their view, this project destroys viable societies through uniform development and the imposition of the utopian market society. His critique can be fairly summarised with the following quote: As long as hungry Ethiopia and Somalia still have to export feedstuffs destined for pet animals in the North, and the meat we eat is raised on soya from the razed Amazon rainforest, our excessive consumption smothers any chance of real self-sufficiency in the South. (Latouche 2004b: 2) Since the developmentalist project slipped into the sustainable development discourse, Latouche rejected sustainable development altogether. In his view, it was not only a contradiction in terms, as Georgescu-Roegen and Daly had previously claimed from an entropic point of view, but also a pain-relieving discourse in view of the harsh socio-environmental realities that economic growth delivers, which were further deepened by the progressive re-implementation of globalisation (Latouche 2003a, b). The political question is then: how to escape the iron cage of growth which is destroying both nature and humans?
Latouche's strategy began at the bottom, with localism as a response to development and globalisation. At this level a transition process or a 'virtuous cycle of quiet contraction' would be initiated (Latouche 2009: 33). The reason for starting with the local was simply that it was the only space of political action left by the overwhelming financial and corporate power of today's world, which has severely limited the scope of action of politicians. Placing the emphasis on political action, the term de-growth was the political slogan intended to defeat current pro-growth ideologies. As advocates of economic growth share a religious belief in it: 'we should be talking at the theoretical level of "a-growth", in the sense in which we speak of "a-theism", rather than "de-growth"' (Latouche 2009: 8).
An important step which was central to Latouche's thought was what he repeatedly called the 'decolonisation of the imaginary' from 'economicism' and the economy (Latouche 2003a, 2004a). This means the pro-active liberation of the mind from economic thinking, which is so hegemonic in social life, 69 and the pro-active liberation in the material sense, which is the creation of new autonomous spaces of social interaction and production in which frugality and voluntary simplicity can be practiced. The requirement is that of a cultural revolution across all levels, which may reach politics. The cultural revolution in politics would reduce the need for politicking and would likely re-establish the dignity of the political profession. The ultimate end is the convivial and sustainable society. The intermediate means is the serene contraction, which is composed of eight interdependent and, it is expected, self-reinforcing R-guiding concepts (Latouche 2009: 33-43).
Re-evaluate: The re-evaluation of social values which are admired but hardly practiced, namely altruism and cooperation instead of egoism and competition. Other preference-re-directing values ought to be re-evaluated: the local over the global, autonomy over heteronomy, and the appreciation of good craftsmanship over productive efficiency. A sense of justice, responsibility, and solidarity must be won back. An example of how the sense of justice has been so badly distorted by economicist thinking is the acceptance of almost anything which creates employment (growth) as inherently good, such as exporting pollution to poor countries, 'land grabbing', exaggerated military expenditures and the like. According to Latouche, re-evaluation along with re-localisation are the most important Rs in strategic terms.
Re-conceptualise: This means to deconstruct and reconstruct the meanings of wealth, poverty, scarcity, and needs.
Restructure: When values change, the productive apparatus must be changed accordingly. As the restructuring proceeds, the question of going beyond capitalism will inevitably be raised.
Redistribute: Within and among countries. Rich countries should restore or, depending on the specific situation of the country, improve a system of fair taxation and a fair distribution of the gains of economic booms. Redistributing from the North to the South is confronted with the 'payability' problem of the immense ecological debt accumulated by the North. Nonetheless, the mechanism is not so much one of giving away as one of taking less. Ecological footprints are a good metric for determining each country's drawing rights; hence, through the mediation of markets, an exchange of quotas and permits to consume could be made possible.
Re-localise: This deals not only with the re-localisation of productive activities, but also with culture and politics. The strategic importance of re-localising is to show that the 'concrete utopia' is doable in political and economic terms. Of great importance is the existence of a collective project which is territorially rooted, for example in the town, the village and so on, hence fostering the sense of belonging which will allow the protection of the common good and the emergence of other values. Latouche mentioned several examples of on-going projects at differing scales in Europe, such as in the province of Milan or in the Tuscany region. In fact, there are hundreds of on-going local projects which have emerged since the localisation movement appeared.
69 In the texts reviewed, Latouche blames the economic discipline as a whole.
Reduce: This means especially reducing consumption. Nonetheless, reduce is also directed at reducing health risks, working hours, and mass tourism. Fewer working hours and work sharing constitute one of the formulas against one of the main arguments for keeping the growth machine going. On the other hand, we must overcome the 'tragedy of productivism', that is, our addiction to work (Latouche 2009: 40). It makes us unable to rediscover the repressed dimensions of life, such as the pleasure of engaging and developing our talents, of practicing our hobbies, of playing, of enjoying conversations or of simply enjoying being alive.
Re-use/recycle: This is about reducing waste, fighting in-built obsolescence and recycling waste which cannot be reused. Latouche mentioned examples of firms which, through product design, make almost full recycling possible.
Resist: This is said to be the central R of the cultural revolution expected to be triggered and carried out by the rest of the Rs. Resistance is contained in all of the other Rs.
Against accusations of the potentially intransigent character of the eight R-guiding concepts, Latouche defended himself by claiming that they are a response to the excesses of the system with all of its 'overs': over-development, over-production, over-abundance, over-extraction, over-fishing, over-grazing, over-consumption, over-supply, and so on.
3.5 Steady-State or De-growth?
The ideas of Daly and Latouche differ mainly in their degree of theoretical elaboration and completeness. This asymmetry might be explained by the dissimilar time spans over which each of them has been involved in the economic growth debate. Daly has been writing on this issue for 40 years with remarkable scholarship and in a holistic fashion. In this time he has covered practically all of the topics related to the issue. Latouche, on the other hand, started to write about de-growth in the early 2000s, although he had already been arguing against the notion of development for a long time. He also seems less interested than Daly in the growth debate with economists, and more interested in broader political, social and cultural aspects. Although both social thinkers follow largely different intellectual traditions, similar policy proposals arise, albeit with different wording. This is perhaps a result of the exchange of ideas in the 1970s between US and European thinkers, and more recently with social thinkers from southern countries such as India, Ecuador and Bolivia. Another potential explanation may be that some of these proposals seem to be sheer common sense.
To make this point, a short review should be provided of the most convergent policy proposal, shared not only by both thinkers but also by a number of their intellectual mentors and many others: gaining leisure by working less, which conversely might free up jobs for others in the community. Keynes (2009: 198) was already, one might retrospectively say, dreaming in 1930 that the main problem of the worker would be 'how to occupy the leisure, which science and compound interest will have won for him, to live wisely and agreeably and well', and 2 years later Bertrand Russell even felt the necessity to praise idleness. The very architect of the 'German miracle', economist Ludwig Erhard, anticipated in 1957 a 'correction in economic policy' which would become necessary once people started to ask about the value of accumulation and eventually concluded that more leisure is more valuable (Radkau 2010: 46). In the 1970s French philosopher André Gorz was demanding leisure for French workers, while the disenchanted Austrian philosopher Ivan Illich was blaming 'the machine' for robbing it, and even for enslaving humans. On the other side of the Atlantic, Georgescu-Roegen felt that we needed to realise that an important prerequisite 'for a good life is a substantial amount of leisure spent in an intelligent manner' (1975: 378). Daly transformed this into a policy proposal that perfectly fits into his SSE, while Latouche wanted us to move away from the 'tragedy of productivism'. From these reflections, it seems clear that the underlying question has always been: what is, after all, the purpose of a society classed as materially rich if not liberating the majority of the population from the 'toil and trouble' of work? Not even some societal sectors in peripheral countries which have recently gained a voice are particularly enthused at the thought of becoming 'developed' by means of owning all sorts of gadgetry while losing leisure - especially given the bleak prospects of becoming 'developed' under the present global status quo.
Pertaining to SD, both thinkers are troubled by the current meaning of development, which so strongly implies export-led growth. Nonetheless, Daly sees the necessity of maintaining the sustainability political forum. Sustainable development is a dialectical rather than an analytical concept, of the sort of justice or love; therefore it is preferable to continue the political attempt to shape its meaning. There is after all no other option in the international realm. Beyond Daly's ideas of development, he formulated in the 1970s the 'impossibility theorem': a US or Western European high-mass-consumption style economy for a world of nearly four billion people (at that time) is impossible. Even more impossible is the prospect of an ever-growing standard of consumption for an ever-growing population. The physical limits of the earth will not support a world population in a 'developed' state (Daly 1991: 151). Latouche, on the other hand, sees no reason why poor countries should follow the development path of rich countries even if globalisation were non-existent. He introduces four additional Rs: Renew, Rediscover, Reintroduce and Recuperate: Renew contact with the thread of a history that was interrupted by colonization, development and globalization. Rediscover [. . .] cultural identity. Reintroduce specific products that have been forgotten or abandoned [. . .]. Recuperate traditional technologies and skills. (Latouche 2009: 58) Thus, Latouche formulates the Rs in terms of cultural emancipation. The only sharp difference in their conceptions of development is population control policy. Latouche rejects it for political and pragmatic reasons. He maintains that the population control consensus that emerged in the 1970s was based on hegemonic intentions (Latouche 2009: 25). Furthermore, just as the population rapidly increased in poor countries because of high fertility rates over the last decades, it will decline equally rapidly with lower fertility rates in the coming ones. 70 On the question of technological progress, Daly understands it as a fundamental component of the SSE but as highly destructive within the growth paradigm. Latouche emphasises the alienating characteristics of technology and believes in simpler and more manageable tools. By and large, Daly's SSE subsumes virtually all of Latouche's Rs in his theoretical framework, excepting the re-use/recycle R, especially the recycling of waste. Daly does not reject the practice of waste recycling, but this activity, independent of how we re-define 'waste', does not defeat the strict linearity of the entropy law. Daly also seems to be more specific than Latouche about the role of the state and the market economy.
Recently, Kerschner (2010) also attempted to compare de-growth and the SSE, drawing not only on Latouche but on other de-growth advocates. He addressed the following specific issue: de-growth proponents reject Daly's SSE and blindly cling to Georgescu-Roegen's judgment that, as mentioned in the last section, de-growth is preferable to the steady-state. Georgescu-Roegen was himself indeed highly critical of the SSE developed by his former pupil (see Georgescu-Roegen 1975: 366-369, 1977, 1979: 102-105). Before examining Kerschner's arguments in depth, it is convenient to first consider Georgescu-Roegen's arguments against the SSE. They can be reduced to:
1. Growth, zero-growth (SSE) and even de-growth (declining) cannot exist forever in a finite environment.
2. The SSE offers no basis for determining, even in principle, the optimum levels of (a) population and (b) man-made capital.
3. The SSE does not offer a guide with which to determine the appropriate stock of capital for humans' 'good life'.
Argument 1 arises simply by the strict application of the entropy law; therefore a SSE will be of finite duration. 71 Yet the same fate is also shared by a declining path. Argument 2a is true, 2b is somewhat true and 3 was true when Georgescu-Roegen was writing on the issue. Concerning argument 1, the discussion must ultimately rest on the duration and on the ethics implied, consciously or not, in any economic and ecological policy over the time span that humanity at large will live, which, needless to say, is highly speculative. However, for most people it might be unacceptable to ground public policy on a horizon of 150 years, that is, roughly the lifetime of three generations, while for the majority it would be ludicrous to ground public policy on 5 billion years, which is the estimated time before the sun becomes a red giant which vaporises all life on earth, and the earth herself. With this said, one must also have a sense of proportion. As Daly observed more than 30 years ago: In the very long run of course nothing can remain constant, so our concept of an SSE must be a medium run concept in which stocks are constant over decades or generations, not millennia or eons. (1979: 80) The question which then arises is: why did Georgescu-Roegen prefer declining economies if, after all, humanity is doomed to disappear? His answer was because 'the population is too large and part of it enjoys excessive comfort' (1975: 368). His references to extravagant wants (the rich's problem) were more frequent than those to the population problem (the poor's problem): 'If we understand well the problem, the best use of our iron resources is to produce plows or harrows as they are needed, not Rolls Royces [. . .]' (1971: 21). However, this did not come out of a capricious personal taste, but out of his proposed ethical principle: 'Love thy species as thyself' (1977: 270), which would require the overcoming of the 'dictatorship of the present over the future' and the replacement of the maximisation of present utility by the minimisation of 'future regrets' (1979: 102). 72 The latter requirement is, in his view, not served by discount rates arbitrarily set by present people - usually economists - against the future generations who cannot bid for the choice of present resources. Once this principle is internalised, 'right' prices, production, distribution, and pollution will naturally emerge.
70 The United Nations' estimations of global population growth have recently been readjusted. The current world population (7 billion) is projected to reach 10.1 billion by 2100, reaching 9.3 billion by 2050 according to the median variant. This increase is projected to come from high-fertility countries (in which an average woman has more than 1.5 daughters), which are mostly located in sub-Saharan Africa, but also include nine countries in Asia, six in Oceania and four in Latin America (UN 2011). On the other hand, as made clear several times, Daly and Latouche agree that current global threats stem rather from consumption in rich and emerging countries. After all, population is just one variable which puts pressure on the natural environment. Those poor countries with still-high fertility rates consume virtually nothing compared to the consumption levels in rich countries.
71 Beckerman (1995) also pointed out that the LtG stabilisation path scenario was cut off in the year 2100, hence omitting the declining availability of resources that would, according to the same model, arrive beyond 2100.
Therefore, in spite of the critical comments on the SSE, he judged as an improvement the ultimate-means (low-entropy matter-energy) - ultimate-ends (religion) spectrum which Daly (1979: 70) was elaborating: This paper thus strengthens the impression emerging from his previous writings that the essence of Daly's conception is not economic or demographic, but, rather, ethical - a great merit in a period in which economics has been reduced to a timeless kinematics. (Georgescu-Roegen 1979: 102) Argument 2a is true, but it is also only Georgescu-Roegen who fills the void with a proposal. Indeed, the SSE offers no guidance as to how to determine the optimum level of population, whereas the guidance Georgescu-Roegen offers is quite concrete. In his 'minimal bioeconomic program' consisting of eight points (1975: 374-379) he advises that world population should gradually be reduced to a level at which it can adequately be fed only through organic agriculture. He considered modern agriculture (and most modern technologies) to be energy squanderers. 73 Arguments 2b and 3 are thornier. Concerning the optimum level of man-made capital, Georgescu-Roegen insists that natural resources must be governed by quantitative regulations/restrictions, strictly rejecting the aiding of efficient market allocation through Pigovian taxation/subsidies, for these measures would simply end up benefiting the already wealthy and the political protégés (1975: 377). Quantitative regulations/restrictions are also Daly's preferred political instrument, but he does not reject taxation/subsidies, as shown in Sect. 3.3.5. In the case of quantitative regulations/restrictions, Daly's aim is to induce substitution through technological progress once restrictions are set. This is also stubbornly rejected by Georgescu-Roegen, as it precisely implies that an economy cannot possibly be in a steady-state (argument 1) and that Daly's proposal would mean 'joining the club of the believers in exponential progress' (1979: 104).
Regarding the appropriate stock of man-made capital for humans' good life (argument 3), this argument was true at the time, as there was then no refined framework with which to handle the issue. Nevertheless, Daly's notion of optimal scale was polished and operationalised years later through his ISEW. Here, Daly allows for the freedom of the individual to decide about the 'good life', which will eventually be shaped by the forces of the market once quantitative regulations are in place. 74 Additionally, Daly is also highly critical of extravagant wants stimulated by advertising and clings to the old economic principle of declining marginal utility: 'if nonsatiety were the natural state of human nature then aggressive want-stimulating advertising would not be necessary, nor would the barrage of novelty aimed at promoting dissatisfaction with last year's model' (Daly and Cobb 1994: 87-88). In other words, if want-stimulating advertising were absent, then individual choices aiming for the good life would ultimately require far less throughput. Georgescu-Roegen also advised us to educate ourselves to despise fashion, to make durable goods even more durable and to design them so that they are repairable. In spite of argument 3, and as noted before, he wanted us to understand the importance of leisure for the 'good life'. Similarly, with the resources freed by the prohibition of the production of all instruments of war, it would be possible to help poor nations to arrive as quickly as possible at 'a good (not luxurious) life' (1975: 378).
The last paragraphs thoroughly cover Georgescu-Roegen's criticisms of the SSE. Coming back to Kerschner's paper, he is basically troubled by argument 1, especially by the word 'annihilation' used by Georgescu-Roegen (1975: 367): The crucial error consists in not seeing that not only growth, but also a zero-growth state, nay, even a declining state which does not converge toward annihilation, cannot exist forever in a finite environment. (emphasis supplied) The first problem Kerschner sees is that de-growth proponents have adopted Georgescu-Roegen's position against the SSE while conveniently omitting the word 'annihilation' when they cite him (ibid.: 548). The second problem is that Georgescu-Roegen's position is 'a path without a constructive goal for policy making [. . .]' (ibid.: 547). This assessment had, however, been softened earlier, when Kerschner speculated that Georgescu-Roegen was referring to the entropic death of the universe. My own assessment is that the issue is to some extent overestimated. Clearly, neither Georgescu-Roegen nor de-growth proponents (at least Latouche) want us to de-grow humanity to death; therefore it is somewhat immaterial whether they quote the annihilation part of the sentence or not. 75 As mentioned before, Georgescu-Roegen was simply reminding us that humanity is mortal. Finally, and differing from Kerschner's view, Georgescu-Roegen did offer several policy options in his minimal bioeconomic programme. If 'constructive' meant 'politically possible', that would be a separate discussion, to which I will return later. It is also useful to recall the time at which the debate was being conducted (Fig. 3.1).
73 By the same token, his advice was to master the harnessing of solar energy, which is the most abundant source of energy (albeit flow-limited), and by so doing to lower the increasing rate of terrestrial entropy. Additionally, he saw it as the safest technology compared to other technologies such as nuclear power.
74 As hopefully noticed, Daly is not a radical individualist.
A very interesting point raised by Kerschner is that Georgescu-Roegen appeared inconsistent in his critique of the SSE, and that he even implicitly supported it. Kerschner mentioned his organic agriculture/population proposal, which would ultimately imply a stabilised population, as Daly's SSE requires. One may add the proposal of helping poor nations to arrive as quickly as possible at a good life, which would mean a movement towards a SSE - through growth! It is difficult to accept that Georgescu-Roegen's erudite mind was being inconsistent, 76 but one must agree with Kerschner's assessment. Judging from his proposals on population and aid for the poor, Georgescu-Roegen's views fit favourably with Kerschner's observation that both movements are required: growth for the poor and de-growth for the rich, with a stable population for all, towards a (quasi) steady-state. 77
75 Admittedly, Kerschner has been far more involved in the de-growth discussions than I have been. This experience may have told him that the point ought to be made.
76 Georgescu-Roegen was a trained mathematician, economist, and philosopher of science with ample knowledge of physics and biology. He was the social thinker who was once called by Paul Samuelson 'a scholar's scholar, an economist's economist' and included in economic historian Mark Blaug's book Great Economists since Keynes, published in 1985 (Daly 2007). In Blaug's view it was this erudition and complex style that led his colleagues to ignore him so persistently hitherto. Daly adds as a reason that he, the mathematician, was severely criticising the excessiveness of mathematics in economics, the very element of orthodox economists' pride which confers the scientific status of the profession - or at least the appearance thereof (Daly 2007: 126).
77 The term 'quasi' steady-state was used by Georgescu-Roegen (1975) when he was explaining that such societies indeed existed in the past, but they were rather culturally and technologically stagnant.
Globally understood, both schools are indeed complementary, as Kerschner concluded - in fact they must logically be so. In order to reinforce Kerschner's conclusion it should be re-emphasised that Daly's SSE has been continuously refined and expanded over the last 30 years. He built upon Georgescu-Roegen's bioeconomics (the lowest part of Daly's spectrum), improved the body of economic theory, integrated political/institutional insights and fully handled the 'ultimate end' (the highest part of his spectrum) in For the Common Good, co-authored with theologian John Cobb and published in 1989. The book became available only a year after the institutionalisation of the merger between ecology and economics through the inauguration of the journal Ecological Economics. This was a merger which Georgescu-Roegen had already seen as inevitable in the 1970s - although he was not fully satisfied with it, as his goal was rather to replace the, in his view, fatally flawed mainstream neo-classical economics with bioeconomics, and not to be relegated merely to a school of secondary importance (Levallois 2010). Finally, the metrics which came out in the late 1980s, and which have been refined and tested ever since, could replace GDP and thus must also be mentioned. Indeed, they are the kind of metrics which would tell us how much growth/de-growth is economic for the 'good life'. 78 After all, the SSE, as the name implies, is an (approximate) state, while de-growth, as well as its antithesis growth, are processes. In a nutshell, it would be incomprehensible if today's de-growth proponents, for whatever reasons, deliberately neglected 40 years of intellectual work which is in any case an outgrowth of Georgescu-Roegen's ideas and humanism, as well as of the sense of responsibility and urgency which flows from his intellectual legacy.
The preceding discussion brings me back to the comparison of Daly's SSE with Latouche's de-growth specifically. As previously mentioned, Daly's SSE is theoretically more elaborate and comprehensive than Latouche's de-growth, yet apart from Latouche's own theoretical contributions, his significance lies undoubtedly in the resonance of de-growth as a political slogan which has been capable of re-launching an academic and public debate in Europe, and to some very limited extent in the US. In the academic domain three international conferences on de-growth have been held, the first in April 2008 in Paris, the second in March 2010 in Barcelona and the third in June 2011 in Berlin. A fourth will be held in Montréal in 2012.
78 For sure, an index does not do justice to the richness of the meaning of the 'good life', which varies in different cultural settings, but this fact does not make indexes superfluous. As previously noted, Daly is not particularly taken with the notion of 'development', given the shaky ground upon which it rests, and even less is he an enthusiast of bringing it to the rest of the world. This is not only for ecological but also for cultural reasons. After all, he wants to transform economic thought so that it serves specific communities (see for example Daly and Cobb 1994: 133-137). On the other hand, while developing his ISEW with Cobb, he was keeping an eye on custom, or 'path-dependency', to use a more fashionable term. Given that GDP is the national account in which statistical efforts have been invested over the last decades, one has two options: (1) to disregard it and to force the introduction of several indexes, as has been proposed several times, or (2) to replace it with a better index building on the available information currently collected by statisticians, hence building upon the general obsession with a single index. Daly preferred the latter option.
Additionally, a new academic journal based in France, called Entropia, was launched. A number of books on the issue have recently been published in other core European countries, such as Prosperity without Growth by ecological economist Tim Jackson (2009) in Britain and the edited book Postwachstumsgesellschaft 79 by Irmi Seidl and Angelika Zahrnt (2010) in Germany. In the political domain Latouche explains that by now at least the French public is familiar with the slogan 'de-growth' (Latouche 2010: 201). In Germany, green politician Reinhard Loske (2011) has argued (again) for abandoning 'growthmania', proposing a set of political reforms which would enable this, some of them very similar to those proposed by Daly and Latouche. Furthermore, the European Commission published in September 2010, through its news alert service, an article dealing with what was called sustainable de-growth (EC 2010). It drew on an article authored by ecological economists Martínez-Alier et al. (2010). Although the intensity of the debate on economic growth originated mainly in the US, the topic remains today a strong taboo there, not only for professional politicians but also for the public at large; nonetheless, some minor ripples from the growth discussion in Europe have spread back to that country (Schor 2010).
The de-growth slogan advanced by Latouche was also debated in the journal Ecological Economics, particularly regarding its feasibility for political implementation. Van den Bergh (2011) has asked what should de-grow: GDP, consumption, throughput or work-time? These questions can be answered with what has been written so far. What is important to highlight is his assertion that GDP de-growth, consumption de-growth and 'radical' de-growth 80 are likely to meet strong resistance in democratic systems. He was certainly correct in his judgment that striving 'for political feasibility nationally and internationally is an important precondition for getting such a policy package implemented' (ibid.: 888). He then proposed what he sees as an effective policy package of five items, one of which was regulating commercial advertising more stringently and another taxing status goods. It is difficult to see how such policy proposals would not meet strong resistance in a democratic system. He also argued in favour of 'a-growth', that is, encouraging economists, politicians and the media to 'ignore' GDP. In this case, it is also implausible that propagating such an attitude towards GDP will automatically reduce the ecological impact of effectively growing economies structurally designed to do so. A comprehensive and appropriate response was put forth by Kallis (2011). The fact of the matter is that any policy package which challenges growth will receive strong opposition and will be rated as politically 'impossible' - as has been the tenor of the last 40 years. On the other hand, a broader and dynamic understanding of democracy could be helpful. The preconditions which Van den Bergh accurately identified can only be created bottom-up, be it for enacting his or Daly's proposals, or for that matter any proposal aiming at de-growth towards a (quasi) steady-state.
79 The post-growth society (translated by the author).
80 A notion too ill-defined, as he himself admits.
This is the role that Latouche and others are playing, which incidentally constitutes another dimension that makes de-growth and the SSE complementary. Perhaps it is useful to bring back to memory that in the West the ideas upon which social institutions such as slavery and patriarchy were based went unchallenged for millennia - not a few decades, as in the case of growth - and when challenges emerged they were held by some to be politically impossible. Finally, slavery was ended and patriarchy was undermined. In choosing between tackling a political 'impossibility' and a biophysical impossibility, reason tells us to judge the latter to be more impossible and to take our chances with the former.
Conclusions and Prospects
The aim of this chapter was to study the economic growth debate hitherto, and to review and compare two alternatives to it: the SSE of Herman Daly and the de-growth of Serge Latouche. The growth debate emerged out of the convergence of several ecological and political factors in the late 1960s in rich countries. The position of economists became divided on the issue, with the majority maintaining the growth commitment. It was, however, the Limits to Growth report published in 1972 which projected the debate well beyond academia. The debate remained strongly polarised until the Brundtland report was published in 1987, settling the issue at the international political level. The Brundtland report, which recognised the natural environment in essential ways, was however (inevitably) a product of political compromise and, as such, neglected many important issues, notably the phenomenon of socially-engineered wants, already well documented at that time. It also ended up making recommendations such as improvements in energy/matter efficiency while ignoring scale effects (Jevons' paradox), which were also widely known, albeit strongly disputed. In spite of these disputes, the world economy continued to expand as measured, for example, by the ecological footprint. Years later, laissez-faire doctrines took over the world with a new formula for growth, which was expressed in the ecological domain by the radical optimism of economists such as Julian Simon. From the 1990s onwards the public focus shifted towards climate change, which by the beginning of the 2000s had evolved into a political stalemate. Climate change was given a boost through the Stern report in 2006, whose proposals became politically feasible only after the last economic crisis prompted a renewed interest in Keynesianism. The new circumstances allowed the notions of the 'green economy' and 'green growth' to find their way into the official environmental discourse, where they have been commonly used in Europe. The Green Economy report launched by the UNEP in 2011 was more coherent than the Brundtland report and reflected a gradual shift away from market-fundamentalism and the integration of many elements of Ecological Economics, such as state investments in green research, ecological restoration, public goods and, more generally, investments in the global commons such as the atmosphere. Nonetheless, the report failed to get to grips with the issue of scale, which logically allows for the preservation of the growth commitment.
The two alternatives beyond the official environmental discourse remain the SSE of Herman Daly and the cultural change called upon by Serge Latouche to realise de-growth in affluent countries. Daly drew on the ideas of Mill and Georgescu-Roegen. For Mill the stationary-state was highly desirable for ecological and social reasons. Georgescu-Roegen examined the implications of the first and the second law of thermodynamics for the economic process, and concluded that the growth policy had become untenable. Indeed, he even criticised the SSE which was being developed by his former student Daly. Daly proposed a SSE in which low-entropic throughput is minimised and the service it yields maximised. He put forward economic (qualitative) development instead of economic (quantitative) growth, and cogently demonstrated that the latter can also be 'uneconomic'. A set of policy recommendations for institutional change consistent with the SSE was suggested, covering virtually the entire spectrum of economic and environmental policy. It is useful to underline his policy of quantitative restrictions proposed in order to tackle the Jevons' paradox, a topic left inconclusive in the Green Economy report. He proposed quantitative limits selected according to the most stringent necessity (depletion or pollution), letting production and consumption adapt to the new prices. Serge Latouche built upon the cultural critique of modernity of central European thinkers such as Jacques Ellul and Ivan Illich. From the Economic Anthropology of Karl Polanyi, Latouche derived his critique of uniform patterns of development, and from Murray Bookchin's ecomunicipalism he strengthened his cause for the local as a starting point. Latouche advocated a cultural revolution which should expand gradually whilst being guided by a set of interrelating R-guiding concepts.
The sets of policy recommendations which arose from both approaches were largely similar, such as working less and work-sharing, though arrived at through different lines of reasoning and wording, given their, to a degree, dissimilar intellectual traditions. The only marked differences were: the waste-recycling practices which Latouche advanced but which Daly viewed with reservation given the entropy law; and the population control policy which Daly supported as a still legitimate means of development, but which Latouche rejected on political and pragmatic grounds. Excepting these differences, Latouche's Rs could be subsumed in the detailed theoretical elaboration on which the SSE rests. The SSE and de-growth are not mutually exclusive but necessarily complementary approaches, unless we do not value human existence on the planet. At bottom, the SSE is, as the name indicates, a state, while de-growth indicates motion. The discussion will ultimately rest on: 1. the physical quantities which economies need (population and man-made capital) for the good life in the long run; 2. how to decide on them, that is, biophysical limits, Daly's metrics, and Georgescu-Roegen's organic agriculture/population proposal; 3. how to achieve them, that is, Latouche's cultural change and a dynamic understanding of democracy; and 4. how to maintain an approximate steady-state.
It is nearly impossible to add anything novel to the statements of Boulding, Mishan, Schumacher, Daly and Georgescu-Roegen, to mention only the most prominent scholars. Indeed, they stated with unparalleled clarity that, once consumption and production have become bad things, they should be minimised instead of maximised in countries which had already achieved an unprecedented level of material comfort. Others, such as Hardin, placed greater emphasis on population growth, an emphasis which was frowned upon by many. The rich were blaming the weakest members of their societies and, at some point, the world's poor for the calamities they saw looming. The honest mistrustful (and Hardin himself) too often missed the point that it was the combination of policies, and not policies singled out in isolation, which mattered. However, the conclusion remains fundamentally the same. In the present state of things, 'growthmania', world economic growth and population growth must cease, or even be globally reversed. With the latter objective some advancement has been made, while with the two former virtually no strides have been taken in the arena which matters: the political arena of core countries - and newly, the emerging ones.
It has long been well understood that with the perpetual quest for economic growth instead of, for example, economic development - in Daly's sense - or what European thinkers once referred to as 'qualitative growth', everything becomes more complex, vulnerable and, therefore, intractable for human management. It has the effect of pushing societies to resort to doubtful plan Bs such as geo-engineering proposals and the additional scaling-up of institutions. The increasing acceptance of geo-engineering proposals such as injecting sulphur into the atmosphere correlates strongly with the failure to secure a necessary binding international agreement on climate change. It is easy to note that this plan B fits perfectly well within the predominant cultural belief of humans dominating nature through technology, which conversely allows for the maintenance of the growth commitment. One can almost imagine installing a switch on the planet for when it gets too hot, similar to calibrating an air-conditioning system; while running the risk of forgetting that climate scientists have not completely understood, and maybe never will, the wide array of dynamic interconnections between the climate and life-support systems; that we may run the risk of falling again into a progress trap; and that, at this stage, we are only beginning to anticipate the potential consequences for international relations.
Scaling-up institutions, which began with mandatory 'end-of-pipe' treatments of waste and pollution, can be grasped as the reflexive societal response to tackle bigger ecological problems in an almost hopeless attempt to cope with increased entropy and overwhelmed ecosystems. Yet, the scaling-up of institutions must necessarily be accompanied by the scaling-up of governance structures for the purposes of enforcement - the rub of the issue. Institutions devoid of feasible enforcement mechanisms will remain, at least at the international level, simply good formalised intentions. At this juncture it should be acknowledged that this societal response, albeit necessary, further jeopardises parliamentary democracy and freedom, for the institutional scaling-up tends to shift decision-making away from the sub-institutional units of the nation-states. Additionally, the bargaining costs of co-shaping the content of these institutions will tend to increase proportionally. If bargaining costs tend to increase pari passu with the scaling-up of institutions, it implies that greater bargaining costs will likely be more easily borne by correspondingly bigger players, which include not only big states but, most importantly nowadays, big private organisations. This conversely reinforces the trend already set by globalisation - not a natural law but a myth entrenched by mere repetition, and in some instances certainly by deliberate cultivation. From this perspective, the aims of global de-growth, in terms of energy and material throughput, and de-globalisation, in terms of free trade and free capital mobility, are perhaps not sufficient but clearly necessary conditions for achieving environmental sustainability and for protecting, and in some cases even restoring, freedom and democracy.
Contrastingly, arguments for freedom and democracy are raised against policy proposals aiming at de-growth/growth towards a steady-state. It is believed that by allowing too much intrusion of the state into the ecological realm, we will be on the road to serfdom, in which a tyranny may emerge in the form of an 'eco-dictatorship'. It would be foolish to deny this possibility. Although societies may have latent totalitarian forces waiting for their political window of opportunity to curtail freedom in the name of ecological salvation - or in the name of other societal goals for that matter - the arguments laid down above indicate that the more we accelerate entropy through unnecessary growth, the greater the chances opened to a potential 'eco-dictator'. Indeed, casual empiricism shows that strict hierarchical control of throughput and, therefore, of social life is often witnessed in places where resources are extremely scarce, for instance in small ships, space shuttles and the like.
On the other hand, those arguing against the intrusion of the state for reasons of preserving freedom will have a difficult time arguing against some of the most famous philosophers of the subject, such as J.S. Mill. Furthermore, a classical liberal less known than Mill in Anglo-Saxon countries, but from whom Mill took inspiration, was Wilhelm von Humboldt. In his inquiry on the Limits of State Action, written in the late eighteenth century, he stressed that theory must be guided by the attempt to achieve the greatest freedom possible, while coercion must be guided by reality, hence: Either man or the situation is not yet adapted to receive freedom, so that freedom would destroy the very conditions without which not only freedom but even existence itself would be inconceivable [. . .]. (1993: 144-145. Emphasis supplied) It can be argued that the meaning of freedom has progressed ever since, but if this progress is meant to be the purposeful conditioning of the human mind to disregard the natural tendency towards satiation in order to have the freedom of choice between hundreds of brands, given growth-necessities, then this progress would appear to be rather a regress in the conceptualisation of freedom. Under this frame, 'consumer sovereignty' becomes a cynical notion.
Another argument often put forth against the policy proposals which emerge from de-growth and the SSE is that their proponents want industrial societies to go back to the caves, or rather to the trees. This argument overlooks that the challenge consists precisely of institutionally channelling technological progress, that is, innovation and efficiency, in a manner which leads us to a material steady state. This order is necessary, for innovation and efficiency first will not yield frugality second - unless frugality is dismissed as a precondition to cope with ecological problems. Besides, there are already hundreds of local projects attempting to live up to frugality in which high-tech is used, thus encouraging self-sufficiency in energy, that is, photovoltaics and small farms for bio-fuels; democratic participation and cooperation facilitated through social media; but also urban gardening, co-housing, local monetary policy, and so on. All of this requires technical knowledge in agronomy, architecture and economics. It must also be mentioned that these local projects are not only a product of the bucolic romanticism of the rich, as they are sometimes portrayed, but an act of reflexive self-protection and justice. It is an act of reflexive self-protection if it holds true that we are on the downward path of Hubbert's curve - let alone the threats of climate change; and an act of justice if we resist rationalising, under the label of development, the emerging trend of buying large tracts of land in poor countries ('land grabbing') for the purpose of securing future fuel for the globally increasing and constantly renewed automobile fleet. Those who tremendously value human ingenuity in the realm of technology too often do not value human ingenuity in the social realm. True, 'social experiments' have failed dismally in the past, yet the same judgment can be made of certain technological experiments.
From the previous discussion, what are the emerging prospects for scholars, at least for those sharing the view that global growth must cease and converge towards a SSE? In recent years, there has been a renewed interest in re-evaluating GDP growth. An example of this can be found in France, where a commission led by Stiglitz published a report on the issue in 2009, and presently a similar commission is working on the same topic in Germany. Prior to these reports, there was an increasing number of publications dealing with the measurement of the many aspects of human happiness and welfare. These studies can be added to the vast body of research on green indexes which emerged from the interrelated debates on sustainability and growth. Concerning de-growth and the SSE, there is still room for research regarding potential combinations with previous indexes and for different regions. It is important, however, to highlight that although such indexes are undoubtedly needed, they must be complemented with the additional study and evaluation of alternative institutional arrangements. These alternatives may take the form of encouraging other legal forms of companies, such as cooperatives, family firms and foundations which, unlike joint-stock companies, are more interested in a steady income stream than in profits and expansion, as Binswanger (2009) explains. The assessment of these institutional forms, which could make economies less dependent on economic growth with reference to factors such as employment, is of vital importance. Otherwise, the discussion on metrics will remain a modern Platonism.
As previously mentioned, there are already hundreds of on-going local experiments consciously practicing frugality which may require closer study regarding, for example, how they function and what the potential is for extending these models regionally and beyond. These 'experiments' are not only being pursued in local villages in rich countries using sophisticated tools, but also in poor countries - poor in income terms. In Latin America there are larger attempts to re-build sustainable societies, which are guided to some extent by autochthonous notions. They are, needless to say, highly controversial and even antagonised from inside and outside. For instance, a couple of years ago the constitutions of Bolivia and Ecuador introduced the indigenous notion 'sumak kawsay' (good life) as an overriding societal goal instead of economic growth and development. Regardless of the difficulties of understanding this notion, it is enough to state that it gives nature or 'Pachamama' (mother earth) an overriding place, in which human life and other sentient beings are contained. It follows that Pachamama cannot possibly be abused for insatiable human wants. Whilst being cautious with comparisons, it may resemble the line of argument of Polanyi with his term 'embeddedness', unlike the disembedded spheres or quasi-independent pillars of SD. If this comparison were allowed, it would support the theory that in the history of humanity nature once had a sacred place in culture, and that the deviation from this pattern is, by historical standards, rather novel. Anyhow, the study of these attempts, their on-going successes and failures, opens up possibilities for cross-national comparative research and, broadly understood, for international research cooperation.
Retrospectively seen, it seemed naïve when ecologists and some economists in the 1970s assumed that the product of small scientific revolutions, evidence, logic, refined modelling and common sense would be enough to induce decision-makers to actually make rational decisions, thus ignoring the inherent messiness of human affairs. Although disciplinary research has become more holistic in methodology and content, it still aims almost solely at the provision of advice to decision-makers. To tackle this deficit, a new concept has been attracting attention in recent years: transdisciplinarity. In 't Veld (Chap. 1 in this book) presents a concise definition: Transdisciplinarity is to be defined as the trajectory in a multi-actor environment from both sources: from a political agenda and existing expertise, to a robust, plausible perspective of action.
From this definition the notion 'political agenda' should be underscored. In line with what has been written thus far, the understanding of political agenda should not be restricted to the agendas that professional politicians at the regional or national level happen to have at a given moment, especially because these are usually pro-growth agendas. This argument is also supported by the schematic representation of the 'knowledge democracy' also detailed by in 't Veld. The third order of the scheme connects transdisciplinarity with participatory democracy and bottom-up media. These connections support the cause for the local. From this perspective, the action of 'boundary workers' should also include, and maybe even rather focus upon, the boundary-work between science and community. This is what is habitually referred to as education, bearing in mind that modern educators mostly recognise the reciprocal character of their activities, that is, in the act of educating, they are also educated.
This idea is far from exceptional. In recent history, in the realm of economics and in a core country, it was Milton Friedman who initially understood that the role of the scholar should not be restricted to talking or giving advice to professional politicians, but should extend directly to communities by means of numerous conferences and videos; in a time when social media was nonexistent. 81 The redirection of at least a portion of the academic resources and efforts spent on advising established decision-makers towards educating and learning from non-partisan representatives of civil society is also necessary for the following reasons: the almost immediate effects of an economic crisis (no-growth or 'negative' growth) mobilise societies in a direction - whatever it may be - while most ecological problems seem distant. These problems happen in slow motion, sometimes not even discovered given many nonlinear processes and medium-term uncertainties, and they impact first and predominantly powerless nations. These features allow for adaptation and oblivion. Moreover, a cornucopian promising Eden on earth by letting business go on as usual, and the neo-classical economist insisting that the only need is to get the prices 'right' in order to internalise social costs, will win over the 'pessimist' preaching old-fashioned frugality and prudence. Indeed, the latter will meagrely counteract the enormous advertising budgets and the large adherence of the 'top-down media' to the growth call.
The former reflections do not imply a replacement of disciplinary/interdisciplinary science (in the sense discussed in this book). It would be a mistake to become too enamoured with the local and with transdisciplinarity, for the following reasons. In the social realm, the preference for the local is merely because it is hoped that the constituents of professional politicians may be able to find new democratic ways of compelling them to abandon 'growthmania' and to correspondingly make policy proposals for a SSE. In other words, for the social researcher, as a member of the community, the hope rests in the ability to co-trigger a wide reflexive process or, being momentarily Hegelian, to further advance the de-growth antithesis. However, any local, regional or even national attempts can be easily discouraged at the international level, which feeds back to the national one given the present forces of competition under which the current world functions. This case is crystal clear in the failed ratification of the Kyoto protocol and the uncertainty of the process in the following years. The same problem could be predicted if a serious attempt were undertaken to tackle the Jevons' paradox in the way proposed by Daly. On the same grounds, in the realm of the social sciences, it would be a mistake to become too obsessed with transdisciplinarity. This is because, as usual, any given methodology must be subjected to the scope and nature of the research problem. At this level, it is disciplinary/interdisciplinary science which must tackle the most formidable question of policy: how to transcend the international growth-race?
Mill, the intellectual grandfather of the stationary economy, explained that although this state was necessary, for 'the safety of national independence it is essential that a country should not fall much behind its neighbors in these things' (2004 [1848]: 690), 'these things' being increased production and accumulation. Back in the 1970s, Daly and Dutch politician Sicco Mansholt saw a potential and promising 'deal'. This deal was the negotiation of economic de-growth in affluent countries for population de-growth in poor countries. This door seems by now to be entirely closed. Would China, for instance, which has saved a great deal of GHGs through the one-child policy and which invests vast amounts of capital in green sources of energy, agree with a view to becoming 'frugal'? Would the Chinese revive the habit of bicycle transportation, gradually lost in recent years, and stop growing their car fleet while the most important overgrown countries do not even consider de-growing their economies, arguably for the reasons given by Mill more than one and a half centuries ago? From this angle, it is difficult not to succumb to real pessimism about the international political ability to reverse what is truly new under the sun: the disproportionate space taken by humankind within the natural world. Georgescu-Roegen (1975: 379), with his usual causticity, once speculated: Will mankind listen to any program that implies a constriction of its addiction to exosomatic comfort? Perhaps, the destiny of man is to have a short, but fiery, exciting and extravagant life rather than a long, uneventful and vegetative existence. Let other species - the amoebas, for example - which have no spiritual ambitions inherit an earth still bathed in plenty of sunshine.
At least his dream of attempting to harness solar energy has recently found its way into international politics, and his recommended ethical principle of leaving as intact a planet as possible, that is, its life-support functions and services, for the future generations was also adopted by the sustainability discourse years later. I believe it is a good principle in spite of the difficulties in defining the time span meant by 'future generations' and the fact that it may invite present inaction. It is a good principle in the sense that it is the only thing that the present generations can indeed do for the future ones, as happiness, welfare or even dignity are not transferable. If the future generations made themselves miserable with a relatively intact planet, this would be a choice which present generations would hardly be able to influence.
Acknowledgements
I am grateful for clarifications concerning the approach followed by Professor Serge Latouche, as well as to Prof. Dr. Angelika Zahrnt for her precise clarifications on the growth debate in Europe and Germany, in which she has been an important actor in co-shaping the ecological discourse. This contribution was, at its very conception, enhanced by the challenging comments which I received from Professor Roeland Jaap in 't Veld and Dr. Louis Meuleman. General yet equally helpful comments came from the observations of my colleagues in the TransGov project and my more distant colleagues at the Institute for Advanced Sustainability Studies. Finally, I owe infinite gratitude to Professor Klaus Töpfer as he allowed me the complete intellectual freedom necessary to pursue this controversial topic. This contribution also profited from his vast experience in politics and education pertaining to economics and the environment in many places and positions around the world. Any errors are entirely mine.
Open Access. This chapter is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited. | 2019-05-17T14:38:28.681Z | 2013-01-01T00:00:00.000 | {
"year": 2012,
"sha1": "58b419948880bf83886bb64c889c02373615516f",
"oa_license": "CCBYNC",
"oa_url": "https://link.springer.com/content/pdf/10.1007/978-3-642-28009-2_3.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "59f4e3b4ded0cd7119026fc8d869c7f2133f2035",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Political Science"
]
} |
256370714 | pes2o/s2orc | v3-fos-license | On balayage and B-balayage operators
Here we consider the balayage operator in the setting of $H^p$ spaces and its Bergman space version (B-balayage) introduced by H. Wulan, J. Yang and K. Zhu \cite{WYZ}, and extend some known results on these operators.
Introduction
Let D denote the unit disk {z ∈ C : |z| < 1} and T the unit circle. For 0 < p < ∞, the Hardy space H^p consists of all functions f which are holomorphic on D and satisfy the boundedness condition recalled below. It is known that each function f ∈ H^p has the radial limit f(e^{it}) = lim_{r→1^-} f(re^{it}) a.e. on T and f(e^{it}) ∈ L^p(T).
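The displayed condition defining H^p appears to have been lost in extraction; the standard definition, which the surrounding text presumably intends, is:

```latex
\[
  \|f\|_{H^p}^{p} \;=\; \sup_{0<r<1} \frac{1}{2\pi}\int_{0}^{2\pi} |f(re^{it})|^{p}\,dt \;<\; \infty .
\]
```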
For a finite positive Borel measure µ on D, the function S_µ defined by the Poisson integral of µ (formula (1), reconstructed below) is called the balayage of µ. It follows from Fubini's theorem that S_µ(e^{it}) ∈ L^1(T) (see [3, p. 229]).
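The displayed formula (1) defining the balayage did not survive extraction. Based on the standard definition used in the cited literature (e.g. [3]), it is presumably

```latex
\[
  S_\mu(e^{it}) \;=\; \int_{D} P_z(e^{it})\, d\mu(z),
  \qquad\text{where}\qquad
  P_z(e^{it}) \;=\; \frac{1-|z|^{2}}{|e^{it}-z|^{2}}
\]
```

is the Poisson kernel of the disk; this reconstruction is consistent with the later reference to the Poisson kernel in the proof of Theorem 2.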
If I ⊂ T is an arc, the Carleson square S(I) is defined in the usual way (one common convention is recalled below). A positive Borel measure µ is called an s-Carleson measure, 0 < s < ∞, if there exists a positive constant C = C(µ) such that µ(S(I)) ≤ C(µ)|I|^s for any arc I ⊂ T.
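The displayed definition of the Carleson square was dropped in extraction; a common convention (an assumption here, since normalisations vary between authors) is

```latex
\[
  S(I) \;=\; \Bigl\{\, re^{i\theta} \in D \;:\; 1-\tfrac{|I|}{2\pi} \le r < 1,\ e^{i\theta}\in I \,\Bigr\},
\]
```

with |I| denoting the arc length of I.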
A 1-Carleson measure is simply called a Carleson measure. In [1] Carleson proved that if µ is a positive Borel measure in D, then for 0 < p < ∞, H^p ⊂ L^p(dµ) if and only if µ is a Carleson measure.
It has been proved in [3, p. 229] that if µ is a Carleson measure, then S_µ belongs to BMO(T). However, the Carleson property of the measure µ is not a necessary condition for S_µ to be a BMO(T) function ([5]).
In the next section we obtain an extension of the above-mentioned result. More exactly, we prove that if µ is an s-Carleson measure, 0 ≤ s < 1, then S_µ belongs to L^{1,s}.
In [6] H. Wulan, J. Yang and K. Zhu introduced the Bergman space version of the balayage operator on the unit disk, which was called the B-balayage. The B-balayage of a finite complex measure µ on D is given by the formula reconstructed below. It has been proved in [6] that if µ is a 2-Carleson measure, then there exists a constant such that a Lipschitz-type estimate holds, where β is the hyperbolic metric on D (see the reconstruction below). Here, applying a similar idea to that used in the proof of this result, we prove
Theorem 1. Assume that 1 < p < ∞ and µ is a positive Borel measure on D. If µ is a 2p-Carleson measure, then there exists a positive constant such that the asserted estimate holds for all z, w ∈ D.
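The displayed formulas here did not survive extraction. Based on the B-balayage as introduced by Wulan, Yang and Zhu [6], the definition and the quoted 2-Carleson estimate are presumably of the following form; treat both as reconstructions rather than verbatim statements of this paper:

```latex
\[
  G_\mu(z) \;=\; \int_{D} \frac{(1-|w|^{2})^{2}}{|1-\bar{w}z|^{4}}\, d\mu(w), \qquad z \in D,
\]
\[
  |G_\mu(z) - G_\mu(w)| \;\le\; C\,\beta(z,w) \qquad \text{for all } z, w \in D,
\]
```

where β denotes the hyperbolic metric on D.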
Here C will denote a positive constant which can vary from line to line.
Balayage operators and Campanato spaces L^{1,s}
We start with the following
Theorem 2. If µ is an s-Carleson measure, 0 < s ≤ 1, S_µ is given by (1) and 0 ≤ γ < 1, then there exists a positive constant C such that for any
Proof. Without loss of generality we can assume that |I| < 1.
Let P_z(e^{iθ}), z ∈ D, θ ∈ R, be the Poisson kernel for the disk D, and apply the Fubini theorem. For a subarc I of T, let 2^n I, n ∈ N, denote the subarc of T with the same center as I and length 2^n |I|.
In case (i) we have So, if e^{iθ}, e^{iϕ} ∈ I, then Now we turn to case (ii). Then for e^{iψ} ∈ I, Consequently, for e^{iθ}, e^{iϕ} ∈ I, we get Now we put Q_n = S(2^n I), n = 1, 2, . . . Then by (4) and (5), The above inequality and (3) imply The next theorem shows that if µ is an s-Carleson measure, 0 < s ≤ 1, then S_µ is in the Campanato space L^{1,s}.
Theorem 3. If µ is an s-Carleson measure on D, 0 < s ≤ 1, and S_µ(t) = S_µ(e^{it}) is the balayage operator of µ given by (1), then there exists a positive constant C such that for any and the inequality follows from Theorem 2 with γ = 0.
B-balayage for weighted Bergman spaces A^p_α
Recall that for 0 < p < ∞ and −1 < α < ∞, the weighted Bergman space A^p_α is the space of all holomorphic functions in L^p(D, dA_α), where dA_α is the weighted area measure recalled below and dA is the normalized Lebesgue measure on D, that is, ∫_D dA = 1. If f is in L^p(D, dA_α), we write ‖f‖_{p,α} for its norm. It is well known that for 1 < p < ∞ the Bergman projection P_α, given below, satisfies P_α f = f for all f ∈ A^p_α. The next corollary is an immediate consequence of the Proposition.
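The displayed definitions of dA_α, of the norm on L^p(D, dA_α) and of the Bergman projection were lost in extraction; the standard forms, which the surrounding text presumably intends, are:

```latex
\[
  dA_\alpha(z) \;=\; (\alpha+1)\,(1-|z|^{2})^{\alpha}\, dA(z),
  \qquad
  \|f\|_{p,\alpha} \;=\; \Bigl(\int_{D} |f(z)|^{p}\, dA_\alpha(z)\Bigr)^{1/p},
\]
\[
  P_\alpha f(z) \;=\; \int_{D} \frac{f(w)}{(1-z\bar{w})^{2+\alpha}}\, dA_\alpha(w), \qquad z \in D .
\]
```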
Corollary. [6] For α > −1 and σ > 0, let µ, ν be positive Borel measures on D related as in [6]. Then µ is an A^p_α-Carleson measure if and only if ν is an A^p_{α+σ}-Carleson measure. Recall that for 1 < p < ∞, the Besov space B_p is the space of all functions f analytic on D satisfying the integrability condition recalled below, where dτ is the Möbius invariant measure on D.
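The displayed Besov-space condition and the formula for dτ were dropped in extraction; the standard definitions, assumed here following the usual conventions in the Bergman-space literature, are:

```latex
\[
  \|f\|_{B_p}^{p} \;=\; \int_{D} \bigl(1-|z|^{2}\bigr)^{p}\,|f'(z)|^{p}\, d\tau(z) \;<\; \infty,
  \qquad
  d\tau(z) \;=\; \frac{dA(z)}{(1-|z|^{2})^{2}} .
\]
```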
We will use the fact that the Besov space B_p = P_α(L^p(dτ)). The proof of this equality for α = 0 is given in [8, p. 90-92], and similar arguments can be used for α > −1. In particular, if f = P_α g, where g ∈ L^p(dτ), then it follows from [4, Theorem 1.9] that
(6) ‖f‖_{B_p} ≤ C_{p,α} ‖g‖_{L^p(dτ)}.
The next theorem gives a Lipschitz type estimate for functions in the analytic Besov space.
Theorem 5. [9] For any 1 < p < ∞, there exists a constant C_p > 0 such that for all f ∈ B_p and z, w ∈ D, where 1/p + 1/q = 1.
Proof of Theorem 1. For z, w we have Since µ is a finite measure on D, the Jensen inequality yields Let q > 1 be the conjugate index for p, that is, 1/p + 1/q = 1. Then 2p = 2 + 2/(q−1) and, under the assumption, µ is an A^r-Carleson measure. Put g = (α + 1)^{1/q}(1 − |w|²)^{(α+2)/q} f and observe that ‖f‖_{q,α} ≤ 1 if and only if ‖g‖_{L^q(dτ)} ≤ 1. Moreover, since α = 2/(q−1) satisfies (α+2)/q = α, we get sup where the last inequality follows from Theorem 5 and inequality (6).
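The displayed inequality in Theorem 5 above did not survive extraction. The known Lipschitz-type estimate for analytic Besov functions, which is presumably what is quoted from [9], reads:

```latex
\[
  |f(z) - f(w)| \;\le\; C_p\, \|f\|_{B_p}\, \beta(z,w)^{1/q},
  \qquad z, w \in D, \quad \tfrac{1}{p}+\tfrac{1}{q}=1 .
\]
```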
The Bergman projection P_α is a bounded operator from L^p(D, dA_α) onto A^p_α. For z, w ∈ D, let the function ϕ_z(w) = (z − w)/(1 − z̄w) denote the automorphism of the unit disk D. The hyperbolic metric on D is given by β(z, w) = (1/2) log((1 + |ϕ_z(w)|)/(1 − |ϕ_z(w)|)). For z ∈ D and r > 0 the hyperbolic disk with center z and radius r is D(z, r) = {w ∈ D : β(z, w) < r}. For s > 1 the condition for an s-Carleson measure given in the Introduction is equivalent to the condition where Carleson squares are replaced by hyperbolic disks. More exactly, we have the following
Proposition. [10, 2] Let µ be a positive Borel measure on D and 1 < s < ∞. Then the following statements are equivalent: (i) µ is an s-Carleson measure; (ii) µ(D(z, r)) ≤ C(1 − |z|²)^s for some constant C depending only on r, for all hyperbolic disks D(z, r), z ∈ D. For α > −1, (α + 2)-Carleson measures are characterized by the following result. | 2018-10-09T10:36:04.000Z | 2018-10-09T00:00:00.000 | {
"year": 2018,
"sha1": "92f4b82be29c1eb36b485585234b445721dfdf73",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s40315-019-00277-w.pdf",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "92f4b82be29c1eb36b485585234b445721dfdf73",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": []
} |
270587191 | pes2o/s2orc | v3-fos-license | Rheumatoid arthritis and changes on spirometry by smoking status in two prospective longitudinal cohorts
Objective To compare longitudinal changes in spirometric measures between patients with rheumatoid arthritis (RA) and non-RA comparators. Methods We analysed longitudinal data from two prospective cohorts: the UK Biobank and COPDGene. Spirometry was conducted at baseline and a second visit after 5–7 years. RA was identified based on self-report and disease-modifying antirheumatic drug use; non-RA comparators reported neither. The primary outcomes were annual changes in the per cent-predicted forced expiratory volume in 1 s (FEV1%) and per cent predicted forced vital capacity (FVC%). Statistical comparisons were performed using multivariable linear regression. The analysis was stratified based on baseline smoking status and the presence of obstructive pattern (FEV1/FVC <0.7). Results Among participants who underwent baseline and follow-up spirometry, we identified 233 patients with RA and 37 735 non-RA comparators. Among never-smoking participants without an obstructive pattern, RA was significantly associated with more FEV1% decline (β=−0.49, p=0.04). However, in ever smokers with ≥10 pack-years, those with RA exhibited significantly less FEV1% decline than non-RA comparators (β=0.50, p=0.02). This difference was more pronounced among those with an obstructive pattern at baseline (β=1.12, p=0.01). Results were similar for FEV1/FVC decline. No difference was observed in the annual FVC% change in RA versus non-RA. Conclusions Smokers with RA, especially those with baseline obstructive spirometric patterns, experienced lower FEV1% and FEV1/FVC decline than non-RA comparators. Conversely, never smokers with RA had more FEV1% decline than non-RA comparators. Future studies should investigate potential treatments and the pathogenesis of obstructive lung diseases in smokers with RA.
WHAT IS ALREADY KNOWN ON THIS TOPIC
⇒ Restrictive and obstructive lung diseases are prevalent in rheumatoid arthritis (RA) and are associated with increased mortality.
⇒ There have been limited investigations evaluating changes in pulmonary function measures over time comparing patients with and without RA.
WHAT THIS STUDY ADDS
⇒ After 5-7 years of follow-up after baseline spirometry, patients with RA who never smoked had more decline in per cent-predicted forced expiratory volume in 1 s (FEV1%) than non-RA comparators.
⇒ Patients with RA who ever smoked ≥10 pack-years had less decline in FEV1% and FEV1/forced vital capacity (FVC) than non-RA comparators.
⇒ The observed associations were most prominent in patients with RA who had a baseline obstructive pattern and were not attributable to differences such as smoking levels or baseline spirometric measures.
HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY
⇒ These results suggest that patients with RA who have smoked and have an obstructive pattern may represent a unique phenotype with less decline than expected.
⇒ Further studies are required to explore the mechanisms and potential reversibility of systemic inflammation, autoimmunity and the impact of RA treatments on pulmonary function.
Patients with RA are at an increased risk for abnormalities of both restrictive and obstructive patterns on spirometric measures, which may not be explained by smoking. 7 8 Recently, studies have shown that restrictive and obstructive lung diseases are not only prevalent in RA but also associated with high mortality rates. For instance, mortality rates in RA-ILD patients have been extensively described. 9 10 A population-based cohort study suggests 1-year mortality rates of 13.9% in RA-ILD and 3.8% in non-ILD RA patients. 11 Additionally, our research group previously showed a twofold higher risk of death in RA individuals with subclinical interstitial lung abnormalities compared with their non-RA comparators. 12 Patients with RA with obstructive lung disease experience higher mortality rates compared with both patients with RA without obstructive lung disease 13 14 and non-RA patients with obstructive lung disease. 13 Despite these reported high mortality rates in patients with RA with pulmonary involvement, knowledge regarding the differences between pulmonary diseases in patients with and without RA is limited. Addressing this knowledge gap is crucial for understanding pathogenesis, disease course and potential treatment approaches. Previous studies have highlighted associations between obstructive/restrictive patterns on pulmonary function tests (PFT) and the presence of rheumatoid factor and anti-cyclic citrullinated peptide antibodies, suggesting that lung diseases in RA patients may have a distinct pathogenesis from those in patients without RA. 15 However, investigations into the progression of lung diseases in individuals with RA compared with those without RA are lacking.
In this study, we aimed to investigate whether longitudinal changes in spirometric measurements differ between individuals with RA and non-RA comparators. We used data from two large prospective cohorts: one representing the general population and another comprising smokers at high risk for pulmonary diseases. We hypothesised that RA is associated with more decline in spirometric measurements for both restrictive and obstructive patterns, independent of smoking and other potential confounders.
Study population and design
We analysed two data sources: the UK Biobank, representing a cohort of the general population, and COPDGene, representing a cohort of individuals with or at high risk for respiratory disease due to smoking. The UK Biobank is a national prospective cohort encompassing over 500 000 participants in the United Kingdom. Details of the study regarding the cohort have been previously described. 16 Briefly, adults aged 45-80 years were randomly selected from the National Health Service between 2006 and 2010. 17 Baseline visits were performed at 22 sites across the UK, involving health questionnaires, spirometric assessments and laboratory and imaging tests. A subset of baseline participants was invited to attend a follow-up visit, where they underwent a second spirometry examination about 7 years later.
Details of the COPDGene study have been described previously. [20] Briefly, non-Hispanic White or Black individuals aged 45-80 years who had a smoking history of at least 10 pack-years were recruited at 21 clinical centres in the USA between 2007 and 2011. The cohort included smokers both with and without baseline obstructive lung disease, with the goal of ensuring the inclusion of one-third Black participants. The baseline enrolment process involved health questionnaires, spirometric assessments and high-resolution chest CT scans. Notably, individuals with known respiratory diseases (other than asthma or COPD) and those displaying significant interstitial lung disease (ILD) or bronchiectasis on chest CT were deemed ineligible. Participants were asked to return for a follow-up evaluation 5 years after the baseline visit. COPDGene received ethical approval from each site, and all participants provided written informed consent. This analysis was approved by the Mass General Brigham Institutional Review Board.
In our study, we included participants who underwent baseline and follow-up spirometry and had results for both the forced expiratory volume in 1 s (FEV1) and forced vital capacity (FVC). Additionally, smoking data were required as a key covariate and for stratification of analyses.
RA cases and non-RA comparators
To identify participants with RA, we used a combination of self-reported physician-diagnosed RA and the use of at least one disease-modifying antirheumatic drug (DMARD) at baseline in both the UK Biobank and COPDGene cohorts. Recognising the potential limitations of relying solely on self-reported RA status, we adopted a case definition that demonstrated improved validity by incorporating DMARD use, as a prior study employing a similar case definition reported a positive predictive value (PPV) of 88%. 21 We included medications approved for RA by the US Food and Drug Administration and other DMARDs previously validated for the identification of patients with RA in cohort studies. 22 To establish a non-RA comparator group, we defined them as participants without a reported history of RA and who were not using any DMARDs at baseline. In addition, those reporting a history of RA but not indicating DMARD use and those on one or more DMARDs without a history of RA were excluded from the comparator group. We have previously used these definitions of RA cases and non-RA comparators in studies using the UK Biobank and COPDGene data. 8 12 13
Spirometric measures
In this study, spirometry was performed by trained clinical coordinators or respiratory therapists using the research protocol previously described in the UK
Biobank 16 and COPDGene. 20 FEV1 and FVC were measured at both baseline and follow-up visits. The per cent predicted values for FEV1 and FVC (FEV1% and FVC%, respectively) were computed, adjusting for the individual's age (including known decline with ageing), sex and height, using race-neutral GLI global 2022 equations. 23 The calculations were processed using the R package 'rspiro'. Before spirometry, bronchodilator medication was administered only in the COPDGene cohort, and post-bronchodilator spirometric results were used in the analyses. Annual changes in FEV1%, FEV1/FVC ratio and FVC% were calculated by comparing values at the two visits and factoring in each participant's time in years from the baseline visit to follow-up. An obstructive pattern was defined as an FEV1/FVC ratio <0.7, whereas preserved ratio impaired spirometry (PRISm) was characterised by the absence of an obstructive pattern and FVC% <80%, employing standard cut-off points widely used in both clinical practice and research studies. 24 25
Covariates
For both cohorts, we collected information on age at the baseline visit, sex and self-reported race. Additionally, baseline measurements, including height, body weight and body mass index (BMI), were recorded. Smoking-related data, such as smoking status (never, former or current) and smoking pack-years, were extracted from the baseline health questionnaire. Furthermore, the presence of chronic respiratory illnesses (including asthma, bronchiectasis, COPD, ILD and idiopathic pulmonary fibrosis) and medications for these diseases (either inhaled or systemic) were identified using self-reported information collected during the baseline visit.
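To make the derived variables concrete, the following is a minimal sketch in Python of how the annual changes and the baseline spirometric pattern described above could be computed. The column names (fev1_pct_baseline, years_between_visits, etc.) and the pandas-based layout are illustrative assumptions, not the authors' actual code; the published analyses were performed in SAS.

```python
import pandas as pd

def add_spirometry_outcomes(df: pd.DataFrame) -> pd.DataFrame:
    """Add annual spirometric changes and baseline pattern labels.

    Expects per-participant columns (hypothetical names):
      fev1_pct_baseline, fev1_pct_followup   - per cent-predicted FEV1
      fvc_pct_baseline,  fvc_pct_followup    - per cent-predicted FVC
      fev1_fvc_baseline, fev1_fvc_followup   - FEV1/FVC ratio
      years_between_visits                   - follow-up time in years
    """
    out = df.copy()
    yrs = out["years_between_visits"]

    # Annual change = (follow-up value - baseline value) / follow-up time in years.
    out["fev1_pct_annual_change"] = (out["fev1_pct_followup"] - out["fev1_pct_baseline"]) / yrs
    out["fvc_pct_annual_change"] = (out["fvc_pct_followup"] - out["fvc_pct_baseline"]) / yrs
    out["fev1_fvc_annual_change"] = (out["fev1_fvc_followup"] - out["fev1_fvc_baseline"]) / yrs

    # Baseline pattern: obstructive if FEV1/FVC < 0.7;
    # PRISm if the ratio is preserved but FVC% < 80; otherwise normal.
    obstructive = out["fev1_fvc_baseline"] < 0.7
    prism = (~obstructive) & (out["fvc_pct_baseline"] < 80)
    out["baseline_pattern"] = "normal"
    out.loc[prism, "baseline_pattern"] = "PRISm"
    out.loc[obstructive, "baseline_pattern"] = "obstructive"
    return out
```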
Statistical analysis
We reported the baseline characteristics and spirometric measurements at both baseline and follow-up visits, using frequencies, proportions and means with SD or medians with IQRs for RA cases and non-RA comparators in the UK Biobank and COPDGene cohorts. For the UK Biobank, subgroup analyses were performed based on smoking status and pack-years. Specifically, participants were categorised into those who had never smoked, those who ever smoked less than 10 pack-years, and those who ever smoked at least 10 pack-years, the latter mirroring the COPDGene inclusion criteria.
We examined the associations between RA and non-RA status and annual changes in FEV1%, FEV1/FVC ratio and FVC% using univariable and multivariable linear regressions. The multivariable model was adjusted for age, sex, BMI, smoking status (current/past), pack-years, baseline spirometric results (FEV1%, FEV1/FVC or FVC%) and use of inhaled/systemic medications for obstructive lung disease. Given the longitudinal study design, only participants who attended follow-up visits, including spirometry examination, were included in the analysis. To address possible differential censoring, defined as dropout or death before the follow-up visit was due, an additional model with inverse probability of censoring weighting (IPCW) was employed within the same cohort. IPCW was calculated using the following covariates: RA/non-RA status, age, sex, race, smoking status, pack-years, body weight, BMI, self-reported symptoms of limited walking, history of cancer, use of medications for obstructive respiratory diseases and spirometric measurements at baseline. Analyses were stratified based on smoking status and cohort. Stratified analyses were also performed based on the presence or absence of an obstructive spirometry pattern at baseline. Among those with at least 10 pack-years, we also performed a pooled analysis of both cohorts to enhance power in this subgroup.
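The following is a minimal sketch of the censoring-weighted regression described above, written with Python's statsmodels rather than the SAS procedures actually used by the authors. The variable names and the logistic model for follow-up attendance are illustrative assumptions.

```python
import statsmodels.formula.api as smf

def fit_ipcw_model(baseline_df, outcome_df):
    """Fit a censoring-weighted linear model for annual FEV1% change.

    baseline_df : one row per enrolled participant, with an indicator
                  'attended_followup' plus baseline covariates.
    outcome_df  : the subset with follow-up spirometry and the outcome
                  'fev1_pct_annual_change' (hypothetical column names).
    """
    # Step 1: model the probability of completing follow-up from baseline covariates.
    censor_model = smf.logit(
        "attended_followup ~ ra + age + sex + race + smoking_status + pack_years"
        " + body_weight + bmi + limited_walking + cancer_history"
        " + obstructive_meds + fev1_pct_baseline",
        data=baseline_df,
    ).fit(disp=False)

    # Step 2: inverse-probability-of-censoring weights for the analysed subset.
    analysed = outcome_df.copy()
    analysed["ipcw"] = 1.0 / censor_model.predict(analysed)

    # Step 3: weighted multivariable linear regression for the outcome;
    # the coefficient on 'ra' is the adjusted RA vs non-RA difference.
    outcome_model = smf.wls(
        "fev1_pct_annual_change ~ ra + age + sex + bmi + smoking_status"
        " + pack_years + fev1_pct_baseline + obstructive_meds",
        data=analysed,
        weights=analysed["ipcw"],
    ).fit()
    return outcome_model
```

In this sketch, the coefficient on ra from outcome_model corresponds to the adjusted difference in annual change between RA cases and non-RA comparators reported in the tables.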
In a secondary analysis, we compared annual changes in FEV1%, FEV1/FVC ratio and FVC% in patients with RA divided into three groups according to the DMARDs used: tumour necrosis factor (TNF) inhibitors (with/without other DMARDs), methotrexate (MTX; with/without other DMARDs) and other DMARDs. We chose these groups since MTX and TNF inhibitors were the most prevalent drugs used in both cohorts. For descriptive purposes, we also reported smoking changes in RA and non-RA in COPDGene, since follow-up smoking data were not available in the UK Biobank. We also stratified the main analyses by sex to investigate possible differences.
Two-sided p values <0.05 were considered statistically significant. All analyses were performed using SAS V.9.4 (Cary, North Carolina). Patients and the public were not involved in the design or implementation of this study.
Study sample
Of the 502 378 UK Biobank participants, we identified 2222 RA cases and 301 098 non-RA comparators. Of the 10 371 COPDGene participants, 85 RA cases and 9280 non-RA comparators were identified. We excluded 107 healthy never smokers in COPDGene since there was no RA case in that subgroup. Among them, 188 RA cases and 32 560 non-RA comparators in the UK Biobank and 45 RA cases and 5175 non-RA comparators in COPDGene had follow-up spirometry results (see flow diagram in figure 1).
All COPDGene participants met the inclusion criterion of being current or past smokers with a smoking history of at least 10 pack-years, whereas over half of the UK Biobank participants were never smokers. Even among participants with a history of at least 10 pack-years, the median number of pack-years in COPDGene participants was approximately two times that in UK Biobank participants for both RA cases and non-RA comparators (20 vs 41 pack-years and 21 vs 38 pack-years, respectively). Regarding spirometric patterns at baseline, patients in the COPDGene cohort tended to have fewer normal patterns and more obstructive and PRISm patterns, and they received medications for obstructive respiratory diseases more frequently compared with participants in the UK Biobank cohort. The results for the UK Biobank subgroup of ever smokers with a history of less than 10 pack-years are presented in online supplemental table S1.
Pulmonary function measures at baseline and follow-up
While the mean FEV1% at baseline was 97.8% in RA cases and 100.0% in non-RA comparators within the UK Biobank cohort, it was 78.2% in RA cases and 81.1% in non-RA comparators within the COPDGene cohort. Furthermore, the mean FVC% at baseline was 102.6% in RA cases and 104.7% in non-RA comparators within the UK Biobank cohort and 86.8% in RA cases and 92.4% in non-RA comparators within the COPDGene cohort (table 2).
Among never smokers, the annual decline in both FEV1% and FVC% was numerically higher in RA cases than in non-RA comparators (online supplemental table S2).
Change of spirometric measures in RA cases compared with non-RA comparators
Among never smokers in the UK Biobank (n=110 RA cases and n=20 701 non-RA comparators), no significant difference was observed in the annual FEV1% and FVC% change in multivariable linear regression models with IPCW (table 3). However, in never smokers without an obstructive pattern at baseline, RA was significantly associated with more decline in FEV1% (β=−0.49, p=0.04).
In the pooled analysis of both cohorts among ever smokers with at least 10 pack-years (n=106 RA cases and n=13 237 non-RA comparators), both FEV1% and FEV1/FVC showed significantly less decline in RA cases compared with non-RA comparators (FEV1%: β=0.50, p=0.02; FEV1/FVC: β=0.31, p<0.01; table 4) after adjusting for age, sex, BMI, smoking status (current/past), pack-years, baseline spirometry results and inhaled/systemic medication use for obstructive lung diseases. These differences were even more pronounced in participants with an obstructive pattern at baseline (RA vs non-RA: FEV1%: β=1.12, p=0.01; FEV1/FVC: β=1.32, p<0.01). Given that these smokers were from two cohorts with different follow-up statuses, no analysis could be performed using the IPCW. The results for the UK Biobank subgroup of ever smokers with less than 10 pack-years are presented in online supplemental table S3.
Online supplemental table S4 summarises the results of the regression models by cohort. In COPDGene, both FEV1% and FEV1/FVC showed significantly less decline in patients with RA than non-RA comparators, especially in those with an obstructive spirometry pattern at baseline (RA vs non-RA: FEV1%: β=1.16, p=0.01; FEV1/FVC: β=1.78, p<0.01) in the multivariable linear regression models with IPCW. Within the entire UK Biobank cohort, while there were no statistical differences in the annual changes in FEV1% and FVC% between the two groups, the direction of the effect sizes was similar to COPDGene.
Types of DMARDs among RA cases
Details of the specific DMARDs used in RA cases are shown in online supplemental table S5. Approximately half of the participants in both the UK Biobank and COPDGene cohorts reported the use of MTX. Among all RA cases, no significant difference was observed in the annual FEV1%, FEV1/FVC and FVC% changes among users of TNF inhibitors, MTX and other DMARDs in the multivariable logistic regression model (online supplemental table S6). However, among ever smokers with at
Analyses stratified by sex
Results stratified by male or female sex are presented in online supplemental tables S8-S11.
DISCUSSION
In this longitudinal study involving two large prospective cohorts in which participants underwent spirometry at baseline and follow-up for research purposes, we investigated changes in respiratory function on spirometry in RA cases compared with non-RA comparators, both among smokers and never smokers. Among never smokers without an obstructive respiratory pattern at the baseline visit, RA cases had more decline in FEV1% than non-RA comparators. This finding aligns with results of previous studies reporting a higher incidence of obstructive lung diseases, such as COPD and asthma, in patients with RA compared with non-RA patients. 4-8 14 Conversely, among smokers, particularly those with a pre-existing obstructive respiratory pattern at baseline, RA cases demonstrated a lower decline in FEV1% and FEV1/FVC compared with non-RA comparators. While this result was unexpected, it raises important clinical questions regarding the pathogenesis, treatment and potential modifiability of obstructive lung disease among smokers. It is possible that obstructive lung disease in RA cases might have a unique autoimmune or inflammatory pathogenesis distinct from that in non-RA patients. It is also possible that DMARDs used in RA may have beneficial effects on pulmonary function, particularly in those with obstructive lung disease and heavy smoking. The association between RA and pulmonary airway diseases, such as asthma, bronchiectasis and COPD, is well established. 6 Observational studies have shown that airway diseases are risk factors for the development of RA, [26][27][28] and, conversely, RA is a risk factor for the development of airway diseases, 4-8 14 with particularly strong associations seen in seropositive RA. 15 26 27 These findings suggest a shared pathogenesis in mucosal immunity between these airway diseases and RA. Lymphoid aggregates near the airways and interstitium are present in patients with early RA, 29 30 and anti-CCP antibodies and rheumatoid factor can be detected in sputum earlier than in serum. 31 However, Kronzer et al reported a strong association between RA and various respiratory diseases, including COPD, asthma and other chronic upper airway diseases, only in non-smokers. 27 Our study also found an association between RA and annual decline of FEV1% only in non-smokers who did not have an obstructive pattern on baseline spirometry, suggesting the need to investigate the pathogenesis of obstructive lung diseases in patients with RA separately in non-smokers and smokers.
Dupilumab, a fully human monoclonal antibody that blocks the shared receptor component of IL-4 and IL-13, improved FEV1 in trials for COPD and asthma. 37 38 The lesser decline of the obstructive respiratory pattern observed in RA cases in our study might be attributed to a suppressed state of IL-4 and IL-13, which are associated with the progression of obstructive lung diseases. These findings are hypothesis-generating on whether obstructive lung disease in patients with RA has a distinct pathogenesis from that in patients without RA, particularly among those with a smoking history.
The favourable course of FEV1% and FEV1/FVC in patients with RA compared with non-RA comparators among smokers has important implications for the potential treatment of obstructive lung diseases. In our study, RA cases were limited to those with a self-reported RA diagnosis and DMARD use to differentiate them from those with other articular diseases, such as osteoarthritis, which is more prevalent than RA. The effects of DMARDs on obstructive lung disease have not been extensively investigated, and only a few clinical trials and observational studies have been reported. In a secondary analysis in our study, spirometry outcomes were compared based on the type of DMARD among patients with RA. Despite the limited sample size, which may lack statistical power, there was a tendency for a lower decline in FEV1% with the use of DMARDs in the following order: other DMARDs, TNF inhibitors and MTX. Even MTX, which showed a greater decline in FEV1% than other DMARDs in this analysis, has been reported to result in fewer exacerbations of COPD in a previous large-scale observational study 47 and improvement in respiratory function in patients with asthma in a randomised controlled trial. 48 Other types of DMARDs, such as TNF inhibitors and hydroxychloroquine, have also been reported to improve pulmonary function in asthma in a small case series and a clinical trial, respectively. 49 50 These findings highlight the importance of investigating the effects of DMARDs on obstructive lung diseases, particularly in smokers.
Our study had several strengths. First, to our knowledge, investigations of changes in respiratory function in patients with RA using data from large-scale cohorts, with non-RA individuals as the comparison group, have not been published before. The UK Biobank includes a diverse range of individuals from the general population.
Additionally, COPDGene focuses on a population at high risk for respiratory diseases due to smoking. Second, both cohorts had a follow-up period of approximately 5-7 years, allowing us to detect changes in respiratory function that might not be apparent over shorter durations. Also, spirometry was performed for research purposes and so is less biased than using clinically indicated spirometric results. Third, both cohorts allowed us to examine respiratory factors in detail, including smoking status, smoking pack-years and medication use for obstructive lung disease. Fourth, we addressed the possible bias caused by differential censoring, a frequently encountered problem in prospective cohorts, by incorporating the IPCW, calculated using respiratory and other factors obtained from these large-scale cohorts, into our analysis models. In addition, rich details on covariates such as BMI were available for adjustment and thus did not explain our findings. Fifth, we calculated FEV1% and FVC% using the recently developed race-neutral GLI global 2022 equations, 23 which could significantly affect the interpretation of pulmonary function defects, especially among Black individuals. 26 While the UK Biobank participants were predominantly White, the adoption of these new equations could reduce potential bias in COPDGene, wherein one-third of the participants were Black.
Our study also had some limitations. First, we relied on a combination of self-reported RA status and the use of DMARDs to identify cases with and comparators without RA. While there may be some misclassification of RA case status, previous studies using similar methods have demonstrated a PPV of 88% for identifying RA. 21 22 Also, relying only on self-reported RA without including DMARD use would increase the sample size but decrease the validity of the exposure, since self-report alone typically corresponds to a low likelihood (20% or less) of true RA. There were fewer people with RA than initially expected in the COPDGene cohort of smokers. However, people with a chronic disease such as RA may be less likely to participate in this voluntary, longitudinal study; men composed a majority (55%) of that cohort; and we may have missed true cases that did not report RA or were not currently on a DMARD. Second, neither the UK Biobank nor COPDGene was designed to investigate RA, resulting in limited information on RA-related covariates. Consequently, we lacked data on serostatus, systemic inflammation, disease activity, bone erosions and RA duration, which could have provided valuable insights into the mechanism of the effects observed in our study. However, this would require RA-only analyses, since most of these factors are not pertinent to people without RA. While we performed analyses of DMARDs, the sample size was even smaller and many specific drugs could not be examined. Studies are ongoing enrolling patients with RA to investigate how these RA-specific factors may influence spirometric changes and other markers of pulmonary health. A randomised trial investigating the effect of specific DMARDs on measures of lung health is needed for definitive conclusions. Third, despite utilising data from two cohorts with a large sample size, the RA group was relatively small, resulting in insufficient statistical power to thoroughly investigate factors associated with changes in pulmonary function among patients with RA. However, the findings were robust across two large cohorts. Studies are ongoing to investigate the impact of smoking on spirometric and chest imaging measures among only patients with RA. Fourth, there may be additional unmeasured factors that affected our findings. In particular, we had limited data on factors occurring between visits that may have differed by RA status and mediated the findings, such as changes in smoking and new-onset pulmonary diseases such as asthma or ILD. These postbaseline factors may be on the causal pathway between exposure and outcome, so they may be potential mechanisms of the effects we observed but should not be adjusted for in analyses. While smoking is known to be deleterious for lung health and can induce some forms of ILD, some paradoxical improvements have also been noted. 51 It is possible that RA affects the spirometric trajectory in such a way that we investigated a plateau phase rather than a period of decline. We are unable to fully examine the trajectory of spirometric changes with only two measures. Fifth, we did not have serial measures of chest CT findings or other PFTs such as the diffusion capacity of the lungs for carbon monoxide. In light of our findings and these limitations, we recommend that future studies incorporate an RA-specific cohort and investigate the mechanisms of pulmonary disease in patients with RA, leveraging both detailed respiratory and RA-specific data.
In conclusion, RA cases had less decline in FEV 1 % and FEV 1 /FVC than non-RA comparators among smokers, particularly among those with an obstructive pattern on baseline spirometry. These associations were not attributable to variations in smoking, suggesting that RA with obstructive lung disease may be a distinct phenotype. However, RA cases without an obstructive pattern who were never smokers had a greater decline in FEV 1 % than non-RA comparators, emphasising that these patients could have deterioration of pulmonary function beyond what is expected from ageing alone. These findings emphasise the need for further studies to explore the mechanisms and potential reversibility of systemic inflammation and autoimmunity, and the impact of RA treatment on pulmonary function. Provenance and peer review Not commissioned; externally peer-reviewed.
Figure 1 Identification of participants with RA and non-RA comparators in UK Biobank and COPDGene. BR, bronchiectasis; DMARD, disease-modifying antirheumatic drug; ILD, interstitial lung disease; PFT, pulmonary function test in spirometry; RA, rheumatoid arthritis.
Table 1 Characteristics of RA cases and non-RA comparators with longitudinal spirometric measures in the UK Biobank and COPDGene (n=37 968). UK Biobank (overall) includes never smokers, current/past smokers with <10 pack-years and current/past smokers with ≥10 pack-years. COPDGene originally recruited smokers with a history of at least 10 pack-years. †Medications for obstructive pulmonary disease include beta-agonist inhaler, inhaled/oral steroid, theophylline, ipratropium bromide and tiotropium bromide. ‡Normal pattern: FVC% ≥80% and FEV *
Table 2 Spirometric measures at baseline and follow-up (n=37 968). UK Biobank (overall) includes never smokers, current/past smokers with <10 pack-years and current/past smokers with ≥10 pack-years. COPDGene enrolled smokers with a history of at least 10 pack-years. †Annual change of spirometry measure = (value at follow-up − value at baseline)/(years from baseline visit to follow-up visit).
Table 3 Results from the linear regression of annual change of measures in pulmonary function test (PFT), comparing RA cases versus non-RA comparators among never smokers (n=20 811). Adjusted for age, sex, smoking status (current/past), pack-years, body mass index, spirometry measure (FEV 1 %, FEV 1 /FVC, or FVC%) at baseline and medication use for obstructive lung diseases. In the combined result of smokers (UK Biobank + COPDGene), the result from the multivariable model is shown instead of that from the IPCW model due to the different study designs on follow-up between the two cohorts.
Table 4 Results from the linear regression of annual change of spirometric measures, comparing RA smoker cases versus non-RA smoker comparators among current/past smokers (≥10 pack-years). Adjusted for age, sex, smoking status (current/past), pack-years, body mass index, spirometry measure (FEV 1 %, FEV 1 /FVC, or FVC%) at baseline and medication use for obstructive lung diseases. FEV 1 , forced expiratory volume in 1 second; FEV 1 %, percent predicted FEV 1 ; FVC%, percent predicted FVC; FVC, forced vital capacity; RA, rheumatoid arthritis.
* Heart, Lung, and Blood Institute (R01HL153248, R01HL149861, R01HL147148). TJD is supported by the National Institutes of Health/National Heart, Lung, and Blood Institute (R01HL155522). ZW is supported by the National Institute of Arthritis and Musculoskeletal and Skin Diseases (R01 AR080659, K23 AR073334 and R03 AR0789938). SYA is supported by the National Institutes of Health/National Heart, Lung, and Blood Institute (K08HL145118). JS is supported by the National Institute of Arthritis and Musculoskeletal and Skin Diseases (grant numbers R01 AR080659, R01 AR077607, P30 AR070253, and P30 AR072577), the R. Bruce and Joan M. Mickey Research Scholar Fund, and the Llura Gund Award for Rheumatoid Arthritis Research and Care. The COPDGene study (NCT00608764) is supported by grants from the NHLBI (U01HL089897 and U01HL089856), by NIH contract 75N92023D00011, and by the COPD Foundation through contributions made to an Industry Advisory Committee that has included AstraZeneca, Bayer Pharmaceuticals, Boehringer-Ingelheim, Genentech, GlaxoSmithKline, Novartis, Pfizer and Sunovion. The funders had no role in the decision to publish or preparation of this manuscript. The content is solely the responsibility of the authors and does not necessarily represent the official views of Harvard University, its affiliated academic health care centers, or the National Institutes of Health.
Competing interests PAJ reports grant funding and other support from Novartis, Galapagos and Boehringer Ingelheim, unrelated to this work. MM reports institutional grant support from Bayer and honoraria from Chickasaw Nation. MHC has received grant funding from Bayer, unrelated to this work. TJD received support from Bayer and has been part of a clinical trial funded by Genentech, unrelated to this study. PFD reports grant funding from Bristol Myers Squibb. ZW has received grant funding from Bristol-Myers Squibb and Principia/Sanofi and consulting fees from Viela Bio, Zenas BioPharma, Horizon Therapeutics, Sanofi, MedPace, BioCryst, Amgen, Nkarta, Inc, Adicet Bio, and Therapeutic's, and participation in a data safety monitoring board or advisory board for Sanofi, Horizon, Novartis, Visterra/Otsuka and Shionogi, unrelated to this work. GMH reports consulting fees from Boehringer-Ingelheim and the Gerson Lehrman Group, unrelated to this work. EKS has received grant support from Bayer and Northpond Laboratories, unrelated to this work. SYA reports consulting fees from Verona Pharmaceuticals and Vertex Pharmaceuticals and is cofounder and co-owner of Quantitative Imaging Solutions. RSJE reports contracts from Lung Biotechnology and Insmed, received grant support from Boehringer Ingelheim and is cofounder and an equity holder of Quantitative Imaging Solutions. GRW reports grants from Boehringer Ingelheim, consultancy for Pulmonx, Janssen Pharmaceuticals, Novartis, and Vertex, and is founder and co-owner of Quantitative Imaging Solutions. JS has received research support from Bristol Myers Squibb and performed consultancy
for AbbVie, Amgen, Boehringer Ingelheim, Bristol Myers Squibb, Gilead, Inova Diagnostics, Janssen, Optum, Pfizer, ReCor, Sobi, and UCB, unrelated to this work. Other authors report no competing interests. Patient consent for publication Not applicable. Ethics approval This study involves human participants and was approved by the Mass General Brigham Institutional Review Board, Reference number: 2020P000558. Participants gave informed consent to participate in the study before taking part. | 2024-06-20T05:05:16.385Z | 2024-06-01T00:00:00.000 | {
"year": 2024,
"sha1": "26681db94a70178eec05c6ac1ddd7df0df2c8c62",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "26681db94a70178eec05c6ac1ddd7df0df2c8c62",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
248391973 | pes2o/s2orc | v3-fos-license | The Pristine Inner Galaxy Survey (PIGS) IV: A photometric metallicity analysis of the Sagittarius dwarf spheroidal galaxy
We present a comprehensive metallicity analysis of the Sagittarius dwarf spheroidal galaxy (Sgr dSph) using $Pristine\,CaHK$ photometry. We base our member selection on $Gaia$ EDR3 astrometry, applying a magnitude limit at $G_{0} = 17.3$, and our population study on the metallicity-sensitive photometry from the $Pristine$ Inner Galaxy Survey (PIGS). Working with photometric metallicities instead of spectroscopic metallicities allows us to cover an unprecedentedly large area ($\sim 100$ square degrees) of the dwarf galaxy, and to study the spatial distribution of its members as a function of metallicity with few selection effects. Our study compares the spatial distributions of a metal-poor population of 9719 stars with [Fe/H] $<-1.3$ and a metal-rich one of 30115 stars with [Fe/H] $>-1.0$. The photometric Sgr sample also allows us to assemble the largest sample of 1150 very metal-poor Sgr candidates ([Fe/H] $<-2.0$). By investigating and fitting the spatial properties of the metal-rich and metal-poor populations, we find a negative metallicity gradient which extends up to 12 degrees from the Sgr centre (or $\sim 5.5$ kpc at the distance of Sgr), the limit of our footprint. We conclude that the relative number of metal-poor stars increases in the outer areas of the galaxy, while the central region is dominated by metal-rich stars. These findings suggest an outside-in formation process and are an indication of the extended formation history of Sgr, which has been affected by the tidal interaction between Sgr and the Milky Way.
INTRODUCTION
The variety of galaxies present in the Universe, with their different shapes, features and sizes, suggests the existence of several formation processes behind galactic structures. Owing to mutual gravitational attraction, mergers have led to the formation of larger structures. It has been widely acknowledged that satellite galaxies have been accreting onto the Milky Way (MW) (see for a review e.g. Bullock & Johnston 2005; Bland-Hawthorn & Gerhard 2016). The dwarf galaxies that orbit the Milky Way are insightful laboratories for learning about the early evolution of our Galaxy, as they are relics of the main building blocks of the Galactic halo (Frebel & Bromm 2012).
The Sagittarius dwarf spheroidal galaxy (Sgr dSph) is one such galaxy, discovered by Ibata et al. (1994) in the direction of the Galactic bulge. It is one of the biggest dwarf galaxies known around the Milky Way, with an estimated stellar mass of ∼ 4.8 × 10^8 M⊙ (Vasiliev & Belokurov 2020), and among the most luminous, with M_V ∼ −15.1/−15.5. Its core is located on the opposite side of the Galactic centre, at a relatively nearby heliocentric distance of ≈ 26.5 kpc (Monaco et al. 2004; Ferguson & Strigari 2020; Vasiliev & Belokurov 2020). Sgr is a compelling example of an on-going merger with the Milky Way, in which the system is being disrupted by the tidal interaction with our Galaxy (Ibata 1997; Mateo et al. 1998; Belokurov et al. 2014), with the first in-fall occurring about 5 Gyr ago (Ruiz-Lara et al. 2020). Many of its stars have been stripped away from the core in long tidal streams (Ibata et al. 2001; Majewski et al. 2003) that wrap around the Milky Way. Despite its inevitable destruction, the core of the Sgr dwarf galaxy is still visible. However, the projected proximity to the Galactic bulge has made the study of the dSph galaxy challenging, due to contamination from Milky Way foreground stars and extinction by interstellar dust.
With its history of tidal disruption, Sgr is a unique workshop for examining the physical aspects connected to chemical evolution from the perspective of the hierarchical galaxy formation scenario. In recent years, a number of studies have been dedicated to disentangling the complex Sgr star formation history (SFH), based either on high-resolution spectroscopy (e.g. Bonifacio et al. 2000; Smecker-Hane & McWilliam 2002; Sbordone et al. 2005; Monaco et al. 2005; Chou et al. 2007; McWilliam et al. 2013; Hasselquist et al. 2017; Hansen et al. 2018) or photometric techniques (Bellazzini et al. 1999b; Layden & Sarajedini 2000; Siegel et al. 2007). They found that Sgr has experienced many bursts of star formation that resulted in stellar populations with different ages and metallicities. These are described in detail, for instance, in the work of Siegel et al. (2007).

Sgr is one of the most massive satellite galaxies around the MW, after the Large Magellanic Cloud and Small Magellanic Cloud. The stellar mass-metallicity relation for dwarf galaxies predicts that more massive galaxies show higher average metallicity (Kirby et al. 2013). The predominance of a relatively metal-rich population in the Sgr core (with the bulk of the stars having an average [Fe/H] ∼ −0.5, Monaco et al. 2003; Siegel et al. 2007; Mucciarelli et al. 2017) makes the identification and study of metal-poor stars particularly difficult. Sgr also hosts an old and metal-poor component ([Fe/H] < −1.0 and age ∼ 10 Gyr, Monaco et al. 2003; Siegel et al. 2007; Bellazzini et al. 2008), but to date, only ∼20 very metal-poor (VMP, [Fe/H] < −2.0) Sgr stars have been discovered and studied with either high- or low-resolution spectroscopy (Bellazzini et al. 2008; Mucciarelli et al. 2017; Hansen et al. 2018; Chiti & Frebel 2019; Chiti et al. 2020). This very metal-poor population in Sgr has important implications for studying galaxy evolution. These stars are archaeological fossils from the earliest times and will unveil the primitive stellar populations in the Sgr dwarf galaxy. One theoretical expectation is that smaller dwarf systems may have contributed to the formation of more massive ones, which could have happened to Sagittarius (Chiti et al. 2020). A recent work by Malhan et al. (2022) found that the metal-poor Elqui stream is associated with Sagittarius and was likely accreted inside the Sgr dSph.

Figure 1 The PIGS CaHK fields in the Sgr region, overlaid on the extinction map of Schlegel et al. (1998), where for the sake of contrast we fix the upper limit of the colour bar at 0.5. The location of M54 has been highlighted with a blue cross.
Studying the spatial distribution of different stellar populations is key, because it helps us to understand the various episodes of star formation which have occurred during the evolution of Sgr. The correlation of the present-day spatial distributions of populations of different metallicities and ages provides hints about the primitive distribution of the gas from which they formed. Using chemical abundances of a sample of Sgr stars, Mucciarelli et al. (2017) revealed a metallicity gradient inside the core of the dwarf galaxy, supporting the hypothesis of a complex SFH. How the evolution of the galaxy has affected the spatial distribution of the different stellar populations is still an open debate, which needs more extended and comprehensive samples, especially for the more metal-poor component. For this purpose, one would ideally have a large, homogeneous and clean sample of Sgr stars with available metallicities.
The incredible data collected by the Gaia mission, and especially the arrival of the high-accuracy Gaia EDR3 astrometry (Gaia Collaboration et al. 2021), allow for the building of a robust sample of Sgr member stars. Relying on photometric metallicities instead of spectroscopic metallicities allows the use of a much larger and more homogeneous sample to investigate the global metallicity structure of the galaxy. In this context, a great data set is the photometric Pristine Inner Galaxy Survey (PIGS, Arentsen et al. 2020), consisting of metallicity-sensitive CaHK photometry of stars in the Milky Way bulge region. PIGS includes a region focused on Sgr, from the highest-density area of the system to the onset of its tidal stream. This paper is devoted to a photometric metallicity analysis of the Sgr galaxy, carried out thanks to the combination of the Gaia astrometry and broad-band photometry with the PIGS metallicity-sensitive CaHK photometry, which covers about 100 square degrees of Sgr. The data and the member selection are presented in Section 2. The combination of Gaia and Pristine leads to an unprecedentedly large Sgr sample of 44785 stars with G_0 < 17.3 and available metallicity information, enabling a wide investigation of the spatial distributions of the stellar populations of different metallicities hosted in Sgr. We analyse the different spatial distributions of populations of different metallicities ([Fe/H] < −2.0, < −1.3 and > −1.0) in Section 3, and present a large sample of 1150 new very metal-poor candidates. We study the metallicity gradient, which extends to at least ∼ 12° from the centre of the Sgr remnant. Our approach is very effective in characterising the metallicity structure of Sgr. We discuss what our results can teach us about the (early) evolution of the Sagittarius system in Section 4, and finish with conclusions and a discussion of future prospects in Section 5.
Photometry
In this paper we use the PIGS photometry in the Sgr region, see Figure 1. The Pristine survey (of which PIGS is an extension) has been ongoing since 2016 (Starkenburg et al. 2017) and has the main goal to search for and study the most metal-poor stars in and around the Milky Way. It makes use of the narrow-band CaHK filter designed for MegaCam mounted on the Canada-France-Hawaii Telescope (CFHT), which covers the Ca II H&K absorption lines that are sensitive probes of stellar metallicity. The targeting of metal-poor stars in the main Pristine halo survey has been extremely efficient (Youakim et al. 2017;Aguado et al. 2019), and this efficiency has been demonstrated in the inner Galaxy as well (Arentsen et al. 2020). The Sgr extension of PIGS has not been used before, and we present it here for the first time.
Photometric field-to-field calibration
The data reduction proceeds as in Starkenburg et al. (2017) until the field-to-field calibration step. Figure 1 presents each of the observed CaHK MegaCam images in our footprint (orange squares), which may have slightly different zero-points of the order of a few tenths of a magnitude. In the main Pristine halo survey, fields were calibrated with respect to each other by determining CaHK offsets using the red part of the stellar locus of a CaHK-SDSS colour-colour diagram (see Figure 7 in Starkenburg et al. 2017). This part of the locus mostly consists of nearby dwarf stars. For this calibration, the photometry needs to be dereddened, which was done using the same reddening map as for the halo survey, i.e. the Schlegel dust map, hereafter SFD (Schlegel et al. 1998). This map provides the integrated reddening along the line of sight, which may be different from the actual reddening to the nearby dwarfs used for the calibration. Towards the halo, this difference is small and has not been taken into account. Towards Sgr, however, the reddening can be expected to change significantly as a function of distance to the stars, because we are looking through a large part of the disc, and we should not use the integrated SFD reddening. We therefore devised a new field-to-field calibration strategy specifically for the PIGS Sgr footprint (b < −10°). Instead of using only the red part of the stellar locus, we use the full CaHK-Gaia stellar locus of nearby dwarfs and simultaneously fit for the CaHK offset (shifting the locus by a constant) and the average foreground E(B−V) for each field (changing the shape of the locus). We select dwarf stars ( < 4) with CaHK uncertainties < 0.15 mag, parallax uncertainties < 20% and distances roughly between 500 and 1000 pc (from the Gaia parallaxes), so as not to span too large a range in distances. We apply additional quality cuts to the astrometry (RUWE < 1.2; this is stricter than the cut we apply in Section 2.2 because we aim for higher quality for the calibration) and the photometry (the same cuts on the BP−RP excess factor and variability as discussed in Section 2.2) to arrive at the final calibration sample.
We select a field with relatively low E(B−V)_SFD at (l, b) = (6.25°, −21.2°) as the reference field, and correct it for extinction using the integrated SFD map (which is likely not too bad 20 degrees away from the Galactic plane) with the filter coefficients described in the next subsection. We take the median (CaHK − )_0 in bins of (BP−RP)_0 of width 0.1 (or 0.2 in the reddest part), requiring at least 5 stars per bin, and fit a 4th-order polynomial to it. We do the same in each field to be calibrated, but for the reddened colours. We then compute the χ² for a grid of ΔCaHK and E(B−V) values and adopt the best solution. We show the stellar locus of the reference field and an example field with its best fit in the top panel of Figure 2.
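A minimal sketch of this two-parameter grid search is given below, assuming for illustration that the locus is traced as a (CaHK − G)_0 versus (BP − RP)_0 relation; the Gaia extinction coefficients and the colour range of the fitted locus are rough placeholders rather than the values actually adopted.

```python
import numpy as np

# Illustrative extinction coefficients (A_X = R_X * E(B-V)); the CaHK value is
# from Starkenburg et al. (2017), the Gaia values are rough placeholders for the
# colour-dependent coefficients actually used.
R_CAHK, R_G, R_BP, R_RP = 3.924, 2.7, 3.3, 2.0

def field_offset_and_reddening(bp_rp, cahk_colour, ref_locus,
                               dcahk_grid=np.arange(-0.5, 0.5, 0.01),
                               ebv_grid=np.arange(0.0, 1.0, 0.01)):
    """Grid search for the CaHK zero-point offset and mean foreground E(B-V)
    of one field, by matching its (reddened) dwarf stellar locus to the
    dereddened reference locus (a 4th-order polynomial in (BP-RP)_0)."""
    best = (np.inf, 0.0, 0.0)
    for dcahk in dcahk_grid:
        for ebv in ebv_grid:
            bp_rp0 = bp_rp - (R_BP - R_RP) * ebv                    # dereddened colour
            colour0 = cahk_colour - dcahk - (R_CAHK - R_G) * ebv    # shifted, dereddened y-axis
            ok = (bp_rp0 > 0.5) & (bp_rp0 < 1.5)                    # stay on the fitted locus
            chi2 = np.sum((colour0[ok] - np.polyval(ref_locus, bp_rp0[ok])) ** 2)
            if chi2 < best[0]:
                best = (chi2, dcahk, ebv)
    return best  # (chi2, CaHK offset, E(B-V))
```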
We test the quality of our calibration by determining the difference in CaHK for all stars in the Sgr footprint observed on two different images, possible thanks to the overlap between fields. Taking only stars with CaHK uncertainties less than 0.01, we infer a dispersion of 0.03 mag implying an uncertainty floor of 0.021 (dividing the dispersion by √ 2), see the bottom panel of Figure 2. This is comparable to the uncertainty floor estimated for the main survey (0.02 mag, Starkenburg et al. 2017), and corresponds to a metallicity uncertainty of ∼ 0.15 dex in our methodology (see Section 3).
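For concreteness, the uncertainty floor estimate from repeated measurements could be written as in the following sketch, assuming arrays with the two CaHK measurements and their reported uncertainties for stars in the overlap regions.

```python
import numpy as np

def cahk_uncertainty_floor(cahk_1, cahk_2, err_1, err_2, err_cut=0.01):
    """Uncertainty floor from stars observed on two overlapping images:
    the dispersion of the magnitude differences (for stars with small
    reported uncertainties) divided by sqrt(2)."""
    good = (err_1 < err_cut) & (err_2 < err_cut)
    return np.std(cahk_1[good] - cahk_2[good]) / np.sqrt(2)
```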
Catalogue
We cross-match the PIGS photometry with the astrometric data from Gaia EDR3 (Gaia Collaboration et al. 2021), from which we use parallaxes and proper motions to perform a selection to isolate the members of Sgr (see Section 2.2). We also use the Gaia broad-band photometry combined with the PIGS photometry to select stars of different metallicities (see Section 3.2). Throughout this paper, we only use data cross-matched with the PIGS+Gaia catalogue for Sgr.
We correct for dust extinction using the SFD map. We use the colour-dependent extinction coefficients for the Gaia EDR3 filters from Casagrande et al. (2021) (adopting the "FSF" extinction law) and 3.924 for CaHK (Starkenburg et al. 2017).
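A minimal sketch of this dereddening step is given below; the CaHK coefficient is the value quoted above, while the function returning the colour-dependent Gaia coefficients is a hypothetical stand-in for the Casagrande et al. (2021) relations.

```python
import numpy as np

R_CAHK = 3.924  # CaHK extinction coefficient (Starkenburg et al. 2017)

def deredden(cahk, g, bp, rp, ebv, gaia_coeffs):
    """Deredden CaHK and the Gaia bands with the SFD E(B-V).

    gaia_coeffs is a hypothetical stand-in for the colour-dependent
    Casagrande et al. (2021) coefficients: it should return (kG, kBP, kRP)
    for a given observed colour and reddening."""
    kG, kBP, kRP = gaia_coeffs(bp - rp, ebv)
    return (cahk - R_CAHK * ebv, g - kG * ebv, bp - kBP * ebv, rp - kRP * ebv)
```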
Member selection
Establishing proper criteria to define the membership of stars to a dwarf galaxy is a crucial and challenging step. Sgr is highly affected by contamination from different Galactic populations, namely the Galactic bulge, the thick disk, the thin disk and the Galactic halo. It is necessary to find an accurate balance between reducing Milky Way contamination and not losing too many Sgr candidates.
We apply the following cuts to obtain Sgr members from our PIGS+Gaia catalogue. We only consider the part of the PIGS footprint with b < −10°, to avoid the region close to the Galactic plane, which will have more contamination. We isolate Sgr candidates using a cut on the parallax to remove foreground stars (|ϖ| ≤ 2σ_ϖ). We limit our analysis to stars with G_0 ≤ 17.3 to avoid fainter stars, for which the uncertainties on the Gaia astrometry and the CaHK photometry become too large. Additionally, helium-burning (red clump and horizontal branch) stars start to contribute significantly at fainter magnitudes. Deriving photometric metallicities for the (bluer) horizontal branch is more difficult, since the metallicity sensitivity of Pristine is reduced for hotter stars, our spectroscopic training sample (see the next section) does not include horizontal branch stars, and some of the stars on the horizontal branch are photometrically variable. Excluding these bluer (more metal-poor) stars while including the (more metal-rich) red clump stars would bias our sample against metal-poor stars. This is another reason to limit the analysis to brighter stars only. This leads to a sample of ∼ 52000 stars. We apply additional quality cuts on RUWE (< 1.4) and astrometric_excess_noise_sig (≤ 2) as described in Lindegren et al. (2021), and on the fidelity flag (> 0.5) from Rybizki et al. (2022).

Figure 3 Proper motions from Gaia EDR3 (Gaia Collaboration et al. 2021) for the PIGS data set after having isolated the Sgr candidates. These PM values are subtracted from the mean PM of the Sgr members following expression (4) from Vasiliev & Belokurov (2020). The Sgr galaxy is visible as a clear over-density centred at 0. We select Sgr stars within a radius of 0.6 mas yr−1, delimited by the red circle.

We constrained the photometric variability for the Gaia photometry using the Gaia catalogue parameters phot_g_n_obs (the number of observations in the G-band) and phot_g_mean_flux_over_error (the mean flux over the error in the G-band), following equations 17 and 18 in Fernández-Alvar et al. (2021). We further clean the sample using the corrected BP−RP flux excess factor: |C*| ≤ 3σ_{C*}(G) (Riello et al. 2021). These last two refinements remove 1.3 percent of the previous selection. We finally define the Sgr members with a cut on the proper motions (PMs), as shown in Figure 3. We select stars within a radius of 0.6 mas yr−1 of the mean PM of Sgr, corrected for its variation with RA and Dec as given by expression (4) of Vasiliev & Belokurov (2020) (Equations 1), where Δα = α − α_0 and Δδ = δ − δ_0 represent the difference in RA and Dec with respect to the Sgr centre. We assume the nuclear globular cluster M54 (NGC 6715) to be at the Sgr centre, with coordinates α_0 = 283.764° and δ_0 = −30.480°. We present the colour-magnitude diagram (CMD) and the Pristine colour-colour diagram for the Sgr selection (after applying the cuts above) in Figure 4. The Ca H&K term appears on the y-axis of the colour-colour diagram, making it sensitive to metallicity and creating a spread in the metallicity values along the vertical axis. One can see three distinct blobs in the colour-colour diagram. The largest blob consists of red giant branch stars in Sgr.
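A sketch of these membership cuts is given below; the coefficients of the proper-motion gradient from Vasiliev & Belokurov (2020) are not reproduced and are left as zero-valued placeholders, and the quoted mean proper motion is only approximate.

```python
import numpy as np

RA_0, DEC_0 = 283.764, -30.480     # Sgr centre assumed at M54 (deg)
PMRA_0, PMDEC_0 = -2.69, -1.35     # approximate mean Sgr proper motion (mas/yr)

def sgr_members(ra, dec, pmra, pmdec, parallax, parallax_error, g0,
                grad_pmra=(0.0, 0.0), grad_pmdec=(0.0, 0.0), pm_radius=0.6):
    """Parallax, magnitude and proper-motion membership cuts.

    grad_pmra / grad_pmdec hold the coefficients of the linear variation of
    the mean Sgr PM with (RA - RA_0, Dec - DEC_0); they are zero-valued
    placeholders here, standing in for expression (4) of
    Vasiliev & Belokurov (2020)."""
    d_ra, d_dec = ra - RA_0, dec - DEC_0
    pmra_exp = PMRA_0 + grad_pmra[0] * d_ra + grad_pmra[1] * d_dec
    pmdec_exp = PMDEC_0 + grad_pmdec[0] * d_ra + grad_pmdec[1] * d_dec

    keep = np.abs(parallax) <= 2.0 * parallax_error          # consistent with a distant system
    keep &= g0 <= 17.3                                       # magnitude limit
    keep &= np.hypot(pmra - pmra_exp, pmdec - pmdec_exp) < pm_radius
    return keep
```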
The small blob of stars around (BP − RP)_0 = 0.8 and y-axis ≈ 6.5 spans a wide range in parallax compared to the rest of the stars. These are likely foreground stars, which we decide to cut using the red dashed line in Figure 4, together with all bluer stars, which are mostly other types of contamination. With this last cut, our final Sgr sample contains 44 785 stars.
The stars in the distinct sequence in the colour-colour diagram between ∼ (BP − RP)_0 = 0.9 and 1.2 (between the red and blue dashed lines) do not have a different parallax distribution compared to the main group of stars, and follow the spatial distribution of the main group of stars as well. We hypothesise that these are not foreground stars, nor Sgr red giant branch stars, but rather Sgr helium-burning red clump/horizontal branch stars, which have slightly higher temperatures compared to the normal red giants and hence form a distinct sequence in the colour-colour diagram. The main red clump in Sgr occurs at G_0 ∼ 17.7 and is excluded from our selection; hence the brighter red clump stars must be on the near side of the dwarf galaxy to enter our G_0 < 17.3 selection.
Spectroscopic samples
We use spectroscopy from two different Sgr samples to evaluate our Sgr selection. The first sample we use is APOGEE DR16 (Ahumada et al. 2020), with which we share 568 stars in our Sgr selection. Most of these are relatively metal-rich ([Fe/H] > −1.0), see the middle panel of Figure 5. We also checked the latest APOGEE release (DR17, Abdurro'uf et al. 2022), but did not find any additional stars in common with our Sgr selection.
The second sample we use is a spectroscopic follow-up from PIGS, which contains almost exclusively lower-metallicity stars. Arentsen et al. (2020) presented the PIGS low- and medium-resolution spectroscopic follow-up of metal-poor inner Galaxy candidates, obtained with AAOmega+2dF on the AAT (Saunders et al. 2004; Lewis et al. 2002; Sharp et al. 2006). The low-resolution (R ∼ 1300) optical and intermediate-resolution (R ∼ 11 000) calcium triplet spectra were analysed through full-spectrum fitting with the FERRE code 1 (Allende Prieto et al. 2006), providing effective temperatures, surface gravities, metallicities and carbon abundances (see Arentsen et al. 2020 for details). More spectroscopic follow-up was obtained later in 2020, which has been analysed in the same way and has already been used in Arentsen et al. (2021). In total, four AAT pointings included dedicated observations of Sgr stars (see their positions in Figure 1). These have not yet been discussed in any publication.
The Sgr selection for the follow-up was made by adding two simple Gaia DR2 cuts to the PIGS selection: (ϖ − σ_ϖ) < 0.1 and proper motions within 0.6 mas yr−1 of (μ_α, μ_δ) = (−2.7, −1.35) mas yr−1 (no transformations like Equations 1 were applied). We used Gaia DR2 because the spectroscopic follow-up was done before Gaia EDR3, as part of the main PIGS follow-up program predating the current work. The Sgr selection was extended half a magnitude deeper (G = 17.0) than the main PIGS sample. Whereas the PIGS fields were observed for 2 h each, two of the Sgr fields were observed for 3 h, one for 2.5 h and one for 1.5 h. The CMD and metallicities of PIGS stars in our final Sgr selection are shown in the left and middle panels of Figure 5. The cross-match between the PIGS AAT spectroscopy and our selected Sgr sample results in 426 objects (keeping only stars with good-quality spectroscopic parameters, following Arentsen et al. 2020). Together, the APOGEE and PIGS samples cover the full metallicity range of Sgr.
We present the radial velocities from the spectroscopic PIGS and APOGEE stars in our Sgr sample in the right-hand panel of Figure 5. For our selection, the median APOGEE and PIGS radial velocities are ≈ 144.5 km s −1 and ≈ 145.6 km s −1 respectively. The resulting histogram shows that the radial velocities have a smooth distribution with no clear outliers, around the mean literature Sgr radial velocity of ∼ 140 km s −1 (Ibata et al. 1994). The PIGS sample has a slightly higher velocity dispersion than the APOGEE sample, which is in line with the expectation that metal-poor stellar populations are typically older and more pressure-supported. We will further discuss this scenario in Section 4.
The colour-coding of the CMD shown in the left panel of Figure 5 displays the spectroscopic metallicities of the PIGS and APOGEE stars. The most metal-poor stars are located on the blue side, while the redder part is populated by cooler metal-rich stars. From the metallicity histogram in the middle panel, the dominance of PIGS among the metal-poor stars with respect to the APOGEE sample is clear, a result of PIGS being focused on the search for metal-poor stars in Sgr. A comparison between metallicities from PIGS and APOGEE has been made for bulge stars in Arentsen et al. (2020), finding good agreement: a dispersion of 0.2 dex and only a slight systematic offset (with APOGEE being more metal-rich by 0.1 − 0.2 dex, depending on the metallicity).
Spectroscopic calibration sample
We make use of a training sample with available spectroscopic metallicities in the footprint of the main Pristine halo survey (Starkenburg et al. 2017) to derive iso-metallicity lines for the Pristine colour-colour diagram, which we use to calibrate the metallicity scale of our photometric PIGS-Sgr sample. The training sample consists of the main training sample for the Pristine survey, which has been carefully built to contain many very metal-poor stars, supplemented with APOGEE DR16 (Ahumada et al. 2020) to extend the training sample to higher metallicities and lower temperatures. Because we only have giant stars in our Sgr sample, we only keep the giants in the training sample (log g < 3.8 and T_eff < 5700 K). After cross-matching this spectroscopic sample with the most recent internal Pristine CaHK catalogue (already cross-matched with Gaia), de-reddening it in the same way as our Sgr photometry, and applying the same photometric quality cuts as before, we obtain a sample of ∼ 23000 giant stars with −4.0 < [Fe/H] < +0.5. The resulting sample is shown on the Pristine CaHK-Gaia colour-colour diagram in the upper panel of Figure 6, colour-coded by the spectroscopic metallicities from the training sample. On top of it we show our derived iso-metallicity lines, ranging from −3.0 to 0.0 in steps of 0.5 dex, which are further described in the following section.
METALLICITY ANALYSIS
The following section reports the metallicity analysis conducted on the photometric Sgr selection with the help of the spectroscopic training sample. With the aim of studying the distribution of different metallicity populations in Sgr, we divide the sample in two main groups and study their spatial distribution. We also fit models to the Sgr stellar density, paying attention to a possible metallicity gradient within Sgr. Finally, we present the spatial distribution of the very metal-poor stars.
Derivation of iso-metallicity lines
We employ the spectroscopic training sample from the main Pristine survey to derive iso-metallicity lines in the CaHK-Gaia colour-colour space, which we will use in our Sgr analysis to divide the sample into various metallicity groups. Some iso-metallicity lines have been derived for the Pristine-SDSS colour space before, but not for the Pristine-Gaia colour-colour space. For this work, we are only interested in giants, since only giants are part of our Sgr selection. We binned the colour-colour space in (BP − RP)_0 and selected slices of spectroscopic metallicities within 0.1 dex of a given [Fe/H], with a minimum number of 5 stars per (BP − RP)_0 bin. To derive an iso-metallicity line we determined the median y-axis value in each bin and then fitted a 2nd-order polynomial to these points for [Fe/H] ≤ −1.8 and a 3rd-order polynomial for [Fe/H] ≥ −1.7. The resulting iso-metallicity lines are shown in the upper panel of Figure 6, on top of the spectroscopic training sample from the main Pristine survey. The lines range from [Fe/H] = −3.0 to [Fe/H] = 0.0 dex, in steps of 0.5 dex. We limit the polynomials to (BP − RP)_0 < 1.6, since for redder colours the metal-poor and metal-rich stars start overlapping and crossing.
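A sketch of how one such iso-metallicity line could be derived is shown below; the colour range of the bins and the use of a single bin width (the wider bins adopted in the reddest part are omitted) are simplifying assumptions.

```python
import numpy as np

def iso_metallicity_line(bp_rp0, y0, feh_spec, feh_target,
                         slice_width=0.1, bin_size=0.1, min_stars=5,
                         colour_range=(0.6, 1.6)):
    """Polynomial iso-metallicity line in the CaHK-Gaia colour-colour space.

    bp_rp0, y0 : dereddened (BP-RP) colour and metallicity-sensitive y-axis
                 values of the training-sample giants.
    feh_spec   : their spectroscopic metallicities.
    feh_target : metallicity at which the line is derived."""
    in_slice = np.abs(feh_spec - feh_target) < slice_width
    edges = np.arange(colour_range[0], colour_range[1] + bin_size, bin_size)
    centres, medians = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = in_slice & (bp_rp0 >= lo) & (bp_rp0 < hi)
        if sel.sum() >= min_stars:
            centres.append(0.5 * (lo + hi))
            medians.append(np.median(y0[sel]))
    order = 2 if feh_target <= -1.8 else 3
    return np.polyfit(centres, medians, order)   # coefficients, highest power first
```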
The derived iso-metallicity lines are on the main survey calibration scale for the CaHK photometry, but the Sgr-PIGS photometry is on a different scale, offset by a constant. To determine the offset we used the available Sgr spectroscopy from APOGEE and the PIGS/AAT data, which is shown and colour-coded by metallicity in the bottom panel of Figure 6. We compute the offset between a given iso-metallicity line and the stars in the Sgr spectroscopic sample falling in the same metallicity range. We used Sgr stars with −1.6 < [Fe/H] < −1.4, −1.9 < [Fe/H] < −1.7, and −2.1 < [Fe/H] < −1.9, and computed the difference between these stars and the iso-metallicity lines at [Fe/H] = −1.5, −1.8 and −2.0, respectively, finding an average shift of 0.52 mag. In the bottom panel of Figure 6, the same iso-metallicity lines from the upper panel (plus one at [Fe/H] = −1.3 represented by the thicker yellow line) are shown, now shifted with the offset derived above to match the Sgr colour-colour space. They are colour-coded by the corresponding spectroscopic metallicity.
We found that the shift between the Sgr spectroscopic metallicities and the iso-metallicity lines derived from the training sample depends on the exact metallicity range that is used. We hypothesise that this is connected to a significant difference in the [α/Fe] abundances between MW stars and Sgr stars. In the training sample from the main Pristine survey, the reddest part of the colour-colour diagram (with (BP − RP)_0 ≳ 1.5) splits into two sequences for giants when colour-coded by [α/Fe] from APOGEE. Their alpha abundances are representative of thin and thick disc stars. The Sgr [α/Fe] is significantly lower than both of those (Hasselquist et al. 2017). Some discussion on this can be found in the Appendix. For this reason, we will not determine individual photometric metallicities for each star in this paper, because it is not clear exactly what scale they would be on. The choice of cuts made using these iso-metallicity lines is extensively discussed in further sections, and we show that our main interpretations do not depend on the details.
Metallicity division for Sgr stellar populations
The iso-metallicity lines offer an effective way to separate the Sgr sample into two main populations, one metal-rich (MR) and one metal-poor (MP) population. Looking at how the choice of different iso-metallicity lines affects the metallicity distribution within the Sgr core gives insight into the role of selection effects on the final results. We treat all stars redder than the limit of the iso-metallicity lines ((BP−RP)_0 > 1.6, see Figure 6) as being part of the population with higher metallicity, as there are 305 spectroscopic stars with (BP−RP)_0 > 1.6 with [Fe/H] > −1.0. We include the reddest stars in our metal-rich populations in the remainder of this work.
Spatial distributions
Next, we build density maps of the metal-poor and metal-rich populations defined in Section 3.2 to study their spatial distributions.
The spatial distributions are shown in the left and middle columns of Figure 7 for the three different metallicity separations, all of which show practically the same pattern: a higher stellar density is found in the centre of Sgr around M54, while the number of observed stars drops moving outwards from the centre to the onset of the stream. One thing to notice is that the more metal-rich component appears to be more centrally concentrated compared to the more spread-out metal-poor population. The very central density of the two metallicity populations is dominated by the central globular cluster M54 (with average [Fe/H] ≈ −1.55, Carretta et al. 2010b). This relates to the very complicated metallicity distribution function of the central region of Sgr, influenced by the presence of the nuclear star cluster (Bellazzini et al. 2008; Alfaro-Cuello et al. 2020).
We use Figure 7 to investigate the presence of a metallicity gradient. The ratios between the metal-rich and metal-poor populations are shown in the right-hand column. The three rows correspond to different selections obtained using iso-metallicity lines fitted for various metallicity values, i.e. [Fe/H] = −1.5, −1.3 and −1.0. The histograms show that the relative fraction of metal-poor objects is higher further away from the centre, while the central area presents on average a lower fraction. This effect is less visible, although always present, as the metallicity limit for dividing the populations gets lower. For the analysis that follows in this work, we chose to use the stars with [Fe/H] < −1.3 as the metal-poor (MP) population, while for the metal-rich (MR) group we set [Fe/H] > −1.0. In this way we ensure that the division between the two groups is cleaner and not dominated by stars that are close to the dividing line. We check the photometric selection with the help of the available APOGEE and PIGS spectroscopy, which covers most of the investigated metallicity range. We find that ∼ 98% of spectroscopic stars with [Fe/H] < −1.3 are part of our MP population, while ∼ 95% of the stars with spectroscopic metallicities > −1.0 are part of our MR population. We have tested that this result does not depend strongly on the metallicity limit used to separate the MP and MR components. The main result, that the MP/MR ratio increases further from the Sgr centre, is also visible when using even lower [Fe/H] values for defining the MP population.
Method
To quantify the structural differences between the MP and MR populations, we fit a model to their spatial distributions. We bin the Sgr footprint in pixels of a few tens of arcminutes, ∼20 arcmin for the MP population and ∼15 arcmin for the MR population, as the stellar density of the latter is higher than that of the metal-poor component. We follow the approach of Martin et al. (2008) and express the Sgr stellar density as

ρ_i = A_0 exp(−r_i / r_e),   (2)

where i indicates the pixel, A_0 is the central density, r_e is the exponential scale radius, and r_i is the elliptical distance of each pixel with respect to the centre of the distribution. The radius r_i depends on the ellipticity e and the position angle θ through

r_i = [ x_maj,i² + ( y_min,i / (1 − e) )² ]^{1/2},   (3)

where x_maj,i and y_min,i are the coordinates of pixel i along the major and minor axes, obtained by rotating (x_i, y_i) by the position angle θ. The coordinates x and y are related to the right ascension (α) and declination (δ) on the tangential plane of the sky by

x = (α − α_0) cos δ,   y = δ − δ_0,   (4)

with the centre at (α_0, δ_0).
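A minimal implementation of this density model, following the reconstructed equations above (the sign convention of the rotation is one common choice and may differ from the one actually adopted), could look as follows.

```python
import numpy as np

def elliptical_radius(ra, dec, ra0, dec0, ecc, theta):
    """Elliptical distance (in the sense of equation 3) of each position from
    the centre. theta is the position angle in radians, measured from north
    towards east; ecc is the ellipticity e = 1 - b/a."""
    # Flat-sky projection relative to the centre (equation 4).
    x = (ra - ra0) * np.cos(np.radians(dec))   # towards east
    y = dec - dec0                             # towards north
    x_maj = x * np.sin(theta) + y * np.cos(theta)   # along the major axis
    y_min = x * np.cos(theta) - y * np.sin(theta)   # along the minor axis
    return np.hypot(x_maj, y_min / (1.0 - ecc))

def density_model(params, ra, dec):
    """Exponential surface-density model rho_i = A0 * exp(-r_i / r_e) (equation 2)."""
    A0, r_e, ra0, dec0, ecc, theta = params
    return A0 * np.exp(-elliptical_radius(ra, dec, ra0, dec0, ecc, theta) / r_e)
```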
The best-fitting parameters were obtained using Markov Chain Monte Carlo sampling with the emcee 2 package. We fit a model for the stellar density of the metal-poor and the metal-rich populations, with the iso-metallicity lines at [Fe/H] = −1.3 and [Fe/H] = −1.0 as discriminants. For each fit, we used 64 walkers and 30000 steps. We excluded all pixels lying more than 50% outside of the PIGS-Sgr footprint (the black solid line in Figure 9); the density of pixels with 50%-100% of their area within the footprint was scaled according to the fraction inside the footprint. To avoid biasing the fit due to the globular clusters in the footprint, we avoided pixels where M54 is located using a radius of 0. We tested the inclusion of a Galactic background in the model, using an exponential function dependent on the Galactic latitude, but we found it to be unconstrained in the fitting procedure. This hints at a very efficient cleaning of MW contamination in our member selection, which we further discuss in Section 4.
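A sketch of the MCMC fit with emcee, building on the density_model function from the previous sketch, is shown below; the Gaussian approximation to the likelihood of the binned counts and the broad flat priors are assumptions for illustration, not necessarily the choices made in the actual fits.

```python
import numpy as np
import emcee

def log_likelihood(params, ra_pix, dec_pix, counts, area_frac):
    """Gaussian approximation to the likelihood of the binned star counts;
    pixels partially inside the footprint are scaled by their area fraction."""
    model = density_model(params, ra_pix, dec_pix) * area_frac
    return -0.5 * np.sum((counts - model) ** 2 / np.maximum(counts, 1.0))

def log_prior(params):
    A0, r_e, ra0, dec0, ecc, theta = params
    # Broad, illustrative bounds only.
    if A0 > 0 and 0 < r_e < 20 and 0 <= ecc < 1 and 0 <= theta < np.pi:
        return 0.0
    return -np.inf

def log_posterior(params, *data):
    lp = log_prior(params)
    if not np.isfinite(lp):
        return -np.inf
    return lp + log_likelihood(params, *data)

def fit_density(ra_pix, dec_pix, counts, area_frac, p0):
    """64 walkers and 30000 steps, as in the text; p0 is an initial guess for
    (A0, r_e, ra0, dec0, e, theta)."""
    nwalkers, ndim = 64, len(p0)
    start = np.asarray(p0) + 1e-3 * np.random.randn(nwalkers, ndim)
    sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior,
                                    args=(ra_pix, dec_pix, counts, area_frac))
    sampler.run_mcmc(start, 30000, progress=False)
    return sampler
```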
Results
The resulting parameters of our fits are summarised in Table 1. The posterior distributions of the set of parameters show some correlation between α_0 and δ_0, r_e and A_0, and r_e and e, but the fitted parameters are overall well-constrained. The exponential scale radius is larger for the MP population than for the MR population, the clearest difference between the two metallicity populations. All other model parameters only show minor changes, which are likely not significant. We suspect that the uncertainties on the model parameters are underestimated, which we will briefly return to in the discussion. The change in the position angle of ≈ 3.3 degrees between the MR and MP populations represents only a small change, and similarly for the eccentricities.
The most visible difference between the centres of the main MP and MR populations concerns the RA component, 0.253 ± 0.028 degree, but this is still only a small change that is unlikely to be significant. Our derived centres differ from the coordinates of M54 by at most ∼ 0.31° in RA for the MP stars, and ∼ 0.07° for the MR population (see also Del Pino et al. 2021). The spatial distributions of the models and residuals are shown in Figure 9. The residuals show structure for both the MP and MR fits, especially for the regions with RA > 290°. Our elliptical model appears to be too simplistic to describe the distribution of stars in Sgr, which is not surprising given the complex, disrupting nature of the system. We do not expect the residuals to be related to the Gaia scanning law. The effect of the scanning law is strongest for faint stars (closer to the Gaia magnitude limit of G ∼ 20), producing inhomogeneities on scales of ∼1 degree (Fabricius et al. 2021). In this work, we only use relatively bright stars (G_0 < 17.3) and do not make any strong cuts on any of the Gaia uncertainties.
Very metal-poor stars
The study of iron-depleted stars is of great interest for reconstructing the history of the Sgr galaxy, as they carry essential information for tracing the early evolution of their host galaxy. Different investigations of VMP stars in Sgr over the years have led to an improved understanding of the early evolution of this dissolving galaxy. The spectroscopic Sgr-PIGS follow-up sample contains 100 stars with [Fe/H]_spec < −2.0; this is the largest spectroscopic VMP Sgr sample to date. Our full photometric Sgr-PIGS data set is an excellent source of more VMP stars. We selected VMP stars following the same approach explained in Section 3.2, but this time using the iso-metallicity line at [Fe/H] = −2.0 (see the blue line in Figure 6). Our VMP sample consists of 1150 stars, which is the largest sample of VMP candidates with [Fe/H] < −2.0 in Sgr to date.
The distribution of the VMP stars is shown in Figure 10. Almost the entire selection (> 99.9%) lies far beyond the tidal radius of M54 (7.5'), and is therefore not associated with the globular cluster (Harris 2010). It is clear that these ancient stars are located at all radii but do follow the overall Sgr density distribution, the stellar density being higher in the centre. Around RA ∼ 295°, an over-density of stars is noticeable, which corresponds to the metal-poor globular cluster Ter 8.
Metallicity gradient
Many studies (e.g. Keller et al. 2010; Hyde et al. 2015; Mucciarelli et al. 2017; Hayes et al. 2020; Garro et al. 2021) have shown that a metallicity gradient is present both in the streams and in a small central region of the Sgr remnant, which is connected to the intricate chemo-dynamical evolution of this dwarf galaxy. The PIGS data are an excellent data set for studying the metallicity gradient in the core of the galaxy. In Figure 11 we show the same 2D histogram as in Figure 7, but now only for the MP and MR populations from the previous section, showing again the MP stars dominating at the edges and the MR stars in the central region.

Figure 12 Top: the concentric ellipses used to examine the MP/MR ratio, built with the parameters derived for the MR population (see Table 1). The centre of each ellipse is represented by the red cross (283.830° and −30.493°). Bottom: the ratios of MP/MR stars in each ellipse. The error bars are calculated assuming a Poissonian distribution and the colours of the points correspond to the colours in the top panel. The distances on the x-axis are calculated along the semi-major axis of each ellipse with respect to the centre, taking the middle distance between two consecutive ellipses. For the outer three ellipses, the black points represent the ratios after removing the globular clusters Arp 2 and Ter 8.
Inspired by the approach of Mucciarelli et al. (2017), who mapped the change of metallicity as a function of the projected distance from the Sgr centre, we divide Sgr into concentric ellipses and examine the MP/MR ratio in each annulus, see Figure 12. The ellipses are built using the parameters that we derived for the MR population in Section 3.4, which appear in Table 1. The ellipses have a fixed width of 1.2 deg along the semi-major axis, except for the innermost region, where we chose smaller ellipses to probe the effect of M54 (with each bin containing at least 300 stars), and for the two outermost bins, where the density of Sgr stars drops rapidly. The area covered is significantly larger than the one considered by Mucciarelli et al. (2017), as we compute the ratio out to ∼ 12° from the position of M54 along the Sgr semi-major axis, whereas the previous analysis only went out to ∼ 0.15° (or 9'). Similarly, the number of targets investigated is greatly increased with respect to the spectroscopic sample of Mucciarelli et al. (2017), which comprised 235 stars.
By computing the MP/MR ratio for each annulus, as shown in Figure 12, it is possible to appreciate the change in metallicity moving away from the centre. The distances are measured along the semi-major axis of each ring with respect to the centre of the MR population (with [Fe/H] > −1.0, see Table 1), using the projected coordinates from equation 4. The error bars are calculated assuming a Poissonian distribution. We find that Sagittarius presents a clear negative metallicity gradient: the relative number of metal-poor stars is higher at larger radii from the centre of the galaxy.
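A minimal sketch of the MP/MR profile computation in elliptical annuli, with Poissonian error propagation on the ratio of the two counts, is given below; the annulus edges and the exact error formula are illustrative.

```python
import numpy as np

def mp_mr_profile(r_ell_mp, r_ell_mr, edges):
    """Ratio of metal-poor to metal-rich stars in concentric elliptical annuli.

    r_ell_mp / r_ell_mr : elliptical distances (along the semi-major axis) of
                          the MP and MR stars from the adopted Sgr centre.
    edges               : annulus boundaries in degrees."""
    mids, ratios, errors = [], [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        n_mp = np.sum((r_ell_mp >= lo) & (r_ell_mp < hi))
        n_mr = np.sum((r_ell_mr >= lo) & (r_ell_mr < hi))
        if n_mr == 0:
            continue
        ratio = n_mp / n_mr
        # Poissonian error propagation on the ratio of two counts.
        err = ratio * np.sqrt(1.0 / max(n_mp, 1) + 1.0 / n_mr)
        mids.append(0.5 * (lo + hi))
        ratios.append(ratio)
        errors.append(err)
    return np.array(mids), np.array(ratios), np.array(errors)
```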
For the central ellipse, the stellar budget of M54 contributes by enhancing the relative number of metal-poor stars. The last rings, located furthest from the centre of the galaxy, have higher uncertainties due to the lower stellar density in the outer Sgr region. The outer two ellipses also contain the two metal-poor globular clusters Arp 2 and Ter 8 (Goldsbury et al. 2010), respectively. For the outer three ellipses, the black points in Figure 12 illustrate the MP/MR ratios after removing the stellar contribution from these two clusters. Excluding these two systems, we find that the trend seems to flatten starting at ∼ 8°.
Sources of uncertainties
Our analysis is subject to uncertainties, which we discuss below. Overall, they do not significantly affect our main conclusions.
MW contamination
Despite the numerous cuts applied, some Milky Way foreground (or background) stars could still remain in the Sgr sample. We tested the level of contamination by selecting Milky Way control regions in different parts of proper-motion space, to check their numbers and to see how they are distributed in the footprint and in the colour-colour diagrams. We applied exactly the same cuts as for our Sgr sample, except for the proper motions. We selected three circular regions with the same PM radius as for our Sgr selection, in roughly the same region of PM space, which are shown as the orange, green and blue circles in Figure 13. Two of these fields, the green and orange circles, have higher densities than we expect in the Sgr region and show a pessimistic case of what the contamination level could be (175 and 332 stars). The control field depicted in blue contains even fewer stars (only 100). These numbers are to be compared with the 44 785 stars in our Sgr selection. From the number of stars in each control region, it is possible to compute the possible level of remaining contamination, which reaches at most ≈ 0.4%, 0.7% and 0.2% for each field, respectively.
We show their distribution in RA and Dec in the middle panel of Figure 13. The stars are mostly concentrated closer to the Galactic plane (on the left), and their density fades away towards higher RA (further away from the plane). The low fraction of MW stars left in the outer part of Sgr indicates that the contribution from the metal-poor MW halo is not dominant and should not affect our metallicity analysis in the lower-density regions. We inspected the location of the control fields in the colour-colour diagram (bottom panels) and found that they are mostly located in the bluer part of the diagram, overlapping with the region that we identified as Sgr red clump stars. The contamination appears to be split between the MR and MP groups in roughly equal proportions compared to the Sgr stars, therefore it should not bias the results of our metallicity analysis. We tested whether it made a difference to our main results to exclude the RC region of the colour-colour diagram from our Sgr selection (using the blue line in Figure 4). We found no large differences, and therefore decided to keep the red clump region in our analysis.
As described in Section 3.4, we considered the MW contribution in the model fitting and found that it was unconstrained and did not impact the main results. This is consistent with our estimate of the MW contamination in this section, finding that it is very small.
Brightness cut
The choice of the magnitude cut (G_0 = 17.3), which noticeably reduced the Sgr sample (80% of the original sample within the available magnitude range), has the advantage of giving us a clean sample, in the sense that the astrometry and the photometry are better constrained, enabling a more effective cleaning of the MW foreground stars. Whilst fainter targets might be interesting in order to have more stars of this galaxy, we decided to reject the horizontal branch and the red clump region because it would make the selection of the MR and MP populations more complicated and less complete. Furthermore, discarding the horizontal branch means removing variable stars, which start appearing in this region of the diagram.
Iso-metallicity lines
For the derivation of the iso-metallicity lines, we considered using the CaHK and Gaia G, BP and RP uncertainties. The uncertainties in CaHK are all < 0.08 mag, and when included in the fitting our results did not change significantly. The errors on the Gaia photometry are much smaller than those on CaHK, hence we ignored them as well. As mentioned earlier in the text (Section 3.1), the iso-metallicity lines stop being trustworthy for the coolest stars, namely (BP − RP)_0 > 1.6, because in this region the metal-poor and metal-rich sequences start crossing each other. For this reason we relied on the APOGEE [Fe/H] values to classify stars with (BP−RP)_0 > 1.6 as all being metal-rich. We also found that in the spectroscopic training sample, the location of stars in the Pristine colour-colour diagram depends on the alpha abundances for the coolest/reddest stars (with (BP − RP)_0 > 1.5), resulting in a degeneracy between metallicities and alpha abundances. The alpha abundances in Sgr are different from those in the Milky Way, therefore the iso-metallicity lines derived from the training sample could be partly inappropriate for Sgr. This may be connected to our finding that the CaHK shift between the spectroscopic training sample and the Sgr spectroscopy appears to depend on the metallicity of the Sgr stars used.
It is worth mentioning that the iso-metallicity lines were derived from a sample with low extinction towards the Galactic halo, whereas the Sgr region presents higher extinction since it is relatively close to the Galactic disc. Uncertainties in the extinction correction mean that this difference is expected to slightly increase the uncertainties in the metallicities for the Sgr sample, but it is not expected to create a systematic offset in the metallicity calibration.
Definitions of MP and MR populations
Figures 7 and 11 showed that, even when shifting the metallicity boundary for the MP and MR regions between −1.0, −1.3, −1.5 or −2.0, the fraction of MP stars relative to the MR ones always increases with distance from the centre of Sgr; therefore the metallicity gradient remains and does not depend on our exact definition of the MP population. This effect was quantified in the bottom panel of Figure 12 for our fiducial MP and MR populations, presenting a clear negative metallicity gradient. Choosing the two more metal-poor boundaries introduced in Section 3.2 ([Fe/H] < −1.5 and < −2.0) for the MP population instead, we reproduce a similar trend of the MP/MR ratio rising away from the centre; these cases are presented in Figure 14. In both panels, it is possible to notice a flattening in the trend starting at around ∼ 6°. The fraction of stars with [Fe/H] < −1.5 appears to be relatively constant in the outermost rings, except for the last annulus, where the MP/MR ratios experience an important rise due to the presence of Ter 8 and Arp 2. From the same figure it can be observed that the flattening is stronger when the contribution from the two metal-poor clusters Arp 2 and Ter 8 is removed. For the [Fe/H] < −2.0 stars, there is no sign of M54 in the innermost rings anymore, and the rest of the trend shows a similar flattening as in the previous case.
It is beyond the scope of this paper to further quantify the gradient (e.g. in terms of dex/kpc), which requires individual metallicity estimates for all our stars. For that, a larger spectroscopic sample of Sgr stars is needed, either to be used by itself or to better constrain the photometric metallicities.
Model fitting
The exponential elliptical profile adopted in our model fitting is a relatively simple approximation. Although it roughly corresponds to the projected shape of Sgr, the real distribution of Sgr stars is more complex. The model fits may also be influenced by inhomogeneities throughout the footprint. Both could result in unrealistic uncertainties from the MCMC. We checked whether the results from the model fits depend on the binning of the data, and find no significant changes.
To investigate whether differences for the model parameters between the MP and MR populations are likely to be real, we performed model fits on Sgr stellar populations in smaller metallicity bins (0.2 dex wide). We found a trend with metallicity for the scale radius, but found no clear evidence of trends with metallicity for the eccentricity, position angle and centre of the models. We also concluded from this exercise that the uncertainties from the MCMC appear to be underestimated. In view of these considerations, the changes between the different model parameters (see Table 1), except for the scale radius, should be treated with caution.
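As an illustration of the kind of model discussed above, the sketch below writes down an elliptical exponential surface-density profile and an unbinned Poisson log-likelihood that a sampler such as emcee could explore. The parameter names, the flat-background term and the full-plane normalisation are illustrative assumptions rather than the paper's actual implementation.

```python
import numpy as np

def elliptical_radius(x, y, x0, y0, eps, theta):
    """Elliptical radius for centre (x0, y0), ellipticity eps and position angle theta (radians)."""
    dx, dy = x - x0, y - y0
    xr = dx * np.cos(theta) + dy * np.sin(theta)    # along the major axis
    yr = -dx * np.sin(theta) + dy * np.cos(theta)   # along the minor axis
    return np.sqrt(xr**2 + (yr / (1.0 - eps))**2)

def log_likelihood(params, x, y, footprint_area):
    """Unbinned Poisson log-likelihood for an exponential profile plus a flat background."""
    sigma0, r_s, x0, y0, eps, theta, bg = params
    r = elliptical_radius(x, y, x0, y0, eps, theta)
    density = sigma0 * np.exp(-r / r_s) + bg        # stars per square degree
    # Expected total counts: flat background over the footprint plus the analytic
    # full-plane integral of the elliptical exponential (approximation: the
    # footprint is assumed to contain essentially all of the profile).
    expected = bg * footprint_area + 2.0 * np.pi * sigma0 * r_s**2 * (1.0 - eps)
    if np.any(density <= 0) or expected <= 0:
        return -np.inf
    return np.sum(np.log(density)) - expected

# A sampler (e.g. emcee) would then explore log_likelihood plus suitable priors,
# repeated for each metallicity-selected subsample.
```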
Sgr metallicity gradient compared to the literature
Evidence of a metallicity gradient in Sgr was already reported early on by Bellazzini et al. (1999a). Those authors connected the presence of a metallicity gradient to a significant age variation in the dwarf system, caused by a protracted star formation history. Alard (2001) studied two Sgr fields of 2×2°, discovering a variation of −0.2 dex along the Sgr major axis that was linked to an age variation inside the core.
Thanks to near-infrared photometry, McDonald et al. (2013) studied the variation of the fraction of metal-poor (−1.6 < [Fe/H] < −0.9) and intermediate, metal-rich ([Fe/H] > −0.7) stars in an area of eleven square degrees and revealed that the metal-poor population is more spread throughout the dSph, while the metal-rich stars ([Fe/H] ∼ 0.0) are grouped in an ellipsoidal distribution around the centre of the galaxy. McDonald et al. (2013) identified clear traces of a metallicity gradient away from the region dominated by the bulge population (from RA ∼ 287°). Mucciarelli et al. (2017), through their spectroscopic study of 235 stars in the Sgr core and in M54 (all within 9' of the centre), uncovered a metallicity gradient as well, finding a higher fraction of metal-rich stars in the centre. They speculated that the metallicity gradient in that region can be linked to an extended star formation history, in which recent metal-rich bursts took place in the central area of Sgr and might have caused later stellar generations to be more centrally concentrated.
Our results are consistent with previous works: our metal-rich population (MR, [Fe/H] > −1.0) dominates the innermost region while our metal-poor population (MP, [Fe/H] < −1.3) becomes more important at larger radii, i.e., there is a metallicity gradient in our data. In line with these results, the greater value of the scale radius for the more metal-poor stars could be an indication that, generally, the stars belonging to this category are more smoothly distributed over larger distances compared to the more centrally concentrated metal-rich stars. By comparing the values derived for a number of populations in narrower metallicity bins, we found a clear increase of this parameter moving to lower metallicities.
We note that our work covers an area of ∼ 100 square degrees, extending to ∼ 12 degrees from the very centre of the galaxy, which is considerably larger than the region covered in previous studies, which typically focused on the very inner part of Sgr. Here we find that the metallicity gradient of Sgr extends beyond the central part of the galaxy and manifests itself all the way out to the stream. The Sgr streams have metallicity distributions peaking at lower metallicity values ([Fe/H] ≈ −0.8 to −1.1) than the core ([Fe/H] ≈ −0.5) (Hayes et al. 2020). Our analysis, revealing a higher fraction of stars with [Fe/H] < −1.3 in the outskirts (RA > 290°), is consistent with these findings, supporting the scenario in which the most metal-poor stars were the least bound to Sgr, and have been tidally stripped away from the core first.
This result highlights the power of the Pristine data, which enables us to characterise not only the dense central regions of the dwarf galaxy but also the outskirts. Combined with Gaia astrometric information, the photometric metallicity information is ideal to study the structural properties of a dwarf system along its entire extension.
The formation and evolution of Sgr
At this stage of the work, it is tantalising to tie the results from our metallicity analysis to the history and evolution of Sgr. In general, the morphology and the star formation history of a system can be heavily influenced by various physical processes that can be either internal, such as feedback from SNe events and gas pressure support, or triggered by external factors, e.g. ram-pressure stripping and tidal disturbances caused by Galactic tides (tidal stripping and tidal stirring) (Mayer et al. 2001, 2006; Łokas et al. 2010) or mergers (Starkenburg & Helmi 2015; Benítez-Llambay et al. 2016).
Recently, Tepper-García & Bland-Hawthorn (2018) and Vasiliev & Belokurov (2020) assumed in their simulations that Sgr was a gas-bearing dwarf spheroidal galaxy before it fell into the Milky Way. According to this scenario, Sgr has retained its original dSph nature despite the tidal interaction with the Milky Way, and is predicted to dissolve over the next Gyr (Vasiliev & Belokurov 2020). Others suggest that the Sgr progenitor was a gas-rich, flattened rotating system that transformed into a dSph due to tidal stirring in the interaction with the Milky Way, and whose inner core might survive the next pericenter passage (Łokas et al. 2010; Del Pino et al. 2021). Łokas et al. (2010) suggested that the Sgr progenitor resembled the Large Magellanic Cloud and described it as a disky galaxy whose stellar populations formed a bar-like structure that survived until the second pericenter passage. The simulations of Oria et al. (2022) suggest that Sgr hosted a rotating component that has been perturbed during the interaction with the MW, but that this component was not the dominant fraction of the stellar mass. There is no consensus yet on the progenitor of Sgr, but more data might help to reach a conclusion. The arguments about the nature of the Sgr progenitor and its subsequent evolution are mainly based on the kinematical properties of Sgr and its stream, but there is another dimension as well: the chemistry.
Processes shaping the metallicity gradient
One process which is known to shape the age/metallicity gradients in satellite galaxies is ram-pressure stripping, which is responsible for removing the gas that was originally in the dwarf galaxy (Mayer et al. 2001, 2006). Ram-pressure stripping first removes the gas in the less dense outer regions of a dwarf galaxy, and removes the central gas reservoir at the very end. This means that new stars could be forming for longer periods of time in the centre compared to the outer regions. The gas in the inner regions is chemically enriched due to the prolonged star formation, hence this process can lead to age and metallicity gradients. We know that the Sgr core contains a population of relatively young stars (< 2 Gyr, e.g. Siegel et al. 2007), so it must still have had gas up to those times. However, no gas has currently been detected in Sgr (Burton & Lockman 1999; Koribalski et al. 1994). According to the modelling of Tepper-García & Bland-Hawthorn (2018), 30−50 per cent of the gas in Sgr was stripped ∼ 2.7 Gyr ago (the first time it crossed the Galactic disc), while the complete loss of the remaining gas took place ∼ 1 Gyr ago, when it crossed the disc the last time. Another process related to metallicity gradients in satellite galaxies is tidal stripping due to the interaction with the host galaxy. Tidal stripping affects the more diffuse component of a dwarf more strongly, because it is less tightly bound to the galaxy. In Sgr, tidal stripping has led to the removal of much of its stellar content, which now forms the large Sgr stream. The Sgr streams are more metal-poor than the progenitor: the metallicities of their stellar populations span between [Fe/H] ∼ −2.5 and ∼ −0.5 (De Boer et al. 2015). This indicates that the metal-poor stars were the ones that were less bound to the core, suggesting a radial metallicity gradient in Sgr. We also found in this work, using the radial velocities available from the spectroscopic catalogues, that the velocity dispersion in the core of Sgr is higher for metal-poor stars than for metal-rich stars, supporting this scenario.
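The velocity-dispersion comparison mentioned above can be sketched as follows; this is an illustrative calculation (not the paper's analysis), assuming arrays of [Fe/H], radial velocities and their uncertainties for the spectroscopic members, with a simple quadratic correction for measurement errors.

```python
import numpy as np

def dispersion(v, v_err):
    """Error-corrected dispersion: sqrt(max(var(v) - <v_err^2>, 0))."""
    return np.sqrt(max(np.var(v, ddof=1) - np.mean(v_err**2), 0.0))

def compare_dispersions(feh, v_rad, v_err, mp_cut=-1.3, mr_cut=-1.0):
    mp = feh < mp_cut
    mr = feh > mr_cut
    return dispersion(v_rad[mp], v_err[mp]), dispersion(v_rad[mr], v_err[mr])

# Example call on hypothetical arrays:
# sigma_mp, sigma_mr = compare_dispersions(feh, vrad, vrad_err)
```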
It has been shown that there is a difference between the metallicity gradients detected in dSph and dwarf irregular galaxies. Dwarf irregular galaxies (dIrrs) are rotationally supported and have a disky progenitor, while dwarf spheroidals are recognised to be pressure-supported systems. Generally, the first category shows a steeper decreasing gradient profile with respect to the flatter trend present in dSph systems (Mayer et al. 2001; Taibi et al. 2022). Since the progenitor of Sgr is still under debate, we can expect that, depending on the assumed scenario for the progenitor (a dSph or a disky rotating system), the metallicity gradient would be less or more pronounced. It is also necessary to bear in mind that the transition from a disk galaxy to a dSph can be caused by the tidal stirring process, which has been invoked in the case of Sgr as being responsible for its observed elliptical shape (Łokas et al. 2010).
Connecting ages and metallicities
The relation between metallicity and age gradients is important in constraining the processes behind the metallicity gradient. Many works showed that age-metallicity relations can be derived for red giant branch (RGB) stars in dwarf galaxies using information from their SFHs and CMDs (Carrera et al. 2011; del Pino et al. 2015, 2017). However, these strategies are effective only when the SFH refers to the same region as the stars of interest. We did not directly associate the metallicity gradient in our work with a possible age gradient, as it would have been beyond the scope of this paper. We could assume age-metallicity relations for Sgr from other works (see for instance Layden & Sarajedini 2000; Bellazzini et al. 2006; Siegel et al. 2007), but this is not trivial as most of them analyse fields of only a few degrees in the very central part around M54. Keeping that caveat in mind, if we adopt the age-metallicity relation presented by Siegel et al. (2007), we can speculate that our identified MR and MP populations have ages of ∼ 4−8 Gyr and ≳ 10 Gyr, respectively.
Another way to connect ages and metallicities is by using their alpha abundances. The position of the alpha-knee in Sgr has been estimated in the literature (Carretta et al. 2010a; De Boer et al. 2014), and has been connected to an age of ≈ 11 Gyr. The knee is still somewhat unconstrained in the core and could be different, since the data suggest that the core experienced a prolonged and complex star formation history. However, assuming it occurs at around the same metallicity, this is another indication that our MP population is significantly older than the MR population.
Sgr has interacted with the Milky Way for ∼ 8 Gyr, and the first close pericenter passage of Sgr is predicted to have happened ∼ 5−6.5 Gyr ago (Ruiz-Lara et al. 2020). Given an age of ∼ 4−8 Gyr for the MR stars, this means that the younger metal-rich stars might have been born during the first encounter, as a consequence of star formation triggered by the tidal interaction at infall. The older metal-rich stars and the metal-poor stars, on the other hand, would already have been present at the time of the first interaction. However, we cannot exclude that these populations have been affected differently by the tidal interaction with the Milky Way, depending on their internal properties at the moment of infall, i.e. how tightly they were bound to the remnant, and whether they had any rotation or not.
Metallicity gradient and its interpretation in Sgr
The processes playing a role in the formation of metallicity gradients in low-mass dwarf systems (M∗ ∼ 10^8−10^10 M⊙) remain not fully understood. There is no clear consensus on whether metallicity gradients are formed by protracted central star formation episodes with respect to the outer regions, or whether they are driven by processes acting more on the older, more metal-poor stars by moving them outwards (Revaz & Jablonka 2018; Taibi et al. 2022), and/or by a combination of these.
The complex combination of internal factors (such as rotation, orbits, and angular momentum content) and environmental factors (tidal and ram-pressure stripping) plays a role in shaping and weakening metallicity gradients, and it is difficult to disentangle these additional factors from the role of an extended SFH (Mayer et al. 2006; Sales et al. 2010; Taibi et al. 2022) and consequently link the gradient to a progenitor.
By looking at our figures illustrating the various [MP/MR] ratios, the fact that it is still possible to detect a gradient might be interpreted as a hint of a disky progenitor, in which the gradient would have been strong enough to be partially preserved until today. The change of steepness observable at RA ∼ 290° might be related to the transition from the outer core to the stellar stream, which might mitigate the profile observed for the very inner part, where the various star formation bursts took place.
The work of Mercado et al. (2021) reported that late gas accretion is a further event that can weaken or flatten an existing metallicity gradient in a dwarf galaxy. If we consider that Sgr might have experienced a first encounter with the MW around 5-6 Gyr ago, this factor should be added to the secular processes which act to weaken the trace of the original gradient. If we add to this picture the protracted SFH in Sgr, then according to Benítez-Llambay et al. (2016) a steep gradient can only be present in the case of a past merger event, responsible for scattering the old metal-poor component, followed by in situ metal-rich star formation from the infalling central gas.
It is however ambitious to derive robust conclusions about the progenitor of Sgr before it fell into the MW (whether it was a rotating disky galaxy or a pressure-supported spheroidal galaxy) without ages or individual metallicities. We previously discussed how ages could help in getting a more complete picture of the evolution of Sgr. On the other hand, individual metallicities would give the possibility of quantifying the slope of the gradient and spotting possible abrupt changes in its trend that we are not able to detect with our metallicity division. If we were able to quantify the radial metallicity gradient, we could compare it with that of similar pressure-supported dSphs located in isolated environments. This comparison would enable us to evaluate the impact on the Sgr metallicity gradient of both internal feedback and Sgr properties (such as mass and angular momentum) and of external mechanisms.
In our fits of the spatial distributions of the MP and MR populations, the only significant difference we found was that of the scale radius, with the MR population being more centrally concentrated than the MP population. This reflects their spatial distribution and the detected trend in the [MP/MR] ratio: a younger, more centrally concentrated metal-rich component is surrounded by an older, more diffuse metal-poor population present at increasing radii. To further disentangle the different processes which could have shaped the metallicity gradient in Sgr, the spatial properties should be accompanied by age information.
Besides the change in the scale radius between the MP and MR populations, if the other small differences we found in the structural parameters from the model fitting were real, what could they tell us? We found a small shift between the centres of the MR and MP populations in the RA direction. This could hint that the tidal interaction might have played a more severe role in shaping and shifting the extended MP population compared to the MR population, which formed on a longer time-scale from the central gas reservoir. The change of the position angle for the different metallicity populations could be related to the fact that they have interacted with the Milky Way tidal field over different periods of time and thus their orbits and positions have evolved differently. It could also be due to rotation within the dwarf galaxy, which may be different for the young (metal-rich) and the old (metal-poor) populations, as suggested for example by Ibata et al. (1997). The values of the ellipticity for both populations (0.566 and 0.592 for MP and MR respectively) are lower than the ellipticity of ∼ 0.65 presented by Majewski et al. (2003). They are, however, still high, which is a sign of the tidal elongation of Sgr induced by interactions with the Milky Way. The higher ellipticity value for the MR population might also be related to its lower velocity dispersion.
Comparison with other dSph
The Fornax dSph is an interesting example of a dwarf galaxy that shows some similarities with the Sgr dSph galaxy. The work of De Boer et al. (2012) reports the existence of a radial gradient of age and metallicity, with more metal-rich and younger star-forming episodes concentrated in the central region of the system, and the oldest and more metal-poor stars (with ages ≥ 11 Gyr) appearing at all radii. The dominance of the intermediate-age stellar populations and the protracted SFH for both systems indicate that their dynamical masses were sufficient to retain enough gas (before the complete gas loss) to keep forming stars in their inner regions, after the gas fell back into the central potential well. It has also been suggested that the cause of the repeated peaks in the star formation might be a merger with a gas-rich companion. In the case of Fornax, a precise chemical estimation predicts the alpha-knee to occur at [Fe/H] ∼ −1.5 (corresponding to an age of 7-10 Gyr). Battaglia et al. (2006) found the older, more metal-poor population ([Fe/H] < −1.3, age > 10 Gyr) to be more spatially extended than the more metal-rich and younger population (with [Fe/H] > −1.3 and ages between 2-8 Gyr), which was more centrally concentrated. Comparable results are presented in other works, such as Stetson et al. (1998) and Wang et al. (2019). Also Coleman & de Jong (2008) and Del Pino et al. (2013) reported a protracted SFH and detected a gradient in the stellar populations, with the youngest and most metal-rich stars forming more segregated towards the centre of the galaxy.
Another compelling example is the Sculptor dSph galaxy, in which the metal-poor ([Fe/H] < −1.7) and metal-rich ([Fe/H] > −1.7) stellar components possess different kinematics and spatial distributions. That is, the more extended metal-deficient population has a higher velocity dispersion than the metal-rich population (Tolstoy et al. 2004), which was likely created after the enriched original gas sank back to the centre.
New sample of very metal-poor stars in Sgr
Very and extremely metal-poor stars in dwarf galaxies strongly reflect the early star formation history of their host systems and, being tracers of the first nucleosynthesis events, can help to constrain the properties of the first stars, such as their initial mass function. The high-resolution study targeting the metal-poor tail of Sgr performed by Hansen et al. (2018) discovered a similarity in the chemical composition between Sgr and the MW halo for these iron-depleted candidates, hinting that galaxies like Sgr contributed to the MW stellar halo. Hasselquist et al. (2017) reached a similar conclusion. By studying these types of objects, focusing on their chemical composition, it is possible to study past and/or ongoing accretion events.
Despite the lack of abundance measurements, the distribution of our unprecedented selection of VMP stars opens a window into the star formation processes behind this ancient population. It suggests that no such stars were formed recently in the inner area, as we found them to be quite diffuse. Yet they do also show a higher density (similarly to the other populations) in the centre, as seen in Figure 10. The distribution of this population can be linked to the gradual disruption of the progenitor, now leaving a core and the wide stellar streams. Indeed, it is also likely that tidal impulses, which Sgr experienced from the MW during passages at its pericenter, provoked a violent mixing of stars of different populations. This could have erased a possibly more pronounced radial metallicity profile in the Sgr progenitor, which was suspected to show an even greater fraction of more metal-rich stars tightly bound in the interior regions (Chou et al. 2007). These stars would have been removed at later times compared to the older and more metal-poor population, creating the known metallicity variations along the Sgr stellar streams (Chou et al. 2007; Hayes et al. 2020). Within this perspective, the remaining VMP objects in the core can be seen as left-overs of the more ancient population (≳ 10 Gyr), once hosted in the galaxy progenitor, which has been gradually stripped away and deposited in the streams, known to be on average 1 dex more metal-poor than the core (De Boer et al. 2015).
There is no overlap between our VMP selection and the APOGEE spectroscopic data (which contain mostly metal-rich stars), while the cross-match with the PIGS spectroscopic catalogue reveals 115 stars in common. Additional spectroscopic follow-up of the VMP candidates in our sample is required to further study the nature of these stars and the early chemical evolution of Sgr, and this effort is on-going.
CONCLUSIONS AND FUTURE WORK
In this work we presented the largest photometric metallicity study of the core of the Sagittarius galaxy to date, using metallicity-sensitive narrow-band photometry from the Pristine Inner Galaxy Survey (PIGS). To summarise the results: • By combining the PIGS photometry with the precise astrometry and broad-band photometry from Gaia EDR3, we were able to isolate bright giant Sgr stars (G0 ≤ 17.3) to build an unprecedented sample of 44 785 reliable Sgr members with metallicity information.
• Using photometric instead of spectroscopic metallicities allows a much more homogeneous analysis of Sgr populations of varying metallicity. The PIGS data cover ∼ 100 square degrees of Sgr out to 12 degrees along the semi-major axis from the centre (corresponding to ∼ 5.5 kpc at the distance of Sgr), covering most of the remnant of the dwarf galaxy core. We divided the Sgr stars into different metallicity populations, with our two main samples being the metal-poor (MP) with [Fe/H] < −1.3 and the metal-rich (MR) with [Fe/H] > −1.0.
• Our data reveal a metallicity gradient, with the metal-rich stars dominating in the inner regions and the metal-poor stars towards the outer regions. This is consistent with previous evidence of a metallicity gradient in Sgr, but we extend it to much larger radii than previously observed.
• We fitted models of the stellar density distributions for populations with various metallicities, separating metal-poor and metal-rich stars at [Fe/H] = −2.0, −1.5, −1.3 and −1.0. The most striking difference we find is a change in the scale radius as a function of metallicity: the metal-rich stars are more centrally concentrated, while the more metal-poor component is more diffuse and distributed as a spheroid with a larger effective radius.
• The PIGS photometry is still sensitive to metallicity for very metal-poor (VMP, [Fe/H] < −2.0) stars. We previously used it to select stars for low-/medium-resolution spectroscopic follow-up, resulting in the largest sample (∼ 100) of spectroscopically confirmed Sgr VMP stars. In this work, we further used the PIGS photometry to build an unprecedented sample of 1150 VMP candidates in Sgr with G0 < 17.3. This remarkable sample of iron-depleted stars is left over from an ancient population that was once hosted in the Sgr progenitor, which has likely partially been removed and distributed to the Sgr streams and/or the Galactic halo.
• We discussed how the history and evolution of Sgr could have impacted the various Sgr stellar populations. Our results are consistent with an outside-in quenching process, with an older, diffuse metal-poor stellar population and a younger, more centrally concentrated metal-rich counterpart forming at later times. Sgr had an extended and rich star formation history, forming different stellar populations with different spatial and chemical evolution. To further connect our detected metallicity gradient with the properties of the underlying stellar populations, we need better precision in metallicity and, currently missing, information about the ages of the different metal-poor populations.
Spectroscopic studies of the elemental abundances of dwarf systems pave the way to revealing their assembly and evolution histories in more depth, and increase our knowledge of metallicity distributions and age gradients in dwarf galaxies. For example, a strong connection exists between the initial mass function and the early chemical evolution. Chemical information allows us to shed light on important aspects of the life of a dwarf galaxy, such as the frequency of star formation episodes, the stellar yields, and the mixing processes that took place in the interstellar medium. Another factor in the evolution of a dwarf galaxy is its interactions with other galaxies, and Sgr is a unique example of a complex disrupting dwarf system (core and streams).
By exploring with high-resolution spectroscopy the chemical composition of the Sgr stellar populations, especially focusing on its metal-poor tail, it will be possible to constrain its star formation history and characterise the early evolution of this dSph galaxy. We are planning spectroscopic follow-up of our unprecedented Sgr sample with metallicity information, the result of the powerful combination of the metallicity-sensitive photometry from the Pristine survey (Starkenburg et al. 2017) and the excellent Gaia EDR3 data. | 2022-04-27T06:47:52.906Z | 2022-04-26T00:00:00.000 | {
"year": 2022,
"sha1": "453f06e4a0b57e3cd8668ea9dfdc8a702242418a",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "4e06facfa84223b916ea9bc04be4ae055c4c11e5",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
250680688 | pes2o/s2orc | v3-fos-license | Fourier and Schur-Weyl transforms applied to XXX Heisenberg magnet
Similarities and differences between Fourier and Schur-Weyl transforms have been discussed in the context of a one-dimensional Heisenberg magnetic ring with $N$ nodes. We demonstrate that the main difference between them corresponds to a different partitioning of the Hilbert space of the magnet. In particular, we point out that application of the quantum Fourier transform corresponds to splitting the Hilbert space of the model into subspaces associated with the orbits of the cyclic group, whereas the Schur-Weyl transform corresponds to splitting into subspaces associated with orbits of the symmetric group.
Mathematical description of the model
Let us consider a one-dimensional Heisenberg magnetic ring of N nodes, each with a single-node spin s and periodic boundary conditions. Such a model reveals a symmetry under collective unitary rotations u ∈ SU(n) in the single-node spaces, and permutations σ ∈ Σ_N of nodes. Mathematically, we have two sets

Ñ = {j = 1, 2, ..., N}, ñ = {i = 1, 2, ..., n}, n = 2s + 1, (1)

the set of N nodes of the crystal, and the set of single-node states (or the set of labels of projections m_i = s + 1 − i, i ∈ ñ, of the single-node spin s), respectively. A state of the whole magnet is given by assigning the spin projection i to each node j of the magnet, f : Ñ → ñ, and it can be presented as

|f⟩ = |i_1 i_2 ... i_N⟩ = |i_1⟩ ⊗ |i_2⟩ ⊗ ... ⊗ |i_N⟩, i_j = f(j). (2)

The set

ñ^Ñ = {f : Ñ → ñ} (3)

of all states of the form (2) constitutes the computational basis (⟨f'|f⟩ = δ_{ff'}, f, f' ∈ ñ^Ñ) which spans the Hilbert space of the model

H = lc_C ñ^Ñ, (4)

where lc_C ñ^Ñ denotes the linear closure of the set ñ^Ñ over the field C of complex numbers. The space (4) consists of all linear combinations of magnetic configurations with complex coefficients. In order to exploit the permutational symmetry of the model, let us consider the action A : Σ_N × ñ^Ñ → ñ^Ñ of the symmetric group Σ_N on the basis states (2), as a purely permutational representation

A(σ)|f⟩ = |f ∘ σ^{-1}⟩, (5)

where f ∘ σ^{-1} is the composition of the mappings f : Ñ → ñ and σ^{-1} : Ñ → Ñ, so that |f ∘ σ^{-1}⟩ = | ... i_{σ^{-1}(j)} ... ⟩, and A(σσ') = A(σ)A(σ'). This action decomposes the set (3) of all magnetic configurations into orbits O_μ of the symmetric group, labelled by weights μ. The weight is a composition

μ = (μ_1, μ_2, ..., μ_n), μ_1 + μ_2 + ... + μ_n = N, (6)

where a part μ_i is the occupation number of the single-node state i ∈ ñ for f ∈ O_μ. In other words, the weight μ characterises the distribution of single-node states over the nodes of the magnet, in such a way that the part μ_i is equal to the number of nodes with the single-node state |i⟩. The restriction of the action A to the orbit O_μ gives a transitive representation R^{Σ_N : Σ^μ} of Σ_N, with the Young subgroup Σ^μ = Σ_{μ_1} × Σ_{μ_2} × ... × Σ_{μ_n} as the stabiliser of an initial magnetic configuration in the orbit O_μ.
2. Quantum Fourier transform on orbits of the cyclic group. The basis of orbits.
It is also the case that the magnet is invariant under the action of the cyclic subgroup C_N ⊂ Σ_N; this implies that each Σ_N-orbit (6) decomposes into C_N-orbits, in accordance with the restriction

R^{Σ_N : Σ^μ} ↓ C_N = ⊕_{κ ∈ K(N)} m(μ, κ) R^{C_N : C_κ}, (11)

where R^{C_N : C_κ} is a transitive representation of C_N, with the cyclic group C_κ ⊂ C_N being the stabiliser, so that κ is a divisor of N, and K(N) is the lattice of all divisors of N. The multiplicity m(μ, κ) in equation (11), that is, the multiplicity of occurrence of the transitive representation R^{C_N : C_κ} of the cyclic group in the transitive representation R^{Σ_N : Σ^μ} of the symmetric group, or the number of κ-tuply rarefied C_N-orbits in the orbit O_μ of the symmetric group Σ_N, is given by the combinatoric formula [1]

m(μ, κ) = (κ/N) Σ_{d | gcd(μ_1/κ, ..., μ_n/κ)} μ(d) (N/(κd))! / [(μ_1/(κd))! ... (μ_n/(κd))!], (12)

provided κ divides every part μ_i (and m(μ, κ) = 0 otherwise). Here, gcd(μ_1/κ, ..., μ_n/κ) denotes the greatest common divisor of the integers μ_i/κ, i ∈ ñ, and μ : Z_{>0} → {0, ±1} is the standard Möbius function of number theory.
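For small rings the statement above can be checked directly. The sketch below is an illustrative brute-force verification (not taken from the original paper): it enumerates all magnetic configurations of a given weight, splits them into cyclic orbits, and compares the number of orbits with stabiliser C_κ against the Möbius-type formula.

```python
from itertools import permutations
from math import factorial, gcd, prod
from functools import reduce

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def mobius(n):
    """Moebius function: 0 if n has a squared prime factor, else (-1)^(number of prime factors)."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

def orbits_by_stabiliser(N, mu):
    """Count C_N-orbits on configurations of weight mu, grouped by the stabiliser order kappa."""
    letters = [i for i, m in enumerate(mu) for _ in range(m)]
    configs = set(permutations(letters))
    counts = {}
    while configs:
        f = next(iter(configs))
        orbit = {f[j:] + f[:j] for j in range(N)}    # all cyclic translations of f
        configs -= orbit
        kappa = N // len(orbit)                      # |stabiliser| = N / (orbit length)
        counts[kappa] = counts.get(kappa, 0) + 1
    return counts

def m_formula(N, mu, kappa):
    """Number of kappa-tuply rarefied C_N-orbits in O_mu (Moebius-inversion formula)."""
    if any(part % kappa for part in mu):
        return 0
    g = reduce(gcd, [part // kappa for part in mu])
    total = sum(mobius(d) * factorial(N // (kappa * d)) // prod(factorial(part // (kappa * d)) for part in mu)
                for d in divisors(g))
    return kappa * total // N

N, mu = 6, (4, 2)
print(orbits_by_stabiliser(N, mu))                    # {1: 2, 2: 1}
print({k: m_formula(N, mu, k) for k in divisors(N)})  # agrees with the direct count (zero where absent)
```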
Thus, A ↓ C_N (the action A restricted to the subgroup C_N) decomposes each Σ_N-orbit (6) into strata labelled by κ ∈ K(N), consisting of κ-tuply rarefied C_N-orbits. In this way, we obtain a new set of labels which classify, in a different way, all basis states of the magnet,

|f⟩ = |μ κ t j⟩, (13)

where t ∈ {1, ..., m(μ, κ)} labels the orbits of C_N on O_μ in the stratum κ, and j ∈ {1, ..., κ̃} labels configurations within a C_N-orbit, κ̃ = N/κ being the length of the κ-tuply rarefied C_N-orbit. We call (13) the basis of orbits in the set (3) of all magnetic configurations [2]. The basis of orbits specifies all positions of the classical counterpart of the Heisenberg magnet, with the structure imposed by the cyclic group.
But sometimes, especially if we work in the field of quantum information [3], it is more convenient to pass from the position representation to the momentum one, and the basis (13) allows us to do it for each C_N-orbit separately. In order to present this approach, let us observe that each C_N-orbit spans a subspace in H which is invariant under C_N, according to the decomposition of the transitive representation R^{C_N : C_κ} of the cyclic group into irreps,

R^{C_N : C_κ} = ⊕_{k ∈ B_κ} Γ_k,

where Γ_k = exp(2πik/N) is the irrep of C_N, B_κ = {k ∈ B | k ≡ 0 (mod κ)} is the κ-tuply rarefied Brillouin zone, and B = {0, 1, ..., N − 1} is the Brillouin zone of the ring. One can see that the one-dimensional subspaces which carry the irreps Γ_k are spanned by the states

|μ κ t k⟩ = κ̃^{-1/2} Σ_{j ∈ κ̃} exp(2πi k j / N) |μ κ t j⟩, k ∈ B_κ, (17)

given by the quantum Fourier transform on orbits of the cyclic group.
An example of quantum Fourier transform
For a very simple case, the Heisenberg magnet with N nodes and one spin deviation, we have the weight μ = (N − 1, 1), so that gcd(N − 1, 1) = 1, which means that we have here one orbit of the symmetric group which consists of one stratum (κ = 1), and this stratum consists of just one regular orbit of the cyclic group. Thus from Eq. (17) we have

|k⟩ = N^{-1/2} Σ_{j ∈ Ñ} exp(2πi k j / N) |j⟩, k ∈ B. (18)

The transformation (18) is exactly the quantum Fourier transform on the orthonormal basis {|j⟩ | j ∈ Ñ} (cf. for example [3]).
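As an illustration of Eq. (18), the following sketch builds the corresponding N × N unitary matrix acting on the one-spin-deviation configurations |j⟩ and checks its unitarity; the value of N is arbitrary.

```python
import numpy as np

def qft_matrix(N):
    """F[k, j] = exp(2*pi*1j*k*j/N)/sqrt(N): the transform (18) on the basis {|j>}."""
    j = np.arange(N)
    return np.exp(2j * np.pi * np.outer(j, j) / N) / np.sqrt(N)

N = 6
F = qft_matrix(N)
print(np.allclose(F @ F.conj().T, np.eye(N)))   # True: the transform is unitary
print(np.round(F[2], 3))                        # the state |k=2> expanded over the |j> basis
```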
Schur-Weyl duality and the irreducible basis

The symmetric group Σ_N acts on H by the representation A of Eq. (5), whereas the unitary group acts by the collective rotations B(u) = u ⊗ u ⊗ ... ⊗ u (N factors). These two actions decompose as

A ≅ ⊕_{λ ∈ D_W(N,n)} (dim D^λ) Δ^λ, B ≅ ⊕_{λ ∈ D_W(N,n)} (dim Δ^λ) D^λ,

where Δ^λ and D^λ are irreducible representations of the symmetric and unitary group, respectively, D_W(N, n) denotes the set of all partitions of the integer N into not more than n parts, and the appropriate multiplicities, on the strength of Schur-Weyl duality [4], satisfy the relations

mult(Δ^λ in A) = dim D^λ, mult(D^λ in B) = dim Δ^λ. (20)

Relations (20) stem from the quantum-mechanical observation that these two actions mutually commute, that is, A(σ)B(u) = B(u)A(σ), despite the fact that both dual groups are, for N > 2, n > 1, highly noncommutative. In this way the Hilbert space H of all quantum states of the composite system decomposes into sectors

H = ⊕_{λ ∈ D_W(N,n)} H^λ, H^λ ≅ Δ^λ ⊗ D^λ.

Thus the restriction of the action A to the sector H^λ gives dim D^λ copies of the irrep Δ^λ of the symmetric group, whereas the restriction of the action B to the same sector H^λ gives dim Δ^λ copies of the irrep D^λ of the unitary group. The Schur-Weyl duality therefore admits the irreducible basis of the form

b_irr = {|λ t y⟩ | λ ∈ D_W(N, n), t ∈ D̃^λ, y ∈ Δ̃^λ},

where D̃^λ and Δ̃^λ are some standard bases for the irreps D^λ and Δ^λ, respectively. We call this basis irreducible because its elements transform under the action of the symmetric or unitary group according to these irreps. It is convenient to take the standard bases in the form

Δ̃^λ = SYT(λ), D̃^λ = SSWT(λ, ñ), (24)

where SYT(λ) denotes the set of all standard Young tableaux of the shape λ in the alphabet Ñ of nodes, and SSWT(λ, ñ) is the set of all semistandard Weyl tableaux of the shape λ in the alphabet ñ of spins. Finally, on the strength of the Schur-Weyl duality (20), and after choosing the basis elements as in Eq. (24), we obtain

R^{Σ_N : Σ^μ} = ⊕_{λ ≥ μ} K_{λμ} Δ^λ, (25)

where the sum runs over all partitions λ greater than or equal to μ in the dominance order, and K_{λμ} denotes the Kostka number [5]. Combinatorially, K_{λμ} is equal to the number of all semistandard Weyl tableaux of the shape λ and the weight μ. The irreducible basis states are expressed in the basis of magnetic configurations as

|λ t y⟩ = Σ_{f ∈ O_μ} ⟨μ f | λ t y⟩ |μ f⟩, (26)

where the coefficients ⟨μ f | λ t y⟩ form a unitary matrix which transforms the initial basis O_μ of magnetic configurations into the irreducible one of the Schur-Weyl duality. We refer to it as the Kostka matrix at the level of bases. A method of calculation of such a transformation matrix relies on the representation theory technique called pattern calculus [6] and was developed in the work [7].
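The Kostka numbers entering the decomposition above can be computed directly for the small shapes relevant to short rings. The sketch below is an illustrative brute-force count of semistandard Young tableaux of a given shape and weight (letters are added one at a time as horizontal strips); it is not taken from the pattern-calculus machinery of [6,7].

```python
def kostka(shape, weight):
    """Kostka number K_{shape,weight}: the number of semistandard Young tableaux
    of the given shape containing weight[i] copies of the letter i."""
    shape = tuple(shape)
    n_rows = len(shape)

    def count(inner, letters_left):
        # `inner` is the partition filled so far; add the next letter as a horizontal strip
        if not letters_left:
            return 1 if inner == shape else 0
        size = letters_left[0]
        total = 0

        def extend(row, remaining, outer):
            nonlocal total
            if row == n_rows:
                if remaining == 0:
                    total += count(tuple(outer), letters_left[1:])
                return
            lo = inner[row]
            hi = shape[row] if row == 0 else min(shape[row], inner[row - 1])  # interlacing condition
            hi = min(hi, lo + remaining)
            for length in range(lo, hi + 1):
                extend(row + 1, remaining - (length - lo), outer + [length])

        extend(0, size, [])
        return total

    return count((0,) * n_rows, tuple(weight))

# Examples for a short ring (N = 3, n = 2, weight mu = (2, 1)):
print(kostka((3,), (2, 1)))       # 1
print(kostka((2, 1), (2, 1)))     # 1
print(kostka((2, 1), (1, 1, 1)))  # 2 (= number of standard Young tableaux of shape (2,1))
```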
One can see that the states given by Eq. (26) are certain wave packets of magnetic configurations belonging to the orbit O_μ of the symmetric group. These states have a strictly defined symmetry, described by the Weyl tableau t and the Young tableau y, which means that they transform under the action of the symmetric and unitary groups according to the corresponding irreps.
In the one-spin-deviation example considered above, the irreducible basis states take the form of combinations |λ y⟩ = Σ_{j ∈ Ñ} ⟨j | λ y⟩ |j⟩, where |j⟩ denotes the magnetic configuration (the state with the spin deviation at the node j) and the coefficient ⟨j | λ y⟩ can be computed from the corresponding formula for the transformation matrix discussed above.
Final remarks and conclusions
Two transformations, the quantum Fourier and the Schur-Weyl one, are very similar. The first is strictly connected with the translational symmetry of the magnet and corresponds to splitting of the Hilbert space into subspaces associated with the orbits of the cyclic group. The second, however, arises from the permutational symmetry and corresponds to splitting of the Hilbert space into subspaces spanned on the orbits of the symmetric group. These two decompositions of the Hilbert space are very important, because the subspaces they create are invariant under the action of the Hamiltonian, which diminishes the size of the eigenproblem by a factor of N (in the case of the Fourier transform) or dim Δλ (in the case of the Schur-Weyl transform). | 2022-06-28T02:46:42.225Z | 2010-01-01T00:00:00.000 | {
"year": 2010,
"sha1": "18c7c709292cc0b47fbd62f0068661a4997edb1f",
"oa_license": null,
"oa_url": "http://iopscience.iop.org/article/10.1088/1742-6596/213/1/012018/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "18c7c709292cc0b47fbd62f0068661a4997edb1f",
"s2fieldsofstudy": [
"Physics",
"Mathematics"
],
"extfieldsofstudy": [
"Physics"
]
} |
39978331 | pes2o/s2orc | v3-fos-license | Choledochal cysts-Classification, physiopathology, and clinical course
Although biliary canal cysts were first described around the 1720s, the aetiology, physiopathology, natural course, and treatment options of the disease remain controversial. These cysts are becoming more common and can now be more easily diagnosed thanks to recent developments in imaging methods. Nevertheless, if left undiagnosed, the risk of progressive complications such as spontaneous perforation, cholelithiasis, choledocholithiasis, cholangitis, secondary biliary cirrhosis, portal hypertension, and development of malignancies should be considered. In this review, we discuss the epidemiology, classification, physiopathology, carcinogenesis, and clinical course of biliary cysts.
Introduction
Choledochal cysts (CCs) are rare medical conditions, which are congenital cystic dilatations of any portion of the bile ducts, most often occurring in the main portion of the common bile duct. Although choledochal cysts are considered a disorder of childhood and infancy, the ages in reported cases range from newly born to 80 years old; however, 60% of such cysts are diagnosed in patients less than 10 years old [1-6].
Epidemiology
Choledochal cysts (CCs) are extremely rare, with an incidence of about 1/100-150,000 in Western societies. The disease affects 1 in 13,500 live births in the USA and 1 in 15,000 live births in Australia. It is seen more frequently in Asians; two out of three cases are of Japanese origin, with a reported incidence of 1/1,000. There is significant female gender predominance (F/M: 3-4/1). The cause of this female and Asian origin predominance is unknown [6].
Classification
Alonso-Lej defined three types of biliary dilatations in 1959; this classification system has since been widely accepted. Todani expanded this classification in 1977 and divided the CCs into five subgroups. Todani re-modified the classification to include pancreatic junctional abnormalities, and the resulting system became the final and most commonly used classification method [6] (Table 1) (Figure 1). According to the Todani classification, CCs are classified as follows:
Type IA
Cystic dilatation of the extrahepatic bile ducts.
The amylase level in the cystic common bile duct has also been found to be higher in this patient population than in control groups in some studies [18,19]. Furthermore, high amylase levels have been associated with early clinical findings and the degree of dysplasia. According to this theory, symptoms are seen at an earlier age when the severity of the pancreaticobiliary reflux and the amylase level are higher, whereas the disease is more asymptomatic and becomes complicated at an older age when the severity of reflux and the amylase level are lower.
Since amylase can be used to determine the severity of the pancreaticobiliary reflux, trypsinogen and phospholipase-A2 levels have also been investigated as disease markers and were found to be increased in patients with CC [19-21]. Interestingly, trypsinogen was found to be activated to trypsin in 61% of cases in the biliary tract and in 65% of cases in the gallbladder [19]. Enterokinase is necessary for this activation, and normally it cannot be produced in mucosa other than the duodenal wall. It has been proposed that enterokinase is secreted by the dysplastic biliary epithelium, which develops secondary to the pancreatic reflux, and that activation of trypsinogen and of lecithin to lysolecithin through phospholipase-A2 leads to inflammation and destruction in the wall of the biliary tract. This theory is further supported by animal experiments in which a pancreaticobiliary junction abnormality is produced surgically, leading to a biliary tract dilatation [21,22].
The presence of a pancreaticobiliary reflux has been confirmed in patients with a CC following administration of secretin, which increases pancreatic secretions: in those patients, secretin causes dilatation of the biliary tract and gallbladder. In the control group, on the other hand, only the duodenum was shown to be filled, confirming the presence of a pancreaticobiliary reflux in CC patients. This also supports the hypothesis that the presence of a pancreaticobiliary reflux is responsible for the pathogenesis of the "coarse form" of the disease [23,24].
Only 50-80% of CCs demonstrate an association with a pancreaticobiliary junction abnormality. In addition, the presence of antenatally diagnosed CCs, despite the immaturity of pancreatic secretions at that stage, also suggests that this theory is not fully adequate [25,26]. Also, when evaluating the long common channel theory, it is unclear which length defines "long", since common channels of 10-45 mm have been demonstrated. Therefore, it has been suggested by some authors that a junction at a level other than the duodenal wall should be accepted as long, since this might allow mixing of pancreatic secretions with bile, leading to reflux [21].
Another theory is related to the congenital origin of CCs. Excess growth of the immature epithelium in the biliary tract during the development phase, or its absence at any phase, has been suggested to cause biliary tract dilatation [27,28]. A study that evaluated neonatal cystic CCs found that the numbers of neurons and ganglia were decreased in these cases [29]. Based on this finding, cystic dilatations were suggested to develop secondary to a functional obstruction at the distal part of the biliary tract, similar to Hirschsprung disease, rather than being acquired, like fusiform dilatations, due to abnormal reflux. Another study found that elastin fibrils in the biliary tract are absent before the first year of life; cystic dilatations were proposed to develop before an individual is 1 year old, while fusiform dilatations were suggested to develop after 1 year of life secondary to increased pressure in the biliary tract [29,30].
Another theory proposes that dilatations seen in adults develop due to obstructions at the distal biliary tract secondary to various abnormalities (Oddi sphincter dysfunction, scar tissue and gallstones), and that a long narrow stenosis results in a cystic dilatation while a short wide stenosis results in a fusiform dilatation [8,29]. According to this theory, both distal and hilar-intrahepatic stenoses are necessary for the development of Type IVA cysts.
Type IB Choledochal Cysts: The gallbladder opens into a biliary duct of normal diameter which is proximal to the cyst. The intrahepatic biliary tree is preserved.
Type IC Choledochal Cysts: Characterised by a regular and fusiform dilatation extending from the pancreaticobiliary junction into the intrahepatic biliary tract.
Type II Choledochal Cysts: Characterised by a diverticulum originating from the extrahepatic biliary tract that is generally connected to the tract via a narrow peduncle.
Type III Choledochal Cysts: Characterised by a dilatation limited to the duodenal wall in the distal part; named a choledochocele since it resembles a ureterocele morphologically and aetiologically. The outer wall of the cyst contains almost exclusively duodenal mucosa while the inner wall may include duodenal or biliary epithelium. This lesion has been divided into five subgroups by some authors according to the associations of the choledochocele with the ampulla Vateri and the pancreatic channel; this classification has gained much support [7].
Type IVA Choledochal Cysts: Characterised by multiple intra- and extrahepatic dilatations. The intrahepatic dilatation may be cystic, fusiform or irregular. In addition, Todani reported that these cysts could be classified as cystic-cystic, cystic-fusiform or fusiform-fusiform according to the shape of the intrahepatic and extrahepatic dilatations [8].
Type IVB Choledochal Cysts: Characterised by multiple dilatations including only the extrahepatic biliary tracts. The morphology of this type of CC can be described as "beads" or as "a bunch of grapes" [9].
Type V Choledochal Cysts: These cysts, also known as Caroli disease, are characterised by multiple intrahepatic saccular or cystic dilatations. Caroli disease describes isolated biliary dilatations, while Caroli syndrome describes biliary dilatations along with congenital hepatic fibrosis [10]. Some authors have described Caroli disease in addition to extrahepatic CC; however, they were unable to differentiate this entity from Type IVA CC. On the other hand, some authors have reported that differentiation can be based on an extrahepatic diffuse fusiform dilatation diameter < 3 cm, in addition to intrahepatic saccular dilatation [11,12].
Furthermore, a subgroup known as the "coarse form" refers to cases that present with abdominal pain and obstructive jaundice and that include pancreatobiliary junction abnormalities but no dilatation of the biliary tract. Patients in this group have the same clinical findings as with CCs; histological inflammation and malignancy potential are believed to represent another facet of the disease [13,14]. Other than these, combined types including Type I and II CCs have also been described [15].
Incidences have been reported as 50-80% for Type I, 2% for Type II, 1.4-4.5% for Type III, 15-35% for Type IV and 20% for Type V.
Physiopathology
Although the exact aetiology of CC is unknown, many theories have been proposed for the pathophysiology of the condition. The most widely accepted hypothesis is Babbitt's theory, which states that the long common channel develops due to a pancreatobiliary junction abnormality. According to this theory, the long common channel allows mixing of the pancreatic secretions and bile for longer than usual, activating pancreatic enzymes. The activated enzymes then cause inflammation and destruction in the wall of the biliary tract, causing dilatation [16]. Also, high pressure inside the pancreatic channel causes progressive dilatation of the weak wall of the cyst [17].
In general, those theories are meant to explain Type I and Type IV CCs. Type II cysts, on the other hand, are diverticular cysts that demonstrate minimal inflammation and carcinogenic potential histologically. Therefore, it is unclear whether these cysts develop secondary to the causes explained in the theories stated above or whether they are true biliary duplications [31].
Type III cysts (choledochoceles) have been proposed to develop secondary to a pressure increase in the distal intramural biliary tract due to an ampullary obstruction or sphincter dysfunction. Some authors, on the other hand, suggest that choledochoceles may actually be duodenal or biliary duplication cysts, since they can contain duodenal or biliary inner epithelium [32,33].
Carcinogenesis
The premalignant nature of CC has been widely recognised; not only is the development of malignancy more frequent, the age at development of malignancy is also earlier than in the normal population [34]. Malignancy is a result of chronic inflammation, which leads to dysplasia, and may also develop secondary to recurrent cholangitis and pancreatic reflux [35-37]. The risk of malignancy associated with a CC has been reported to be 10 to 15% in the overall population; however, this rate increases with increasing age [21,38]. The risk of malignancy is 23% at the age of 20 to 30 years, while it can increase up to 75% at the age of 70 to 80 years [35,39]. Malignancies include adenocarcinomas in 73 to 84% of cases, anaplastic carcinomas in 10%, undifferentiated carcinomas in 5 to 7%, squamous cell carcinomas in 5%, and other types of carcinoma in 1.5% [40,41]. These malignancies affect the extrahepatic biliary tract in 50 to 62% of cases, the gallbladder in 38 to 46%, the intrahepatic biliary tract in 2.5%, and the liver and pancreas in 0.7% [35]. The presence of a pancreaticobiliary junction abnormality carries a risk of malignancy in 16 to 55% of cases, regardless of whether a biliary dilatation is present [35,42,43]. The malignancy risk in the coarse form without biliary tract dilatation is 12 to 39%. While malignancies usually develop inside the cyst, in the coarse form they develop in the gallbladder. Therefore, some authors have suggested that tumours are most common in areas of highest exposure to biliary irritation (inside the cyst in patients with CC, or in the gallbladder if there is no cyst) [36-38]. The risk of malignancy is 7-15% in Caroli disease and 2.5% in a choledochocele [44-47].
Clinical course
Although the symptoms of biliary cysts can be seen at any age, they manifest before the age of 10 years in 80% of cases. Although the triad of abdominal pain, jaundice and an intraabdominal palpable mass is known as the classical clinical presentation, it is rare for a patient to present with all three signs (∼20%); however, two out of three of those symptoms are present in 8% of cases [48,49]. In the neonatal period, patients often present with abdominal pain and mechanical jaundice (< 12 months), while older patients present with abdominal pain, nausea and vomiting, and jaundice [50-52].
The initial symptoms may be abdominal pain and signs of peritonitis due to cyst rupture in 1-2% of cases [74]. Cyst rupture is thought to occur because the cyst wall, which becomes more fragile secondary to a distal obstruction in the biliary tract or increased intraabdominal pressure, cannot withstand the tension [75]. Perforation is often seen at the junction of the cystic duct and the main hepatic duct, which has the weakest blood supply in the biliary tract [74,75]. In cases of perforation, although the clinical findings are extremely aggressive, radiographic diagnosis is challenging because the dilatations in the biliary tract disappear.
"year": 2016,
"sha1": "2911b6dc3a2cb84c3516e63949a82c791a6ff2d1",
"oa_license": "CCBY",
"oa_url": "https://oatext.com/pdf/ICST-3-209.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "ed90f2c4e55051fdaad3422d1896cd30cfc7e1c4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
17894865 | pes2o/s2orc | v3-fos-license | LR and L+R Systems
We consider coupled nonholonomic LR systems on the product of Lie groups. As examples, we study $n$-dimensional variants of the spherical support system and the rubber Chaplygin sphere. For a special choice of the inertia operator, it is proved that the rubber Chaplygin sphere, after reduction and a time reparametrization, becomes an integrable Hamiltonian system on the $(n-1)$-dimensional sphere. Also, we show that an arbitrary L+R system introduced by Fedorov can be seen as a reduced system of an appropriate coupled LR system.
Introduction
In this paper we study nonholonomic geodesic flows on direct products of Lie groups with specially chosen right-invariant constraints and left-invariant metrics.
Let Q be an n-dimensional Riemannian manifold with a nondegenerate metric κ(·, ·) and let D be a nonintegrable (n − k)-dimensional distribution on the tangent bundle TQ. A smooth path q(t) ∈ Q, t ∈ ∆, is called admissible (or allowed by the constraints) if the velocity q̇(t) belongs to D_{q(t)} for all t ∈ ∆. Let q = (q_1, ..., q_n) be some local coordinates on Q in which the constraints are written in the form

(α^j(q), q̇) = Σ_{i=1}^n α^j_i q̇_i = 0, j = 1, ..., k, (1.1)

where the α^j are independent 1-forms. The admissible path q(t) is called a nonholonomic geodesic if it satisfies the Lagrange-d'Alembert equations

d/dt (∂L/∂q̇_i) − ∂L/∂q_i = Σ_{j=1}^k λ_j α^j_i(q), i = 1, ..., n, (1.2)

where the Lagrange multipliers λ_j are chosen such that the solutions q(t) satisfy the constraints (1.1), and the Lagrangian is given by the kinetic energy L = (1/2) κ(q̇, q̇) = (1/2) Σ_{ij} κ_{ij} q̇_i q̇_j. After the Legendre transformation p_i = ∂L/∂q̇_i = Σ_j κ_{ij} q̇_j, i = 1, ..., n, one can also write the Lagrange-d'Alembert equations as a first-order system on the cotangent bundle T*Q. As for Hamiltonian systems, the Lagrangian L(q, q̇) (or the Hamiltonian H(q, p) = (1/2) Σ_{ij} κ^{ij} p_i p_j in the cotangent representation of the flow) is always a first integral of the system.
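In practice, the multipliers λ_j in (1.2) are obtained by differentiating the constraints (1.1) along the flow. The sketch below illustrates this for a kinetic-energy Lagrangian, writing the equations of motion in the form K(q) q̈ = b(q, q̇) + A(q)^T λ with A(q) q̇ = 0, where b collects the velocity-dependent (Christoffel) terms; the callables K, b, A and Adot are user-supplied placeholders and are not part of the paper.

```python
import numpy as np

def constrained_acceleration(q, qd, K, b, A, Adot):
    """Accelerations and multipliers for K(q) qdd = b(q, qd) + A(q)^T lam with A(q) qd = 0."""
    Kq, bq, Aq, Adq = K(q), b(q, qd), A(q), Adot(q, qd)
    Kinv_b = np.linalg.solve(Kq, bq)
    Kinv_AT = np.linalg.solve(Kq, Aq.T)
    # Differentiate the constraints: A qdd + Adot qd = 0, then solve the k x k system for lam.
    lam = -np.linalg.solve(Aq @ Kinv_AT, Aq @ Kinv_b + Adq @ qd)
    qdd = Kinv_b + Kinv_AT @ lam
    return qdd, lam
```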
Suppose that a Lie group K acts by isometries on (Q, κ) (the Lagrangian L is K-invariant) and let ξ_Q be the vector field on Q associated to the action of the one-parameter subgroup exp(tξ), ξ ∈ k = T_Id K. The following version of the Noether theorem holds (see [1,2]): if ξ_Q is a section of the distribution D, then the momentum (∂L/∂q̇, ξ_Q(q)) is a first integral of the flow (1.2). On the other side, let ξ_Q be transversal to D for all ξ ∈ k. In addition, suppose that Q has a principal bundle structure π : Q → Q/K and that D is the collection of horizontal spaces of a principal connection. Then the nonholonomic geodesic flow defined by (Q, κ, D) is called a K-Chaplygin system. The system (1.2) is K-invariant and reduces to the tangent bundle T(Q/K) = D/K (for the details see [26,2,8,11]).
The equations (1.2) are not Hamiltonian. However, in some cases they have a rather strong property: an invariant measure (e.g., see [1,27,4]). Within the class of K-Chaplygin systems, the existence of an invariant measure is closely related to their reducibility to a Hamiltonian form after an appropriate time rescaling dτ = N dt (see [10,29,18,8,11]).
Veselov and Veselova [30,31] constructed nonholonomic systems on unimodular Lie groups with right-invariant nonintegrable constraints and left-invariant metrics, so-called LR systems, and showed that they always possess an invariant measure. Similar integrable nonholonomic problems on Lie groups, with left- and right-invariant constraints, are studied in [17,21,22,3,19]. Recently, a nontrivial example of a nonholonomic LR system, which can also be regarded as a generalized Chaplygin system (the n-dimensional Veselova rigid body problem [30,17]), such that the Chaplygin reducibility theorem is applicable in any dimension, was given by Fedorov and Jovanović [18].
It appears that LR systems can be viewed as a limit case of certain artificial systems (L+R systems) on the same group, which also possess an invariant measure (see Fedorov [15]). The latter systems do not have a straightforward mechanical or geometric interpretation and arise as a "distortion" of a geodesic flow on G whose kinetic energy is given by a sum of a left- and a right-invariant metric.
A class of L+R systems on G can be seen as a reduction of a class of nonholonomic systems defined on the semi-direct product of the group G and a vector space V (see Theorems 3 and 4 in Schneider [28]). We shall prove that an arbitrary L+R system on G can be obtained as a reduction of a coupled nonholonomic LR system defined on the direct product G × G.
One of the best known examples of integrable nonholonomic systems with an invariant measure is the celebrated Chaplygin sphere, which describes a dynamically non-symmetric ball rolling without sliding on a horizontal plane, the center of mass being assumed to be at the geometric center [9]. It is interesting that the Chaplygin sphere appears within both constructions. In the construction described in [28] one should take for the configuration space the Lie group SE(3) of Euclidean motions, that is, the semi-direct product of SO(3) and R^3 [28]. On the other side, the Chaplygin sphere is an LR system on the direct product SO(3) × R^3 (e.g., see [16]). This was the starting point for considering the coupled nonholonomic LR systems below.
Outline and results of the paper. In Section 2 we recall the definition and basic properties of LR and L+R systems. We define the coupled LR systems and show that any L+R system can be obtain as a reduction of an appropriate coupled LR system (Sections 3, 4). An example of a coupled LR system on G × g is given, which provides an alternative generalization of the Chaplygin sphere problem (Section 4, system (6.17) in Section 6).
In Section 5 we study an n-dimensional variant of the spherical support system introduced by Fedorov [13]: the motion of a dynamically nonsymmetric ball S of unit radius around its fixed center, touching N arbitrary dynamically symmetric balls whose centers are also fixed, with no sliding at the contact points.
Recall that the rubber rolling of the sphere S^2 over some other fixed convex surface in R^3 means that, in addition to the constraint that the velocity of the contact point is equal to zero, we have the no-twist condition: rotations about the normal to the surface are forbidden. The rubber rolling of a dynamically non-symmetric sphere over another fixed sphere, considered as a Chaplygin system on the bundle SO(3) × S^2 → S^2 (where SO(3) acts diagonally on the total space), as well as its Hamiltonization in sphero-conical variables on S^2, is given by Koiller and Ehlers [12]. The integrable cases are found by Borisov and Mamaev [7]. In particular, when the radius of the fixed sphere tends to infinity, we get the rubber rolling of the sphere over the plane (the rubber Chaplygin sphere). The Chaplygin reducing multiplier for the rubber Chaplygin sphere is given in [11].
By analogy, we define the n-dimensional rubber spherical support system with additional no-twist conditions at the contact points. It appears that both systems fit into the construction of coupled LR systems. As for the 3-dimensional spherical support system studied in [13], we prove that the 3-dimensional rubber spherical support system is integrable (Section 5).
Finally, in Section 6 we consider the n-dimensional rubber Chaplygin sphere problem describing the rolling without slipping and twisting of an n-dimensional ball on an (n − 1)-dimensional hyperplane H in R^n as a coupled LR system on the direct product SO(n) × R^{n−1}. It appears that the rubber Chaplygin sphere is an (SO(n − 1) × R^{n−1})-Chaplygin system closely related to the n-dimensional nonholonomic Veselova problem, which allows us to prove the existence of the Chaplygin multiplier for a specially chosen inertia operator of the ball. In particular, when n = 3, the multiplier exists for any inertia tensor of the ball, and reduces to the one obtained in [11,12].
Preliminaries
LR systems. An LR system on a Lie group G is the nonholonomic geodesic flow of a left-invariant metric with a right-invariant nonintegrable distribution D ⊂ TG (see [30,31]). Throughout the paper we suppose that all considered Lie groups G have bi-invariant Riemannian metrics, or equivalently Ad_G-invariant Euclidean scalar products ⟨·, ·⟩ on the corresponding Lie algebras. In particular, the Lie groups G are unimodular.
Let g = T_{Id}G be the Lie algebra of G. In what follows we shall identify g and g* by means of the invariant scalar product ⟨·, ·⟩, and TG and T*G by the bi-invariant metric. For clarity, we shall use the symbol ω for elements of g and the symbol m for elements of g* ≅ g.
The Lagrangian is defined by L(g, ġ) = ½⟨Iω, ω⟩, where ω = g^{-1}·ġ is the angular velocity in the moving frame. Here I : g → g is a symmetric positive definite (with respect to ⟨·, ·⟩) operator. The corresponding left-invariant metric will be denoted by (·, ·)_I. The distribution D is determined by its restriction d to the Lie algebra, and it is nonintegrable if and only if d is not a subalgebra of g. Let h be the orthogonal complement of d with respect to ⟨·, ·⟩ and let a_1, …, a_k be an orthonormal basis of h. Then the right-invariant constraints can be written as ⟨a_i, Ω⟩ = 0 or, equivalently, ⟨α_i, ω⟩ = 0, α_i = Ad_{g^{-1}}(a_i), i = 1, …, k. (2.1) Here Ω = Ad_g(ω) = ġ·g^{-1} represents the angular velocity in the space. Equations (1.2) in the left trivialization take the form (2.2), (2.3), where m = ∂L/∂ω = Iω ∈ g* is the angular momentum in the body frame. The Lagrange multipliers λ_i can be found by differentiating the constraints (2.1). They are actually defined on the whole phase space T*G and we can consider the system (2.2), (2.3) on T*G as well (see [31]). The constraint functions ⟨α_i, ω⟩ are then integrals of the extended system and the nonholonomic geodesic flow is just the restriction of (2.2), (2.3) onto the invariant submanifold (2.1).
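For concreteness, the system (2.2), (2.3) can be written out explicitly. The following is the standard form of LR systems as presented in [30, 31]; it is offered here as a reading of the displays, not as a quotation of them:
\[
\dot m = [m, \omega] + \sum_{i=1}^{k} \lambda_i\,\alpha_i, \qquad \dot g = g\cdot\omega, \qquad \langle \alpha_i, \omega\rangle = 0, \quad i = 1,\dots,k,
\]
with m = Iω. Differentiating the constraints and using \(\dot\alpha_i = [\alpha_i,\omega]\) together with \(\langle[\alpha_i,\omega],\omega\rangle = 0\), the multipliers are determined from the linear system
\[
\sum_{j=1}^{k} \big\langle \alpha_i,\, I^{-1}\alpha_j \big\rangle\, \lambda_j \;=\; -\big\langle \alpha_i,\, I^{-1}[m,\omega] \big\rangle, \qquad i = 1,\dots,k .
\]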
Instead of (2.2), (2.3), one can consider the closed system consisting of (2.2) and the equations α̇_i = [α_i, ω], i = 1, …, k (2.4) (see [31]). Also, since for ξ ∈ g the associated vector field ξ_G of the left G-action is right-invariant and the momentum mapping of the left action equals M = Ad_g(m) (the angular momentum in the space), the LR system (2.2), (2.3) has the Noether conservation laws (2.5): ⟨M, ξ⟩ is conserved whenever ξ_G is a section of D. If the linear subspace h is the Lie algebra of a subgroup H ⊂ G, then the Lagrangian L and the right-invariant distribution D are invariant with respect to the left H-action. As a result, the LR system can naturally be regarded as an H-Chaplygin system [18].
Geodesic flow on G with L+R metric. In addition to the nondegenerate linear operator I defining the left-invariant metric (·, ·)_I, introduce a constant symmetric linear operator Π_0 : g → g defining a right-invariant metric (·, ·)_Π on the n-dimensional compact Lie group G: for any vectors X, Y ∈ T_gG one sets (X, Y)_Π = ⟨Π_0(X·g^{-1}), Y·g^{-1}⟩. We take the sum of both metrics and consider the corresponding geodesic flow on G described by the Lagrangian L(g, ġ) = ½⟨(I + Π_g)ω, ω⟩, where Π_g = Ad_{g^{-1}} Π_0 Ad_g. We can also consider the case when Π_g is not positive definite, but the total inertia operator B = I + Π_g is nondegenerate and positive definite on the whole group G. The geodesic motion on the group is described by the Euler–Poincaré equations (2.6) together with the kinematic equation ġ = g·ω.
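The identification Π_g = Ad_{g^{-1}} Π_0 Ad_g can be obtained by a one-line computation. The derivation below is our own rephrasing, using only the Ad-invariance of ⟨·,·⟩ and Ω = Ad_g ω:
\[
(\dot g, \dot g)_\Pi
= \langle \Pi_0\,\Omega,\ \Omega\rangle
= \langle \Pi_0\,\mathrm{Ad}_g\,\omega,\ \mathrm{Ad}_g\,\omega\rangle
= \langle \mathrm{Ad}_{g^{-1}}\Pi_0\,\mathrm{Ad}_g\,\omega,\ \omega\rangle
= \langle \Pi_g\,\omega,\ \omega\rangle ,
\]
so that the total Lagrangian is \(L = \tfrac12\langle I\omega,\omega\rangle + \tfrac12\langle \Pi_g\,\omega,\omega\rangle = \tfrac12\langle B\omega,\omega\rangle\) with \(B = I + \Pi_g\).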
In order to find an explicit expression for g^{-1}(∂L/∂g), we first note that for any ξ ∈ g the pairing ⟨ξ, g^{-1}(∂L/∂g)⟩ can be computed from the g-dependence of Π_g; as a result, one obtains an explicit expression for the right-hand side of (2.6). Also, in view of the definition of Π, its evolution is given by the n × n matrix equation Π̇ = ad_ω^T Π + Π ad_ω. Since ⟨·, ·⟩ is an Ad_G-invariant scalar product, we have ad_ω^T = −ad_ω, and Π̇ = [Π, ad_ω]. (2.7) Equations (2.6), (2.7) form a closed system on the space g × Symm(n) with the coordinates ω_i, Π_ij (ω = Σ_i ω_i e_i, Π = Σ_{i≤j} Π_ij e_i ⊗ e_j), where e_1, …, e_n is an orthonormal basis of g.
L+R systems. Following Fedorov [15], consider the equations (2.6) modified by rejecting the term g^{-1}(∂L/∂g). As a result, we obtain another system, (2.8), (2.9), on the space g × Symm(n). This is generally not a Lagrangian system and, in contrast to equations (2.6), (2.7), it possesses the "momentum" integral ⟨Bω, Bω⟩. In view of the structure of the kinetic energy, we shall refer to the system (2.8) (or (2.9)) as an L+R system on G [15].
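Explicitly, and consistently with the closed system (3.20) quoted in Remark 3.1 below (this is a reconstruction, not a verbatim copy of the displays), the L+R system reads
\[
\frac{d}{dt}\big(B\omega\big) = [B\omega, \omega], \qquad \dot\Pi = [\Pi, \mathrm{ad}_\omega], \qquad B = I + \Pi ,
\]
and the "momentum" integral is immediate: since \(\langle [B\omega,\omega],\, B\omega\rangle = \langle B\omega,\, [\omega, B\omega]\rangle = 0\) by Ad-invariance,
\[
\frac{d}{dt}\,\langle B\omega,\, B\omega\rangle \;=\; 2\,\big\langle [B\omega,\omega],\ B\omega \big\rangle \;=\; 0 .
\]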
Coupled nonholonomic LR Systems
Define a coupled nonholonomic LR system on the direct product G × G_1 (G = G_1) as an LR system given by the Lagrangian function (3.1) and the right-invariant constraints (3.2), (3.3), where h_i, i = 1, …, q, are mutually orthogonal linear subspaces of g.
The Lagrangian (3.1) is right-invariant in the second variable as well. It is convenient to write the equations of motion both in the left-trivialization (in the variables g and ω) and in the right-trivialization (in the variables g_1 and W). The right-invariant distribution D ⊂ T(G × G_1) is then defined by the constraints (3.2), (3.3). Let h_i^g = Ad_{g^{-1}}(h_i) = g^{-1}·h_i·g and let pr_{h_i^g} : g → h_i^g be the orthogonal projections, i = 0, …, q.
Theorem 3.1. A curve (g(t), g_1(t)) is a motion of the coupled LR system (3.1), (3.2), (3.3) if it satisfies the equations of motion given below.
The equations of motion in the right-trivialization read (3.5), (3.6), (3.7), where the Lagrange multipliers (reaction forces) Λ_i belong to h_i (i = 0, 1, …, q). Let D_0 ⊂ TG be the right-invariant distribution defined by (3.2).
Theorem 3.2. The equations (3.5), (3.6), (3.7) on D reduce to the following system (3.16) on D_0 ⊂ TG. Proof. The equations (3.5) and (3.7) form a closed system on D_0. If (g(t), ω(t)) is a solution of (3.5), (3.7), then one can easily reconstruct the motion of W, whose h_i-components are determined from the constraints (3.3). Now, let a_1, …, a_{k_j} be an orthonormal basis of h_j. Then α_1 = Ad_{g^{-1}}(a_1), …, α_{k_j} = Ad_{g^{-1}}(a_{k_j}) will be an orthonormal basis of h_j^g, and we have pr_{h_j^g}(ω) = Σ_i ⟨α_i, ω⟩ α_i.
Whence, by using (2.4) and the identity ⟨ω, [α_i, ω]⟩ = 0, we obtain the evolution of ⟨α_i, ω⟩ along the flow; this implies that (3.5), (3.7) can be rewritten in the form (3.16). □ The derivative of ⟨Bω, ω⟩ along the flow is d/dt ⟨Bω, ω⟩ = 2⟨[Bω, ω], ω⟩ + 2⟨λ_0, ω⟩. The first term is equal to zero since ⟨·, ·⟩ is an Ad_G-invariant scalar product, while the second term is equal to zero by the constraint (3.2). We can refer to L_red = ½⟨Bω, ω⟩ as the reduced Lagrangian, or reduced kinetic energy. If pr_k W ≡ 0, the reduced kinetic energy coincides with the kinetic energy of the reconstructed motion on the whole phase space.
From the equation (3.9) we also get the linear conservation law (3.19). The integrals (3.18) and (3.19) are actually Noether integrals (2.5) of the system; the other Noether integrals are trivial. Remark 3.1. If h_0 = 0, i.e., we do not impose the constraint (3.2), the reduced system is an L+R system on the Lie group G: d/dt(Bω) = [Bω, ω], ġ = g·ω. (3.20) Further suppose that (3.17) is the Lie algebra of the closed Lie subgroup K ⊂ G and that the linear subspaces h_i are Ad_K-invariant. Then, since h_i^{kg} = h_i^g, k ∈ K, the L+R equations (3.20) are left K-invariant and we can reduce them to Q × g, where Q = G/K is the homogeneous space, with respect to the left action of K.
Remark 3.2. In the case when h_0 is the Lie algebra of a closed subgroup H ⊂ G, h_1 + h_2 + · · · + h_q = g and the linear spaces h_i are Ad_H-invariant, the coupled LR system (3.1), (3.2), (3.3) is an (H × G_1)-Chaplygin system with respect to the natural left (H × G_1)-action. The reduced space D/(H × G_1) is the tangent bundle of the homogeneous space G/H.
N -Coupled Systems
There is a straightforward generalization of the construction to the case when we have coupling with N different Lie groups, that is, the configuration space is the direct product G × G_1 × · · · × G_N and the Lagrangian is of the form (4.1), where ⟨·, ·⟩_i are Ad_{G_i}-invariant scalar products on the Lie algebras g_i = T_{Id}G_i, i = 1, …, N. Let us fix a basis e_1, …, e_n of g and some bases f_1, …, f_{d_i} of g_i (d_i = dim g_i), and let the corresponding constraint mappings have matrices [A_i] (p_i × n) and [B_i] (p_i × d_i) in the above bases.
In addition, we suppose that the corresponding (p_i × p_i)-matrices are nondegenerate. Repeating the arguments of Theorems 3.1 and 3.2, the considered N-coupled nonholonomic system reduces to the L+R system (3.20), where Bω = Iω + Πω, and Πω is given in matrix form relative to the basis (3.21). As above, one can easily incorporate an additional right-invariant constraint of the form (3.2).
LR systems on G × g × · · · × g. As an example, consider the case where the groups G_i are all equal to the Lie algebra g considered as an Abelian group, ⟨·, ·⟩_i = ⟨·, ·⟩, and the constraints (4.2) are given by (4.5), where Γ_i are fixed elements of the Lie algebra g and ρ_i are real parameters. Note that, since G_i = g is an Abelian group, the angular velocities coincide with the usual velocities. The equations of motion in the right-trivialization read (4.7), where M = Ad_g(Iω). This is a ({Id} × g^N)-Chaplygin system and it is reducible to TG. Differentiating the constraints (4.5), from (4.7) we get the Lagrange multipliers; therefore, the equations (4.7) in the left-trivialization take the reduced form stated in Proposition 4.1, where γ_i = Ad_{g^{-1}}(Γ_i), i = 1, …, N. Remark 4.1. Nonholonomic systems on semi-direct products G ×_σ V, where σ is a representation of the Lie group G on the vector space V, are studied in Schneider [28]. Proposition 4.1 can be derived from Theorem 3 given in [28].
Spherical Support
Consider the motion of a dynamically nonsymmetric ball S in R^n with unit radius around its fixed center. Suppose that the ball touches N arbitrary dynamically symmetric balls whose centers are also fixed, and that there is no sliding at the contact points. We call this mechanical construction the spherical support. For n = 3 the spherical support was introduced by Fedorov [13,15]. The configuration space is SO(n)^{N+1}: the matrices g, g_i ∈ SO(n) map the frames attached to the ball S and to the ith peripheral ball to the fixed frame, respectively. The Lagrangian is of the form (4.1), where for ⟨·, ·⟩ we take the scalar product (5.1) proportional to the Killing form. The angular velocities ω, Ω, w_i, W_i of the balls are defined as above, I : so(n) → so(n) is the inertia tensor of the ball S, and D_i, ρ_i ∈ R are the central inertia moment and the radius of the ith peripheral ball. Let Γ_i ∈ R^n be the unit vector fixed in the space and directed from the center C of the ball S to the point of contact with the ith ball. The nonholonomic constraints express the absence of sliding at the contact points. This means that the velocity of the point of contact of the ball S with the ith ball, in the space frame, is the same as the velocity of the corresponding point on the ith ball.
Consider a point fixed on the ball S with coordinates r and R in the body and space frames, respectively. Then the velocity of the point in space is given by the Poisson equation (e.g., see [17]) V = Ṙ = d/dt(g·r) = ġ·g^{-1}·g·r = ΩR. Therefore, the velocity of the contact point with the ith peripheral ball is given by ΩΓ_i. Similarly, the velocity of the corresponding contact point of the ith ball in the space frame is given by −ρ_i W_i Γ_i, and the constraints are ΩΓ_i + ρ_i W_i Γ_i = 0, i = 1, …, N. We see that the n-dimensional spherical support is actually an N-coupled LR system of the type studied in the previous section, with ġ = Ω·g, ġ_i = W_i·g_i, i = 1, …, N.
We have the conservation laws which, together with the right ({Id} × SO(n)^N)-symmetry, lead to the following statement. Proposition 5.1. The spherical support system reduces to the L+R flow (5.5): d/dt(Bω) = [Bω, ω], ġ = g·ω, where Bω = Iω + Σ_{i=1}^N D_i/ρ_i² (ω γ_i⊗γ_i + γ_i⊗γ_i ω) and the γ_i are defined by (5.3). One can say that the reduced system (5.5) on TSO(n) describes the free rotation of a "generalized Euler top" whose tensor of inertia is a sum of two components: one is fixed in the body and the other is fixed in the space.
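As a concrete illustration of Proposition 5.1 for n = 3, the reduced flow can be integrated numerically. Under the hat-map identification so(3) ≅ R^3 the momentum equation d/dt(Bω) = [Bω, ω] becomes ṁ = m × ω and the space-fixed part of B acts on vectors as (Id − γ_i γ_i^T); this vector form, as well as all numerical data (I, D_i, ρ_i, γ_i(0)), is our own illustrative choice and is not taken from the paper. The sketch checks that the reduced kinetic energy ½⟨Bω, ω⟩ and the lengths |γ_i| are conserved, as they should be.

```python
# Minimal numerical sketch (illustrative data, not from the paper): the reduced
# spherical support flow on so(3) in vector (hat-map) form.
import numpy as np
from scipy.integrate import solve_ivp

I_body = np.diag([1.0, 2.0, 3.0])   # inertia tensor of the ball S (made up)
D = np.array([0.5, 0.8])            # central inertia moments of the peripheral balls
rho = np.array([1.0, 2.0])          # their radii

def B_matrix(gammas):
    """Total inertia operator B = I + sum_i D_i/rho_i^2 (Id - gamma_i gamma_i^T)."""
    B = I_body.copy()
    for Di, ri, g in zip(D, rho, gammas):
        B = B + Di / ri**2 * (np.eye(3) - np.outer(g, g))
    return B

def rhs(t, y):
    m, gammas = y[:3], y[3:].reshape(-1, 3)
    w = np.linalg.solve(B_matrix(gammas), m)                 # solve B w = m
    dm = np.cross(m, w)                                      # d/dt(B w) = [B w, w]
    dgammas = np.array([-np.cross(w, g) for g in gammas])    # Poisson: gdot_i = -w x gamma_i
    return np.concatenate([dm, dgammas.ravel()])

gammas0 = np.array([[1.0, 0.0, 0.0], [0.0, 1 / np.sqrt(2), 1 / np.sqrt(2)]])
w0 = np.array([0.3, -1.0, 0.7])
m0 = B_matrix(gammas0) @ w0
sol = solve_ivp(rhs, (0.0, 50.0), np.concatenate([m0, gammas0.ravel()]),
                rtol=1e-10, atol=1e-12)

def energy(y):
    m, gammas = y[:3], y[3:].reshape(-1, 3)
    return 0.5 * m @ np.linalg.solve(B_matrix(gammas), m)

E = np.array([energy(sol.y[:, k]) for k in range(sol.y.shape[1])])
print("relative drift of <B w, w>/2:", (E.max() - E.min()) / abs(E[0]))  # small
print("|gamma_1| at the final time :", np.linalg.norm(sol.y[3:6, -1]))   # stays 1
```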
Note that the vectors γ_i in the frame attached to the ball S satisfy the Poisson equations (e.g., see [17]) γ̇_i = −ωγ_i, i = 1, …, N. For n = 3 the system is integrable by the Euler–Jacobi theorem, and its generic invariant manifolds are two-dimensional tori (see [13,15]). Rubber spherical support. Now consider the rubber spherical support system in R^n. The analogue of rubber rolling is that, in addition to the constraints (5.4), the rotations of the ball S and of the ith peripheral ball around the vector Γ_i are the same (5.9). Since pr_{k_i} = I − pr_{h_i}, we obtain the equations (5.10)–(5.12). The equations (5.11) are trivial since W can be expressed in terms of Ω from the constraints (5.4) and (5.9).
As above, we get a family of geometric integrals that can be expressed as the coefficients of the polynomials (5.14). For n = 3, among the reduced kinetic energy ½⟨B*ω, ω⟩ and the integrals (5.14) there are four independent ones. Theorem 5.3. For n = 3, the rubber spherical support system (5.10), (5.12) is solvable by the Euler–Jacobi theorem and its generic invariant manifolds are two-dimensional tori.
Rubber Chaplygin Sphere
Following [17,16], consider the generalized Chaplygin sphere problem of an n-dimensional ball of radius ρ rolling without slipping on an (n − 1)-dimensional hyperplane H in R^n. For the configuration space we take the direct product of the Lie groups SO(n) and R^n, where g ∈ SO(n) is the rotation matrix of the sphere (mapping the frame attached to the body to the space frame) and r ∈ R^n is the position vector of its center C (in the space frame). For a trajectory (g(t), r(t)) define the angular velocities Ω = ġ·g^{-1} and ω = g^{-1}·ġ. The Lagrangian of the system is then given by (6.1). Here I : so(n) → so(n) and m are the inertia tensor and mass of the ball, ⟨·, ·⟩ is given by (5.1) and (·, ·) is the Euclidean scalar product. Let Γ ∈ R^n be a vertical unit vector (considered in the fixed frame) orthogonal to the hyperplane H and directed from H to the center C. The condition for the sphere to roll without slipping requires that the velocity of the contact point be equal to zero (6.2). This is a right-invariant nonholonomic constraint of the form (4.2). If we take the fixed orthonormal basis E_1 = (1, 0, …, 0, 0)^T, …, E_n = (0, 0, …, 0, 1)^T, such that Γ = E_n, then the constraint (6.2) takes the form ṙ_i = ρΩ_{in}, i = 1, …, n − 1, ṙ_n = 0, where Ω_{ij} = ⟨Ω, E_i ∧ E_j⟩.
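For completeness, the coordinate form of the rolling constraint quoted above follows from a one-line computation (our own phrasing of the standard argument, with the contact point taken at r − ρΓ):
\[
\dot r + \Omega\,(-\rho\Gamma) = \dot r - \rho\,\Omega\Gamma = 0
\quad\Longrightarrow\quad
\dot r = \rho\,\Omega\Gamma ,
\]
and with Γ = E_n this gives ṙ_i = ρΩ_{in} for i = 1, …, n − 1 and ṙ_n = 0.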
The last constraint is holonomic, and for the physical motion we take r_n = ρ. From now on we take SO(n) × R^{n−1} for the configuration space of the rolling sphere, where R^{n−1} is identified with the affine hyperplane ρΓ + H.
Let h ⊂ so(n) be the linear subspace h = R^n ∧ Γ and k ≅ so(n − 1) its orthogonal complement in so(n). Define the rubber Chaplygin sphere as the Chaplygin sphere (6.1), (6.2) subjected to the additional right-invariant constraints ⟨Ω, k⟩ = ⟨ω, k^g⟩ = 0, k^g = Ad_{g^{-1}}k ⟺ Ω_{ij} = 0, 1 ≤ i < j ≤ n − 1, (6.3) describing the no-twist condition at the contact point. As a result, the distribution is right SO(n) × R^{n−1}-invariant as well as left SO(n − 1) × R^{n−1}-invariant (SO(n − 1) is the subgroup of SO(n) with Lie algebra k). Moreover, the rubber Chaplygin sphere is an (SO(n − 1) × R^{n−1})-Chaplygin system. Let γ be the vertical vector in the frame attached to the ball, γ = g^{-1}Γ. Then the equations of motion read (6.4), (6.5), where M = Ad_g(Iω) is the ball's angular momentum in the space and Λ_0 ∈ h, Λ_1 ∈ R^n are Lagrange multipliers. From (6.2) and (6.5) we find Λ_1 = mρΩ̇Γ. Substituting this back, we can write equations (6.4) as a closed system on D_0 ⊂ TSO(n), where D_0 is the right-invariant distribution defined by (6.3) (reduction of the R^{n−1}-symmetry). From (3.14), (6.6) and the relation pr_{h^γ}(ω) = (ω·γ) ∧ γ = ωγ⊗γ + γ⊗γω, in the left-trivialization of TSO(n) the reduced system takes the form (6.7), where λ_0 = Ad_{g^{-1}}(Λ_0). Let k denote the angular momentum of the ball relative to the contact point (see [17]). Then we have: Proposition 6.1. The motion of the rubber Chaplygin sphere, in the variables ω, g, is described by k̇ = [k, ω] + λ_0, ġ = g·ω, (6.8) or, in the variables ω, γ, by the corresponding closed system (6.10). The Lagrange multiplier matrix λ_0 belongs to k^γ and is determined from the constraint (6.3).
(ii) Under the time substitution dτ = dt/√((Aγ, γ)) the reduced system (6.10) (or (6.12)) becomes a Hamiltonian system describing a geodesic flow on S^{n−1} with the Lagrangian (6.14). (iii) For A with distinct eigenvalues, the latter system is algebraically completely integrable and its generic invariant manifolds are (n − 1)-dimensional tori.
(iv) Moreover, the SO(n − 1)-reconstruction of the motion is solvable: the generic trajectories of the system (6.8) are straight-line (but not uniform) windings over the (n − 1)-dimensional invariant tori.
The complete integration is presented in [18]. Given a solution (g(t), ω(t)) of the system (6.8), the reconstruction of the r-variable simply follows from integration of the constraint (6.2): r(t) = r(0) + ρ ∫_0^t Ad_{g(s)} ω(s) Γ ds.
Remarks on the Chaplygin sphere.
• Borisov and Mamaev [5,6] proved that the classical Chaplygin rolling sphere problem is Hamiltonian after an appropriate time rescaling. Recently, the Hamiltonization of the homogeneous Chaplygin rolling sphere problem in R^n was given in [20], while the Hamiltonization of the non-homogeneous reduced Chaplygin sphere problem was obtained in [25].
• Let us turn back to the coupled LR system described in Section 4. For G = SO(3) we recover the equations of motion of the Chaplygin sphere in R^3. Thus the system (6.17) can be seen as an alternative generalization of the Chaplygin sphere problem.
"year": 2009,
"sha1": "97c85ffe071657b5de9b19f1d3e70e7e0ffa53ad",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0902.1656",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "97c85ffe071657b5de9b19f1d3e70e7e0ffa53ad",
"s2fieldsofstudy": [
"Mathematics",
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
Perdeuteration of cholesterol for neutron scattering applications using recombinant Pichia pastoris.
Deuteration of biomolecules has a major impact on both the quality and scope of neutron scattering experiments. Cholesterol is a major component of mammalian cells, where it plays a critical role in membrane permeability, rigidity and dynamics, and contributes to specific membrane structures such as lipid rafts. Cholesterol is the main cargo in low- and high-density lipoprotein complexes (i.e. LDL, HDL) and is directly implicated in several pathogenic conditions such as coronary artery disease, which leads to 17 million deaths annually. Neutron scattering studies on membranes or lipid-protein complexes exploiting contrast variation have been limited by the lack of availability of fully deuterated biomolecules and especially perdeuterated cholesterol. The availability of perdeuterated cholesterol provides a unique way of probing the structural and dynamical properties of the lipoprotein complexes that underlie many of these disease conditions. Here we describe a procedure for in vivo production of perdeuterated recombinant cholesterol in lipid-engineered Pichia pastoris using flask and fed-batch fermenter cultures in deuterated minimal medium. Perdeuteration of the purified cholesterol was verified by mass spectrometry and its use in a neutron scattering study was demonstrated by neutron reflectometry measurements using the FIGARO instrument at the ILL.
Introduction
Neutron scattering studies offer unique insights into structural biology, especially when used in conjunction with selective and nonselective deuteration approaches. In neutron crystallography, hydrogen atoms are readily visible, yielding crucial information on protonation states of active site residues, charge transfer processes, and hydration (Howard et al., 2011; Cuypers et al., 2013a,b; Casadei et al., 2014; Haupt et al., 2014; Blakeley et al., 2015; Cuypers et al., 2016; Kwon et al., 2016). Small-angle neutron scattering (SANS) studies have the significant advantage that contrast variation methods can be used to distinguish and model different components of a macromolecular complex (Vijayakrishnan et al., 2010; Cuypers et al., 2013a,b; Ibrahim et al., 2017; Appolaire et al., 2014; Edlich-Muth et al., 2015), and in a comparable way, neutron reflection studies allow strongly complementary information to be provided in the analysis of membranous interfaces (Grage et al., 2011; Fragneto, 2012). Furthermore, important aspects of macromolecular dynamics and its coupling to hydration water dynamics are provided by neutron incoherent scattering studies (Schirò et al., 2015). These insights result mainly from the fact that the scattering powers for neutrons of both hydrogen and deuterium are of comparable magnitude (although, crucially, their scattering lengths differ in sign) to those of the other atoms typically found in biological macromolecules, in strong contrast to the situation for X-rays, where hydrogen/deuterium atoms scatter very weakly. This is of crucial importance given that about half of the atoms in biological molecules are hydrogen and that they are often highly significant to biological structure, dynamics, and function. Deuteration, the replacement of hydrogen atoms by the stable isotope deuterium, is a powerful method for the investigation of the structure and dynamics of biomolecules by means of NMR, Raman/infrared spectroscopy and neutron scattering. In the case of neutron analyses, the pronounced differences between the scattering lengths of hydrogen- and deuterium-containing molecules enable parts of molecular complexes to be highlighted by neutron scattering methods such as small-angle neutron scattering (SANS), neutron reflectometry (NR), or neutron crystallography (NMX).
In the case of structural work on lipid systems by SANS and NR as well as NMR (Stockton et al., 1977; Hagn et al., 2013), the deuteration of phospholipids and other membrane components can be heavily exploited (Maric et al., 2014; de Ghellinck et al., 2014; Gerelli et al., 2014; Foglia et al., 2011). However, the chemical synthesis of unsaturated perdeuterated lipids and sterols still remains challenging. de Ghellinck et al. (2014) have demonstrated that perdeuterated phospholipids and sterols can be extracted from P. pastoris cells grown in deuterated minimal medium. These authors have also shown that, while phospholipid and ergosterol homeostasis is maintained in deuterated cultures, the fatty acid unsaturation level is modified; the production of perdeuterated unsaturated lipids is significantly enhanced when P. pastoris is grown at lower temperatures.
The multi-lamellar organization of fully deuterated lipid extracts of P. pastoris membranes has been shown using neutron diffraction (Gerelli et al., 2014). This study showed that at high relative humidity, non-deuterated and deuterated lipids are similar in their multi-lamellar organization. However, at low relative humidity, non-deuterated lipids are characterized by a larger single lamellar structure than observed for the deuterated samples. Furthermore, perdeuterated lipids have been used to characterize structural changes in the membrane of P. pastoris induced by the antifungal Amphotericin B (de Ghellinck et al., 2015).
In addition to the extraction of lipids from non-recombinant P. pastoris cultures, perdeuterated lipids have also been isolated from nonrecombinant E. coli (Lind et al., 2015) and a recombinant E. coli expression system was successfully used for the biosynthesis of selectively deuterated phosphatidylcholine (PC) (Maric et al., 2014, Maric et al., 2015. Chemically synthesised cholesterol molecules that are partially deuterated − such as cholesterol-D 6 (deuteration in ring) and cholesterol-D 7 (deuteration in tail) are commercially available (Kessner et al., 2008). However, fully deuterated cholesterol (cholesterol-D 46 ) is difficult to synthesize chemically. Since high concentrations of deuterium are toxic for mammals and mammalian cell lines, perdeuteration of cholesterol cannot be achieved in native organisms. A biosynthetic route for this therefore depends on the use of a deuterium-resistant recombinant organism that can be adapted to growth in a fully deuterated medium. P. pastoris, a methylotrophic yeast, has been shown to grow in fully deuterated minimal medium with d 8 -glycerol as carbon source and to produce perdeuterated lipids including ergosterol, a molecule related to cholesterol de Ghellinck et al., 2014;Hirz et al., 2013) have succeeded in lipo-engineering P. pastoris by several gene insertions and knock-out mutations to produce cholesterol instead of its native ergosterol. Here, we report a robust protocol for recombinant perdeuteration of cholesterol in a lipo-engineered P. pastoris strain in flask and in high cell-density cultures. The biosynthetically labelled cholesterol has been produced and purified in large quantities (tens of mg). The production, HPLC purification, and characterisation by gas chromatography and mass spectrometry are described. An example illustrating the feasibility of exploiting the perdeuterated d-cholesterol in neutron scattering studies is demonstrated by NR measurements from perdeuterated and unlabelled cholesterol in a synthetic lipid monolayer.
Growth of recombinant P. pastoris in perdeuterated fed-batch cultures
900 ml of deuterated BSM containing 10 g of d8-glycerol was inoculated with 100 ml of preculture in a 3 l fermenter (Labfors, Infors). During the batch and fed-batch phases the pD was adjusted to 6.0 by the addition of NaOD and the temperature was adjusted to 28°C. The gas-flow rate of sterile filtered air was 2.0 l/min. Stirring was adjusted to ensure a dissolved oxygen tension (DOT) of 30%. The initial OD600 was 0.9. After 7 days the glycerol from the batch phase was consumed and the fed-batch phase was initiated by constant feeding of 30 g of d8-glycerol over 12 days. The final OD600 was 40 and 32 g of Pichia cellular wet weight was obtained.
Determination of sterol production
15 mg of deuterated or non-deuterated Pichia cell paste was transferred to Pyrex tubes and resuspended in 1 ml of 0.2% pyrogallol in MeOH and 400 μl of 60% KOH. Five μl of ergosterol (2 mg/ml) were added as internal standard (IS) and samples were saponified at 90°C for 2 h. Sterols were extracted three times with n-heptane and dried under a stream of nitrogen. Dried extracts were dissolved in 10 μl of pyridine and derivatized with 10 μl of N,O-bis(trimethylsilyl)trifluoroacetamide. Samples were diluted with 50 μl of ethyl acetate and analyzed by gas chromatography-mass spectrometry (GC-MS) (Hirz et al., 2013).
Isolation and purification of perdeuterated cholesterol
Cholesterol was extracted from P. pastoris cell paste using an organic solvent extraction procedure. The cell paste was transferred into a 500 ml round-bottomed flask to which were added 65 g potassium hydroxide, 43 ml water, 200 ml methanol and 350 mg pyrogallol. This mixture was heated for 3 h under gentle reflux while keeping the stirring at a minimum to avoid foaming. After cooling to room temperature, insoluble materials were filtered off and the methanolic solution was extracted three times, each with 100 ml cyclohexane. The combined extracts were washed with 100 ml water, dried over sodium sulphate and concentrated under reduced pressure. The crude material was treated with 10 ml ethyl acetate and passed through a short plug of silica gel to remove polar impurities and insoluble materials. The perdeuterated cholesterol was isolated in pure form using a ThermoFisher UltiMate 3000 binary semipreparative HPLC system equipped with a NUCLEODUR® 100-10 C18ec column (125 mm × 21 mm, 5 μm, Macherey-Nagel, Düren, Germany) and a VP 20/16 NUCLEODUR® C18ec guard column. Using an isocratic mixture of acetonitrile/methanol (9:1) at a flow rate of 20 ml/min at 30°C with a detection wavelength of 210 nm, the desired product was collected baseline-separated between 18.7 and 25.0 min. After removing the solvent under reduced pressure, pure perdeuterated cholesterol was obtained. HPLC analysis was conducted on an Agilent 1100, equipped with a DAD detector and a NUCLEODUR® C18 Gravity column (150 mm × 3 mm, 3 μm, Macherey-Nagel, Düren, Germany), using an isocratic mixture of acetonitrile/methanol 1:1 at a flow rate of 0.70 ml/min at 30°C.
Neutron reflectometry measurements
NR measurements were carried out using the FIGARO instrument at the Institut Laue-Langevin (ILL) (Campbell et al., 2011). Data were recorded using neutrons with wavelengths of 2-30 Å at incident angles of 0.62° and 3.8°. Data from three samples were recorded to illustrate the effect of replacing the h-cholesterol by d-cholesterol. A mixture of 1:4 cholesterol to dipalmitoylphosphatidylcholine (DPPC) by mole was prepared in each case as a chloroform solution. After spreading and compression to a surface pressure of 25 mN m−1, the reflectivity was measured and normalized with respect to a measurement of pure D2O. Three neutron contrasts were studied: (i) h-cholesterol with d62-DPPC on null-reflecting water (NRW), (ii) h-cholesterol with d62-DPPC on D2O and (iii) d-cholesterol with h-DPPC on NRW, where NRW is a mixture of 8.1% v/v D2O in H2O that has zero scattering length density. Data fitting was carried out using a two-layer model in which the layer in contact with air comprised the acyl chains of the lipid together with the cholesterol, and the layer in contact with the water comprised the solvated head groups. The number of chains was constrained to be equal to the number of head groups of the phospholipid in the layers, and the surface excesses of the lipid and of the cholesterol were constrained to be equal in the three measured contrasts. The scattering length density of the d-cholesterol was taken as 7.65 × 10−6 Å−2, that of h-cholesterol as 0.21 × 10−6 Å−2, the tails of d62-DPPC as 8.15 × 10−6 Å−2, the tails of h-DPPC as −0.43 × 10−6 Å−2 and the heads of DPPC as 1.85 × 10−6 Å−2. Note that the value of 8.15 × 10−6 Å−2 was calculated for the lipid tails (C30D62) of the d62-DPPC using a volume for the tails corresponding to the liquid-condensed phase (752 Å3; Small, 1984; Marsh, 2010). Recent papers have followed such an approach (Micciulla et al., 2018; Sheridan et al., 2017; Braun et al., 2017). The d62-DPPC was obtained from Avanti Polar Lipids.
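The quoted scattering length densities can be reproduced from tabulated coherent scattering lengths. The short sketch below is illustrative (the scattering lengths and the water molecular volume of about 30 Å3 are standard reference values, not taken from this paper) and confirms the 8.15 × 10−6 Å−2 value for the d62-DPPC tails and the near-zero SLD of 8.1% v/v D2O in H2O.

```python
# Illustrative check of the SLD values quoted above: SLD = sum(b) / molecular volume.
b = {"H": -3.739, "D": 6.671, "C": 6.646, "O": 5.803}   # coherent scattering lengths in fm

def sld(counts, volume_A3):
    """Scattering length density in units of 1e-6 A^-2 (1 fm = 1e-5 Angstrom)."""
    total_b_angstrom = sum(b[el] * n for el, n in counts.items()) * 1e-5
    return total_b_angstrom / volume_A3 * 1e6

# d62-DPPC acyl chains C30D62 with the liquid-condensed tail volume of 752 A^3
print("d62-DPPC tails:", round(sld({"C": 30, "D": 62}, 752), 2), "x1e-6 A^-2")   # ~8.15

# Null-reflecting water: 8.1% v/v D2O in H2O (molecular volume ~30 A^3 assumed for both)
sld_D2O = sld({"D": 2, "O": 1}, 30.0)
sld_H2O = sld({"H": 2, "O": 1}, 30.0)
print("NRW:", round(0.081 * sld_D2O + 0.919 * sld_H2O, 2), "x1e-6 A^-2")         # ~0
```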
Cell growth
The cholesterol-producing P. pastoris strain was grown in unlabelled as well as in deuterated basal salt medium with d8-glycerol as carbon source. A similar approach has been used by de Ghellinck et al. (2014) to produce perdeuterated non-recombinant yeast lipids. The growth behaviour of both the yeast-lipid-producing and the lipo-engineered cholesterol-producing deuterated Pichia cultures showed a longer lag-phase in D2O-containing medium by comparison with cultures grown in unlabelled media. This was even more pronounced for the cholesterol-producing culture (4 vs. 2 days). The growth rate in the exponential phase was the same for the perdeuterated and the unlabelled cholesterol-producing cultures and a final OD600 of about 30 was obtained after 10 days. The non-recombinant yeast-lipid-producing cultures reached higher OD600 values (about 80 vs. 30) with shorter doubling times, indicating the growth-inhibiting effect of cholesterol production in P. pastoris grown in deuterated minimal media.
Sterol analysis: deuterated versus non-deuterated samples
Samples of the unlabelled (control) and perdeuterated cholesterol-producing P. pastoris cell paste from flask cultures were analysed by gas chromatography-mass spectrometry (GC-MS) for their sterol composition as described above. Deuterated sterols show a shorter retention time (by between 0.6 and 0.9 min) in comparison with their non-labelled analogues, in accordance with the published data on perdeuterated ergosterol produced in P. pastoris. Fig. 1 shows the gas chromatograms for the sterols produced by the strain (i.e. cholesterol, 7-dehydrocholesterol (7-DHC) and zymosterol) under both non-deuterated (Fig. 1(a)) and deuterated (Fig. 1(b)) conditions. Tables 1a and 1b show the sterol compositions of both unlabelled and perdeuterated cholesterol-producing P. pastoris cell pastes. The largest observed mass was 503 Da, as expected for trimethylsilylated perdeuterated cholesterol. In the deuterated samples, additional peaks occur which relate to intermediates in the sterol biosynthetic pathway. These compounds may arise as a result of a lower activity of the deuterated DHCR7 (7-dehydrocholesterol reductase), DHCR24 (24-dehydrocholesterol reductase) and ERG24 (C-14 sterol reductase) enzymes. The results indicate that cholesterol biosynthesis may not occur as efficiently in deuterated media as it does in unlabelled growth media; this is also reflected in the lower amounts of total sterols extracted from the deuterated cell paste (see Table 1b). The molecular structures of the main sterol species synthesized in P. pastoris under non-deuterated and deuterated conditions are shown in Fig. 2.
With a cholesterol content greater than 50% of total sterols and a total sterol production of about 6 mg per gram of Pichia wet weight (CWW), the sterol analysis clearly demonstrates the feasibility of producing significant amounts of perdeuterated cholesterol using recombinant P. pastoris (> 3 mg/g cellular wet weight).
The effect of flask/fermenter cultures on deuterated sterol production
Since deuterated media components such as D2O and d8-glycerol are costly, the possibility of using deuterated high-cell-density cultures as a cost-efficient alternative to flask cultures was investigated. A fed-batch culture was grown using deuterated minimal medium and a d8-glycerol feeding regime was followed. Full details of GC-MS analyses for the sterol content obtained using comparable flask and fermenter cultures are given in the Supplementary materials (Tables S1 and S2, respectively). The sterol composition and yields from perdeuterated flask cultures and perdeuterated fed-batch fermenter cultures are shown in Fig. 3.
In the fermenter cultures, there was an immediate gain associated with the volumetric yield of cell paste, typically by a factor of at least 10. Furthermore, despite the fact that the sterol yield (per gram of cell paste) was lower in fermenter cultures, the fraction of d-cholesterol in the sterol pool was significantly higher (Fig. 3(b)), which facilitated subsequent purification.
Purification and characterisation by mass spectrometry of perdeuterated cholesterol
Starting with 31 g of perdeuterated cell paste grown in a fed-batch culture, the organic solvent extraction yielded 263 mg crude extract after solvent removal under reduced pressure. Purification using reverse-phase HPLC yielded 42.6 mg of perdeuterated cholesterol. The retention time of the perdeuterated cholesterol was 9.19 min. Purity of the isolated material was found to be 98.5% by both HPLC (detection wavelength 210 nm, data not shown) and GC-MS (see Fig. 4).
Evaluation of the potential of perdeuterated cholesterol in neutron reflectivity studies
The observed reflectivity data are shown in Fig. 5. Fits to these data were carried out using the Motofit software in Igor Pro (Nelson, 2010). The fitted thickness was 14.7 Å for the tail region and 10.0 Å for the head region in all cases. Three different contrasts are shown in Fig. 5: (i) perdeuterated cholesterol (d-cholesterol) and hydrogenated DPPC (h-DPPC) on null-reflecting water (NRW) (blue curve), (ii) unlabelled cholesterol (h-cholesterol) and d62-DPPC on NRW (green curve), (iii) h-cholesterol and d62-DPPC on D2O (red curve). Contrast (i) allows the surface excess of cholesterol to be determined; contrast (ii) allows the surface excess of DPPC chains to be determined, and contrast (iii) allows the hydration of the head group layer to be determined.
The successful fitting of a common physical model to the data recorded for all three contrasts validates the surface excesses of the two components and the location of cholesterol at the interface. It is striking that the interpretation of the data from these measurements in terms of locating the cholesterol is rather straightforward. In the case of measurements on NRW, the reflectivity is strongly dominated by the deuterated material, and this allows the location of the cholesterol molecules to be identified directly as being in the same region as the hydrocarbon tails of the lipid. The common physical model that is found to fit the data measured with the three contrasts has the cholesterol in the same layer as the acyl chains of the DPPC. This is evident from the insert of Fig. 5, which shows the positioning of the cholesterol (blue line) within the region occupied by the acyl chains of the DPPC (green line). The phosphocholine head group region is effectively shown by the dip in the scattering length density (SLD) observed in the experiments performed with d62-DPPC on D2O (red line in the insert of Fig. 5). The results thus show that the association of the molecules is mainly driven by hydrophobic interactions. An interfacial roughness of ∼3.5 Å is required in the fit, as is shown by the non-abrupt changes in the density profiles between air, the two layers, and water. Another feature of the applied model is that there is no extensive penetration, beyond the OH group, of cholesterol towards the bulk water, as its inclusion worsened the fit of the model to the measured data. This direct location of the cholesterol relies on the simple identification of these deuterated molecules. A fuller description of the analysis will be presented elsewhere and related to other results that have been reviewed by Rheinstädter and Mouritsen (2013). As the cholesterol is distributed over a considerable thickness (about the length of a cholesterol molecule), it is not possible to directly estimate an orientation or tilt of the molecules, since the neutron reflection technique is sensitive only to the overall scattering length density distribution. Future diffraction studies of multiple bilayers that contain deuterated cholesterol could be helpful to give more information about the arrangement in three dimensions.
Table 1a. Sterol composition of unlabelled cholesterol-producing P. pastoris cell paste (mean values ± SD of triplicates are shown). Ergosterol was used as an internal standard (IS).
Discussion
In neutron scattering experiments such as neutron reflection (NR) or small-angle neutron scattering (SANS), as well as in techniques such as NMR, deuterated membrane components provide important contrast when present in a mixture with other labelled or unlabelled lipids or when used to highlight membrane proteins. However, in common with perdeuterated proteins, perdeuterated cholesterol cannot be matched out in pure D2O since its scattering length density is higher than that of D2O. For protein labelling, protocols for match-out deuteration have been developed using E. coli or P. pastoris high cell-density cultures (Dunne et al., 2017) and protocols for match-out deuteration of cholesterol are currently being developed in the ILL's Life Sciences Group. As noted previously, the availability of d-cholesterol can be broadly exploited in neutron scattering studies, particularly those relating to lipid systems of various types. This capability is likely to provide novel information on the structural arrangement of mammalian membranes. Examples include small-angle neutron scattering (SANS) of solutions or neutron reflection measurements of interfacial systems that are of direct relevance to membranes and membrane proteins, high-density lipoprotein/low-density lipoprotein (HDL/LDL) exchange phenomena related to atherosclerosis (Browning et al., 2017), properties of alveolar surfaces, and lung surfactant systems (Thompson et al., 2010, 2013; Hemming et al., 2015) where it is desirable to identify the physical and chemical changes of specific components. Other applications are possible in neutron crystallographic studies of proteins that interact with cholesterol, and neutron incoherent scattering studies that focus on the dynamics of specific components of a membranous system. Besides its use for neutron scattering and possibly for NMR applications, perdeuterated cholesterol, in combination with stimulated Raman scattering (SRS), could be extremely valuable in imaging approaches for the study of intracellular cholesterol trafficking mechanisms (Lee et al., 2015). The combination of microscopic information with Raman spectroscopy provides a powerful molecular imaging method, allowing visualization at the diffraction limit of the laser light used and biochemical characterization through associated spectral information. In order to distinguish the molecules of interest from other naturally occurring biomolecules spectroscopically, deuterium labels are needed. The introduction of carbon-deuterium (C-D) bonds into biomolecules or drug compounds by in vivo deuteration approaches or by organic synthesis (Bergner et al., 2011) is a relatively noninvasive labelling approach that does not cause major changes to the chemical and physiological properties of the molecules. In Raman imaging, C-deuterated molecules exhibit characteristic vibrational signatures in the C-D stretching region around 2100-2300 cm−1, avoiding spectral interference with contributions from a complex biological environment. Raman microscopy, in combination with deuteration of fatty acids, has been used to image the metabolism of such lipids in macrophages and to trace their subsequent storage patterns. The appearance of cytosolic lipid droplets is a hallmark of macrophage transformation into foam cells, a key step in early atherosclerosis (Matthäus et al., 2012). Perdeuterated cholesterol may also be used for highly efficient screening of drugs that target cholesterol metabolism.
Low-level deuterium incorporation from heavy water into fatty acids and cholesterol is an attractive method for determining their fractional synthesis in humans (Leitch and Jones, 1993). Diraison et al. (1996) found that the maximum in vivo incorporation number of deuterium atoms into plasma cholesterol was 27 out of the 46 hydrogen atoms present in the molecule. Since in mammals the toxicity of deuterium becomes evident at about 20% replacement of body water by deuterium oxide (Katz et al., 1962), full deuteration of cholesterol requires a recombinant expression system that can cope with high deuterium concentrations.
"year": 2018,
"sha1": "28a1dd9beca26bf07c2a0e82ef94f8ddbbb1ebae",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.chemphyslip.2018.01.006",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "50346789574e2d55b641c2e185919bcdbc695abd",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
Low estimated glomerular filtration rate is an independent risk factor for higher hydroxychloroquine concentration
Background The aim of this study was to analyze the relationship of the estimated glomerular filtration rate (eGFR) to hydroxychloroquine (HCQ) blood concentrations in systemic lupus erythematosus (SLE) patients. Method Patients with SLE who had been taking HCQ for more than 12 months were recruited. All subjects gave written informed consent. Various clinical characteristics and laboratory values were examined. The blood concentration of HCQ was measured by high-performance liquid chromatography, and the relationship of eGFR to HCQ blood concentration was the main focus of the investigation. Result In total, 115 patients with SLE receiving long-term HCQ therapy were included in the study. The median concentration of HCQ was 1096 ng/ml (range 116-8240 ng/ml). The eGFR was strongly associated with the blood concentration of HCQ (P = 0.011, P < 0.05) when adjusted for age, sex, body mass index (BMI), weight-adjusted dose, prednisone use and immunosuppressive drug use. No statistically significant associations were found between age, duration, BMI, weight-adjusted HCQ dose, corticosteroid use, immunosuppressant use and blood concentrations of HCQ. Conclusion We provided novel evidence that impaired renal function influenced the blood concentration of HCQ. Patients with low eGFR need to adjust the HCQ dosage according to the monitoring results of HCQ blood concentrations. Key points: • A higher HCQ blood concentration was associated with low eGFR. • This finding reinforces the importance of routine HCQ measurement to maintain normal blood concentrations. • HCQ blood monitoring will be useful for dose modification in patients with renal dysfunction.
Introduction
Hydroxychloroquine (HCQ) is a traditional antimalarial drug that is effective in the treatment of systemic lupus erythematosus (SLE). In addition to immune regulation through the inhibition of immune activation and the reduction of cytokine production, it is associated with a wide range of benefits, such as anti-infective, anti-thrombotic, lipid-lowering, photoprotective, hypoglycemic, anti-osteoporotic, and anti-inflammatory effects, as well as improvement of dry eye [1,2].
Previous studies [3-7] have identified that a very low blood concentration of HCQ is a simple marker and predictor of systemic lupus erythematosus exacerbation and treatment failure. A recent study showed that higher HCQ blood concentrations predicted HCQ retinopathy [8]. Thus, interest in the measurement of HCQ blood concentrations has increased.
Due to its unique pharmacokinetic and pharmacogenomic characteristics, there is significant interindividual variability in HCQ blood concentrations, even if individuals take the same dose [9-11]. Several studies have analyzed the factors that influence the blood concentration of HCQ [12-15]. However, these studies have revealed contradictory findings, particularly on renal function. Most of the studies found a significant association of impaired renal function with high blood HCQ concentration [12,14]. Another study did not find a relationship between renal function and HCQ concentration; however, it should be noted that, due to the small sample size of the study, only 3.7% of patients had renal dysfunction classified as CKD stage 3 or greater [13]. Another study including patients with renal dysfunction showed a trend toward lower blood HCQ concentrations [15]; however, it should be noted that the patients took lower-than-usual doses (200 mg/day). Although the correlation between renal function and blood HCQ concentration remains controversial, we believe the negative findings could be attributed to low power or low doses. Does the dosage of HCQ need to be adjusted in patients with renal dysfunction? How does the blood concentration of HCQ change in the presence of renal dysfunction with a decrease in eGFR? Therefore, the purpose of the study was to identify the relationship of eGFR to HCQ blood concentrations in SLE patients.
Study design
This was a cross-sectional study aimed at exploring various factors associated with the blood concentration of HCQ, especially focusing on the effect of eGFR. The human ethics committees at the Peking University People's Hospital approved the study (2020PHB209-01). All research adhered to the tenets of the Declaration of Helsinki. All subjects gave written informed consent.
Population
Patients who received HCQ (400 mg/day) for at least 12 months were included in this study. Whole blood was collected, and laboratory values were collected from the electronic medical record system (HIS platform). We also analyzed blood concentrations of HCQ in patients with chronic renal insufficiency. We used the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation to estimate the eGFR [16]. Renal function was classified on the basis of the stage of chronic kidney disease (CKD), with eGFR ≥90, 60-89, 30-59, 15-29, and <15 ml/minute/1.73 m² corresponding to stage 1, 2, 3, 4, and 5 disease, respectively [17].
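For readers who wish to reproduce the staging, a small helper is sketched below. The stage cut-offs are those given in the text; the eGFR formula itself is the published 2009 CKD-EPI creatinine equation reproduced from memory (not from this paper), so its coefficients should be verified against reference [16] before any real use.

```python
# Illustrative helper: 2009 CKD-EPI creatinine eGFR (coefficients assumed, verify against [16])
# and the CKD staging cut-offs stated in the text.
def egfr_ckd_epi_2009(scr_mg_dl: float, age: float, female: bool, black: bool = False) -> float:
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr  # ml/min/1.73 m^2

def ckd_stage(egfr: float) -> int:
    """Stage 1-5 using the cut-offs in the text: >=90, 60-89, 30-59, 15-29, <15."""
    if egfr >= 90: return 1
    if egfr >= 60: return 2
    if egfr >= 30: return 3
    if egfr >= 15: return 4
    return 5

print(ckd_stage(egfr_ckd_epi_2009(0.8, age=35, female=True)))  # -> 1 (normal renal function)
```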
Sample processing
Whole-blood HCQ was quantified by high-performance liquid chromatography (HPLC). 300 μl of blood sample and 10 μl of a 25 μg·mL−1 metronidazole solution, used as the internal standard, were extracted with 900 μl of ethyl acetate, redissolved in 200 μl of mobile phase after drying under nitrogen, and 20 μl of the supernatant was injected for determination. Chromatographic separation was performed on a Symmetry® C18 column (4.6 mm × 250 mm, 5 μm) at 35°C using a mobile phase of 20 mmol·L−1 KH2PO4-acetonitrile (85:15, v/v, pH adjusted to 3 with H3PO4) at a flow rate of 0.8 mL·min−1; the detection wavelength was 254 nm. A calibration curve (100-5,000 ng/ml) was generated to validate the method. The relative standard deviations of intra-day and inter-day precision for HCQ were within 4%. The selectivity, sensitivity, precision, and accuracy of the method were established with the internal standard prior to measurement. The method was simple, sensitive, and accurate, and could be used for the measurement of HCQ concentrations in human whole blood.
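The back-calculation step implied by this internal-standard calibration can be illustrated as follows; the peak-area ratios and the linear fit below are entirely made-up numbers and serve only to show the arithmetic, not the study's actual calibration data.

```python
# Sketch of internal-standard quantification: linear calibration of the HCQ/IS
# peak-area ratio over 100-5000 ng/ml, then back-calculation of an unknown sample.
import numpy as np

cal_conc = np.array([100, 250, 500, 1000, 2500, 5000], float)     # ng/ml spiked standards
cal_ratio = np.array([0.051, 0.127, 0.255, 0.510, 1.270, 2.545])  # HCQ area / IS area (made up)

slope, intercept = np.polyfit(cal_conc, cal_ratio, 1)             # least-squares calibration line

def hcq_ng_ml(area_hcq: float, area_is: float) -> float:
    """Back-calculate the whole-blood HCQ concentration from the peak-area ratio."""
    return (area_hcq / area_is - intercept) / slope

print(round(hcq_ng_ml(area_hcq=2.2e5, area_is=2.0e5)))            # ratio 1.1 -> roughly 2160 ng/ml
```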
Statistical analysis
We used descriptive statistics and plots to examine the data. Because the HCQ blood concentration was not normally distributed, it was natural-log-transformed. Categorical variables are presented as frequencies and percentages, and continuous variables are presented as means and standard deviations. Normally distributed values are expressed as the mean ± SD. Non-normally distributed values were categorized into quartiles. The clinical characteristics of patients in the low and high concentration groups were compared with the chi-square test for categorical variables and the Mann-Whitney test for continuous variables. The association of eGFR and HCQ blood concentration was assessed with the use of multivariable logistic regression models. According to possible confounders, we adjusted the multivariable logistic regression models for age, sex, body mass index (BMI), weight-adjusted dose, and use of prednisone and immunosuppressive drugs. In a separate analysis, for multiple group comparisons across CKD stages, one-way analysis of variance (ANOVA) or a nonparametric Mann-Whitney test was performed. The effect of eGFR on HCQ concentration was analyzed by a simple linear regression model. Statistical analyses were performed using SPSS Statistics 24.0 software (SPSS Inc., Armonk, NY, USA) and presented using GraphPad Prism 8.0 software (GraphPad Software Inc., San Diego, CA, USA). The significance level in all analyses was set at 5%.
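The adjusted model can be expressed compactly in code. The sketch below is not the study's SPSS analysis; the file name and column names are hypothetical placeholders, and it simply shows how the dichotomised HCQ outcome would be regressed on eGFR plus the stated covariates.

```python
# Sketch of the adjusted analysis (hypothetical data file and column names):
# HCQ concentration dichotomised at the median and regressed on eGFR plus covariates.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hcq_cohort.csv")                  # placeholder: one row per patient
df["log_hcq"] = np.log(df["hcq_ng_ml"])             # natural-log transform of the skewed concentration
df["high_hcq"] = (df["hcq_ng_ml"] > df["hcq_ng_ml"].median()).astype(int)

model = smf.logit(
    "high_hcq ~ egfr + age + C(sex) + bmi + weight_adj_dose "
    "+ C(prednisone) + C(immunosuppressant)",
    data=df,
).fit()
print(model.summary())                              # the eGFR coefficient is the quantity of interest
```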
Clinical characteristics of patients with SLE
The study included 111 patients who were receiving the same daily dose of HCQ (400 mg/day). The median concentration of HCQ was 1096 ng/ml (range 116-8240 ng/ml). The analysis was conducted after dividing patients into two groups according to the blood concentration of HCQ. The patients with an HCQ blood concentration equal to or lower than the median (1096 ng/ml) were classified as the low concentration group (n = 55), and the patients with an HCQ blood concentration higher than the median (1096 ng/ml) were classified as the high concentration group (n = 56). The characteristics of both groups are shown in Table 1. Weight (p = 0.008), body mass index (p = 0.036), and weight-adjusted HCQ dose (p < 0.001) were significantly different between the two groups. Age, sex ratio, medication duration, cumulative dose and combination therapy (prednisone and immunosuppressive drugs) were similar for the low and high blood concentration groups. We also examined the association between laboratory values and HCQ blood concentrations, and no clearly significant relationship was observed (Table 2). The patients with low concentrations of HCQ had a higher estimated glomerular filtration rate (eGFR) than those with high HCQ concentrations [mean ± SD: (106.21 ± 14.95) vs. (99.22 ± 25.65), p = 0.011] (Fig. 1). The scatter plot indicates a linear relationship between HCQ concentrations and the eGFR (F = 4.099, P < 0.045) (Fig. 2).
The relationship of renal function to blood concentrations of HCQ
Through multivariate analysis, we identified independent factors related to high blood concentrations of HCQ. The eGFR was independently associated with a high blood concentration of HCQ. There was a significant relationship between the eGFR and blood HCQ concentration in unadjusted models (p = 0.006) and in Model 1, which was adjusted for age, sex, BMI, weight-adjusted dose, and use of prednisone and immunosuppressive drugs (p = 0.005) (Table 3). When patients were categorized according to CKD stage, only 6 patients had chronic renal insufficiency (eGFR < 60 ml/min); among them, five had CKD stage 3 and one had CKD stage 4. These patients also took the same daily dose of HCQ (400 mg/day), which was not adjusted to take CKD into account. Their median blood concentration of HCQ was 2404.04 ng/ml (950.80-8240.20 ng/ml) and was significantly higher than the median blood concentration of HCQ in the 105 patients of the study who also received 400 mg/day (1046 ng/ml [range 116-7374.89 ng/ml]; P = 0.049). There was no significant difference in HCQ blood concentration across the five CKD stages, although there was a trend towards a difference that, perhaps due to the small sample size, did not reach significance (Table 4).
Discussion
In this study, we identified factors that might explain interindividual variations in blood concentrations of HCQ in patients with SLE. Interestingly, we identified a clear correlation between eGFR and blood concentration of HCQ.
Although several previous studies [12-15] examined the relationship between HCQ blood concentrations and variables such as age, BMI, smoking, drug-drug interactions, dosage, and laboratory values (white blood cell count, platelet count, hemoglobin, neutrophil count, etc.), few have studied renal function. We found a significant association of low eGFR with high blood concentrations of HCQ. Similarly, Ji Yeon Lee et al. conducted a cross-sectional study to explore the relationship between renal function and the blood concentration of HCQ; they found high blood HCQ concentrations in 4 patients with abnormal eGFR compared with 23 SLE patients with normal eGFR [13]. Another study, performed by M. Jallouli et al., also found an inverse correlation between the estimated glomerular filtration rate and the HCQ blood concentration; in their study, they also examined three patients receiving long-term dialysis and confirmed that HCQ was not dialyzable [12].
However, a study including 15 patients with renal dysfunction (creatinine: 1.4-4.9 mg/dl) and 6 patients with more severe renal dysfunction (creatinine > 5.0 mg/dl) showed a trend toward lower HCQ concentrations with renal failure, suggesting that renal-failure dosing led to suboptimal HCQ concentrations. Although our study reached the opposite conclusion to the above two studies, we think the reason for the inconsistency is the lower-than-usual doses, since those patients received only 200 mg/day of HCQ. On the other hand, this suggests that blindly reducing the dose is not the best way to account for renal disease in HCQ dosing. Moreover, another study also found no statistically significant association between renal function and [HCQ] or [DHCQ]; however, the authors recognized that the study population was not ideal for studying the relationship due to low power [14]. As previously described, the retinal toxicity [18,19], neuromyotoxicity [20] and cardiotoxicity [21] of HCQ may be enhanced by renal dysfunction. HCQ can lead to retinal toxicity, and an increasing number of patients with advanced HCQ retinopathy have been identified in recent publications, suggesting a need for guidelines that focus on recommending appropriate dosing and toxicity monitoring [22,23]. The most important risk factor is the daily dose, which is calculated using body weight. Guidelines aimed at preventing retinal toxicity recommend using less than 5 mg/kg of actual body weight per day instead of 6.5 mg/kg/day of ideal body weight. To date, there are few guidelines for dose adjustment in patients with renal dysfunction. The Joint European League Against Rheumatism and European Renal Association-European Dialysis and Transplant Association (EULAR/ERA-EDTA) guidelines [22] state that HCQ is recommended for all patients without contraindications, with a maximum dose of 5 mg/kg/day; when GFR < 30 ml/min, the dose can be reduced by 50%. Kidney Disease: Improving Global Outcomes (KDIGO) guidelines [23] also state that HCQ is appropriate for all patients without contraindications, but the recommended dosage varies. These guidelines recommend an initial dose of 6.5 mg/kg/day of ideal body weight or 400 mg/day, and 4-5 mg/kg/day during maintenance treatment, with a dose reduction of at least 25% recommended when the eGFR is less than 30 ml/min/1.73 m². We think that HCQ blood monitoring will be useful. Therefore, further studies are needed in patients with renal dysfunction to confirm our findings and to examine the association among renal function, HCQ blood concentration and toxicity.
Conclusion
In conclusion, we provide novel evidence that higher HCQ blood concentrations are associated with low eGFR. This finding reinforces the importance of routine HCQ measurement to maintain appropriate blood concentrations. HCQ blood monitoring will be useful for dose modification in patients with renal dysfunction. As determination of blood drug concentrations becomes more widely available, such data may be useful for clinicians.
"year": 2023,
"sha1": "8c9b0caccd89690ac27b1c8e507729b91cb652cb",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10067-023-06576-x.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "8b9f5346ca7b812cc8e93fc2c6f75777256e478c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
SCoPE Sets: A Versatile Framework for Simultaneous Inference
We study asymptotic statistical inference in the space of bounded functions endowed with the supremum norm over an arbitrary metric space $S$ using a novel concept: Simultaneous Coverage Probability Excursion (SCoPE) sets. Given an estimator, SCoPE sets simultaneously quantify the uncertainty of several lower and upper excursion sets of a target function and thereby grant a unifying perspective on several statistical inference tools such as simultaneous confidence bands, quantification of uncertainty in level set estimation (for example, CoPE sets), and multiple hypothesis testing over $S$, for example, finding relevant differences or regions of equivalence within $S$. As a byproduct, our abstract treatment allows us to refine and generalize the methodology and reduce the assumptions of recent articles on relevance and equivalence testing in functional data.
Introduction
Historically, there has been a large body of work connecting hypothesis tests and confidence sets, starting as early as [32] with refinements in [1,2,17]. Later, simultaneous confidence sets were constructed from stagewise multiple testing procedures, among others [18,20,21,26,44]. This school of thought is familiar to many statisticians in the form of the duality between families of hypothesis tests and confidence sets, which dates back to [32] (for a modern treatment see [24, Thm 3.5.1]). These works have in common that they treat statistical hypothesis testing as the fundamental paradigm and view confidence sets largely as a derived concept. However, there is an intuitive appeal of confidence intervals over hypothesis testing which is nicely expressed in R. Little's comment on the ASA statement on the p-value [52]: "[...] I teach a basic course in biostatistics to public health students. Confidence intervals are no problem-ideas like margin of error have even entered the vernacular. The difficulties begin with hypothesis testing. [...]". Implicitly, this intuition also appeared in the work on equivalence testing, where the null hypothesis is that a parameter (for example a population mean) is not contained in a known interval, because the first equivalence tests were based on confidence intervals [41,54]. Only later were tests derived from the intersection-union principle [19,42]. A thoughtful discussion of the connection between confidence intervals and equivalence tests and possible pitfalls is presented in [4].
In this work we show that shifting the focus from statistical hypothesis testing to confidence statements allows us to generalize and unify many current simultaneous inference techniques based on family-wise error rate (FWER)-like criteria, for example, relevance and equivalence testing in the space of continuous functions under the supremum norm, simultaneous confidence bands (SCBs), and inference on level or excursion sets of functions. We call our unifying concept Simultaneous Coverage Probability Excursion (SCoPE) sets.
Motivation of SCoPE Sets
A standard technique for simultaneous inference on a real-valued function $\mu \in \ell^\infty(S)$, where $\ell^\infty(S)$ is the space of bounded functions over a metric space $S$ endowed with the supremum norm, are SCBs. For example, assume that $\hat\mu_N$ is an estimator of $\mu$ and that the family of intervals $CI_\alpha(s) = \big[\,\hat\mu_N(s) - \tau_N q_\alpha \sigma(s),\ \hat\mu_N(s) + \tau_N q_\alpha \sigma(s)\,\big]$ indexed by $s \in S$ forms a $(1-\alpha)$-SCB, i.e., $P\big(\forall s \in S : \mu(s) \in CI_\alpha(s)\big) \ge 1 - \alpha$.
Such (1 − α)-SCBs are frequently used to perform multiple hypothesis tests controlling the FWER in the strong sense at level α, for example, to test the null hypothesis µ(s) = c ∈ R for all s ∈ S. The test rejects the null hypothesis, if there exists s ∈ S such that c / ∈ CI(s). Famous examples are Scheffé's method for contrasts in a linear regression [40], Tukey's method [48] and Dunnett's method [14]. Hereafter we call a multiple hypothesis test with strong FWER control at level α a strong α-FWER test.
It is less known that a $(1-\alpha)$-SCB yields a strong $\alpha$-FWER test for all of the hypotheses $H^{c^+}_{0,s}: \mu(s) > c^+(s)$ vs. $H^{c^+}_{1,s}: \mu(s) \le c^+(s)$ and $H^{c^-}_{0,s}: \mu(s) < c^-(s)$ vs. $H^{c^-}_{1,s}: \mu(s) \ge c^-(s)$, indexed by $s \in S$ and $c^\pm \in \mathcal F(S) = \{\, f : S \to \mathbb R \cup \{\pm\infty\} \,\}$. The latter can, for example, be derived from our Proposition 1, which extends the main result from [34]. Using $c^\pm_q = c^\pm \pm \tau_N q \sigma$ for $q \in \mathbb R$, it shows that SCBs guarantee simultaneously for all $c \in \mathcal F(S)$ that the lower excursion set $\hat L_{c^-_{q_\alpha}} = \{\, s \in S \mid \hat\mu_N(s) < c^-_{q_\alpha}(s) \,\}$ is a subset of $L_{c^-} = \{\, s \in S \mid \mu(s) < c^-(s) \,\}$ and the upper excursion set $\hat U_{c^+_{q_\alpha}} = \{\, s \in S \mid \hat\mu_N(s) > c^+_{q_\alpha}(s) \,\}$ is a subset of $U_{c^+} = \{\, s \in S \mid \mu(s) > c^+(s) \,\}$ with probability at least $1-\alpha$ (Fig. 1, left panel). The particular example of Scheffé's method for contrasts in the linear model is carried out in Appendix E.2.
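For illustration, the thresholding above can be carried out on a grid in a few lines; the following Python sketch uses hypothetical values for the grid, the estimator, the scaling function and the SCB quantile and computes the plug-in lower and upper excursion sets for a single constant threshold $c$.

```python
import numpy as np

# Hypothetical ingredients: a grid over S, estimator values mu_hat, a scaling
# function sigma, the rate tau_N and an SCB quantile q_alpha.
grid = np.linspace(0.0, 1.0, 201)           # discretization of S = [0, 1]
mu_hat = np.sin(2 * np.pi * grid)           # placeholder estimator values
sigma = np.ones_like(grid)                  # scaling function sigma(s)
tau_N, q_alpha, c = 1 / np.sqrt(100), 2.7, 0.5

# Thresholds c_q^- = c - q * tau_N * sigma and c_q^+ = c + q * tau_N * sigma.
L_hat = grid[mu_hat < c - q_alpha * tau_N * sigma]   # plug-in subset of L_c
U_hat = grid[mu_hat > c + q_alpha * tau_N * sigma]   # plug-in subset of U_c
print(f"|L_hat| = {L_hat.size} grid points, |U_hat| = {U_hat.size} grid points")
```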
The simultaneous control over all $c^\pm \in \mathcal F(S)$ can be a drawback of SCBs because, in general, the derived multiple test on the family $\{H^c_{0,s}\}_{s\in S}$ for $c \in \mathcal F(S)$ has suboptimal statistical power. As relevant research questions mostly require simultaneous control only over a small subset $C \subset \mathcal F(S)$ (Fig. 1, middle and right panel), our $(1-\alpha)$-SCoPE sets over $(C^-, C^+)$ seek to tune $q_\alpha$ such that the inclusions $\hat L_{c^-_{q_\alpha}} \subseteq L_{c^-}$ and $\hat U_{c^+_{q_\alpha}} \subseteq U_{c^+}$ hold simultaneously only for all $c^\pm \in C^\pm \subseteq \mathcal F(S)$ with probability at least $1-\alpha$.
An Asymptotic SCoPE Sets Theorem
In our main theorem we construct asymptotic $(1-\alpha)$-SCoPE sets over $(C^-, C^+)$ from an estimator $\hat\mu_N$ of $\mu$. In order to circumvent problems of measurability, we work in the J. Hoffmann-Jørgensen framework of weak convergence [50]. Our main assumption is slightly stronger than requiring that the restriction of the estimators $\hat\mu_N$, $N \in \mathbb N$, to a set $U \subseteq S$, which is specified later, satisfies a uniform limit theorem (ULT) in $\ell^\infty(U)$. This means that there exists a strictly positive sequence $(\tau_N)_{N\in\mathbb N}$ converging to zero (the inverse of the usual rate) and a positive function $\sigma \in \ell^\infty(U)$ such that $\frac{\hat\mu_N - \mu}{\tau_N\sigma}$ converges weakly in $\ell^\infty(U)$ to a tight limiting process $G$ with sample paths in $\ell^\infty(U)$. If we define the preimages of $\mu$ under the sets $C^\pm$ to be $\mu^{-1}C^\pm = \{\, s \in S \mid \exists c \in C^\pm : \mu(s) = c(s) \,\}$, then our main result, Theorem 1, states, up to technical details, that $\lim_{N\to\infty} P\big( \forall c^- \in C^-\ \forall c^+ \in C^+ : \hat L_{c^-_q} \subseteq L_{c^-} \text{ and } \hat U_{c^+_q} \subseteq U_{c^+} \big)$ equals the probability of an event defined through a max-sup expression of $G$ over the preimages $\mu^{-1}C^\pm$; we refer to this limit statement as (1). The proof is surprisingly simple and similar to the proofs of the main results in [27] and [43]. It consists of first showing that the r.h.s. is a lower bound of the lim inf of the l.h.s. of (1). This is established using Lemma 2, which provides algebraic sufficient conditions for the excursion set inclusions based on suprema of $G_N = \frac{\hat\mu_N - \mu}{\tau_N\sigma}$ over certain sets $U^\pm_N$ and $S \setminus U^\pm_N$. Since the sets $U^\pm_N$ converge in Hausdorff distance to the sets $\mu^{-1}C^\pm$, the max-sup expression with $G$ replaced by $G_N$ and $\mu^{-1}C^\pm$ replaced by $U^\pm_N$ converges weakly to the max-sup expression of $G$ on the r.h.s. of (1) under continuity assumptions on $G_N$ which are detailed in Section 4. The lower bound then follows using asymptotic tightness properties of an error process defined on $S \setminus (U^-_N \cup U^+_N)$ and an application of the Portmanteau Theorem. Similarly, the lim sup of the l.h.s. can be bounded from above by the r.h.s. based on simple algebra and the Portmanteau Theorem applied to the random variable in which, in the max-sup expression on the r.h.s., $G$ is replaced by $G_N$. This proof technique is condensed and generalized in the SCoPE Set Metatheorem (Appendix B), which is the core of all our results; most importantly, the Metatheorem relaxes the sometimes restrictive assumption of a ULT for $\hat\mu_N$.
Estimating q such that the families L c − q c − ∈C − and Û c + q c + ∈C + form asymptotic (1 − α)-SCoPE sets over (C − , C + ) requires estimation of the quantiles of the process on the r.h.s. of (1). In particular, the preimages µ −1 C ± -more precisely generalized preimages u −1 C ± in the case that µ or the functions in C ± are discontinuous -must be estimated. In Section 4.3 a Hausdorffdistance consistent estimator of u −1 C ± under the assumptions of our main theorem is discussed (see Theorem 2) and we sketch a general strategy based on the multiplier bootstrap, as used for example in [11], to consistently estimate the quantile q. Concrete estimation of q, however, depends on the particular probabilistic model.
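A generic (and necessarily model-dependent) implementation of this strategy is sketched below in Python: given residuals on a grid and boolean masks approximating the generalized preimages from Section 4.3, Gaussian-multiplier bootstrap replications of a max-sup statistic are used to read off an approximate quantile. The function name, the choice of multipliers and the particular max-sup functional (here the maximum of $\sup(-G)$ over the lower mask and $\sup(G)$ over the upper mask) are illustrative assumptions, not the definitive form of (1).

```python
import numpy as np

def bootstrap_scope_quantile(residuals, mask_minus, mask_plus, alpha=0.05,
                             reps=5000, seed=None):
    """Approximate the (1 - alpha)-quantile of a max-sup statistic of the limit
    process via Gaussian multipliers.  residuals has shape (N, n_grid) and is
    assumed to consist of centered iid functional observations on a grid;
    mask_minus / mask_plus are boolean masks of the estimated preimages."""
    rng = np.random.default_rng(seed)
    N, _ = residuals.shape
    sd = residuals.std(axis=0, ddof=1)
    stats = np.empty(reps)
    for r in range(reps):
        g = rng.standard_normal(N)
        boot = (g @ residuals) / np.sqrt(N) / sd      # multiplier bootstrap of G_N
        sup_minus = np.max(-boot[mask_minus]) if mask_minus.any() else -np.inf
        sup_plus = np.max(boot[mask_plus]) if mask_plus.any() else -np.inf
        stats[r] = max(sup_minus, sup_plus)
    return np.quantile(stats, 1 - alpha)
```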
In this work we focus on properties deducible from general assumptions on the estimator $\hat\mu_N$ of $\mu$. Nevertheless, we demonstrate in Section 5.4 that Theorem 1 can be used to construct a strong $\alpha$-FWER test for iid observations which in this scenario is more powerful than Hommel's procedure [22]. There we also introduce the concept of insignificance values to judge the validity of SCoPE sets post hoc. Moreover, our Section 6 on connections between SCoPE sets and multiple tests and the upcoming literature review will reveal that, among others, the inference strategies from [5,6,9,10,11,13,43] can be viewed as applications of Theorem 1, which indicates its broad applicability.
SCoPE sets and Hypothesis Testing
A possible interpretation of Theorem 1 is that the r.h.s. of (1), which depends on the unknown function $\mu$, characterizes oracle limiting distributions for different asymptotic strong $\alpha$-FWER tests. From the perspective of multiple hypothesis testing, each statement of the form $\hat L_{c^-_{q_\alpha}} \subseteq L_{c^-}$ suggests a test of $H_{0,s}: \mu(s) < c(s)$ versus $H_{1,s}: \mu(s) \ge c(s)$ which accepts $H_{0,s}$ if $s \in \hat L_{c^-_{q_\alpha}}$ and rejects it otherwise. If $q_\alpha$ corresponds to $(1-\alpha)$-SCoPE sets over $(C^-, C^+)$ with $c^- \in C^-$, then this test controls the FWER at level $\alpha$ because $L_{c^-}$ is the set of true null hypotheses and the corresponding inclusion holds with probability at least $1-\alpha$. This interpretation allows us to construct strong $\alpha$-FWER tests based on SCoPE sets for complex hypotheses.
For b ± ∈ F(S) such that b − (s) ≤ b + (s) for all s ∈ S, a local Relevance Test is a multiple hypothesis test on the hypotheses for all s ∈ S. In Section 6.1 we construct a local relevance test based on SCoPE sets which asymptotically controls the FWER over S in the strong sense (Theorem 4). To the best of our knowledge no such test has been proposed yet. Only tests on the global hypothesis µ(s) ∈ b − (s), b + (s) for all s ∈ S have been discussed in the literature. Examples with b ± (s) = ±∆ for all s ∈ S are the test on having somewhere a relevant difference between the population means and for having somewhere a relevant difference between the covariance operators for two samples of C [0, 1] -valued random variables from [11] and [10] respectively. This strategy was recently extended to function-on-function regression [13]. Our Theorem 3 generalizes the test strategy of the aforementioned articles and shows that these tests are based on SCoPE sets. In consequence Theorem 3 explains that the validity of their testing strategy follows immediately from a ULT of the considered estimatorμ N of their target function µ which is always the first step in their proofs. Hence even our local relevance test from Theorem 4 can be applied to the probabilistic models and estimators discussed in [10,11,13]. The generalizations which Theorem 3 grants are a strategy to adapt the test-statistic to the asymptotic variance ofμ N or any other scaling function, allowing b ± , µ andμ N to be discontinuous and showing that S can be an arbitrary metric space.
Similarly, in Section 6.2 we derive from Theorem 1 a test for the global equivalence hypothesis. Such equivalence tests have been studied, among others, for multivariate data (S discrete and finite) [53, Chapter 7.3] and [51], for regression curves [12,31] and, in the case of the difference of two mean functions or the quotient of two variance functions in functional data, in [15] and [9]. As in the case of the local relevance test, our Theorem 5 generalizes the testing strategy of these articles and embeds it into the SCoPE sets framework. Last but not least, we derive in Section 6.3 a local Equivalence Test based on SCoPE sets. It is a multiple hypothesis test whose local alternatives are obtained by interchanging the roles of null and alternative hypothesis compared to local relevance tests; it is therefore not surprising that Theorem 6 establishes that our local equivalence test based on SCoPE sets has similar properties as our local relevance test, for example, being a strong $\alpha$-FWER test. We could not find any previous work on such tests. An application could be detecting which drugs are equivalent in an experiment where several drug-treatment comparisons (indexed by $s$) have been recorded.
Connections to the Literature
The first work we are aware of which quantifies the uncertainty of the lower and the upper excursion set above zero of a function µ defined on R D is [27]. Using an estimatorμ N of µ it constructs a lower and an upper excursion setL andÛ respectively such that both inclusions L ⊆ L 0 andÛ ⊆ U 0 hold asymptotically with a prespecified probability. They apply these sets to quantify the uncertainty of the excursion sets of a kernel density estimator above a single c ∈ R, but do not explicitly extract the asymptotic distribution. However, they even derive a non-asymptotic bound [27, Lemma 2.1.] which can also be derived from our Proposition 2 from Appendix B. Another recent work on confidence sets for a single level or upper excursion set of an unknown density using kernel density estimators is [33]. In particular they derive assumptions and rates for having asymptotically nominal coverage. We recommend it also for its broad overview on applications of level and excursion set estimation.
Assuming that $S \subset \mathbb R^D$ is compact, $c \in \mathbb R$ and $\hat\mu_N$ satisfies a uniform central limit theorem, [43] finds the limit distribution of the corresponding excursion set inclusions. Although it seems reasonable at first to use only upper excursion sets with "$\ge$"-signs, this has two shortcomings. First, their limit result holds only under the assumption that $\mu$ is not flat on the level set $\{ s \in S \mid \mu(s) = c \}$ [43, Assumption 2.1.(a) and Lemma 1]. Hence their result cannot be connected to testing. Second, their inclusion statement does not provide a random partition of $S$, while $(1-\alpha)$-SCoPE sets over $(\{c\}, \{c\})$ give a random partition of $S$ into three sets for which all inclusions hold (asymptotically) simultaneously with a probability depending on $q$. In other words, this partition consists of asymptotic $(1-\alpha)$-confidence subsets for $L_c$ and $U_c$ and a $(1-\alpha)$-confidence superset for $\mu^{-1}(c)$. The inclusion from [43] only yields part of such a partition; a superset for $\mu^{-1}(c)$ cannot be directly obtained from their main result [43, Corollary 1]. These problems persist in the applications of this work to geoscience [16] and neuroimaging [5,6] and in the innovative work [29]. The latter generalizes [43] to intersections and unions of excursion sets of several functions $\mu_1, \dots, \mu_K$, $K \in \mathbb N$, above a single $c \in \mathbb R$. Our Corollary 3 generalizes the main theorem from [43]. It allows $S$ to be an arbitrary metric space, requires weaker assumptions and provides a partition of $S$ of the form (2).
To date, only [34] deals with several excursion sets at the same time. Their main theorem shows that properly thresholding SCBs yields simultaneous confidence sets for all lower and upper excursion sets over $c \in \mathbb R$. We generalize their main result in our Proposition 1 by showing that the control is even simultaneous over all $c \in \mathcal F(S)$.
Our Corollary 1 connects asymptotic (1 − α)-SCBs (among others, [8,47]) with (1 − α)-SCoPE sets over F(S), F(S) . Unsurprisingly, the fast and fair SCBs [25] can be used to generate (1 − α)-SCoPE sets over F(S), F(S) , too. Deriving this result requires the SCoPE Set Metatheorem since their key innovation is that the quantile parameter q is a function which is chosen such that the invalidation of the coverage is fairly spread over a partition of S = [0, 1]. We do not include this result here, since we restrict ourselves to constant q.
Recently, relevance tests [10,11,13] and equivalence tests [9,12] in the space of continuous functions over S = [0, 1] have been studied. These articles focus on using μ N ∞ = sup s∈S |μ N (s)| as the test statistic and derive limiting distributions depending on the set of extreme points of µ under the null and alternative hypothesis. Due to the focus on the supremum norm these tests cannot identify s ∈ S such that the considered hypothesis is invalidated. As explained earlier in detail, our Theorems 4 and 6 go a step further and provide strong α-FWER relevance and equivalence tests and therefore grant probabilistic bounds that all rejected s ∈ S are points at which the considered null hypothesis is correctly rejected.
Organization of the Article
In Section 2 we introduce notations and definitions required to understand our main results. In Section 3 we rigorously define SCoPE sets and provide the link between (1−α)-SCBs and (1−α)-SCoPE sets over F(S), F(S) . Our main result (Theorem 1) and the required assumptions are discussed in Section 4. It also contains a general strategy to consistently estimate generalized preimages and provides a pathway to estimate the quantile parameter q. Section 5 gives a short glance into applications of SCoPE sets such as providing simultaneous control over regions of interest, confidence regions of several contour lines or detection of contrasts in multiple linear regression. Connections between SCoPE sets and statistical hypothesis testing are explored in Section 6 and in Section 7 we discuss observations and consequences of the introduced methodology.
Notations and Definitions
In this article $(S, d)$ denotes a metric space. For $B \subseteq S$ we write $S \setminus B$ for its complement, $\mathrm{cl}\,B$ for the topological closure, $\mathrm{int}\,B$ for the interior and $\partial B = \mathrm{cl}\,B \setminus \mathrm{int}\,B$ for the topological boundary of $B$. The set $\mathcal F(S)$ denotes the set of functions $f : S \to \mathbb R \cup \{\pm\infty\}$. The set $\ell^\infty(S) \subset \mathcal F(S)$ is the set of all bounded functions $f \in \mathcal F(S)$, i.e., $\|f\|_\infty = \sup_{s\in S} |f(s)| < \infty$, and $C(S) \subseteq \mathcal F(S)$ is the subset of continuous functions with respect to the topology generated by the metric $d$. If $f \in \mathcal F(S)$ and $r \in \mathbb R \cup \{\pm\infty\}$, we write $f \equiv r$ if $f$ is the constant function with value $r$, and if no confusion is possible we identify $r$ with the constant function with value $r$. For any $f \in \mathcal F(S)$ we define suprema and infima over subsets of $S$ as usual. Let $(\Omega, \mathcal P, P)$ be a probability space. Our theory is based on the J. Hoffmann-Jørgensen theory of weak convergence. Thus, recall that the inner probability of a set $V \subset \Omega$ is given by $P_*(V) = \sup\{ P(W) \mid W \subseteq V,\ W \in \mathcal P \}$, and its outer probability by $P^*(V) = \inf\{ P(W) \mid W \supseteq V,\ W \in \mathcal P \}$. Because we will repeatedly use statements (i) and (ii) of the Portmanteau Theorem [50, Theorem 1.3.4], we introduce the following notation.
Definition 1.
Let Ω N ⊂ Ω be a sequence of sets, X : Ω → R a random variable and q ∈ R. We write lim N →∞ P * Ω N = P X ≺ q under Assumptions (A) / (B) if the following two statements hold: Our asymptotic theory of SCoPE sets is developed in terms of preimages and graphs of functions. Therefore we extend these concepts to (possible uncountable) sets F ⊆ F(S). Recall that the graph of f ∈ F(S) is the set Γ(f ) = (s, r) ∈ S × R | r = f (s) . If F ⊂ C(S) or µ / ∈ C(S) we need to generalize the concept of a preimage of a set of functions in order to add all "touching points" of Γ(µ) and Γ(F) to the preimage. To make this idea mathematically precise we first introduce thickenings of the set F. Definition 4 (Thickenings of a Set of Functions). For F ⊆ F(S) and η > 0 the set F ±η = f ±ε ∈ F(S) | f ∈ F, 0 ≤ ε ≤ η is called ±η-thickening of F. The set F η = F −η ∪F +η will be called the η-thickening of F.
Obviously, µ −1 F , µ −1 ±F ⊆ u −1 F which explains the name generalized preimages. Its importance for us lies in the observation that, if clµ −1 ±Fη is compact for some η > 0 and u −1 for any positive sequence (η N ) N ∈N ⊂ R converging to zero [49,Corollary 5.30] and u −1 ±F is the unique closed set with this property. If there would be another A ⊆ S satisfying (4), it holds by the triangle inequality that and therefore u −1 ±F = clA [49, Problem 5.1. (3)]. All our results have a similar limit distribution. Therefore we introduce the following two notations for F, G ⊆ F(S) and f a real-valued function with appropriate domain: We also write T F ,F (f ) = T F (f ) and T F ,F (f ) = T F (f ).
Simultaneous Coverage Probability Excursion Sets
As explained in the introduction our main interest is providing inference strategies on level sets of a function µ given a functionμ N : Ω → F(S), N ∈ N, obtained from data. We callμ N an estimator of µ, yet we do not require that it is measurable.
Definition 6. Let f ∈ F(S). The lower and upper excursion sets of µ over f are Definition 7. Let α ∈ (0, 1), C ± ⊆ F(S) and c ± α ∈ F(S) correspond to a c ± ∈ C ± . Then the two families of set-valued functions L Remark 1. Our definition involves outer and inner probabilities, since even ifμ N is measurable it depends on the probabilistic model whether the union of the inclusion statements is measurable, especially if C − or C + is uncountable.
If $\mathbb R$ is identified with the constant functions, then the main result from [34] can be viewed as the first result on non-asymptotic $(1-\alpha)$-SCoPE sets over $(\mathbb R, \mathbb R)$. It shows that any $(1-\alpha)$-SCB for a function $\mu$ allows to construct $(1-\alpha)$-SCoPE sets over $(\mathbb R, \mathbb R)$. Our next proposition generalizes this result because it shows that $\mathbb R$ can be replaced by $\mathcal F(S)$. To stay within our notation we only discuss $(1-\alpha)$-SCBs of the form $\big[\hat\mu_N(s) - q\tau_N\hat\sigma_N(s),\ \hat\mu_N(s) + q\tau_N\hat\sigma_N(s)\big]$. The general case of $(1-\alpha)$-SCBs of the form $\big[\hat l(s), \hat u(s)\big]$, $s \in S$, where $\hat u, \hat l \in \mathcal F(S)$ with $\hat l(s) < \hat u(s)$, can be treated analogously. It only requires assuming that the events $\{\omega \in \Omega \mid \forall s \in S : \mu(s) \ge \hat l(s)\}$ and $\{\omega \in \Omega \mid \forall s \in S : \mu(s) \le \hat u(s)\}$ are measurable. Proposition 1. Let $S$ be separable, $\mu \in \mathcal F(S)$, $\tau_N > 0$, $\hat\mu$ be an $\mathbb R$-valued and $\hat\sigma$ an $\mathbb R_{>0}$-valued stochastic process and assume that the process $\frac{\hat\mu(s)-\mu(s)}{\tau_N\hat\sigma(s)}$ is separable. Then Remark 2. The proof of the above result establishes equality between sets. Thus, the measurability assumptions implicit in imposing that $\hat\mu$ and $\hat\sigma$ are stochastic processes and the separability can be removed if the probabilities are replaced by outer or inner probabilities.
Assumptions
Hereafter we assume $\mu \in \ell^\infty(S)$ and that $(\hat\mu_N)_{N\in\mathbb N}$ is a sequence of estimators of $\mu$, i.e., $\hat\mu_N : \Omega \to \ell^\infty(S)$, that $\sigma \in \ell^\infty(S)$ satisfies $0 < o < \sigma(s) < O < \infty$ for all $s \in S$, and that $(\tau_N)_{N\in\mathbb N} \subset \mathbb R$ is a positive sequence converging to zero. Furthermore, we define the sequence $(G_N)_{N\in\mathbb N}$ by $G_N = \frac{\hat\mu_N - \mu}{\tau_N\sigma}$. Definition 8. For $C \subseteq \mathcal F(S)$, an estimator $\hat\mu_N$ of $\mu$ with values in $\ell^\infty(S)$ is said to fulfill a uniform limit theorem on $C_\eta$ (short $C_\eta$-ULT) if the following conditions hold: (ii) There is a $K > 0$ such that for every $s \in S \setminus \mu^{-1}C_\eta$ it holds almost surely (a.s.) that for all $c \in C$, with $Z_N$ being a sequence of functions $\Omega \to \ell^\infty(S)$ such that $\inf_{s\in S} \tau^{-1}_N Z_N(s)$ is asymptotically tight in the sense of [50, p. 21].
Remark 3. At first, the second condition might appear abstract, yet it only means thatμ N (s) − c(s) and µ(s) − c(s) for all c ∈ C have with probability tending to one the same sign for all s ∈ S sufficiently far away from the generalized preimage u −1 C . This follows immediately from (7) and inf s∈S τ −1 N Z N (s) being asymptotically tight because Remark 4. To illustrate that the assumption of having a Cη-ULT is not restrictive assumê µ N = µ + σZ N . Thenμ N satisfying a Cη-ULT means that τ −1 NZ N G on cl µ −1 Cη and it holds for all c ∈ C and all s ∈ S \ µ −1 Cη that The remaining requirement is that inf s∈S τ −1 NZ N (s) is asymptotically tight. Given two sets C ± ⊆ F(S) andη > 0 we will need the following assumptions.
The definition of u −1 ∓C ± and Assumption (A3) might appear cryptic at a first glance. The important observation is that the Assumptions (A1)-(A3) imply weakly in R for any sequence (η N ) N ∈N converging to zero. This is proven in Lemma 6 in Appendix A.3. The Hausdorff-convergence from (A3) is a crucial ingredient to prove this result which can fail if µ −1 C ± instead of the generalized preimage u −1 ∓C ± are used (Fig. 2). If S is compact, using u −1 ∓C ± is only necessary if µ is not continuous or there is no set C ⊆ C(S) such that Γ C = Γ C because otherwise it can be shown that u −1 ∓C ± = cl µ −1 C ± (Appendix C). Surprisingly, our proofs show that no continuity assumption on µ is necessary. We only need the continuity of G and G N in neighborhood of u −1 ∓C ± as stated in (A2) and (A4).
An Asymptotic SCoPE Sets Theorem
We can now state our main theorem about SCoPE sets which is a consequence of the more general SCoPE Set Metatheorem (Appendix B).
Although the bands defined byμ N ± qτ N σ do not contain µ(s 0 ) in both panels, all inclusions of the excursion sets are satisfied.
Remark 8. From the proof of Lemma 2 it can be seen that any of the inclusions S \Û c − q ⊆ S \U c − or S \L c + q ⊆ S \ L c + can be added to the l.h.s. of (8). However, if we want to replace for example anyL c − q ⊆ L c − by S \Û c − q ⊆ S \ U c − , then (A4) must be replaced by a stronger condition to obtain exact SCoPE sets, compare Appendix D.
implies that the SCoPE sets inclusions fails for some c ∈ C. This cannot be guaranteed at points of discontinuity of µ or c, even if continuity of G N in a neighborhood of u −1 C \ cl µ −1 C is assumed (Fig. 3). Setting C ± = F(S) in Theorem 1 implies S = u −1 C = cl µ −1 C = int µ −1 C and thus (A2)-(A4) are satisfied. This shows that the lower and upper excursion sets defined by thresholding the asymptotic (1 − α)-SCB obtained from an ULT yield (1 − α)-SCoPE sets over F(S), F(S) . Hence the next corollary is the asymptotic version of Proposition 1.
The next corollary bounds the preimage of µ for several bands defined by for all s ∈ S, then the above Corollary yields the following inclusion of partitions of Ŝ which asymptotically is satisfied with at least the q dependent inner probability on the r.h.s. of Corollary 2. If b ± ∈ R this becomes the more familiar: Remark 11. The special case b ± = b ± k for all k ∈ N and b − (s) ≤ b + (s) is particularly interesting. Readers familiar with [9,10,11] might spot the similarities between their limiting distributions and the limiting distribution of Corollary 2. This is not a coincidence as will be explained in Section 6 where we derive relevance and equivalence tests from Corollary 2.
Our last corollary generalizes the main theorem from [43]. Hence we assume that S is compact and µ, c ∈ C(S) which yields that u −1 ∓{c} = µ −1 c and Assumption (A4) is always satisfied. Most importantly, we do not require their restrictive non-flatness Assumption 2.1(a).
Estimating the Generalized Preimages and Bootstrapping the Quantile of SCoPE sets
Estimation of q such that the families from Theorem 1 are (1−α)-SCoPE sets requires estimation Under (A1) we can then derive a Hausdorff-distance consistent estimator of u −1 ±C and u −1 C by replacing µ byμ N in the definition of cl µ −1 Cη and choosing η depending on N appropriately. More precisely, if (k N ) N ∈N is a positive sequence converging to zero, we define the thickenedplugin-estimators of u −1 ±C and u −1 C bŷ If µ is continuous then The estimatorû −1 C can be interpreted to be derived from a SCB which gives an interpretation of the factor k N .
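On a grid, the thickened plug-in estimator amounts to collecting all points at which $\hat\mu_N$ lies within $k_N \tau_N \sigma$ of some $c \in C$. A minimal Python sketch for a finite set of constant thresholds (all names and the example values are hypothetical):

```python
import numpy as np

def plugin_preimage(mu_hat, sigma, tau_N, k_N, thresholds):
    """Boolean mask of the thickened plug-in estimate of the generalized
    preimage u^{-1}C on a grid, for a finite set C of constant thresholds."""
    mu_hat, sigma = np.asarray(mu_hat), np.asarray(sigma)
    mask = np.zeros(mu_hat.shape, dtype=bool)
    for c in thresholds:
        mask |= np.abs(mu_hat - c) <= k_N * tau_N * sigma
    return mask

# Example: estimated preimage of c = 0 with k_N = log(N)/10 and tau_N = 1/sqrt(N).
N, grid = 100, np.linspace(0, 1, 201)
mask = plugin_preimage(np.sin(2 * np.pi * grid), np.ones_like(grid),
                       1 / np.sqrt(N), np.log(N) / 10, thresholds=[0.0])
```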
Identification of $k_N$ with the quantile of an SCB can help to justify the choice of $k_N$. Nevertheless, $k_N$ can also be tuned by requiring that $\sup_{s\in S}\big|\tau_N^{-1}\big(\hat\mu_N(s)-\mu(s)\big)/\sigma(s)\big| \le k_N$ holds with a prescribed probability, i.e., by taking $k_N$ to be an SCB quantile. The proof of consistency of the estimator (10) is similar to the proof of consistency of the estimator of the extremal set in [11]. It requires that $k_N$ converges to zero at an appropriate rate, which implies that $\alpha_N$ goes to zero, too. We can interpret $\hat u^{-1}C = \emptyset$ as the rejection rule of a hypothesis test with null hypothesis $\mu \in \Gamma(C)$ and alternative hypothesis $\mu \notin \Gamma(C)$ at significance level at most $\alpha_N$. That $\alpha_N$ must converge to zero as $N$ tends to infinity is nonstandard and might seem inappropriate at first glance, since usually the significance level in hypothesis testing is fixed at an acceptable threshold independent of $N$.

Figure 4: Illustration for $C = \{c\}$ of why the additional conditions needed to obtain Hausdorff convergence of $\hat u^{-1}_{\pm}C$ to $u^{-1}_{\pm}C$ are necessary. The problem is that $s_0 \notin u^{-1}_{+}C$ and $\mu$ gets arbitrarily close to $c$ in any neighbourhood of $s_0$ with the wrong sign.

This mathematical necessity resembles a philosophical question about practical data analysis: is the scientific cost of a Type I error in a large sample size experiment not much higher than for a small data set, because we tend to put more trust in large than in small sample sizes? The core of the scientific method is reproducibility and evaluating coherence within our empirically collected knowledge. The latter is usually less difficult to achieve for smaller sample sizes, since in general such experiments are easier to repeat. Hence, should large sample size experiments not pass higher standards if we want to draw conclusions from them?
Our next theorem strengthens this view from a mathematical perspective.
If additionally µ is continuous on clµ −1 Cη \ intµ −1 C for some η > 0 and Γ(∂C) is closed, then d H û −1 ±C , u −1 ±C → 0 in outer probability as N tends to infinity. Remark 12. The concept of the boundary ∂C of a set C ⊆ F(S) is introduced in Appendix C. Note that this is not a topological boundary since we do not introduce a topology on F(S). The condition Γ(∂C) being closed is for example implied by Γ(C) ⊆ S × R being closed or if there is a C ⊆ C(S) such that Γ(C ) = Γ(C).
In principle, the Hausdorff consistency results of the above theorem allow us to estimate the quantile of SCoPE sets using the bootstrap along the lines described in [11]. A general strategy (not necessarily the best in a given probabilistic model) to achieve this is to show within the assumed probabilistic model that realizations of a bootstrap processes B weakly in ∞ (cl µ −1 Cη ) R+1 . Here G (1) , . . . , G (R) are i.i.d. copies of G. A helpful simplifying idea to achieve this for complicated statistics can be functional delta residuals [46]. Combining (12) with Theorem 2 and a generalization of Lemma B.3 from [11] (compare also Appendix A.3) yields for two sequences of sets C N N ∈N and D N N ∈N in S converging in outer probability in Hausdorff-distance to C, D ⊆ S that (with slight abuse of the notation (5))
Applications of SCoPE Sets
Theorem 1 is affluent in interpretations because it allows to extract information about any combination of excursion sets of µ and thereby allows to draw conclusion about its image.
Hence we give only a short glance into possible applications.
Confidence Regions for Contour lines
SCoPE sets offer the possibility to provide confidence regions for contour lines of a target function µ derived from an estimatorμ N . Assume that c 1 , . . . , c K ∈ R are the contour values of interest, then applying Theorem 5 to the sets Remark 13. Controlling the inclusion of excursion sets implies control of the inclusion of contour lines. However, the reverse is not true, as already observed in [43].
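Concretely, on a grid the confidence region for the contour line at level $c_k$ is the set of points at which neither excursion claim is made, i.e., where $|\hat\mu_N - c_k| \le q\tau_N\sigma$. A minimal Python sketch (hypothetical inputs; $q$ is assumed to be the SCoPE quantile for $C^\pm = \{c_1,\dots,c_K\}$):

```python
import numpy as np

def contour_confidence_regions(mu_hat, sigma, tau_N, q, levels):
    """For each contour value c_k return a boolean mask of the region in which
    neither excursion claim is made, i.e. the complement of L_hat and U_hat."""
    mu_hat, sigma = np.asarray(mu_hat), np.asarray(sigma)
    return {c: np.abs(mu_hat - c) <= q * tau_N * sigma for c in levels}
```

If $q$ is chosen so that the corresponding families form $(1-\alpha)$-SCoPE sets over $(\{c_1,\dots,c_K\},\{c_1,\dots,c_K\})$, each returned region simultaneously covers the contour line $\mu^{-1}(c_k)$ with asymptotic probability at least $1-\alpha$.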
Regions of Interest Analysis
In applications, such as neuroimaging, the researcher might be interested in reporting only the results of a statistical analysis specific to regions of interest (RoI). Usually, however, it is not evident before collecting the data, which RoI from a predefined set needs to be reported. SCoPE sets offer a solution to this problem.
A set of RoIs consists of subsets R k ⊂ S, k ∈ {1, . . . , K}. Define for each k ∈ {1, . . . , K} the indicators of a RoI by . . , K} allows the researcher to inspect the confidence subsets of the RoI specific lower and upper excursion sets and report only the results of the interesting RoI's without making mistakes due to multiple comparisons. Similarly, more complicated questions can be answered about the level sets on the RoI's by including more RoI adapted functions into C ± .
Scheffé Type Inference For Multiple Linear Regression
Interestingly, Theorem 1 offers novel simultaneous inference strategies for contrasts in multiple linear regression models. Here we only discuss SCoPE sets over {0}, {0} . More complicated SCoPE sets can be found in Appendix E.2. Moreover, we restrict ourselves to the case of homoscedastic Gaussian errors, although our results extend to more complicated settings. Let Interpreting a ∈ R K as the parameter set, we define a stochastic process indexed in R K and its asymptotic variance byμ N (a) = a Tβ N , σ 2 (a) = ξ 2 a T Xa. Since all involved quantities are Gaussian and continuous in a, we obtain weakly in C R K , where G is the zero-mean Gaussian process with covariance function Since G N (a) = G N (ã) and G(a) = G(ã) withã = a/ a , the domain of the processes should actually be the compact space A standard task in multiple linear regression is finding contrasts b ∈ S K−1 using the observations y N such that with high probability b ∈ {a ∈ S K−1 | a T β = 0}. This can be achieved for example using SCBs [40]. The asymptotic analogue of Scheffé's (1 − α)-SCBs for contrasts are given by the intervals with endpointŝ where . Based on this and using Theorem 8.5 from [35] the asymptotic version of Scheffé's test rejects the null hypothesis of a T β = 0 for all a ∈ S K−1 at significance level α, if which is equivalent to the existence of a ∈ S K−1 such that zero is not contained in the interval given by (14). Corollary 1 shows that this SCB contains more information than allowing us to perform a valid hypothesis test for a T β = 0 for all a ∈ S K−1 . We actually know that This implies for c ≡ 0 that all contrasts b ∈ S K−1 contained in either of the two setŝ are asymptotically with probability at least 1 − α correctly discovered to be non-zero contrasts, This result holds independent of the actual value of β. The drawback is low detection power since it constructs confidence subsets for more functions than just c ≡ 0. The power can be improved by applying Corollary 3.
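As a numerical illustration of the two detection rules, the following Python sketch (hypothetical OLS setting with homoscedastic errors; scipy assumed) computes the asymptotic Scheffé cut-off based on a $\chi^2_K$ quantile and the SCoPE-set cut-off of Corollary 5 based on a $\chi^2_{K-1}$ quantile, and flags a unit contrast $a$ as non-zero if $|a^T\hat\beta_N|$ exceeds the corresponding multiple of its standard error; the finite-sample standardization is a plausible analogue of the asymptotic rule, not the exact statement of the corollary.

```python
import numpy as np
from scipy import stats

def contrast_detection(X, y, alpha=0.05):
    """OLS fit plus the two cut-offs for declaring unit contrasts a with
    a^T beta != 0: Scheffe (chi^2_K) and the SCoPE-set rule (chi^2_{K-1})."""
    n, K = X.shape
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_hat
    s2 = resid @ resid / (n - K)                 # homoscedastic error variance
    cov_beta = s2 * np.linalg.inv(X.T @ X)       # estimated Cov(beta_hat)
    q_scheffe = np.sqrt(stats.chi2.ppf(1 - alpha, df=K))
    q_scope = np.sqrt(stats.chi2.ppf(1 - alpha, df=K - 1))

    def is_nonzero(a, q):
        a = np.asarray(a, dtype=float)
        a = a / np.linalg.norm(a)                # restrict to the unit sphere
        return abs(a @ beta_hat) > q * np.sqrt(a @ cov_beta @ a)

    return beta_hat, q_scheffe, q_scope, is_nonzero
```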
Corollary 5. Assume the multiple linear regression model depending on N as defined above. Let c(a) = 0 for all a ∈ S K−1 and q ∈ R ≥0 . Then Here 1 β=0 is the indicator function, i.e., one, if β = 0, and zero else, and u → χ 2 (u, k) is the cumulative distribution function of a χ 2 k -distributed random variable.
Corollary 5 suggests a more powerful strategy to detect non-zero contrasts which is tailored to control the excursion sets for This means that, if β = 0, all discovered contrasts are with probability 1 − α correctly identified to be non-zero. On the other hand, if β = 0, it holds that L 0 = U 0 = ∅. Thus, asymptotically the probability to find any non-zero contrast, i.e., The latter tests the statistical hypothesis a T β = 0 for all a ∈ S K−1 with asymptotic significance level 1 − χ 2 q 2 1−α,K−1 , K . Since q 2 1−α,K−1 < q 2 1−α,K , it is more likely to discover non-zero contrasts within the SCoPE sets framework than using Scheffé's SCBs. The price to pay is that for β = 0 incorrectly discovering at least one non-zero contrast is slightly larger than α, yet this probability can be quantified.
As an illustration assume 1 − α = 0.95, k = 4 and the researcher uses (1 − α)-SCoPE sets over ({0}, {0}) to find non-zero contrasts. This means that, if the true β = 0, then all discovered contrasts are with probability 0.95 correctly identified to be non-zero contrasts, and that the null hypothesis of β = 0 can be rejected at significance level 1 − χ 2 q 2 0.95,3 , 4 ≈ 0.1. The latter means that, if the researcher is unlucky and the true β is equal to zero, then having discovered any non-zero contrasts is an event which happens with probability ≈ 0.1.
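The quoted value $1 - \chi^2(q^2_{0.95,3}, 4) \approx 0.1$ can be verified directly (scipy assumed):

```python
from scipy import stats

q2 = stats.chi2.ppf(0.95, df=3)        # q_{0.95, K-1}^2 with K = 4
print(1 - stats.chi2.cdf(q2, df=4))    # approximately 0.1
```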
A Simple Example of a SCoPE sets Analysis
The last paragraph of the previous section highlighted a different view through SCoPE sets on statistical hypothesis testing. The key observation was that the estimated q 1−α,K to obtain (1− α)-SCoPE sets for {0}, {0} can be used to judge the plausibility of the statistical hypothesis β = 0, since the asymptotic probability of discovering any non-zero contrast is given by 1 − χ 2 q 2 1−α,K−1 , K . We will call such a value an insignificance value because it helps judging the plausibility of our discoveries under the probabilistic model and allows to declare discoveries "insignificant". The idea of this section is to showcase on a simple probabilistic model what we call a insignificance analysis which from our viewpoint should be reported together with SCoPE sets.
Let us assume our observations are y 1 , . . . , y N and a reasonable probabilistic model is given by y 1 , . . . , y N ∼ N µ, diag(σ 2 ) iid. with y n = (y n1 , . . . , y nJ ), n ∈ {1, . . . , N }, for unknown µ = (µ 1 , . . . , µ J ) ∈ R J and σ = (σ 1 , . . . , σ J ) ∈ R J . A question of interest might be which µ j , j ∈ {1, . . . , J}, are non-zero. In this model for given k N the plugin-thickening estimator of There are two natural choices of the parameter k N obtained from the discussion in Section 4.3. First, k N = log(N )/κ for some κ > 0 and second, k N being an estimate of the quantile for a (1 − β)-SCB, i.e., satisfying The critical quantile from Corollary 3 for (1−α)-SCoPE sets over {0}, {0} is given by with Φ(q) = P N (0, 1) ≤ q and #A denotes the cardinality of the set A. An estimator of this quantile for unknown σ in the considered probabilistic model iŝ Another estimator tailored to this model can be given using an idea from [45]. Here the author proposed to estimate the number m 0 of true null hypotheses using the distribution of the p-values p 1 , . . . , p N from the tests on the hypothesis H 0 : µ j = 0 bŷ In our model the p-values are derived from Student's t N −1 -distribution. The resulting estimator of the quantile is thenq Simulation results of the validity of SCoPE sets for different µ and the estimatorq St 1−α of q and the estimators of q based on different k N are provided in Appendix F. They demonstrate that SCoPE sets using a reasonable estimator of the quantile q are a more powerful FWER controlling method to detect non-zero µ j 's than Hommel's procedure for i.i.d. samples [22].
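A hedged Python sketch of this quantile estimate: if $m_0$ coordinates truly satisfy $\mu_j = 0$ and the coordinates are independent, the limiting max-sup statistic is the maximum of $m_0$ absolute standard normals, so a plug-in quantile solves $(2\Phi(q)-1)^{\hat m_0} = 1-\alpha$ with $\hat m_0$ Storey's estimator computed from the t-test p-values; the closed form and the tuning parameter $\lambda$ are illustrative reconstructions consistent with the description above, not necessarily the exact estimator $\hat q^{St}_{1-\alpha}$.

```python
import numpy as np
from scipy import stats

def storey_scope_quantile(y, alpha=0.1, lam=0.5):
    """y has shape (N, J).  Returns q solving (2*Phi(q) - 1)**m0_hat = 1 - alpha,
    with m0_hat Storey's estimate of the number of true nulls mu_j = 0."""
    N, J = y.shape
    t = np.sqrt(N) * y.mean(axis=0) / y.std(axis=0, ddof=1)
    pvals = 2 * stats.t.sf(np.abs(t), df=N - 1)        # two-sided t-test p-values
    m0_hat = min(J, max(1.0, np.sum(pvals > lam) / (1 - lam)))
    return stats.norm.ppf((1 + (1 - alpha) ** (1 / m0_hat)) / 2)

# Discoveries: flag coordinate j if |sqrt(N) * mean_j / sd_j| exceeds the quantile.
```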
In Fig. 5 we report the discoveries obtained from SCoPE sets for samples of size N = 100. In the top row we used the estimator (10) with k N = log(N )/10 and detected one location to be non-zero. In the bottom row we computed SCoPE sets for the another data set, but we used k N = log(N )/10 (bottom left) which detects 30 non-zero locations and k N = log(N )/3 (bottom right) which detects 24 non-zero locations.
We propose to accept these discoveries only after inspection of insignificance values. A first useful set of insignificance values, obtained for the estimator $\hat q_{1-\alpha}$ of $q_{1-\alpha}$ in the discussed probabilistic model, is given in Table 1. Judging from these, the researcher should declare the discovery in the top left figure insignificant because there is a probability of $IV^{\hat q}_{80} = 24.5\%$ of having at least one discovery if $\mu$ were zero, and a discovery of at least the observed height would still appear with probability $IV^{obs}_{80} = 14.1\%$. The top right figure has the same $IV^{\hat q}_{80}$. Thus, the researcher should be suspicious about this discovery. However, the observational insignificance value $IV^{obs}_{80} = 0.3\%$ states that it is very unlikely that this discovery appeared by chance alone in the worst-case counterfactual scenario where $\mu = 0$. Hence this discovery should not be declared insignificant, although the choice of $k_N$ is questionable. Unfortunately, in this particular case the researcher commits an error of the first kind, but repeating the same or similar experiments, as good scientific practice requires, would correct the erroneous decision.
For the bottom left figure $IV^{\hat q}_{80}$ and even $IV^{obs}_{80}$ are large. Since $IV^{\hat q}_{80,4} \approx 0$, the researcher can conclude that the observed number of discoveries most probably did not appear by chance and that there should be at least around 26 discoveries. Problematic is that $IV^{\hat q}_{80-\hat m_1} = 39.4\%$ means that a false discovery on the set $\mu^{-1}0$ is highly likely if the true number of discoveries were $\hat m_1 = 30$. Even if the true number of possible discoveries were $80 - \hat m_0 = 48$, the probability of making a false discovery is still $IV^{\hat q}_{\hat m_0} = 27.1\%$. Here it is implausible to claim that the inclusion statement is actually controlled at probability 90% for $k_N = \log(100)/10$. Only if $\#\mu^{-1}0 \le 10$ would the control of the inclusion statement be $IV^{\hat q}_{10} \approx 10\%$. For the bottom right figure this is different. Here $IV^{\hat q}_{80-\hat m_1} = 19.8\%$ and $IV^{\hat q}_{\hat m_0} = IV^{\hat q}_{32} = 11.5\%$, and hence $k_N = \log(N)/3$ seems plausible since the data suggest $\#\mu^{-1}0$ to be around $\hat m_0 = 32$. This insignificance analysis is compatible with our simulation results. Choosing $k_N = \log(N)/3$ is a decent choice since the simulated probability of satisfying the inclusion statement of the SCoPE sets is $\approx 88\%$, while this probability is only $\approx 77\%$ for $k_N = \log(N)/10$ (Appendix F, Table 3).

Figure 5: The points represent the sample means $\hat\mu_j = N^{-1}\sum_n y_{jn}$. The red points are contained in $\hat U_{q\hat\sigma/\sqrt N}$ and the blue points in $\hat L_{-q\hat\sigma/\sqrt N}$; the corresponding $j$'s are the locations at which the SCoPE sets discovered a non-zero population mean. For points inside the grey area it cannot be decided whether the population mean is smaller or larger than 0.

Table 1: Quantile estimates based on (10) and four helpful insignificance values for the 90%-SCoPE sets over $(0, 0)$ reported in Fig. 5. The last three rows contain how many true and false discoveries different statistical inference tools output. Here BH denotes the Benjamini-Hochberg procedure, which controls the FDR.
A similar analysis can be designed if SCoPE sets over ({c − }, {c + }) are conducted for c − < c + , however, in that case obtaining worst case values of µ is not as simple as in the above scenario because these values can take values within the interval [c − , c + ]. Still we can guard ourselves against counterfactual conditional scenarios which seem to be plausible.
Hypothesis Testing using SCoPE Sets
This section gives details on how to use SCoPE sets as a test statistic for relevance and equivalence tests. In general, we do not recommend our tests since the questions about the preimage of µ which are conveyed in a statistical hypothesis can be directly answered by appropriate SCoPE sets which offer a more intuitive interpretation. We include the test interpretation only to compare SCoPE sets to the existing literature. Hereafter, we assume that the functions b + , b − ∈ F(S) satisfy b − (s) ≤ b + (s) for all s ∈ S and for simplicity in notation we assume whenever C ∈ F(S) is specified that u C = u ±C which follows for example from the continuity assumption on µ given in Theorem 2. Moreover, if H 0,s and H 1,s are null and alternative hypotheses for s ∈ S and we have a statistical test which decides between these two alternatives from data, then we define the set of true null hypotheses H 0 and true alternative hypotheses H 1 and their estimates as
Relevance Tests
We first want to construct a global relevance Test (grT) based on SCoPE sets, i.e., a test on the global relevance alternatives. At first glance, Theorem 1 with $C^\pm = \{b^\pm\}$ and assuming that $\hat\mu_N$ satisfies an $\mathcal F(S)$-ULT suggests that we can determine a critical quantile $q_\alpha$ such that the above test asymptotically has significance level $\alpha$. Unfortunately, the r.h.s. is zero for all $q$ in a highly likely scenario, so it is usually impossible to tune $q$ as desired. In order to remove this ambiguity we take inspiration from [11]. The idea is to choose $q^{gr}_\alpha$ to be the smallest $q$ such that the corresponding shifted preimage is not the empty set; this follows directly from the definition of $b^\pm_{\Delta_e}$ and the definition of the generalized preimage. The following theorem is a generalization of the test strategies proposed in [10,11,13] and provides an asymptotic procedure for a global relevance test over $S$. Its proof is similar to the proof of Theorem 4, which generalizes the upcoming theorem to provide a local relevance test controlling the FWER in the strong sense; we therefore leave the proof of the following result to the reader.
In order to see that this result generalizes the testing strategy from [11] ( [10,13] can be treated similarly) we translate their notations into the notations of this article. Assume thatμ N is an estimator of a continuous function µ satisfying a ULT in C(S) with σ ≡ 1 and S = [0, 1]. 4 This estimator could be the difference of the sample means from [11]. Based on their Theorem 3.1 they propose to test the global null hypothesis H rel 0 for b ± ≡ ±b, b ≥ 0, by rejecting the null hypothesis if μ N ∞ > b + q gr α √ N . That q gr α is identical to their quantile u 1−α,E [11, eq. (3.14)] can be seen from the fact that their extremal sets E − and E + satisfy As it can be easily verified thatL Theorem 3 indeed generalizes their testing strategy to allow adaptation to some scaling function σ, allows b ± and µ to be possibly discontinuous functions and clarifies what kind of continuity is required in the ULT.
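In the setting of [11] ($\sigma \equiv 1$, $\tau_N = 1/\sqrt N$, $b^\pm \equiv \pm b$) the resulting decision rule is a one-liner once $q^{gr}_\alpha$ has been approximated, for example by the bootstrap strategy of Section 4.3 restricted to the estimated extremal sets; a minimal sketch (names are illustrative):

```python
import numpy as np

def global_relevance_test(mu_hat, b, q_gr_alpha, N):
    """Reject the global null 'mu stays within [-b, b] everywhere' if the
    sup-norm of mu_hat exceeds b + q / sqrt(N) (setting of [11], sigma = 1)."""
    return np.max(np.abs(mu_hat)) > b + q_gr_alpha / np.sqrt(N)
```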
Using Theorem 1 we can similarly construct a strong α-FWER local relevance Test (lrT), i.e., a multiple hypothesis test for the alternatives which controls the FWER in the strong sense. The main difference is that we replace ∆ e by the smallest ∆ > 0 such that The lrT based on SCoPE sets accepts H rel Remark 14. For the lrT it can happen that even if (Fig. 6).
Theorem 4. Consider the lrT from Definition 9, set C ± = b ± ∆ and assume (A1). The critical set is empty because H rel 1,s is true at the point s with minimal distance between µ and b − and b + . Right: The critical set is non-empty because H rel 0,s is true at the point s with minimal distance between µ and b − and b + .
Hence this test is more powerful than the standard asymptotic single-step test which replaces The main difference between the grT and the lrT based on SCoPE sets is the definition of the critical quantile under H rel (Fig. 7). Interestingly, it is impossible to claim that either of the two tests has a higher statistical power. The reason is that the quantiles depend on the covariance structure of G on where s * is the unique global extreme point of µ. In this case the asymptotic theoretical quantile is simply the quantile of G(s * ). 5 If the variance of G is assumed constant over S this means that the grT is often more powerful than the lrT to detect a departure from H rel 0 . However, it is possible to construct alternatives such that the roles are reversed (Fig. 7).
Equivalence Tests
We assume in this section additionally that inf s∈S | b + (s) − b − (s) | > 0. An equivalence Test (eT) is a test on the hypothesis The eT based on SCoPE sets is similar to the gRT based on SCoPE sets. The main change is that the rejection condition becomes the acceptance condition and that the sign in the definition of b ± ∆ is changed such that at least one of the shifted curves touches the most extreme values ± µ ∞ of µ under H eqv 0 , compare Fig. 8.
where ∆ e has been defined in Theorem 3. Let q e α be such that P Let ∆ e > 0, then lim N →∞ P * H eqv 0 is rejected = 0 .
Remark 16. If we assume that µ,μ N , b ± ∈ C([0, 1]) and σ ≡ 1, then simple algebra shows that the testing strategy proposed in Definition 10 is identical to the testing strategy from [9], and their theoretical quantile is identical to q e α . The only difference is that in the definition of the quantile they use " < ", which is irrelevant since in their probabilistic model the cumulative distribution function of T b + ∆ ,b − ∆ (G) being continuous. Thus, our eT based on SCoPE sets generalizes the testing strategy proposed in [9].
Local Equivalence Tests
If the alternatives of an lrT are interchanged, i.e., we call a test on this alternatives a local equivalence Test (leT). Interestingly, an eT is not a global leT because a global leT tries to answer the question whether µ is always outside of the band defined by b − and b + .
Definition 11. Let b ± ∆ = b ± ± ∆ with the ∆ from Definition 9. For α ∈ (0, 1), let q le α be such that P Remark 17. The changes between the lrT and the leT based on SCoPE sets are subtle. First, the condition for acceptance and rejection are exchanged as well as the role of b − α,le and b + α,le in the lower and upper excursions and b − ∆ and b + ∆ in the limit distribution. Second, there is a sign change in b ± α,le compared to b ± α,r .
Theorem 6. Consider the test from Definition 11. Let C ± = b ± ∆ , assume (A1) and Remark 18. A standard approach for an eT of a parameter µ ∈ R is the Principle of Confidence Interval Inclusion (PCII) [53, Chapter 3.1]. Let X denote the observations and C ± (X; α) the one-sided (1 − α)-confidence bounds for µ, i.e., . By construction the interval C − (X; α), C + (X; α) constitutes a (1 − 2α)-CI for µ. This explains the name of the principle. The first conservative eT was derived in the above fashion in [54] using a (1 − α)-CI. The conservativeness is a relic of using a CI instead of SCoPE sets.
To see this, assumeμ N satisfies The connection to the leT based on SCoPE sets is as follows. It can be easily verified that the quantile q le α is given by which is exactly the test derived from the PCII if G has a symmetric distribution. In fact, eTs of this type are under weak conditions asymptotically optimal [36].
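For a scalar parameter the PCII reduces to the familiar two one-sided tests: declare equivalence at level $\alpha$ if the one-sided $(1-\alpha)$ confidence bounds, equivalently the $(1-2\alpha)$ t-interval, lie inside $(b^-, b^+)$. A minimal Python sketch for an iid sample (hypothetical names):

```python
import numpy as np
from scipy import stats

def equivalence_by_pcii(x, b_minus, b_plus, alpha=0.05):
    """Declare equivalence if the one-sided (1 - alpha) confidence bounds for
    the mean, i.e. the (1 - 2*alpha) t-interval, lie inside (b_minus, b_plus)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    se = x.std(ddof=1) / np.sqrt(n)
    t = stats.t.ppf(1 - alpha, df=n - 1)
    lower, upper = x.mean() - t * se, x.mean() + t * se
    return b_minus < lower and upper < b_plus
```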
Discussion
In this article we refined, extended and unified different statistical inference tools for a target function µ from an estimatorμ N which control FWER-like criteria over a metric space S.
In particular, we demonstrated that CoPE sets [43], SCBs and recently proposed tests for C(S)-valued data under the supremum norm, among others [11], can be derived from the same general principle expressed in Theorem 1. Our abstract viewpoint allowed us to weaken the assumptions of the aforementioned methods and clarify some of their conceptual shortcomings, for example, by changing the definition of the inclusion statement from [43]. We will finish the current endeavor by highlighting a few observations which might not be obvious on reading our work for the first time.
Measurability of the SCoPE Sets Inclusions
The J. Hoffmann-Jørgensen theory of weak convergence allows to elegantly circumvent problems of measurability and is still rich enough to be useful as a foundation for asymptotic statistics. If measurability is required, S needs to be separable. Measurability reduces then essentially to proving that the set is measurable. The latter is satisfied, if τ −1 N μ N (s) − c(s))/σ(s) is P − B(R) measurable 6 for all s ∈ S and that the sample paths are continuous on S \ L c − . The inclusionÛ c + q ⊆ U c + can be treated analogously. Hence for countable sets C these conditions are sufficient to remove the dependency on outer and inner probabilities in our theorems.
SCoPE Sets for Estimators over Discrete Domains
Hidden in the SCoPE sets framework are consequences for asymptotic statistical inference on a target function µ if S is finite which means that µ can be considered a multivariate andμ N a random vector. If S is endowed with the discrete-topology it holds that ∞ (S) = C(S).
Due to $S$ being finite, the probability that the estimated excursion sets coincide with the true sets $L_c$ and $U_c$ is asymptotically one for any $c \in \mathcal F(S)$ and all $q \ge 0$, because there exists $\delta > 0$ such that $|\mu(s) - c(s)| > \delta$ for all $s \notin \mu^{-1}c$. This means that asymptotically any FDR (e.g., [3]) or k-FWER method (e.g., [23]) is inferior to FWER methods, since all detect the true set with probability one while FDR and k-FWER methods have by construction more false positives on $\mu^{-1}c$. Hence the higher detection power for finite $N$ of methods controlling the FDR or k-FWER inevitably turns into a drawback asymptotically compared to FWER control.
Another observation is that the generalized preimages u −1 C − and u −1 C + are likely to be empty, if C − and C + are finite. Hence Theorem 1 holds true for any q ∈ R. In practice this is not an issue since the generalized preimages need to be estimated, for example, through the estimator from Section 4.3. Thus, the researcher usually obtains a non-zero q α for finite N and his pre-specified α ∈ (0, 1). In case that the estimates of the generalized preimages are empty he can reject at level 1 −α N that µ ∈ Γ C − ∪ C + , compare Section 4.3. By Theorem 2 such a rejection occurs with probability tending to one if u −1 C ± = ∅.
On the Assumption of Uniform Limit Theorems
Although the assumptions (A1)-(A3) are mild for some probabilistic models proving a ULT as required in (A1) can be difficult. Assumptions (A1)-(A3) are only used to ensure the weak convergences and asymptotic tightness of T C − η ,C + η (G N ). The SCoPE Set Metatheorem proven in the Appendix B needs considerably weaker conditions to obtain asymptotic SCoPE sets. Essentially it requires that the real-valued random variables on the l.h.s. of (19) only converge in distribution. Using this result it might be possible to integrate even the testing strategy from [7] into the SCoPE sets framework because their testing strategy is very similar to the testing problems discussed in this work, yet they cannot rely on a ULT of the underlying statistic.
Stepwise Constructions of SCoPE Sets
The works [37] and [38] also identified the oracle test on a hypothesis of the form H 0,s : µ(s)−c(s) ≤ 0 versus H 1,s : µ(s)−c(s) > 0 for S being a discrete set. In our notation the oracle test based on SCoPE sets accepts H 0,s if s ∈L c − qα and rejecting it otherwise. Their proposed step down procedure, for example, using the bootstrap or permutations, can be interpreted as approximations of the set S \L c − qα by an iteratively constructed excursion set such that S \ L c ∩ = ∅ is smaller than a prespecified probability and it immediately can be extended to provide an approximation of asymptotic (1−α)-SCoPE sets over (C − , C + ). However, it does not incorporate the estimation of µ within the probabilistic model and therefore this construction might still be less powerful in general than incorporating carefully information about estimating µ. We leave exploring this question in more detail to future work.
SCoPE Sets versus Hypothesis Testing
What are the advantages of SCoPE sets over hypothesis testing? We first remedy a misconception about CoPE sets. In [5] CoPE sets are motivated as a solution to the paradox caused by the fallacy of the null hypothesis [39]. They write "[...] the paradox is that while statistical models conventionally assume mean-zero noise, in reality all sources of noise will never cancel, and therefore improvements in experimental design will eventually lead to statistically significant results. Thus, the null hypothesis will, eventually, always be rejected [30]. [...]" and later they write "[...] Unlike hypothesis testing, our spatial Confidence Sets (CSs) allow for inference on non-zero raw effect sizes. [...]". The fallacy of the null hypothesis can be an important practical problem, yet CoPE sets fall short of being a conceptual solution. They still assume a zero-mean noise model. Therefore their inference on level sets of the true signal at a level $c$ suffers from the same problem of finding spurious signals above $c$ that are not caused by the underlying true function.
To make this point more clear any strong α-FWER test on the alternatives H 0,s : µ(s) = c vs. H 1,s : µ(s) = c would be a solution to the fallacy of the null hypothesis if CoPE sets are. An example are the mass-univariate tests in neuroimaging proposed in [55]. These tests are equivalent to a (1 − α)-SCB since they compute the maximum over all s ∈ S of an error field. Let us denote withq α the quantile-parameter of the mass-univariate test with significance level α. If the assumed probabilistic model is reasonable and the null hypothesis µ(s) = c is tested, then the rejection regionsL c − qα andÛ c + qα converge for N tending to infinity to L c and U c and by Corollary 1 it holds This means a standard mass-univariate, strong α-FWER test differs from CoPE sets at level c only by providing conservative CoPE sets.
If the mean of the error processes is bounded within [a−c, b−c] with a < c < b, a possibility to dissipate Meehl's concern are (1 − α)-SCoPE sets over {a}, {b} . This strategy estimates the sets L a and U b instead of L c and U c which might be contaminated by spurious signals from the error process. Similarly as for CoPE sets, any strong α-FWER relevance test could be used.
So what are the advantages of SCoPE sets over hypothesis testing? First and foremost, SCoPE sets break with the dogma of phrasing research questions in terms of statistical hypotheses. They emphasize what really matters: a quantifiable observable µ and what can be concluded from an experiment about preimages which are relevant for the researcher. Secondly, they clearly state the oracle limiting distributions for many α-FWER controlling tests and thereby disclose the actual target of a multiple test. Moreover, SCoPE sets together with insignificance values (Appendix 5.4) are a more informative and objective reduction of the data. This is similar to the fact that in a parametric model with parameter space Θ confidence sets, if available, are always a less subjective reduction of the data than just reporting the results of a single hypothesis test for some θ 0 ∈ Θ. Mathematically, this is crystal clear because the confidence set obtained from a family of point hypothesis tests on H 0 : θ = θ 0 vs. H 1 : θ ≠ θ 0 contains all θ 0 ∈ Θ such that the observation falls into the acceptance region of the test on θ 0 . Thus, a confidence set reports all θ 0 which cannot be rejected by the data ([24, Thm 3.5.1]) and not just the results of a single test chosen by the researcher.
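For convenience, the duality invoked here can be written out explicitly; the display below is the standard textbook statement (cf. [24, Thm 3.5.1]) added only as an illustration and is not a formula reproduced from this paper. If A(θ_0) denotes the acceptance region of the level-α test of H_0 : θ = θ_0, then

C(x) \;=\; \{\theta_0 \in \Theta \,:\, x \in A(\theta_0)\},
\qquad
P_{\theta_0}\bigl(\theta_0 \in C(X)\bigr) \;=\; P_{\theta_0}\bigl(X \in A(\theta_0)\bigr) \;\ge\; 1-\alpha
\quad \text{for all } \theta_0 \in \Theta,

so reporting C(x) is equivalent to reporting the outcome of every point hypothesis test at once.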
A Auxiliary Lemmata
A.1 A Lemma on Inner Probability
The following result should be well-known. We include it for completeness since we will use it often in our proofs.
Lemma 1. For all A, B ⊆ Ω it holds that
Proof. This follows from (A ∩ B) ∗ = A ∗ ∩ B ∗ (e.g., VW 1.2 Exc.15). Note that for A ⊆ Ω the set A ∗ ⊆ A is the (always existing) measurable set such that P ∗ (A) = P(A ∗ ). The claim then follows from P ∗ (B) = 1 − P ∗ (Ω \ B).
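The displayed claim of Lemma 1 is lost in extraction. Judging from the ingredients used in the proof, a plausible reconstruction, which should be checked against the published version rather than taken as the authors' exact statement, is

P_*(A \cap B) \;\ge\; P_*(A) + P_*(B) - 1 \;=\; P_*(A) - P^*(\Omega \setminus B),

where P_* and P^* denote inner and outer probability, respectively.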
Lemma 2.
Letη > η > 0, define c ± q = c±qστ N and assume there exists K > 0 and Z N : Only the statements for open excursion sets are proven. The proofs for the closed excursion sets are almost identical.
We begin with the proof of (i). Assume s 0 ∈ L c . Thus, µ(s 0 ) ≥ c(s 0 ) and the first inequality of the assumptions yields ... . Hence, the second inequality of the assumptions yields ... . Collecting the results yields L c ⊆ L̂ c−q .
We prove (ii) similarly. Note that Û ... . This together with the first inequality of the assumptions yields ... . Together with the second inequality of the assumptions this yields μ̂ ... . Proof. We only prove the first claim, as the proof of the second is similar. The assumption δ > 0 yields ... for all s ∈ U b . Using this we obtain ... . Applying the lim inf to both sides implies lim inf N→∞ P ∗ ( Û c+q ⊆ U b ) = 1 by the asymptotic tightness assumption.
In both cases applying the lim sup the r.h.s. converges to zero by the asymptotic tightness assumption.
Proof. We only show one of the two claims. The proof of the other is similar.
Applying the lim sup to both sides implies that the r.h.s. converges to zero as N tends to infinity by the asymptotic tightness assumption.
A.3 Lemmas on Uniform Convergence
This appendix collects some facts about uniform convergence and supremum statistics.
Proof. For statement (i), fix ε > 0 and N > 0 and, without loss of generality, assume that sup s∈A N f (s) ≥ sup t∈A f (t). Let s ∗ ∈ A N be such that sup s∈A N f (s) ≤ f (s ∗ ) + ε and t ∗ ∈ A such that d(t ∗ , s ∗ ) < ε N (which exists since d H (A N , A) < ε N ). Then ... . Since ε can be arbitrarily small, ... . Using sup s∈B f (s), a similar calculation yields ... , which proves the claim.
For statement (iii), note that since A N ⊃ A it is possible to replace s ∈ A N by s ∈ A N \ A and t ∈ A by t ∈ A \ int(A) in the supremum on the r.h.s. of (i), i.e., Since f is uniformly continuous on A N \ int A ⊆ A 1 \ int A , r.h.s converges to zero as ε N → 0. Statement (iv) follows directly from the triangle inequality and (ii) and (iii). Proof. This is a consequence of the extended continuous mapping theorem [50, Theorem 1.11.1] and Lemma 5(iv). More precisely, define the maps The claim follows from the extended continuous mapping theorem, if for any sequence (f N ) N ∈N ⊂ ∞ (A 1 ∪ B 1 ) converging to f in ∞ (A 1 ∪ B 1 ) such that the restrictions of f N and f to A 1 \ int A ∪ B 1 \ int B are uniformly continuous, it holds that H N (f N ) → H(f ). Using max(a, b) = 2 −1 a + b + |a − b| for a, b ∈ R and triangle inequalities yields which converges to zero by Lemma 5(iv). Note that we do not need to assume G to be separable since our D N in [50, Theorem 1.11.1] is always the subspace of ∞ (A 1 ∪B 1 ) where the restriction to
B An Asymptotic SCoPE Set Metatheorem
In this section we prove a general SCoPE Set Metatheorem. All theorems of the main manuscript are essentially corollaries of this result. In particular, it weakens the assumption onμ N since we do not require that it satisfies a ULT.
Letμ N : Ω → ∞ (S), N ∈ N, and (τ N ) N ∈N a positive sequence converging to zero. Define a map G N : Ω → ∞ (S) by Since we do not assume that the map G N is measurable, weak convergence involving G N is always understood in the sense of [50,Definition 1.3.3]. For C ⊆ F(s) we define the abbreviation U ±η N C = clµ −1 C ±η and require the following set s. continuous sample paths on V .
Moreover, assume that q ± ∈ ∞ (S). We introduce a similar quantity as in (5) for A, B ⊆ S and f, q ± real-valued functions with appropriate domains: Let G q − ,q + and H q − ,q + be random variables with values in R,η > 0 and (η N ) N ∈N a positive sequence such that lim N →∞ η N τ −1 N = ∞. We require the following assumptions: With this at hand, we can prove our SCoPE Set Metatheorem.
Proof. In the definitions of all sets involving preimages we can replace C ± byC ± = f ∈ F(S) | f ∈ Γ(C ± ) without changing these sets. We begin with proving the first and the second claim. For any η > 0 such thatη > η, Lemma 2(i) yields that impliesÛ c + q ⊆ U c + for all c + ∈ Γ(C + ). Let (η N ) N ∈N be a sequence of positive numbers such that η N → 0 and η N τ −1 N → ∞. Combining Lemma 1, eq. (22) and (23) yields The last two summands can be made arbitrarily small since the asymptotically tightness condition implies that for any > 0 we find K > 0 such that for all δ > 0 lim inf The same argument applies to the second summand.
which proves the first claim.
which finishes the proof of the second claim.
In order to prove the third claim, we first establish which is equivalent to Assume s * ∈ µ −1 C + such that G N (s * ) > q + (s * ). Hence there is c + ∈ C + such that µ(s * ) = c + (s * ). Thus, s * / ∈ U c + , yet G N (s * ) > q + (s * ) implies s * ∈Û c + q . Assume s * ∈ U C + \ µ −1 C + such that G N (s * ) > q + (s * ). By the continuity assumption in the definition of U C + we find an s ∈ µ −1 C + such that G N (s ) > q + (s ). This, again implies the existence of an c + ∈ C + such that s / ∈ U c + , but s ∈Û c + q . The case if s * ∈ U C − such that −G N (s * ) > q − (s * ) implies ¬(E2) is almost identical to the previous argument and therefore omitted.
Since (E1) ⇒ (E2) and (M4) holds, an application of the Portmanteau Theorem yields The proof of the SCoPE Set Metatheorem can be thought of as taking the limit of the following non-asymptotic bounds.
Proposition 2. Let C − , C + ⊆ F(S), q ± ∈ F(S) and η N < η̃. Define c ± q = c ± ± qτ N σ for all c ± ∈ Γ(C ± ). Then ... Remark 19. Using Lemma 1 it can be easily verified that Lemma 2.1 from [27] is the special case of the above proposition with C ± = {0}. The benefit of our version is that it shows more clearly that the probability of the lower bound needs to be tuned by q ± to obtain valid non-asymptotic control of the inclusions, and that the gap in exact control is given by the difference between the given upper and lower bound.
In the main article we introduced the generalized preimage u −1 C which collects all points in S such that either (s, µ(s)) ∈ Γ(C) or (s, µ(s)) is a touching point of Γ(C) in the sense that the graph Γ(µ) gets arbitrarily close to Γ(C). The sharp upper bound (A4) in Theorem 1 holds true, for example, if u −1 C = µ −1 C . Therefore we now discuss fairly general conditions under which this identity holds. A key concept we will need is the boundary of a set C ⊆ F(S). Here ∂I is the topological boundary of I ⊆ R under the standard topology. While the boundary of C is not a unique set, any two boundaries D and D ′ satisfy that Γ(D) = Γ(D ′ ). Therefore Γ(∂C) is a unique set.
Lemma 7. Let C ⊂ F(S), µ −1 Cη be compact and η N n∈N ⊂ R a positive zero-sequence. Assume that Γ ∂C is closed and the restriction of µ to cl µ −1 Cη \ int µ −1 C is continuous. Then Cη N for all N . Therefore convergence in Hausdorff distance of µ −1 Cη N to µ −1 C follows, if for any ε > 0 there exists an N 0 ∈ N such that for all N > N 0 it holds that µ −1 Cη N ⊂ M ε . To this end, assume the contrary. Then, there exists ε > 0 such that for some subsequence (N k ) k∈N there are s N k ∈ µ −1 Since for large enough k the sequence (s N k ) k∈N is contained in the compact set µ −1 Cη it can w.l.o.g. be assumed that it converges to a limit s * ∈ S, say. Assume that k is large enough such that µ −1 Thus, lim k→∞ c N k (s N k ) = µ(s * ). Since Γ ∂C is closed and s N k , c N k (s N k ) ∈ Γ ∂C for all k ∈ N, its limit s * , µ(s * ) is contained in Γ ∂C . By (24) it follows that s * , µ(s * ) ∈ Γ(∂C) ∩ Γ(µ). This implies s * ∈ µ −1 C , since Γ(∂C) is closed. A contradiction.
Remark 20. The importance of Γ(∂C) being closed is visualized in Fig. 9, which gives an example for C = {c} where c is discontinuous at s 0 ∈ S. The problem here is that Γ(µ) intersects the closure of Γ(∂{c}) = Γ(c) at a point which does not belong to Γ(c). This situation can be circumvented by finding a set C̃ with Γ(C̃) = cl(Γ(C)). This means adding functions such that their graph may pass through the boundary points of Γ(C) while otherwise being contained in Γ(C̃). This is illustrated in the right panel of Fig. 9.
D The Difference between ">" and "≥" Excursion Sets
This section explains why it is more natural to consider statements of the form L̂ c−q ⊆ L c− and Û c+q ⊆ U c+ than the statements S \ Û c−q ⊆ S \ U c− and S \ L̂ c+q ⊆ S \ L c+ which are used in the literature, for example in [5,6,29,43]. The main issue with excursion sets using "≥" instead of ">" is that a so-called open ball condition is required to prove sharp upper bounds. This condition is restrictive since it means that µ cannot be flat on Γ(∂C), as illustrated in Fig. 10. Because this section only serves an illustrative purpose, we simplify the proof by assuming u −1 C = µ −1 C . This means that µ −1 C ⊆ S is closed. Furthermore, we require the following assumptions: (A2') There exists an η̃ > 0 such that the restriction of G N to µ −1 C η̃ has almost surely continuous sample paths. The following theorem shows that SCoPE sets using excursion sets with "≥" instead of ">" require the stronger assumptions (A2') and (A5) to obtain an upper bound.
(A5) Assume that for all open
If also either µ −1 ∂C ∩ µ −1 C = ∅ or else (A2') and (A5), then Proof. The proof of the lower bound follows along the lines of the proof given in Theorem 1. Therefore it is omitted. Only the upper bound requires a more complicated proof.
The idea is similar to the proof of the upper bound of the SCoPE Set Metatheorem 7. The only difference is that it is necessary to prove for the subset of Ω whereμ N has continuous sample paths on µ −1 Cη . Assume (E1) holds, but not (E2). W.l.o.g. assume there exist an s * ∈ µ −1 C + such that , which contradicts (E2). On the other hand, for Similarly, the case s * ∈ µ −1 C − can be treated. Hence as in the proof of Theorem 7 an application of the Portmanteau Theorem finishes the proof.
E Further Results on SCoPE sets for the Linear Model
In the main manuscript we derived asymptotic SCoPE sets over ({0}, {0}) for the linear model. Here we derive further results.
E.1 Asymptotic SCoPE sets
The next theorems provide asymptotic SCoPE sets over C − , C + ) with C − = {−∆}, C + = {∆} and with C ± = δ | δ ∈ [−∆, ∆] for the linear model. As usual, we identify a constant with the function being constant over S K−1 . The proof is a simple application of Theorem 1, if one realizes that ∆ ≤ β implies that a ∈ R K | a T β = ∆ ∩ S K−1 = ∅ and ∆ > β implies a ∈ R K | a T β = ∆ ∩ S K−1 = ∅. Corollary 6. Assume the setting of Corollary 5. Let ∆ > 0 , q ≥ 0 and ε ∼ N (0, I K×K ). If ∆ ≤ β , then If additionally X = I K×K , then where the two Gaussian random variables are independent of each other.
Otherwise, if ∆ > β , then Proof. We only need to determine the limit distribution, if ∆ ≤ β . Using Theorem 1 and simple algebra yields If additionally X = I K×K we can simplify the limit distribution further using Lemma 9: Here P ⊥ E is the orthogonal projection onto the hyperplane E = {x ∈ R K : β T x = 0} and the two Gaussian random variables are independent of each other.
Remark 23. The interpretation of this result is similar to the interpretation of Corollary 5. The difference is that it is designed to detect contrasts such that |β T a| > ∆ for some ∆ > 0. The limiting process depends on the true β (in the case of X = I K×K only on β ) and most importantly in the case β < ∆ the probability of detecting any contrast such that |β T a| > ∆ is asymptotically equal to zero. Corollary 7. Assume the setting of Corollary 5. Let ∆, q ≥ 0 and ε ∼ N (0, I K×K ). If ∆ ≤ β , then If additionally X = I K×K , then where 1 A is one if A is true and zero else. Otherwise, if ∆ > β , then Proof. The limit distribution in the general case follows from Theorem 1 and simple algebra. In order to get the limit process for X = I K×K , recall that P ⊥ E is the orthogonal projection onto the hyperplane E = {x ∈ R K : β T x = 0}. With this at hand and Lemma 9 we obtain where 1 A is one if A is true and zero else.
E.2 Multiple Linear Regression and Scheffé's Test
Assume the simple multiple regression model, i.e., y = Xβ + ε, such that X has rank K + 1 < N . The BLUE of β is given by β̂ = (X T X) −1 X T y and the variance ξ 2 of the error can be estimated by s 2 = ‖y − X β̂‖ 2 /(N − K − 1). In practice often not β, but different linear contrasts a T β, a ∈ R K , are of interest. Interpreting a ∈ R K as the parameter set of a stochastic process, we can define the processes μ̂(a) = a T β̂ and σ̂ 2 (a) = s 2 a T (X T X) −1 a, which are the estimates of µ(a) = a T β and of the variance σ 2 (a) = ξ 2 a T (X T X) −1 a. Since all involved quantities are continuous in a we obtain the following Corollary of Proposition 1.
Proposition 3 (Scheffé Type CoPE Sets). Assume the previously described multiple linear regression setting. Then ... Here F(x; K + 1, N − K − 1) for x ∈ R denotes the cumulative distribution function of an F K+1,N−K−1 -distributed random variable.
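The construction behind Proposition 3 is the classical Scheffé simultaneous band. The following sketch (my own illustration, not code from the paper) shows the textbook computation for all linear contrasts a^T β in a simulated regression; the simulated design, coefficients and noise level are placeholders.

```python
# Scheffé simultaneous (1-alpha) band for all contrasts a^T beta in a
# multiple regression with K predictors plus an intercept (rank K+1).
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(0)
N, K = 60, 3
X = np.column_stack([np.ones(N), rng.normal(size=(N, K))])  # design with intercept
beta_true = np.array([1.0, 0.5, -0.3, 0.0])
y = X @ beta_true + rng.normal(scale=0.7, size=N)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
s2 = np.sum((y - X @ beta_hat) ** 2) / (N - K - 1)          # error variance estimate

alpha = 0.05
crit = np.sqrt((K + 1) * f_dist.ppf(1 - alpha, K + 1, N - K - 1))

def scheffe_interval(a):
    """Simultaneous (1-alpha) interval for a^T beta, valid for all a at once."""
    center = a @ beta_hat
    half_width = crit * np.sqrt(s2 * a @ XtX_inv @ a)
    return center - half_width, center + half_width

print(scheffe_interval(np.array([0.0, 1.0, -1.0, 0.0])))    # e.g. the contrast beta_1 - beta_2
```

Because the critical value is the same for every contrast, the resulting band can be read as a confidence statement over the whole family of preimages, which is the viewpoint taken in Proposition 3.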
F Exemplary Simulations of SCoPE sets for IID Observations
In this section we compare the SCoPE sets methodology in its simplest form, which provides control over C ± = {0}, to multiple testing strategies. We use the model discussed in Section 5.4 with σ ≡ 1. We consider four different models A, B, C and D which differ in the population means, e.g., µ A j = 0, j ∈ {1, . . . , 80}, and µ C j = sin(j/(2π)), j ∈ {1, . . . , 100}. Samples from these models are shown in Fig. 11. The results deliver a simple message: SCoPE sets are at least as powerful and often more powerful than multiple testing procedures controlling the FWER, while even offering information about the correct sign. The multiple testing procedures we compare to are Hommel's procedure [22], which is based on the closure principle [28] and is, to the best of our knowledge, one of the most powerful strong α-FWER tests for iid data, and the Benjamini-Hochberg (BH) procedure, which controls the false discovery rate. Of course, SCoPE sets cannot outperform the latter in terms of average true discoveries for small sample sizes, since the FDR is a much looser error criterion which allows a higher number of false detections than the FWER. However, we included it to demonstrate that FDR control becomes inferior to FWER control for large sample sizes for discrete data, as pointed out in Section 7.2 of the main manuscript. The latter can be best seen in the last three columns of Table 3 and Table 4.
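Before turning to the results, the flavor of this comparison can be reproduced with a few lines of Python. The sketch below is my own self-contained illustration, not the authors' simulation code (they use SCoPE quantile estimators and Hommel's procedure); it only contrasts a Gaussian-approximate FWER threshold with the Benjamini-Hochberg step-up rule on a Model-C-like signal.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
S, N, alpha = 100, 50, 0.1
mu = np.sin(np.arange(1, S + 1) / (2 * np.pi))      # a Model-C-like mean curve
data = mu + rng.normal(size=(N, S))                 # N iid observations per location

T = np.sqrt(N) * data.mean(axis=0) / data.std(axis=0, ddof=1)  # one-sample t-type statistics
p = 2 * norm.sf(np.abs(T))                                     # normal approximation

# FWER-type control via a Bonferroni/max-statistic Gaussian threshold
q_fwer = norm.ppf(1 - alpha / (2 * S))
fwer_detections = np.abs(T) > q_fwer

# Benjamini-Hochberg step-up procedure at level alpha (FDR control)
order = np.argsort(p)
passed = p[order] <= alpha * (np.arange(1, S + 1) / S)
k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
bh_detections = np.zeros(S, dtype=bool)
bh_detections[order[:k]] = True

print(fwer_detections.sum(), bh_detections.sum())
```

Running such a loop over many repetitions and sample sizes gives counts of true and false detections analogous to the "TD" and "FD" columns discussed below, though with simpler thresholds than those actually studied in the paper.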
Since µ = 0 in Model A, no true discoveries are possible. All methods except for those using log(N)/10 and log(N)/5 converge quickly to the nominal coverage of 0.9, compare Table 2. As we argued in Section 5.4 of the main manuscript, a researcher would need to perform an insignificance analysis, which means that many declared discoveries in the cases of SCoPE sets using log(N)/10 and log(N)/5 as the thickening factor for the estimator of Theorem 2 would be declared insignificant. Nevertheless, these choices are too small, as can also be seen in Table 3 and Table 4. Remarkably, estimating the quantile using Storey's trick, i.e., using q̂ st 0.9 , performs extremely well and converges quickly to the oracle values of average true and false discoveries. But even the very conservative choice of using a 0.9-SCB to estimate the set μ̂ −1 {0} is almost as good as Hommel's procedure. Model D is a special case since it demonstrates the case that µ −1 {0} = ∅. The results can be found in Table 5. Here our oracle estimator does not provide good values for the simulated sample sizes since we correctly chose q = 0. However, it is obvious that the probability of the containment of the upper and lower excursion sets slowly converges to one, as predicted by the theory. Moreover, as argued in Section 7.2, the estimators do not suffer from this problem because the estimates of the quantile are usually not equal to zero.
Table 2: Simulation results of Model A. The columns with title "Cov" contain the empirical probability (in percentage) of the validity of the inclusions L̂ −qσ/√N ⊆ L 0 and Û qσ/√N ⊆ U 0 . For the multiple testing methods Hommel and BH we do not report a "Cov" value since it is not their primary target and can only be derived through the error of the third kind. The columns with title "FD" contain the average number of false discoveries and "TD" contains the average number of true detections. Note that for SCoPE sets a false detection means making a directional error, i.e., a Type III error, or a Type I error, while for the multiple testing methods (Hommel, BH) a false detection only means making a Type I error.
Hence the r.h.s. of the identity which we want to prove is well defined and it is enough to show the equivalence of (E1) ∀s ∈ S : μ̂(s) − qτ N σ̂ N (s) ≤ µ(s) ≤ μ̂(s) + qτ N σ̂(s) and (E2). Assume that s ∗ ∈ Û c + +qτ N σ̂ , i.e., μ̂(s ∗ ) − qτ N σ̂(s ∗ ) > c + (s ∗ ). By (E1) it holds that µ(s ∗ ) ≥ μ̂(s ∗ ) − qτ N σ̂(s ∗ ), which implies µ(s ∗ ) > c(s ∗ ), i.e., s ∗ ∈ Û c + . Similarly, s ∗ ∈ L c − combined with µ(s ∗ ) ≤ μ̂(s ∗ ) + qσ̂(s ∗ ) shows that s ∗ ∈ L̂ c − −qσ̂ . Case (E2) ⇒ (E1): By (E2) it holds that Û µ+qτ N σ̂ ⊆ U µ = ∅ and S = L µ ⊆ L̂ µ−qτ N σ̂ , which is equivalent to (E1).
G.2 Proof of Theorem 1
Proof. We want to apply the SCoPE Set Metatheorem 7 in the case that q ± ≡ q. By (A1)-(A3) we obtain from Lemma 6 Therefore (M1) is satisfied. Condition (M2) holds since the random variable T q,q Assumption (M3) is part of Assumption (A1).
G.3 Proof of Theorem 2
We first establish that the estimateû −1 C is with inner probability tending to one inside the set cl µ −1 Cη .
Proof. We prove that lim inf N →∞ since it implies the other statement. Define sgn c,f (s) = sgn f (s) − c(s) for f ∈ F(S) and the setμ Assume that s ∈μ −1 C ∩ S \ µ −1 Cη . From Definition 8(ii) and (10) combining this with Remark 3 shows that lim sup N →∞ which is the claim.
Using the above Lemma we can prove Theorem 2. The proof follows the idea of the proof of Theorem 3.6 from [11], yet it is more involved since it is significantly more general.
Proof. We begin with proving the Hausdorff convergence of the estimator of u −1 C . To begin with, let η N = ρok N τ N for some ρ ∈ (0, 1) and recall the definition of μ̂ −1 C from (27). Assuming N large enough such that η N < η̃, we obtain ... The latter converges to one since sup s∈cl µ −1 Cη̃ |G N (s)| is asymptotically tight. Therefore we have lim inf N→∞ P ∗ (cl µ −1 ...) and, under the continuity condition on µ, that lim sup N→∞ ... To see the first statement note that whenever û −1 C ⊆ cl µ −1 Cη̃ it holds that ... Let us assume that sup s∈μ̂ −1 C inf c∈C |µ(s) − c(s)| converges to zero outer almost surely. By Egorov's Theorem (Lemma 1.9.2(iii) from [50]) this is equivalent to ∀δ > 0 ∃A ⊆ Ω ∀ε > 0 ∀ω ∈ A ∃N ′ ∀N > N ′ : ... and assume that sup s ′∈û −1 C inf s∈u −1 C |s − s ′| → 0 is not converging outer almost surely, i.e., ∃δ > 0 ∀A ⊆ Ω ∃ε 0 > 0 ∃ω 0 ∈ A ∀N ′ ∃N > N ′ : ... Here and later, for convenience, we explicitly indicate where the dependence on ω/ω 0 is. Let δ be as in (32). For this δ we choose the corresponding A from (31). Hence by (32) there is an ε 0 and ω 0 ∈ A such that sup ... We will now construct a contradiction between (31) and (32). From (34) we obtain a sequence (s N ) N∈N ⊂ û −1 C (ω 0 ) such that ... for all N > N ′. By replacing (s N ) N∈N by a convergent subsequence (since S is compact) we can w.l.o.g. assume that s N → s ∗ for N → ∞ with ... and therefore inf ... If s ∗ ∈ μ̂ −1 C (ω 0 ) for all N > N ′, then by (31) we have that s ∗ ∈ µ −1 C, which contradicts (35). More generally, if s ∗ ∈ cl μ̂ −1 C (ω 0 ) = û −1 C (ω 0 ) for all N > N ′ we find a sequence (t N ) N∈N such that t N ∈ μ̂ −1 C (ω 0 ) converging to s ∗ . By (31) this means that s ∗ ∈ u −1 C, contradicting (35).
The same proof can be used to prove (30). The argument only breaks down in the last step. It does not follow in general from (31) that s ∗ ∈ u −1 ±C , since it matters which sign µ(t N ) − c(t N ) has. Under the continuity condition and the closedness of Γ(C) it follows from (31) and the reasoning at the end of the proof of Lemma 7 that s ∗ ∈ µ −1 C ⊂ u −1 ±C .
H Proofs of the Results in Section 5
The proofs of this section are based on the following lemma.
I Proofs of the Results in Section 6
I.1 Proof of Theorem 4
Proof. It is helpful to remember that the set of true null hypotheses is H 0 = S \ ( L b − ∪ U b + ) and the set of true alternative hypothesis is We begin with the proof of (a). Recall from Definition 9 that b ± ∆ = b ± ∓ ∆. Since it follows that P * ∃s ∈ H 0 : s ∈Ĥ 1 = P * ∃s ∈ H 0 : s ∈L b − α,r ∪Û b + α,r = 1 − P * ∀s ∈ H 0 : The second inequality is a consequence of the first and the observation that P * ∃s ∈ H 0 : s ∈Ĥ 1 = 1 − P * H 0 ⊆Ĥ 0 = 1 − P * Ĥ 1 ⊆ H 1 .
This finishes the proof of (a). We now prove (b). To prove the case inf s∈S b + (s) − b − (s) ≥ M > 0, note that ... . Using this yields ... . Applying the lim sup part of Theorem 1 with C ± = b ± ∆ and Lemma 3, which shows that the last two probabilities on the r.h.s. converge to 1, finishes the proof of (b).
Next we prove (c). Recall the notation: Hence Thus, are all asymptotically tight. For the latter two this follows from the weak convergence G N G on cl µ −1 {b + }η and Lemma 6. This means the proof of (c) is complete. Finally, we prove (d). Since ∆ > 0 it holds that The proof for H 1 therefore is essentially identical to the proof of the last statement in part (b). Moreover, we have that Since inf s∈S b + (s) − b + ∆ (s) > ∆ and sup s∈S b − (s) − b − ∆ (s) < −∆ the claim follows from Lemma 3.
I.2 Proof of Theorem 5
Proof. If ∆ e ≥ 0, then H eqv 0 is true and L b − = U b + = ∅. We begin with proving statement (a). If ∆ e = 0, then ... α,e +∆ e ⊆ U b + ∆ and the claim follows from Theorem 1 with C ± = b ± ∆ . If ∆ e > 0, then the claim follows from Lemma 3 using δ = ∆ e > 0, because ... . This finishes the proof of statement (a). Next we prove (b). If ∆ e < 0, then H eqv 1 is true and ... . Hence almost the same argument as in the proof of Theorem 4(c) yields the claim.
I.3 Proof of Theorem 6
Proof. For the leT we have that H 0 = S \ L b + ∩ U b − and H 1 = L b + ∩ U b − . We first prove (a).
In order to prove (b), using our previous calculation we obtain P ∗ (∃s ∈ H 0 : s ∈ Ĥ 1 ) = 1 − P ... . Here ... . Applying the lim inf to this inequality, using the lim inf part of Theorem 1 with C ± = b ∓ ∆ on the first probability, and recognizing that P ∗ (C N ) converges to one by Lemma 4, since inf s∈S b + (s) − b − (s) > 0, yields the claim. We now prove (c). Recall the notation introduced in (36). Since H η N 1 = L b + −η N ∩ U b − +η N , ... implies that sup ... for all A, B, C, D, which yields P ∗ (∀s ∈ H η N 1 : H 0,s is rejected) = P ... . The proof is almost identical to the proof of Theorem 4(b) using the splitting according to Assumption (A4). Proof of (c): The proof is similar to the proof of part (c) from Theorem 4. | 2023-02-13T06:41:42.197Z | 2023-02-10T00:00:00.000 | {
"year": 2023,
"sha1": "5a0ad7a790ff0333c429874160ddf446164f324d",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "d76c4369724271c60e115beb4a465aa958499939",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
15767076 | pes2o/s2orc | v3-fos-license | Exact Solution of the Quantum Calogero-Gaudin System and of its q-Deformation
A complete set of commuting observables for the Calogero-Gaudin system is diagonalized, and the explicit form of the corresponding eigenvalues and eigenfunctions is derived. We use a purely algebraic procedure exploiting the co-algebra invariance of the model; with the proper technical modifications this procedure can be applied to the $q-$deformed version of the model, which is then also exactly solved.
Introduction
In a number of recent papers [1], [2], [3] it has been pointed out that coalgebras provide a simple and general mechanism to construct integrable Hamiltonian systems with an arbitrary number N of degrees of freedom. Moreover, as it relies upon the existence of a co-associative homomorphism, called "co-product" or "co-multiplication", from an algebra A to the tensor product A ⊗ A, this procedure works both in the standard "Lie-algebra" setting and in the so-called "q-Lie-algebra" setting.
In refs [1], [2] the authors dealt mainly with classical Hamiltonian systems, where the Lie algebra, or its q-deformation, is realized in terms of Poisson brackets, but they have stressed that the same results, "mutatis mutandis", do hold for quantum systems as well. To avoid any possible source of misunderstanding, from now on when using the word "quantum" we will refer to the "canonical" Dirac quantization, while in the context of "quantum groups" or "quantum algebras" we will rather use the word "deformed".
The scope of this paper is to build up and solve a concrete example of a quantum integrable system arising in the co-algebra setting, namely a quantum version of the Calogero-Gaudin (CG) system [4], [5], [6], both in the undeformed and in the deformed case.
Our starting point will be the Calogero-Van Diejen paper [4], where three integrable quantum hamiltonians related to CG have been considered. It turns out that the relevant results can be formulated on pure algebraic grounds, without resorting to a specific realization: the whole derivation will be then carried out in an abstract setting, holding for any infinite-dimensional representation; the Calogero-Van Diejen realization is recovered as a special case.
Accordingly, in section 1 we construct the complete set of commuting observables for one of the Calogero-Van Diejen Hamiltonians and solve the associated spectral problem.
In section 2, we turn to the deformed quantum system, and derive the spectrum and the common eigenfunctions for the corresponding set of observables. On the way, we are naturally led to introduce what we have called "z-harmonic polynomials" (or "q-harmonic polynomials", or "deformed harmonic polynomials"); to the best of our knowledge, they are new mathematical objects, defined as polynomials solutions of a suitably deformed version of the N-dimensional Laplacian.
In section 3, we mention some closely related open problems which, in our opinion, deserve further investigation.
To speed up the presentation of the results, most of the details of the calculations for the undeformed (resp. deformed) case are confined in Appendix I (resp. Appendix II).
1 The quantum undeformed case
Calogero-Van Diejen results
In [4] the authors discuss three different quantum versions of the classical CG, associated with the Hamiltonian function [7]: ... We will study the one which is associated with the Hamiltonian operator ... The Hilbert space chosen by Calogero and Van Diejen is the subspace H (+) generated by ... with inner product ... The domain of our operators will be the (linear) variety, everywhere dense in H (+) , of C ∞ functions with Fourier components of nonnegative frequency, periodic of period 2π in all variables together with their derivatives. The choice of H (+) is an admissible one, as it is an invariant subspace for all the commuting observables. As [b j , b † k ] = δ jk , we may use (in units ℏ = 1) the representation (3), in which Ĥ becomes ... We note explicitly that in the representation (3) the operator b † is no longer the hermitian conjugate of b, so that the Hamiltonian itself is no longer hermitian; however, the representation (3) has been used only as an intermediate technical step to derive the solutions, which at the end are recast in the original variables, thus restoring hermiticity. The basis elements corresponding to (2) in representation (3) are the monomials: ... We observe that the vacuum |0⟩ is given in both representations by a constant. Acting iteratively on |0⟩ with the single-particle creation operator we obtain: ... Formula (5) defines the (non-unitary) operator T̂ intertwining between the two representations. Calogero and Van Diejen have obtained the eigenvalues and eigenfunctions of the Hamiltonian in representation (3): ... , H 2m being an even harmonic polynomial of degree 2m in N variables. The degeneracy of each eigenvalue is equal to the number of even independent harmonic polynomials of degree 2m, i.e. ... Clearly, in the original representation the eigenvalues are the same, while the eigenfunctions, which we denote by Ψ k,m (q), can be easily obtained from φ k,m (x) through formula (5). Indeed, as φ k,m (x) is an even homogeneous polynomial of degree k, it can be written in the form ... where the coefficients c(m, l) are determined by the particular choice for the basis of the harmonic polynomials. Accordingly we have: ...
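The degeneracy count just mentioned can be checked numerically. The following sketch is my own illustration, not code from the paper: since the paper's explicit degeneracy formula is lost in extraction, the closed form used below is the textbook dimension of the space of harmonic polynomials of degree d in N variables (every homogeneous polynomial of even degree is automatically even under x → −x).

```python
# Count independent harmonic polynomials of degree d in N variables as the
# nullity of the Laplacian acting on homogeneous degree-d polynomials, and
# compare with the closed form C(N+d-1, d) - C(N+d-3, d-2), valid for d >= 2.
from itertools import combinations_with_replacement
from math import comb
import sympy as sp

def harmonic_dimension(N, d):
    x = sp.symbols(f"x0:{N}")
    monomials = [sp.prod(c) for c in combinations_with_replacement(x, d)]
    basis_dm2 = [sp.prod(c) for c in combinations_with_replacement(x, d - 2)]
    rows = []
    for m in monomials:
        img = sp.expand(sum(sp.diff(m, xi, 2) for xi in x))   # Laplacian of the monomial
        rows.append([sp.Poly(img, *x).coeff_monomial(b) for b in basis_dm2])
    A = sp.Matrix(rows)                  # Laplacian as a linear map between monomial bases
    return len(monomials) - A.rank()     # nullity = dimension of harmonic polynomials

N, d = 3, 4
print(harmonic_dimension(N, d))                       # 9 for N = 3, d = 4
print(comb(N + d - 1, d) - comb(N + d - 3, d - 2))    # closed form: should agree
```

For N = 3 and d = 2m this reproduces the familiar 2d + 1 count of spherical harmonics, which is the degeneracy relevant to the eigenvalues discussed above.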
Clearly, in the original representation the eigenvalues are the same, while the eigenfunctions, that we denote by Ψ k,m ( q) can be easily obtained from φ k,m ( x) through formula (5). Indeed, as φ k,m ( x) is an even homogeneous polynomial of degree k, it can be written in the form where the coefficients c( m, l) are determined by the particular choice for the basis of the harmonic polynomials. Accordingly we have:
Integrability of Calogero-Van Diejen system
Complete integrability of the Calogero-Van Diejen system can be proved using an algebraic method to construct the integrals of motion. This method, first introduced by Karimipour [8] while dealing with integrability of (1), has later been cast in a more general setting in [1], [2]. The basic idea is the following: suppose one is given a Poisson (resp. Lie) algebra g realized by means of analytic functions of canonical phase space variables (p, q) (resp. of canonical quantum operators (p̂, q̂)) with Casimir C ∈ U(g), and a coassociative linear mapping ∆ : U(g) → U(g) ⊗ U(g) (denoted as coproduct) such that ∆ is a Poisson (resp. Lie) homomorphism. It has been shown [1], [2] that coassociativity allows one to construct from ∆, in an unambiguous way, subsequent homomorphisms ∆ (m) : U(g) → U(g) ⊗m . Thus, we can associate to our algebra (or better, co-algebra) a classical (resp. quantum) integrable system with N degrees of freedom, whose Hamiltonian is an arbitrary (analytic) function of the N-th coproduct of the generators, and the remaining N − 1 integrals of motion are provided by ∆ (m) (C), m = 2, . . . , N. Incidentally, we notice here that Karimipour restricted himself to the particular case where the coproduct is the one related to the usual Hopf-algebra structure defined on a universal enveloping algebra, namely the primitive one, ∆(X) = X ⊗ 1 + 1 ⊗ X. Consider now the following sl(2) realization in terms of b, b † : ... The quantum Casimir operators read ... , where, for a moment, we have used the notation X̂ i ... The Hamiltonian can now be written as ... So we have the following complete set of N independent commuting operators: ...
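As a concrete illustration of this mechanism (my own sketch, not from the paper; it uses textbook su(2)-type normalizations and a finite-dimensional spin-1/2 representation rather than the bosonic realization (7), so numerical constants differ), the snippet below verifies that the partial Casimirs built from the primitive coproduct commute with each other and with the total generators on three sites.

```python
import numpy as np

I2 = np.eye(2)
X3 = np.diag([0.5, -0.5])
Xp = np.array([[0.0, 1.0], [0.0, 0.0]])
Xm = Xp.T

def kronN(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def site(op, j, N):
    """Embed a one-site operator at position j of an N-fold tensor product."""
    return kronN(*[op if k == j else I2 for k in range(N)])

def coproduct(op, n, N):
    """n-th primitive coproduct D^(n)(op) = sum of op acting on the first n sites."""
    return sum(site(op, j, N) for j in range(n))

def casimir(n, N):
    """Partial Casimir built from D^(n): X3^2 + (X+X- + X-X+)/2."""
    x3, xp, xm = (coproduct(o, n, N) for o in (X3, Xp, Xm))
    return x3 @ x3 + 0.5 * (xp @ xm + xm @ xp)

N = 3
C2, C3 = casimir(2, N), casimir(3, N)
comm = lambda A, B: A @ B - B @ A
print(np.allclose(comm(C2, C3), 0))                       # partial Casimirs commute
print(all(np.allclose(comm(C2, coproduct(o, N, N)), 0)    # C_(2) commutes with the
          for o in (X3, Xp, Xm)))                         # total generators as well
```

This is the finite-dimensional analogue of the statement that the family ∆ (m) (C), m = 2, . . . , N, together with any function of the N-th coproduct of the generators, forms a set of commuting observables.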
Solution to the Spectral Problem
In this section we will determine the spectrum for the complete set (9) of commuting observables. As we said in the introduction, we will work, as far as possible, in a purely algebraic setting. Accordingly, we will suppose that X̂ + , X̂ − , X̂ 3 are Hilbert space operators providing an infinite-dimensional representation of sl(2). Moreover, we will assume that the hermitian operator X̂ 3 is bounded from below, i.e., there exists a state |0⟩ (called "lowest weight vector") such that ... , where λ min is the minimum in the spectrum of X 3 . From the sl(2) commutation relations it follows that ... Due to the specific form of the co-product, the lowest weight vector for ∆ (n) (X 3 ) will be simply the tensor product of the lowest weight vectors for the single-particle operators X 3 . As in the one-particle case, the commutation relations imply that, for each n, the state |0⟩ ⊗ · · · ⊗ |0⟩ (N factors) belongs to the kernel of the operator ∆ (n) (X − ): ... We will call |0⟩ ⊗ · · · ⊗ |0⟩ the "ground state" of the system. Starting from the ground state, we will define the Hilbert space of the problem as the space generated by the basis: ... It is straightforward to see that, substituting representation (7) in the generators, one obtains exactly the same Hilbert space defined by Calogero and Van Diejen.
The ground state turns out to be an eigenstate of all the Casimirs. In fact, writing them in the form and using (10), we obtain: Now we are ready to prove the following proposition: The eigenfunctions of the complete set (9) of commuting observables are of the form: where H (2m) s is an "s-particle harmonic polynomial", i.e. it satisfies: The harmonic polynomials are generated through the recursive formula: where the constant a i,s,m,m ′ must be chosen in such a way that (14) holds.
In formula (15) we used the notation: ... Proof: First of all we compute the commutator: ... from which it follows: ... If we act with ∆ (N) (X 3 ) on a harmonic polynomial and repeatedly use this last formula, we obtain: ... We distinguish two cases: if n ≥ s, then condition (14) implies ... Vice versa, if n < s then the Casimir operator ∆ (n) (C) obviously commutes with the operator X + (s) ; on the other hand, from the homomorphism property of the coproduct it follows that it commutes with the operator ∆ (s−1) (X + ) as well, so that ∆ (n) (C) acts directly on the harmonic polynomial H ... Now we have again two possibilities: if n ≥ s ′ then we just showed that H ... is an eigenfunction as well. If n < s ′, then we will have: ... We can iterate the above procedure until we reach the ground state or a harmonic polynomial H ... such that n ≥ s (i) . In both cases we know that it is an eigenfunction of the operator ∆ (n) (C). Hence Proposition 1 is proved. • Condition (14) implies the following recurrence relation for the coefficients a i,s,m,m ′ : ... where for simplicity the labels s, m, m ′ have been omitted. Eq. (17) can be easily "solved", yielding the following closed formula for the coefficients a l,s,m,m ′ : (18) These results can be easily specialized to the realization (3) used by Calogero and Van Diejen. First of all we note that the sl(2) generators are expressed by: ... and the corresponding coproducts by: ... It follows that the ground state in this case is given simply by the constant function, with eigenvalue λ min n = n/4. The polynomials (15) are really harmonic (this is where the terminology comes from) and are given by the recursive formula: ... Actually, they form a basis in the space of harmonic polynomials.
The quantum deformed case
In [1] it has been shown how to associate to a Poisson-Hopf (Lie-Hopf) algebra a classical (quantum) integrable system and how to extend this procedure to q-algebras. In fact, q-algebras are obtained from Poisson-Hopf algebras through a process of deformation that preserves their Poisson-Hopf structure. It is therefore possible to associate to q-algebras integrable systems that are deformed versions of the ones associated to the original algebra. Our aim is to analyze the deformed version of the quantum system discussed in section 1.
The algebra to which this system is associated is U(sl(2)). The q-deformation of U(sl(2)), denoted by U q (sl(2)), is well known from the literature (see for example [11]): the generators satisfy the following commutation relations: ... and an admissible co-product is defined by: ... (we prefer to use z = ln q as the deformation parameter). We are going to realize this algebra in terms of the operators b and b † introduced in section 1, in such a way that the relation (X + ) † = −X − will hold whenever it does in the non-deformed case. We will see in a moment that this condition guarantees the hermiticity of the deformed Casimirs (22). A natural choice is to put: ... f (z, X 3 ) being an analytic function of the X 3 variable with a parametric dependence on z. The realization (20) amounts to setting to −1/2 the value of the one-body undeformed Casimir, consistently with the "bosonic" realization (7).
Imposing that these generators satisfy the commutation relations (19) we obtain a functional equation for f (z, X 3 ) (see Appendix 2), a solution of which is given by: ... We observe that, assuming the form (21) for f (z, X 3 ), we need invertibility of (X 3 − 1) 2 + 1 in (20), and this condition is always verified if X 3 is hermitian. The Casimir operator for this algebra is given by: ... It is easy to show that in the limit z → 0 we recover the sl(2) generators and Casimir.
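The statement that the deformed coproduct is an algebra homomorphism can be checked numerically in a finite-dimensional representation. The sketch below is my own check and uses the standard symmetric U_q(sl(2)) conventions with z = ln q, which may differ from Eq. (19) and the realization (20) by normalizations; it verifies the deformed commutation relation for the two-fold coproduct on two spin-1/2 sites.

```python
import numpy as np
from scipy.linalg import expm

z = 0.3
X3 = np.diag([0.5, -0.5])
Xp = np.array([[0.0, 1.0], [0.0, 0.0]])
Xm = Xp.T
I2 = np.eye(2)

def delta(op):
    """Deformed coproduct D(X±) = X± ⊗ e^{z X3} + e^{-z X3} ⊗ X±."""
    return np.kron(op, expm(z * X3)) + np.kron(expm(-z * X3), op)

DX3 = np.kron(X3, I2) + np.kron(I2, X3)        # the X3 coproduct stays primitive
DXp, DXm = delta(Xp), delta(Xm)

lhs = DXp @ DXm - DXm @ DXp                    # [D(X+), D(X-)]
rhs = (expm(2 * z * DX3) - expm(-2 * z * DX3)) / (2 * np.sinh(z))  # sinh(2 z D(X3))/sinh(z)
print(np.allclose(lhs, rhs))                   # True: the coproduct respects the relation
```

In the infinite-dimensional bosonic realization used in the paper the same homomorphism property is what allows the n-body Casimirs to be built iteratively, exactly as in the undeformed case.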
Having the co-product and the Casimir, we can define an integrable quantum system with HamiltonianH = ∆ (N ) (C z ) and integrals of motion given by ∆ (N ) (X 3 ), ∆ (m) (C z ), m = 2, . . . , N −1 that is the q-deformation (actually the z-deformation) of the one treated in section 1; moreover, we can easily solve the associated spectral problem.
Indeed, using the commutation relations (19), the n−body Casimir can be written in the following way: The crucial point is that, as in the undeformed case, the Casimir is the sum of a function of the coproduct of theX 3 generator plus the term ∆ (n) (X + )∆ (n) (X − ). This allows us to use the same procedure as in section 1 to construct the eigenfunctions for the complete set of commuting observables: We consider the same Hilbert space as defined in section 1. The lowest weight vector (|0 ) and its eigenvalue (λ min ) for the operatorX 3 are the same as for X 3 since it is unchanged under deformation. Since the coproduct for thẽ X 3 generator is itself unchanged, the lowest weight vector for the operator ∆ (N ) (X 3 ) will be again given by N |0 . . . |0 , which will be denoted again as the "ground state" of the system. From commutation relations (19) follows that even in this case the ground state belongs to the kernel of the operators ∆ (n) (X − ), n = 2, . . . , N.
We have the following proposition: The eigenfunctions of the complete set (23) of commuting observables are of the form: ... where H̃ (2m) s is an "s-particle deformed harmonic polynomial", i.e. it satisfies: ... These deformed harmonic polynomials are generated through the recursive formula ... where the functions a i,s,m,m ′ (z) must be chosen in such a way that (24) holds.
In proposition 2 we used the notation: ... The proof of this proposition proceeds in the same way as in the undeformed case, with some minor changes. The eigenvalues of the partial Casimirs corresponding to the ground state are defined by the formula: ... On the other hand, on a generic excited state, corresponding to a deformed harmonic polynomial H̃ (2m) s , we have (for n ≥ s): ... The recurrence relation for the coefficients a i,s,m,m ′ (z) is given by: ... which is manifestly the z-deformed version of the recurrence relation (17).
Concluding remarks
As it is well known, the Calogero-Gaudin system is superintegrable, both at the classical and at the quantum level. An algebraic explanation for that property has been recently proposed by Ballesteros et al. [12], in the context of the "two-photon algebra". An alternative interpretation relies on the fact that the "two-body Casimirs" C (ij) 2 := (∆ (2) (C)) i,j , which (Poisson) commute with ∆ (N ) (C), are actually the squares of the generators of SO(N); this readily entails that the quantities: commute in pairs for any choice of the (distinct) numbers {λ j }.
The quantum system characterized by {∆ (N ) (C), I j } has been extensively studied in the recent past for finite dimensional representations of sl(2) [5], [6], [14] through Bethe Ansatz and/or Quantum Inverse Scattering Method; for an infinite-dimensional representation, we refer again to [9].
What about superintegrability of the q-deformed system? So far, no q-deformed analog of the family {∆ (N) (C), I j } has been found [13], and moreover a strong, though not compelling, no-go argument has been recently raised in [15], based on the underlying r-matrix structure. We are actively working on this point to achieve a definite answer.
A further open issue is the solution of the spectral problem for the quantum deformed CG model in a finite-dimensional representation of sl q (2). Work is in progress on that, and we expect to get the results shortly: indeed, due to its purely algebraic nature, the approach we have followed here can be applied with the proper technical modifications to the finite-dimensional case as well.
Appendix 1
In this appendix we want to show that the set of eigenfunctions φ k,m,s (13) forms a basis of the Hilbert space of the problem.
To this aim we give the following proposition. If s = 2 we can apply our recursive formula (15) to the state H (0) 0 , so that we can construct only one harmonic polynomial for each value of m, i.e. h(2m, 2) = 1. For s = 3 and m fixed, the recursive formula (15) can be applied either to H (0) 0 or to a two-particle harmonic polynomial with m ′ < m, so that ... Following this line of reasoning it is clear that, given m, for a generic s we have . . .
In our case it means that ... , which proves our claim. We can write h(2m, s) in the form: ... so that the total number of harmonic polynomials of degree 2m in N variables is given by: ... Rescaling the indices and repeatedly using (27), we have: ... We now want to show that our eigenfunctions form a basis for the Hilbert space of the problem. We recall that the Hilbert space was generated by the monomials (11). We can decompose this space as the direct sum of the spaces of homogeneous polynomials of degree m for m = 0, . . . , ∞, which we denote with P ... [F (z, X 3 + 1) − F (z, X 3 − 1)] = sinh(zX 3 )/ sinh z. A solution of (35) is given by: ... where ρ(z) is an arbitrary function of z. | 2014-10-01T00:00:00.000Z | 1999-10-18T00:00:00.000 | {
"year": 1999,
"sha1": "11512caf137ca95f5287729cb65a8a8b4a287c3a",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/solv-int/9910008",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "ee8816187935c7269b50e723c88354093374ae71",
"s2fieldsofstudy": [
"Mathematics",
"Physics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
} |
134628536 | pes2o/s2orc | v3-fos-license | Combination of Organic and Inorganic Fertilizer Improves Wheat Yields and Soil Properties on Nitisols of Central highlands of Ethiopia
A field experiment was conducted for three consecutive cropping seasons (2015-2017) on farmers' fields in Welmera district of Oromiya Regional State with the objective of quantifying the effect of organic and inorganic fertilizers on growth and yield of wheat and on soil chemical properties. The treatments included eleven selected combinations of organic and inorganic nutrient sources, including farmyard manure, compost, nitrogen and phosphorus application. The experimental design was a randomized complete block with three replications. Results showed that wheat yield, yield components and soil chemical properties were significantly affected by the application of organic and inorganic fertilizer sources. The highest wheat (Triticum aestivum L.) grain yield (4278.1 kg/ha) and biomass yield (1385.5 kg/ha) were obtained from the application of a half dose of compost (based on the recommended N equivalent) and half doses of the recommended nitrogen and phosphorus fertilizers (the half dose contains 30 kg/ha N and 34.5 kg/ha P), followed by 4269.4 kg/ha and 13711.7 kg/ha for grain and biomass yield, respectively, due to the application of the full recommended N and P rates (60 kg N/ha and 69 kg P/ha) from inorganic fertilizers. Application of organic fertilizer improved organic carbon from 1.08% to 1.86%, total N from 0.14% to 0.28%, available phosphorus from 6.35 ppm to 18.14 ppm and pH from 4.04 to 5.08. The highest marginal rate of return was obtained from application of 75% compost (based on equivalent N rate) plus 25% N and P, which is economically the most profitable on Nitisols of the central Ethiopian highlands.
INTRODUCTION
Soil fertility is considered to be the major constraint in the highlands of Ethiopia due to continuous cultivation of these soils without adequate replenishment for many years. This has made highland soils deficient in nutrients, particularly nitrogen (N) and phosphorus (P). Several studies have indicated widespread nutrient mining, as compared to amendments, both in quality and quantity, resulting in a negative nutrient balance in Africa (Henao J. et al., 2006; Nkonya E., 2016), leading to severe nutrient deficiencies across ecological zones and consequently reducing agricultural productivity. In Ethiopia, the annual net loss of nutrients is estimated to be 40 kg N ha-1, 6.6 kg P ha-1 and 33.2 kg K ha-1 (Scoones I. et al., 1995). Nutrient depletion in Ethiopia has several causes. Application of organic fertilizers like crop residues and manure is limited because of competing uses for animal feed and household energy. Problems in the fertilizer sector have also restricted the wider use of inorganic fertilizers.
Wheat (Triticum aestivum L.) is one of the most important cereal crops widely grown by smallholder farmers under rain-fed conditions in Ethiopia. It ranks second next to tef (Eragrostis tef) in mid-altitude areas and to barley in high-altitude areas in terms of area coverage. Productivity is generally low for cereals like maize (2.9 t ha-1), tef (1.3 t ha-1) and wheat (2.0 t ha-1) (CSA, 2012). This is due to declining soil fertility, low fertilizer usage, and poorly performing and disease-susceptible local varieties (Asnakew Woldeab et al., 1991; Jemmal Mohammed, 1994). This is especially true for N and P nutrients due to continuous cropping of cereals and low fertilizer usage (Amsal Tarekegne et al., 2000).
One of the possible options to make use of low rates of chemical fertilizer application without creating nutrient deficiency in the soil could be recycling of organic wastes. However, it is difficult to attain sustainable productivity by either inorganic fertilizers or organic sources alone (Godara, 2012). The best remedy for soil fertility management is, therefore, a combination of both inorganic and organic fertilizers, where the inorganic fertilizer provides nutrients and the organic fertilizer mainly increases soil organic matter and improves soil structure and the buffering capacity of the soil (Godara, 2012). The combined application of inorganic and organic fertilizers is also widely recognized as a way of increasing yield and improving productivity of the soil sustainably (Mahajan et al., 2008). There are also research reports in Ethiopia revealing that the combined application of organic (vermicompost, compost and manure) and chemical (NP) fertilizers enhanced the yield of tef and reduced the amount of recommended chemical fertilizer by half (Girma et al., 2017). This experiment was, therefore, carried out with the objective of determining the effect of organic and inorganic fertilizers and their combinations on the yield and yield components of wheat. The major soil types of the trial sites are Eutric Nitisols (FAO-WRB, 2006). The crops widely grown in the study area include wheat (Triticum aestivum L.), barley (Hordeum vulgare L.), tef (Eragrostis tef), faba bean (Vicia faba L.) and potato (Solanum tuberosum L.). The wheat variety Digalu was used as the test crop in the experiment. The rates of organic fertilizers applied were calculated based on the recommended N equivalent rate of the inorganic source for the test crop. These treatment combinations were laid down in a Randomized Complete Block Design (RCBD) with three replications. Samples were collected from well-decomposed farmyard manure, compost and vermicompost before they were applied to the field. Their N and P contents were then analyzed in the laboratory to determine the rate of application of each treatment, which was based on the recommended N equivalent rate for the test crop. The contents of N and P before application in the analyzed samples were 0.88% N and 0.68% P for conventional compost, both on a 55% dry weight basis, and 1.72% N and 0.76% P for farmyard manure on a 50% dry weight basis. Manure and compost were applied to the field three weeks before sowing and thoroughly mixed into the upper 15 to 20 cm soil depth. Nitrogen and P fertilizers were applied in the form of urea and DAP, respectively. To minimize losses and increase efficiency, the N rate was split: half was applied at planting and the remaining half was side-dressed at the tillering stage of the crop, whereas all P rates were applied as a basal application at planting time. The seed was drilled in rows at the recommended seed rate of 150 kg/ha on 10th, 12th and 16th July of 2015, 2016 and 2017, respectively. All recommended agronomic management practices were carried out during the crop growth period.
Data Collection and Analysis
Composite surface soil samples were collected from the experimental fields (0-20 cm depth) before treatment application. Similarly, soil samples were collected after harvest of the crop from each plot and then composited by replication to obtain one representative sample per treatment. The collected samples were analyzed for the determination of pH, organic carbon (OC), total N and available P. Soil pH was determined with a pH electrode at a soil-to-water ratio of 1:1 (w/v) (Carter, 1993). Organic carbon was determined by the method of Walkley and Black (1934) and total N using the Kjeldahl method (Jackson, 1958). Available P was determined following the procedures of Bray and Kurtz (1945).
The plant parameters collected were grain yield, above-ground total biomass, plant height and spike length (average of 5 plants). Grain and biomass yield were measured based on plant samples taken from the ten central rows (2 m x 3 m = 6 m2), and plant height measurements (in cm) were taken from five randomly selected plants per plot, from the soil surface to the tip of the plant, at full maturity. At harvest, grain yield was adjusted to a moisture content of 12.5% and recorded in kg/ha.
The agronomic data were subjected to analysis of variance using the GLM procedure of the SAS statistical package (SAS, 2002). The total variability for each trait was quantified using separate and pooled analyses of variance over years using the following model (Gomez and Gomez, 1984): Pijk = µ + Yi + Rj(i) + Tk + (TY)ik + eijk, where Pijk is the response variable (e.g. grain yield, biomass yield, etc.), µ is the grand mean, Yi is the effect of the i-th year, Rj(i) is the effect of the j-th replication (within the i-th year), Tk is the effect of the k-th treatment, (TY)ik is the interaction of the k-th treatment with the i-th year and eijk is the random error. Duncan's multiple range test (DMRT) at the 5% probability level was used to detect differences among means. Finally, economic analysis was done following the CIMMYT methodology (1988).
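For readers without access to SAS, the same pooled analysis of variance can be sketched in Python. The snippet below is a hedged illustration only (not the authors' code); the simulated yields are placeholders, and a strict mixed-model analysis would test the year effect against the replication-within-year term rather than the residual.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Build a small balanced placeholder dataset: 3 years x 3 reps x 11 treatments.
rng = np.random.default_rng(0)
rows = [{"year": y, "rep": r, "treatment": t,
         "grain_yield": 3000 + 120 * t + 200 * (y - 2015) + rng.normal(0, 250)}
        for y in (2015, 2016, 2017) for r in (1, 2, 3) for t in range(1, 12)]
df = pd.DataFrame(rows)

# Pooled model: year, replication within year, treatment, treatment-by-year.
model = smf.ols(
    "grain_yield ~ C(year) + C(rep):C(year) + C(treatment) + C(treatment):C(year)",
    data=df,
).fit()
print(anova_lm(model, typ=1))
```

Mean separation analogous to Duncan's multiple range test could then be performed on the fitted treatment means, for example with a pairwise comparison routine.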
Initial Soil Characteristics
The soil characteristics of the experimental site before applying the treatments are summarized in Table 1. The particle size distribution of the surface layers of the experimental field indicated that the soil had a composition of clay (72.5%), silt (11.25%) and sand (16.25%), which is categorized as clay. The soil pH (H2O) was 4.8, indicating a strongly acidic soil reaction. The preferred range for most crops and productive soils is 4 to 8 (FAO 2000). Thus, the pH of the experimental soil is almost within the range for productive soils. The organic matter and total nitrogen contents of the soil before planting were found to be 2.26% and 0.13%, respectively. According to the rating of Westerman (1990), the organic matter content of the soil was medium. Total nitrogen (TN %) was rated as low according to the rating by Havlin et al. (1999).
The organic carbon content of the soil before planting was found to be 1.28%. Tekalign (1991) rated carbon contents of <0.5, 0.5-1.5, 1.5-3.0 and >3.0% as very low, low, moderate and high, respectively. Hence, the result showed that the carbon level of the soil was low. The cation exchange capacity was 15.2 cmolc kg-1, which is rated as medium according to Murphy's (2007) rating. The carbon-to-nitrogen (C/N) ratio was 10.48, which signifies a relatively high rate of mineralization and a low rate of N immobilization; normally, agricultural soils have a ratio of about 10. The available phosphorus content was 8.65 mg kg-1, which can be rated as low according to the rating by Tekalign (1991).
Soil Properties at the end of the Study
Soil chemical properties such as pH, organic carbon (OC), total N and available P measured for samples taken after harvesting were significantly (P<0.01) affected by the application of different rates of organic and inorganic fertilizers (Table 2). The results indicated relatively higher pH levels, OC and nutrient concentrations for plots treated with manure and compost (Table 2). The highest pH values, 5.08 and 5.02, were recorded from full doses of farmyard manure and compost, respectively. The average soil pH of the treatments was about 5.2, which is still acidic. The lowest soil pH (4.04) was recorded from the control plots. Similarly, Ano and Ubochi (2007) reported that application of animal manure and compost increased soil pH.
The values of OC were generally rated as low (Jones, 2003). The relatively highest OC values, 1.86% and 1.82%, were recorded from plots treated with full doses of compost and farmyard manure, respectively, and the lowest (1.08%) was from the control plot (Table 2). Likewise, the total N and available P determined after harvesting are rated high (Berhanu D., 1985). As mentioned above for OC, the highest soil total N (0.28%) was recorded from plots treated with full doses of compost. The lowest soil N content (0.14%) was obtained from the control plots. Similarly, the highest soil available P (18.14 ppm) was recorded from plots treated with one-fourth of compost + 50% of the recommended nitrogen and phosphorus fertilizers. However, all plots which received fertilizer, either alone or in combination, did not differ significantly from one another, except the control plot, which gave lower P values.
The above findings are in line with the reports of Eghball et al. (2004) that the residual effects of manure and compost applications significantly increased electrical conductivity, pH levels and plant-available P and NO3-N concentrations, with the lowest pH and nutrient contents observed on plots not treated with organic fertilizer. Sharma et al. (1990) also indicated that the use of organic fertilizer might have made the soil more porous and pulverized, allowing better root growth and development and thereby resulting in higher root cation exchange capacity (CEC). According to Sanchez (1976), the application of organic fertilizer directly influences the availability of native or applied phosphorus. Generally, the above results indicate that integrated use of nutrient sources can significantly improve the overall condition of the soil as well as agricultural productivity if the best alternative option is adopted in the area.
Effects of integrated nutrient application on wheat yield and yield components
The combined analysis of variance over three years revealed that the effect of cropping season was highly significant (p<0.01) for flowering date, plant height, and grain and biomass yield of wheat, and significant (p<0.05) for plant height, days to physiological maturity, spike length, thousand seed weight and grain yield. This study clearly indicated that productivity of wheat was significantly affected by the different treatments applied. Thus, applications of inorganic and organic nutrient sources, either alone or in combination, had a significant (p<0.05) effect on all parameters, such as grain yield, biomass yield, plant height, physiological maturity, flowering date, spike length and thousand seed weight of wheat. The highest wheat spike length, grain yield and biomass yield (10.77 cm, 4278.1 kg/ha and 1385.5 kg/ha, respectively) were obtained from the application of 50% compost (based on N equivalence) plus half the recommended rate of N and P, followed by the full recommended rate of N and P from inorganic fertilizer, which gave 4269.4 kg/ha grain and 13711.7 kg/ha biomass yield, respectively. The negative treatment gave the highest thousand seed weight and the latest flowering date (42.52 and 75.75 days, respectively). The remaining treatments gave inferior yields for all tested parameters, and the result from the control plot was the lowest, as usual (Table 3). Therefore, the results of this study clearly indicate that it is possible to produce wheat fairly well through an integrated nutrient application approach, rather than applying nutrients from one source. In line with the current result, research findings of Mamo et al. (2001) and Agegnehu (2012) indicated that wheat showed a significant response to integrated soil fertility management treatments containing both organic and inorganic forms under farmers' field conditions, so that they could be considered as alternative options for sustainable soil and crop productivity in the degraded highlands of Ethiopia. Moreover, the crop responded differently to application of N and P on different soil types.
Economic Analysis
As farmers attempt to evaluate the economic benefits of a shift in practice, a partial budget analysis was done to identify the rewarding treatments. Yield from the on-farm experimental plots was adjusted downward by 10% for management differences, to reflect the difference between the experimental yield and the yield that farmers could expect from the same treatment. The economic analysis was calculated as follows: TVC = the sum of the costs of inputs that vary (fertilizer price and associated labour); yield adjustment = grain yield x 10/100; AGY (adjusted grain yield) = total grain yield minus the yield adjustment; GB (gross benefit) = adjusted grain yield x grain price; NB (net benefit) = gross benefit minus total variable cost; MRR (%) = change in net benefit divided by change in total variable cost x 100. The three-year average market price of wheat grain (15 birr/kg), the farm-gate prices of N and P fertilizers (12 and 15 birr/kg, respectively) and labour for spraying, valued at 50 birr/person-day, were used. The economic analysis further revealed that the application of 75% of compost plus 25% of the recommended N and P fertilizers provided the highest marginal rate of return (MRR) of 1514.4% (Table 4), suggesting that for every birr invested in wheat production the producer would collect 15.14 birr after recovering the investment. Since the minimum acceptable MRR assumed in this study was 100%, the treatment with 75% of compost and 25% RNP gave an acceptable MRR. Therefore, the application of 75% compost (based on N equivalent rate) plus 25% of the recommended N and P fertilizers is found economical and can be recommended on Nitisols of the study area and similar locations in the central highlands of Ethiopia.
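To make the partial-budget arithmetic concrete, the following minimal sketch walks through the adjusted yield, gross benefit, net benefit and MRR steps. The treatment names, yields and variable costs are invented placeholders rather than values from Table 4; only the 10% yield adjustment and the 15 birr/kg grain price come from the text, and the dominance screening of the full CIMMYT procedure is omitted for brevity.

```python
# Minimal partial-budget / marginal-rate-of-return sketch (illustrative values only).
GRAIN_PRICE = 15.0  # birr/kg, three-year average quoted in the text

# Hypothetical treatments: (total grain yield kg/ha, total variable cost birr/ha)
treatments = {
    "control":              (2500.0,    0.0),
    "75% compost + 25% NP": (3900.0,  950.0),
    "100% recommended NP":  (4100.0, 1800.0),
}

def partial_budget(grain_yield, tvc):
    agy = grain_yield * (1 - 0.10)      # adjust experimental yield downward by 10%
    gb = agy * GRAIN_PRICE              # gross benefit
    nb = gb - tvc                       # net benefit
    return agy, gb, nb

# Rank treatments by total variable cost, then compute MRR between successive ones.
ranked = sorted(treatments.items(), key=lambda kv: kv[1][1])
previous = None
for name, (gy, tvc) in ranked:
    agy, gb, nb = partial_budget(gy, tvc)
    line = f"{name:22s} TVC={tvc:7.1f}  NB={nb:9.1f}"
    if previous is not None and tvc > previous[0]:
        mrr = (nb - previous[1]) / (tvc - previous[0]) * 100  # marginal rate of return, %
        line += f"  MRR={mrr:7.1f}%"
    previous = (tvc, nb)
    print(line)
```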
CONCLUSION
The results demonstrate that the three years' results were significantly different from each other, most probably owing to seasonal differences and the carry-over effect of the previous year's fertilizer application, as the plots were fixed during the experimental period. Integrated use of organic and inorganic fertilizers plays a critical role in both short-term nutrient availability and longer-term maintenance of soil organic matter and sustainable crop productivity in most smallholder farming systems in the tropics. The effects of organic nutrient sources such as farmyard manure are not as immediate as those of inorganic nutrient sources, but they are long-lasting and sustainable. Interventions to increase nutrient use efficiency and reduce N and P losses to the environment must be accomplished at the farm level through a combination of improved technologies and carefully crafted local policies that promote the adoption of improved N management practices while sustaining yield increases. Improved fertilizer products play an important role in the global quest for increasing nutrient use efficiency. The results of soil analysis after harvesting revealed that the application of organic fertilizer improved soil pH, OC, N, available P and exchangeable cations. The three-year results showed that the integrated application of organic and inorganic fertilizers improved the productivity of wheat as well as the fertility status of the soil. Applications of organic fertilizers not only improve the nutrient content of soils, but also improve the physical and biological condition of soils. | 2019-04-27T13:13:50.846Z | 2019-02-01T00:00:00.000 | {
"year": 2019,
"sha1": "512d2cc50435d63b365dc316f97ee59a7f1dad27",
"oa_license": "CCBY",
"oa_url": "https://www.iiste.org/Journals/index.php/JBAH/article/download/46671/48190",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "705e6138609fe4a627a7b8829b4f8d212706c10c",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
149481558 | pes2o/s2orc | v3-fos-license | Influence of lamb finishing system on animal performance and meat quality
This study aimed to assess the influence of lamb finishing systems on zootechnical performance, as well as on carcass and meat quality. The experiment was conducted at the APTA's experimental farm. Thirty-three lambs were used – both sexes, initial age of 90 ± 3 days, Texel with Santa Inês crosses – with each animal being one experimental unit and 6 males and 5 females per treatment. Treatments consisted of lambs finished on pasture, in semi-feedlot or in feedlot. The lambs were slaughtered at an average live weight of 35 kg. Weight gain was recorded, and carcass measures were taken in vivo by ultrasound. After slaughter, carcass conformation and yield, pH, temperature, color, water retention capacity and tenderness were measured. Animals finished on pasture had lower weight gain, were slaughtered at an older age, with lighter carcass weight, smaller loin area, lower shank compactness index, and lighter shoulder and shank weights, compared to the other production systems (p < 0.05). There was no difference between the semi-feedlot and the feedlot systems for the assessed characteristics. In conclusion, production systems affect animal performance, as well as carcass and meat quality, especially when it comes to important production aspects, such as slaughter age and yield of premium cuts.
Introduction
Lamb meat presents beneficial properties for human nutrition, is a source of proteins and essential amino acids, and has low concentration of lipids and saturated fat (Alves, Osório, Fernandes, Ricardo, & Cunha, 2014). In addition, its flavor, color and smell are more accepted by consumers compared to those of older animals. For these reasons, farmers seek to finish young ovines, with a short life between birth and slaughter compared to other ruminants. Thus, producers need to properly plan and execute their production system. In order to help producers in their decision-making, studies on production systems are necessary, taking into account different breeds, nutrition and climate (Macedo, Siqueira, & Martins, 2000a; Osório et al., 2012).
There are different raising systems for lamb finishing in Brazil, the most common being: animals raised on pasture only, on pasture with supplementation, and in feedlot. Pasture lamb finishing usually happens at properties with forage availability and quality compatible with the lambs' requirements. Its weak points are forage seasonality over the year and sanitary problems such as helminthiasis. Feedlot is used to optimize soil utilization and finish lambs more quickly, but draws attention to production costs and sanitary issues such as eimeriosis and urolithiasis. In the semi-feedlot system, the animal stays on the pasture and is given food supplementation. The latter is used when there is available forage, with helminths as a point of attention.
The implications of production systems on performance, meat quality and economic aspects have been investigated by different researchers (Macedo et al., 2000a; Poli et al., 2008; Salgado et al., 2018), but they are still a point of discussion among scientists, producers and the meat industry. Thus, the objective of this study was to compare the influence of different lamb finishing systems on performance, as well as on carcass and meat quality.
Material and methods
The experiment was run at the APTA's Regional Center of Northwestern São Paulo in the city of Votuporanga, state of São Paulo, Brazil. The city is located at coordinates 20º 25' 13" south latitude, 49° 58′ 42″ west longitude, at a height of 499 m, and has a dry tropical climate.
The lambs had an average weight of 3.6 kg at birth and were weaned with an average live weight of 18 kg at 90 days, when the lamb finishing phase began (start of the experiment). Thirty-three crossbred lambs were used – Texel with Santa Inês – males and females randomly distributed into three treatments, in which different finishing systems were compared: pasture, semi-feedlot and feedlot. Each treatment comprised six males and five females, and each animal was taken as one experimental unit. The experiment was considered complete when the average live weight of the lambs reached 35 kg, after which they were slaughtered.
Forage availability was ad libitum for animals raised on pasture, for a minimum consumption of 3% of the live weight in dry matter. Forage amount was measured every 15 days with a ruler and a 1 m² square. Semi-feedlot lambs stayed on an Aruana grass pasture, supplemented with concentrate at 2% of their live weight.
Feedlot lambs were provided a diet composed of 20% roughage and 80% concentrate, with consumption adjusted ad libitum. The roughage source used in the diets of the feedlot animals was cut Tifton hay. The animals were fed twice a day; the offer level was adjusted to allow 20% of leftovers in order to keep consumption ad libitum; offers and leftovers were weighed daily to calculate dry matter consumption.
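As a rough illustration of how daily dry matter consumption can be derived from the weighed offers and leftovers described above, the short sketch below uses hypothetical as-fed weights and dry-matter fractions; none of the figures are taken from the study.

```python
# Illustrative daily dry-matter (DM) intake calculation for one animal (hypothetical data).
offered_as_fed_kg = 1.20        # feed offered over the day, as-fed basis
refused_as_fed_kg = 0.25        # leftovers weighed the next morning, as-fed basis
dm_fraction_offer = 0.88        # DM content of the offered diet (from lab analysis)
dm_fraction_refusal = 0.90      # DM content of the refusals

dm_offered = offered_as_fed_kg * dm_fraction_offer
dm_refused = refused_as_fed_kg * dm_fraction_refusal
dm_intake = dm_offered - dm_refused          # kg DM consumed per animal per day

live_weight_kg = 28.0
intake_pct_of_lw = dm_intake / live_weight_kg * 100

print(f"DM intake: {dm_intake:.2f} kg/day ({intake_pct_of_lw:.1f}% of live weight)")
```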
The diet was formulated so as to have a high protein level in order to meet the demand of growing animals. The energy level did not surpass 3% of ether extract, so that there was no nutritional difference in supplementation between production systems. Samples of the food provided and of the leftovers were analyzed every 15 days to quantify dry matter and crude protein, following the Association of Official Analytical Chemists (AOAC, 2005), as well as ADF and NDF (Van Soest, Robertson, & Lewis, 1991).
Endoparasitic infection was controlled by means of the Famacha method, in accordance with the methodology proposed by Cintra, Ollhoff, and Sotomaior (2018); every 28 days, the animals' conjunctiva color was checked against a graphic scale with five categories. Only those that presented problems were dewormed. The lambs were weighed at birth and every 14 days, and, when close to slaughter weight, they were weighed every 7 days, without prior fasting. Thus, it was possible to calculate the average daily weight gain from birth to weaning (WGBW) and the average daily weight gain from weaning to slaughter (WGWS). Upon reaching the average live weight of 35 kg (slaughter weight) for the treatment – both males and females – the carcass was assessed in real time by ultrasound with a PIEMEDICAL Scanner 200 VET device, using a 3.5 MHz transducer and an 18 cm acoustic guide. For the reading, shearing and cleaning were first done in the area between the 12th and 13th ribs on the animal's left side. The transducer was placed perpendicular to the length of the Longissimus dorsi muscle. From the image, it was possible to read the loin eye area (LEA) and the subcutaneous fat thickness (SFT). This measurement allowed comparing values measured in vivo with measures obtained in the carcass and assessing the use of ultrasound as a prediction instrument.
The experiment was finalized when the average weight of the animals per treatment reached 35 kg of live weight. The lambs' finishing period was 173 days for pasture animals, 60 days for semi-feedlot and 69 days for feedlot. After a 16-hour solid fast, the animals were transported in a sheep-specific truck to the Nhandeara Slaughterhouse, located in the municipality of Nhandeara, São Paulo, 35 km from the city of Votuporanga, SP. Slaughtering was done under federal inspection, in compliance with humanitarian procedures, as required by Brazilian legislation. First, the animals were stunned with a penetrating captive bolt, and, immediately after, bloodletting was performed by sectioning the great vessels of the neck. Subsequently, skinning, evisceration and carcass cleaning were carried out, following the normal procedures used by the industry.
The carcasses were identified and weighed individually for determination of hot carcass weight (HCW) and hot carcass yield (HCY). After 24 hours of refrigeration at an average temperature of 5 °C, the carcasses were weighed again to obtain the cold carcass weight (CCW). The carcasses were cut in half and weighed, and the commercial cuts were separated: neck, arms (front and back), shoulder, rack, loin, breast-flank, and shank. Cut yields were calculated in relation to the weight of the cold half carcass.
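A minimal sketch of the yield arithmetic implied by this paragraph is given below. All weights are invented for illustration, and the carcass compactness index shown in a comment uses the conventional weight-to-length ratio rather than a value reported by the authors.

```python
# Illustrative carcass yield calculations (hypothetical weights).
slaughter_live_weight = 35.0   # kg, target slaughter weight in the study
hot_carcass_weight = 16.8      # kg (HCW), hypothetical
cold_carcass_weight = 16.3     # kg (CCW) after 24 h at ~5 °C, hypothetical
cold_half_carcass = cold_carcass_weight / 2
shoulder_weight = 1.55         # kg, hypothetical commercial cut

hot_carcass_yield = hot_carcass_weight / slaughter_live_weight * 100   # HCY, %
chilling_loss = (hot_carcass_weight - cold_carcass_weight) / hot_carcass_weight * 100
shoulder_yield = shoulder_weight / cold_half_carcass * 100             # cut yield, %

# Carcass compactness index (CCI) is conventionally CCW divided by carcass length.
carcass_length_cm = 62.0       # hypothetical
cci = cold_carcass_weight / carcass_length_cm                          # kg/cm

print(f"HCY = {hot_carcass_yield:.1f}%  chilling loss = {chilling_loss:.1f}%")
print(f"shoulder yield = {shoulder_yield:.1f}% of cold half carcass  CCI = {cci:.2f} kg/cm")
```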
The pH and temperature of the Longissimus muscle, in the area between the 12th and 13th ribs, were determined 45 minutes and 24 h after slaughter, using a portable digital pH meter, model HI 99163 (Hanna Instruments, São Paulo, Brazil). Then, the loin eye area (LEA, cm²) was determined by sectioning the Longissimus dorsi muscle with a cross-sectional cut between the 12th and 13th ribs and outlining it on tracing paper for later measurement with a planimeter (Menezes et al., 2015). Subcutaneous fat thickness (SFT, mm) was measured in the same region of the loin with the aid of a caliper. The following carcass measures were taken in accordance with the methodology described by Cezar and Sousa (2007): degree of fatness, conformation, inner and outer leg length, rump and thorax width, in addition to carcass and leg compactness indices.
The half carcass was divided into commercial cuts. The Longissimus was boned, packed in aluminum foil and cooled at 4 °C; meat samples were taken to the Food Technology Institute's Meat Technology Center for the other analyses.
To determine cooking loss (CL) and shear force (SF), the methodology proposed by the American Meat Science Association (AMSA, 2012) was adopted. After color determination, the samples were weighed individually to determine their initial weight. Then, a thermometer was inserted into the geometric center of each sample, and the samples were placed in an industrial electric oven at 170 °C until reaching an internal temperature of 40 °C, after which they were turned and kept in the oven until reaching an internal temperature of 71 °C. Subsequently, the samples were kept at room temperature (22 °C) until cool, and were then weighed again. Afterwards, 4 to 6 cylinders were taken from each sample, parallel to the fibers, and were sheared using a Warner-Bratzler device for shear force determination.
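The cooking-loss percentage and the per-sample shear force implied by this protocol reduce to simple arithmetic, sketched below with invented weights and shear readings.

```python
# Illustrative cooking loss (CL) and shear force (SF) calculations (hypothetical data).
raw_weight_g = 120.0          # sample weight before cooking
cooked_weight_g = 82.5        # sample weight after cooking and cooling to room temperature
cooking_loss_pct = (raw_weight_g - cooked_weight_g) / raw_weight_g * 100

# Shear force is usually reported as the mean of the cylinders sheared per sample.
cylinder_readings_kgf = [2.9, 3.4, 3.1, 2.7, 3.3]   # Warner-Bratzler readings, hypothetical
shear_force = sum(cylinder_readings_kgf) / len(cylinder_readings_kgf)

print(f"Cooking loss: {cooking_loss_pct:.1f}%  Shear force: {shear_force:.2f} kgf")
```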
A fully randomized design was employed, in a factorial arrangement corresponding to 3 finishing systems (pasture, feedlot and semi-feedlot) and 2 sexes (female and male), including the interaction between factors. All data were analyzed by analysis of variance through the MIXED procedure of the Statistical Analysis System software (SAS, 2013). For meat pH and temperature values, a split-plot-in-time arrangement was used. Mean comparisons were done by Tukey's test, with a statistical probability of up to 5%.
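As an analogue of the 3 x 2 factorial analysis described here, the sketch below fits the same model structure in Python with statsmodels on a fabricated data set; the authors used SAS PROC MIXED, so this is an illustration of the design rather than a reproduction of their analysis, and all variable names and values are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)

# Fabricated example data: one row per lamb (3 systems x 2 sexes, ~5 lambs per cell).
base_adg = {"pasture": 110, "semi_feedlot": 230, "feedlot": 245}   # invented means, g/day
rows = []
for system in ["pasture", "semi_feedlot", "feedlot"]:
    for sex in ["male", "female"]:
        for _ in range(5):
            adg = base_adg[system] + (10 if sex == "male" else 0) + rng.normal(0, 15)
            rows.append({"system": system, "sex": sex, "adg_g": adg})
data = pd.DataFrame(rows)

# Two-way ANOVA with interaction, analogous to the system, sex and system x sex effects.
model = smf.ols("adg_g ~ C(system) * C(sex)", data=data).fit()
print(anova_lm(model, typ=2))

# Pairwise comparison of finishing systems, analogous to the Tukey test used in the study.
print(pairwise_tukeyhsd(data["adg_g"], data["system"]))
```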
Results and discussion
The feedlot lambs' average diet consumption (roughage plus concentrate), on a dry matter basis, was 0.90 kg animal⁻¹ day⁻¹, and the semi-feedlot lambs' average concentrate consumption was 0.76 kg animal⁻¹ day⁻¹. Feedlot and semi-feedlot animals reached slaughter weight 108 days sooner than pasture-finished lambs (Table 2). This happened because pasture-finished lambs ingested less crude protein and energy and more fiber. These results reinforce the importance of pasture supplementation when forage quantity or quality is insufficient to meet the nutritional requirements of the animal, especially if the production goal is to slaughter young animals.
Hot and cold carcass weights were similar among lambs finished in feedlot and semi-feedlot, and the lowest values of these variables were found for animals finished on pasture.There was no significant difference between treatments for carcass yield.
Daily weight gain from weaning to slaughter of lambs finished in feedlot and semi-feedlot, for both males and females, was higher compared to lambs finished on pasture. The females' absolute weight gain values were lower than those of males only in the semi-feedlot system.
The slaughter age of lambs finished in feedlot and semi-feedlot in this experiment is consistent with the first two slaughter ages of feedlot Santa Inês lambs studied by Queiroz et al. (2015), which were 145, 156 and 190 days, with live weights of 27.14, 33.84 and 34.85 kg, respectively. This indicates that the weight gain verified in this experiment fits the standards of the breeds used, which are crosses that allow for higher meat production, muscle development and carcass conformation, being of great importance and capable of raising productivity in a shorter period of time.
As for carcass conformation characteristics, there was no influence from the different finishing types for most studied aspects, with the exception of the carcass compactness index (CCI), which was lower for pasture-finished lambs compared to the other systems (Table 3).
With conformation, it is possible to assess the muscle development of the carcass. In most countries involved in sheep carcass trading, conformation is adopted as an evaluation criterion, with carcasses of superior conformation being more valued (Macedo, Siqueira, Martins, & Macedo, 2000b). CCI is an indicator of the relationship between carcass weight and length and assesses the amount of tissue deposited per unit of length. This index is an indirect measure of conformation and is used to assess muscle production in animals with similar live weight (Simela, Ndlovu, & Sibanda, 1999). Factors such as breed and live weight at slaughter may influence the CCI; a higher index is likely to be found in meat breeds, since their muscle development is greater (Sañudo et al., 2006). This explains why animals finished in the pasture system have lower CCIs, which is also corroborated by the abovementioned studies. Cartaxo et al. (2009), studying Santa Inês lambs in feedlot, slaughtered at an age of around 150 days and with a live weight of 26 kg, found a lower CCI (0.24) than those of the three finishing systems of this study. A similar CCI (0.23) was found for crossbred Texel x Esguerra lambs slaughtered with an average live weight of 25.9 kg (Blasco, Campo, Balado, & Sañudo, 2016).
The lambs raised on pasture presented smaller half carcass, shoulder and rack cuts than the others. However, as for cut percentage, there was no statistical difference (p > 0.05, Table 4).
According to Huidobro and Cañeque (1993), the cuts that compose the carcass have different economic values, and their proportions are an important index for assessing the commercial quality of the carcass. The main commercial cuts of the sheep carcass are shoulder, shank, loin and rack. In the present study, there was no difference in cut proportion between the three finishing systems, but half carcass, shoulder and rack weights were higher for lambs finished in feedlot and semi-feedlot than for lambs finished on pasture. This result is consistent with the lower indexes presented by pasture-finished animals. Grandis et al. (2016) found commercial cut proportions similar to those of this study. McManus et al. (2013) observed that Santa Inês lambs had an average half carcass weight of 7.09 kg, similar to that of the lambs finished on pasture, and inferior to those of the other systems in this study.
Tissue composition varies by genotype, sex, diet, and slaughter weight and age (Grandis et al., 2016). The basic tissues that make up the carcass (muscle, bone and fat) are fundamental to determining the value of the carcass and its cuts (Osório et al., 2012). The finishing system did not influence the proportions of muscle, bone, fat and others. Nevertheless, loin muscle and fat proportions were influenced by sex; females presented lower muscle proportions and higher fat proportions than males (Table 5). Queiroz et al. (2015) found muscle, bone and fat proportions of 55.9, 20.9 and 23.6, respectively, in uncastrated male Santa Inês lambs slaughtered with a 3 mm SFT. It is worth highlighting that those lambs were fed a high-concentrate diet, which favors higher fat deposition in the carcass.
Initial and final pH values stood around 6.80 and 5.50 on average, respectively (Table 6). Initial and final temperatures ranged from 34 to 9 °C, respectively. The finishing system and sex did not influence pH values or temperature. pH is the main indicator of the final quality of the meat, as it has a significant influence on quality parameters. For sheep, when measuring the pH of a recently-slaughtered animal's carcass, the value should be around 7.0 to 7.3. The pH24 values found here vary between 5.5 and 5.8; such values are influenced by several factors, such as the animal's sex, slaughter age, production system, genetics and pre-slaughter management (Zimerman, Grigioni, Taddeo, & Domingo, 2011), indicating that the pH values in this study are normal and the meat did not present quality issues.
Lambs finished in feedlot presented lower SF values, with more tender meat than the others, followed by semi-feedlot and pasture lambs. There was no significant difference between treatments or sexes for CWL (Table 6).
It is possible to observe that the LEA of animals finished on pasture was inferior to that of animals finished in the other systems (Table 6). The correlation between measures obtained by ultrasound and in the carcass was 0.71 for SFT and 0.81 for LEA, values considered high, supporting the use of in vivo analyses for carcass prediction. LEA measurement is important to predict carcass yield because this index correlates with carcass meat percentage, and the subcutaneous fat thickness measured on the Longissimus dorsi muscle correlates highly with carcass fat composition (Cartaxo & Sousa, 2008). Queiroz et al. (2015) found higher UFT and SFT values (2.97 mm and 3.02 mm, respectively) in Santa Inês lambs finished in feedlot and slaughtered upon reaching an average live weight of 33 kg, at an average age of 155 days. Grandis et al. (2016), assessing uncastrated male Santa Inês lambs fed different amounts of soybean cake, observed a mean LEA of 14.60 cm² and a mean SFT of 2.53 mm. The different results for SFT may derive from differences in diet composition, slaughter weights and genetic groups.
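The agreement between ultrasound and carcass measures reported above is a plain Pearson correlation; a minimal sketch of that calculation with fabricated paired measurements is shown below (the arrays are not the study's data).

```python
import numpy as np

# Fabricated paired measurements: ultrasound vs. carcass loin eye area (cm^2) per lamb.
lea_ultrasound = np.array([11.2, 12.8, 10.5, 13.9, 12.1, 14.3, 11.8, 13.2])
lea_carcass    = np.array([11.8, 13.1, 10.9, 14.5, 12.0, 14.9, 12.4, 13.0])

# Pearson correlation between the in vivo and post-slaughter measurements.
r = np.corrcoef(lea_ultrasound, lea_carcass)[0, 1]
print(f"Pearson r (ultrasound vs. carcass LEA): {r:.2f}")
```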
Males presented higher L* and b* values compared to females, regardless of finishing system. Moreover, within each sex, that is, among males or among females, animals finished in feedlot presented higher L* and b* values, followed by semi-feedlot and pasture. Females showed higher a* values compared to males, regardless of finishing system (Table 7).
Factors such as genetics, production system, nutrition, age and final meat pH may influence L*, a* and b* values. These values may change as slaughter weight increases, due to muscle development, which raises the amount of myoglobin, and to more evident fat deposition, which decreases the amount of water in the muscle. As a result, there are lower L* values, which represent luminous intensity. Although the target slaughter weight was the same for both sexes, males were slaughtered heavier and were more muscular than the females (Table 6), which may have interfered with the meat color assessment.
Conclusion
In conclusion, the production system affects animal performance, as well as carcass and meat quality, especially with regard to important production aspects such as slaughter age and premium cuts. Nutritional supplementation is important to produce higher-quality meat.
Table 2 .
Performance and carcass data of lambs (males and females) in three finishing systems
Table 3 .
Carcass conformation measures of lambs (males and females) in three different finishing systems.
Table 4 .
Weights of commercial carcass cuts of lambs (males and females) in three different finishing systems
Table 5 .
Mean values for proportions of muscle, bone, fat and others of the L. dorsi of lambs (males and females) in three different finishing systems.
Table 6 .
Quality parameters of lamb meat (males and females) in three different finishing systems. pH - pH 45 minutes and 24 hours after slaughter; Temp - temperature 45 minutes and 24 hours after slaughter; CWL - cooking weight loss; SF - shear force; UFT - ultrasound fat thickness; ULEA - ultrasound loin eye area; SFT - subcutaneous fat thickness; LEA - loin eye area; SEM - standard error of the mean; P - statistical probability; NS - non-significant; * significant at p < 0.05.
Table 7 .
Color parameters (L*, a*, b*) of lamb meat (males and females) in three different finishing systems. L* - luminosity level; a* - red level; b* - yellow level; SEM - standard error of the mean; P - statistical probability; NS - non-significant; * significant at p < 0.05. Capital letters distinguish means in the column, and lower-case letters distinguish means in the row, by t test. | 2019-05-12T13:39:12.776Z | 2019-04-08T00:00:00.000 | {
"year": 2019,
"sha1": "68bbef1f7b5b9b6f5acc0050302c2635573a95ea",
"oa_license": "CCBY",
"oa_url": "http://periodicos.uem.br/ojs/index.php/ActaSciAnimSci/article/download/44742/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f5216f74469a27a75be69f900154855f3594d0a6",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
237601251 | pes2o/s2orc | v3-fos-license | Caught in the middle: a thematic analysis of the experiences of Korean-Canadian caregiver-employees in the greater Toronto and Hamilton area
Background The objective of this study was to investigate the experiences of caregiver-employees (CEs) from the Korean-Canadian community in the Greater Toronto and Hamilton Area. Methods Nine participants were recruited and invited to partake in data collection, which consisted of the completion of a sociodemographic questionnaire as well as a qualitative, semi-structured interview. The interview transcripts were thematically analyzed. Results The thematic analysis revealed four primary themes, each of which had three sub-themes. The four primary themes are: (i) tensions, (ii) adaptations to the dual role of being a CE, (iii) coping mechanisms, and (iv) desired changes to the status quo. Conclusion The results of this study suggest that Korean-Canadian CEs, as a consequence of their position at the convergence of Korean and Western cultural values, would be best supported through the provision of culturally sensitive supports and greater workplace accommodation.
Introduction
Canada, alongside most other industrialized countries, has been experiencing a significant demographic shift [1]. As indicated by the results of the 2016 Canadian census, longer life expectancies and falling fertility rates have contributed to the aging of the national population [2]. The same census revealed that, for the first time in the country's history, the share of seniors aged 65 years or older has surpassed the share of children aged 14 years or younger [2]. Population estimates from 2019 indicate that this gap continues to widen, with seniors now comprising over 17.5% of the population and children accounting for less than 16.0%. Some projections predict that by 2031, the share of seniors will increase to as much as 22.7% [3].
Canada's aging population is predicted to impact the quantity and quality of health, social, and long-term care services for seniors [1]. The anticipated increase in demand for such programs has called into question the ability of Canada's formal healthcare services to cope without undergoing major reform [4]. As a result, it is important to consider the role of informal sources of long-term care, such as family caregivers. In 2012, an estimated 8 million Canadians were actively providing informal care and support for friends, spouses, parents, and other family members unable to independently take care of themselves [5]. Family caregivers provide an important safety net for many seniors with chronic health issues, and provide significant cost savings associated with formal healthcare services [5].
The growing number of seniors has been accompanied by decreases in the share of working-age adults in the national population [1]. Population projections released by the Ontario Ministry of Finance predict that the provincial working-age population, defined as those between the ages of 15 to 64 years, will fall from 67.2% in 2018 to 61.9% in 2046 [6]. The decline of the working-age population, coupled with a growing demand for long-term care services, may increase the likelihood of more individuals taking on the dual role of caregiver-employees (CEs), defined as individuals who provide unpaid care to a dependent while simultaneously engaging in paid employment [7]. Literature suggests that CEs often face unique challenges when attempting to manage the simultaneous responsibilities of providing unpaid care and carrying out paid work [8]. Some challenges faced by CEs include: higher levels of stress, detrimental physical and mental health outcomes, and emotional fatigue [8].
This study is concerned with the experiences of CEs of Korean-Canadian heritage. 'Hyodo' refers to a traditional Korean cultural value analogous to the Confucian tenet of filial piety [9]. It states that grown children have a responsibility to care for their parents, often as reciprocation for the care that they had received in their youth [9]. This value is not only adhered to by Koreans residing in their cultural homeland, but is followed by some Korean immigrants living abroad [9]. Korean immigrants in Canada thus lie at the confluence of traditional Korean cultural values -which tend to place a greater emphasis on familial obligations, and Western cultural values -which tend to place a greater emphasis on individualistic lifestyles [10]. As a result, the Korean diaspora in Canada, which consisted of 198,210 individuals in 2016, may provide insight into a culturally unique perspective on caregiving [11]. Although the vast majority of this community is likely of South Korean descent, there is a lack of data concerning the national origins of Korean-Canadians [12]. It should be acknowledged that the Korean-Canadian community may not necessarily be homogenous in this regard; the lack of data makes it difficult to distinguish South Korean immigrants from ethnic Koreans of other national origins, including North Koreans, Joseon, Koryo-Saram, etc. For context, it should be noted that the term 'Joseon' refers to a self-identification used by some ethnic Koreans residing in Japan, while the term 'Koryo-Saram' refers to a similar term used by some ethnic Koreans residing in the post-Soviet states of Central Asia [13].
There is a lack of research on the experiences of CEs in the Korean-Canadian community; however, research does exist on the experiences of CEs in the broader Korean immigrant diaspora. Much of this literature was produced in the United States, reflecting the experiences of Korean-American CEs. One study by Lee et al. concluded that filial expectations and interpersonal conflict often shaped the perspectives of American-born Korean caregivers [14]. Another study by Kim and Theis also identified the importance of filial piety, as well as the prohibitive effect of language barriers, for Korean-American family caregivers [15]. As these findings were specific to Korean-American CEs, assessing their relevance to the Korean-Canadian community provides a comparative opportunity. Further, identifying the particular challenges faced by Korean-Canadian CEs, as noted by members of the community themselves, provides the opportunity to develop culturally sensitive supports aimed at reducing the burdens they face. This study aims to understand the perspectives of Korean-Canadian CEs in the Greater Toronto and Hamilton Area (GTHA), with respect to the simultaneous management of their caregiving duties and their paid employment.
Subjects and methods
Ethics approval for this study was obtained from the McMaster Research Ethics Board (MREB#: 2175). Korean-Canadian CEs were recruited from the GTHA using the following strategies: online advertisements posted on social media, brochures distributed at community centres, posters placed at ethnic supermarkets, and presentations given to the members of adult day programs. Prospective participants were required to complete a screening questionnaire, which assessed whether they fit the selection criteria for inclusion in the study. The selection criteria were applied to all prospective participants to ensure that they: (i) were of Korean heritage, (ii) lived in Canada, and (iii) had provided care for a dependent with chronic health issues while simultaneously employed in the labour market. A total of 9 participants, consisting of 6 women and 3 men, completed the screening questionnaire and were subsequently found to be eligible to participate in the study. These participants were interviewed about their experiences as CEs. For additional information on the sociodemographic characteristics of the study participants, refer to Tables 1 and 2. After completing the screening questionnaire, participants who fit the selection criteria were invited to select a time and location, at their convenience, for data collection. Upon meeting with the researcher, participants were assured that all information provided during data collection would remain anonymous and confidential. The researcher also explained that participants were permitted to decline to answer any questions asked, and had the right to retroactively withdraw any information from the study. All participants provided their written and verbal informed consent prior to data collection. This was documented through the use of oral informed consent logs and signed letters of informed consent. There were two phases of data collection. In the first phase, participants responded to a sociodemographic questionnaire consisting of ten close-ended questions. In the second phase, participants attended an interview which, based on the availability of each participant, lasted between 0.5 to 1.5 h. A semi-structured, biographical approach was used for the interviews. In the interviews, participants were encouraged to lead the conversation while the researcher provided as few prompts as possible. In the instances where prompts were provided, they were posed as open-ended questions which explored the experiences of participants and the stressors they faced as a result of their dual role as CEs. Listed below are several examples of open-ended questions asked during the interview: Please tell me about your experience caring for your family member/friend/etc. How has your daily routine changed since taking on the role of a caregiver? Have there ever been any conflicts between you and the recipient of your care?
It should be noted that only two people were present at each interview: the researcher conducting the interview, and the participant being interviewed. To compensate participants for their time, each received a $25 coffee shop gift card.
Interviews were audio-recorded whenever informed consent was received (n = 4). In such cases, the audio recordings were used to transcribe the interviews verbatim. When informed consent for audio recording was not received (n = 5), the researcher instead received informed consent to manually transcribe the interviews verbatim as they occurred. In such cases, the researcher transcribed the interview themselves, during and immediately following the interview. As the interviews were conducted in the preferred language of the participants, all interview transcripts were translated into English whenever necessary (n = 2). The researcher was conversationally fluent in both English and Korean. It should also be noted that, when possible, follow-up interviews were conducted to verify the accuracy of the data collected from the study participants.
A thematic approach was used to analyze the data [16]. Interview transcripts were hand-coded, which allowed for the organization of recurring ideas into broader categories. Each interview transcript was repeatedly analyzed until no additional categories could be identified by the researcher. The resulting categories were then consolidated and reduced to a manageable number. The categories created through this process served as the basis for the study themes. The items organized under each theme were further categorized into sub-themes, based on recurring elements within the broader theme. The themes and sub-themes identified in this manner were interpreted to reflect the key ideas underlying the responses of the participants in their interviews.
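Although the coding in this study was done by hand, the consolidation of codes into themes can be illustrated with a small sketch; the codes, theme groupings and counts below are invented and serve only to show one way such tallies could be organized.

```python
from collections import Counter

# Invented example: codes assigned by hand to interview excerpts (participant_id, code).
coded_excerpts = [
    ("CG1", "emotional distress"), ("CG1", "filial duty"),
    ("CG3", "neglect of own needs"), ("CG4", "emotional distress"),
    ("CG6", "seeking external support"), ("CG8", "filial duty"),
]

# Candidate themes are built by grouping related codes; the grouping here is illustrative.
theme_map = {
    "emotional distress": "Tensions",
    "neglect of own needs": "Tensions",
    "seeking external support": "Adaptations",
    "filial duty": "Coping mechanisms",
}

theme_counts = Counter(theme_map[code] for _, code in coded_excerpts)
for theme, n in theme_counts.most_common():
    print(f"{theme}: {n} coded excerpt(s)")
```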
Results
A thematic analysis of the interviews revealed four themes underlying the experiences of Korean-Canadian CEs: (i) tensions, (ii) adaptations to the dual role of being a CE, (iii) coping mechanisms, and (iv) desired changes to the status quo. Each of the primary themes has been further subdivided into three sub-themes. In this section, each primary theme and associated subthemes will be discussed in detail.
Theme 1: tensions
This theme addresses the internal and external conflicts generated by each participant's caregiving duties. The sub-themes address how engaging in these duties has caused participants to: (i) experience emotional distress; (ii) neglect their own needs; and (iii) strain their social networks.
Emotional distress
All 9 participants reportedly experienced some form of emotional distress due to their caregiving duties. Participants typically described their emotional distress as consisting of feelings of anxiety, depression, and stress. One participant (Caregiver 1) explained this in her interview. Participants stated that emotional distress contributed to feelings of burnout, compassion fatigue, and exhaustion with respect to their caregiving duties. Participants explained that this emotional distress not only had a negative impact on their caregiving, but also on other elements of their day-to-day life, such as their work and leisure. For instance, Caregiver 4 described the impact of compassion fatigue on her workplace interactions.
"There are definitely days when I come into work more exhausted. I worry of becoming emotionally numb to what I do, which would be a real big problem, since I work in healthcare …. I don't want to be this numb and unfeeling husk that treats people like quotas. I don't want my sympathy to burn out. I worry that if I don't manage to catch a breath every now and then, and get a break, then that's the direction I'm heading."
Neglect of own needs
Five participants felt that they had to neglect their own physical and psychological needs in order to fulfill their caregiving duties. Some participants explained that this was because they did not have the time or financial resources to commit to both sets of responsibilities. For instance, Caregiver 3 stated: "It's hard to pick a single thing about it, because it feels like I'm always tired. There's no time for a break, because there is always something that needs to be done. It can be frustrating." When presented with the option of caring for their dependents or attending to their own needs, these CEs were more willing to sacrifice the latter in favour of the former. In other cases, participants stated that the neglect of their own needs was the result of their inability to distance themselves, both physically and emotionally, from their caregiving. This idea was discussed in the following excerpt from Caregiver 6: "It can be so hard to get time off. Even when I do have some time to myself, I always find myself worrying about my mother and how she's doing. Now it feels like a large part of my life revolves around her, even when she's not actually there next to me."
Straining of social networks
Six participants stated that their caregiving duties have become a source of conflict in their personal relationships. In some instances, this conflict occurred when care recipients, especially parents, moved into the same household as their caregivers. In such cases, the resulting changes to domestic life were often significant. For instance, participants reported that after their care recipients moved in with them, members of their household often had to make changes to elements of their day-today life, including their sleep regimens, the distribution of chores, and household expenditures. These changes to the status quo of domestic life often manifested as conflicts between caregivers and other residents of the same household. Caregiver 1 explained how such tensions have strained her relationship with her son: "My son loves his grandmother, I know he does, but I understand why it is hard for him to be living with her when she is in this state. I remember my son and I had a fight where he mentioned he was too embarrassed to bring friends over because he didn't want them to see his grandmother. I was furious and just so shocked that he would even say that." Participants also faced, or at least anticipated, disagreements within their family network regarding their approach to their caregiving duties. It was reported that such tensions were the result of the discrepancies between two differing perspectives on caregiving: a hands-on approach associated with traditional Korean culture, and a relatively hands-off approach associated with Canadian or Western culture. This discrepancy is highlighted in the following quote from Caregiver 8, regarding his decision to place his mother in a nursing home: "My brother would be furious with me if he found out I was asking for outside help. He would think I am failing as a son …. He was always rather traditional, more than me. I think it is because I live in Canada while he has always lived in Korea. He would think that I should be ashamed for being unable to give my mother a good life on my own." In summary, participants report that their caregiving was often a source of conflict, both at an interpersonal and intrapersonal level. The resulting tensions were often catalyzed by the filial expectations associated with Korean cultural values. For instance, participants often prioritized their caregiving duties over most of their own needs. This placed further strain on participants in their role as CEs. Furthermore, as seen in the third sub-theme, the differences between Western and Korean attitudes on care were also a source of conflict. This conflict was typically seen between individuals with different levels of attachment to traditional Korean cultural values.
Theme 2: adaptations to the dual role of a caregiveremployee
This theme addresses the strategies participants used to balance their dual role as both caregivers and employees, including the set of responsibilities carried out in either role. The sub-themes address how participants managed their work-life balance by: (i) decreasing the physical distance between their work and home, (ii) reducing their commitments to their paid employment, and (iii) seeking external support with caregiving duties.
Decreasing the physical distance between work and home
Three participants reduced the physical distance between their home and their workplace in order to better manage their work-life balance. This decision was often influenced by participants' desire to reduce both their exhaustion as well as the time spent commuting, which they hoped would allow them to set aside more time and energy for their caregiving duties. For instance, Caregiver 8 explained his decision to work from home: "After my mother's fall, I had to work from home. That was a bigger change than I would've thought. I thought working at the office was tiring because I would go from working at the office to coming home to take care of my mother. I was leaving work to come back to more work. It was too much and nothing was getting enough of my time." However, Caregiver 8 explained that his decision to work from home has made it difficult for him to distinguish the distress resulting from his paid employment from the distress caused by his caregiving duties. He claimed that his inability to separate these sources of distress has in turn compounded the magnitude of distress he feels: "After I started to work from home, it made me realize how good I had it. Before, I had a change of setting from the office to home. Then my house became my office, and I combined those two sets of responsibilities. It no longer felt like I was having to balance taking care of my mother and taking care of my work. It felt like I was doing both at once."
Reducing commitments to paid employment
Eight participants have tried to prioritize their caregiving duties by reducing their commitments to their paid employment. Several different approaches were used to this end. For instance, after she began caregiving for her mother, Caregiver 6 decided to step down from her fulltime position to work part-time, accepting a decrease in salary in order to allocate more time to her caregiving: "I am part of a cabin cleaning team, which I do part-time. I used to work full-time, which is what I did for the last 20 years, but within the last year or so, when I started caring for my mother, I decided it would be best to scale it back. I felt more comfortable that way, knowing that I could still get a comfortable salary, with my husband in his job still, while also having the time to come home and care for my mother for most of the day." In another instance, Caregiver 1 explained that she had applied for short-term disability leave. She explained that after the first month of her disability leave, she faced pressure from her manager to return to work. She stated that her manager presented her with an ultimatum, asking her to either return to work or be let go. Viewing this incident as an indicator of the incompatibility of her caregiving and her paid employment, Caregiver 1 eventually decided to quit altogether in order to focus on providing care for her mother: "When my mother's dementia started to get worse and more problematic, I had to quit work to take care of her. Well, I had short term leave as well, but it was only for a month. After the month was up, my manager told me that I'd either have to come back to work, or else they'd have to let me go. My company was not at all flexible, so I decided to go." Seeking external support with caregiving: Six participants have accessed external support services and programs, beyond their personal social network, to assist with their caregiving. Examples of external support accessed by participants include adult daycare centres, nursing homes, and psychotherapists. All participants reported that their care recipients were unable to communicate in English. As a result, participants made an effort to seek out Korean-language services and prioritized culturally sensitive programs. Caregiver 3 explains her decision to bring her husband to a specific Koreanlanguage adult day program, despite it being located two hours from her home, on a weekly basis: "For a few hours every week, I can just relax and breathe. It's so much less tiring, knowing that he is safe with people who know how to take care of him. And here there are people like him. Not just with similar conditions, but people who look like him, talk like him, eat like him. I think that makes him feel so much safer." In summary, participants struggled to simultaneously balance their caregiving with their paid employment. Participants often had to reduce their commitment to one set of responsibilities to dedicate more attention to the other. In other words, participants who reduced their commitments to their paid employment often did so with the intent of focusing on their caregiving. Conversely, those who reduced their caregiving commitments often did so with the intent of focusing on their paid employment.
Theme 3: coping mechanisms
This theme addresses the strategies which participants used to manage the emotional distress they experienced as a result of their caregiving. The sub-themes address coping mechanisms commonly used by participants, including: (i) reminding oneself of their filial responsibility, (ii) seeking emotional support from social networks, and (iii) turning to religion.
Reminding oneself of their filial responsibility
Six participants reminded themselves of their filial responsibility to help manage the emotional distress associated with their caregiving duties. These participants made reference to their cultural upbringing, influenced by traditional Korean values, which emphasized the importance of fulfilling their filial obligations. In this regard, they justified the emotional distress that accompanied their caregiving as being necessary and part of their duty. For instance, Caregiver 1 stated: "We were raised on the same values that our parents were raised on, and for many of us, that's our only earnest connection with Korean culture. The younger generation in Korea does not seem to care as much for upholding this tradition, where we, the children of our parents, turn back to take care of them when they need us." Caregiver 2 described his caregiving as a necessary, reciprocal obligation to his mother: "It's a major motivation for me. What I mean is that when I get frustrated, I try to take a moment to step back and remind myself of how much my parents sacrificed for me and how much they cared for me as I was growing up." Caregiver 8 echoes this sentiment in the following excerpt: "It's because she is my mom. When I got tired from caring for her, I would remind myself: if I don't do this, who will? As her son, it is something that I had to do. It was more than just a responsibility, it was my everything."
Seeking emotional support from social networks
Four participants confided in friends or family to manage the distress associated with their caregiving. In such instances, participants turned to their social networks for assistance and emotional support regarding the stressors resulting from their caregiving and their employment. Many participants were more comfortable accessing the informal support offered by their social networks as opposed to the support offered by formal institutions. Furthermore, participants were generally more comfortable confiding in those with similar experiences as CEs. For instance, Caregiver 5 explained: "It helps having a person that I can talk to about my parents and my concerns about them going into the future, someone who really gets it …. It's a huge weight gone, having someone who knows what I'm thinking and can validate my concerns and thoughts."
Turning to religion
Three participants coped with their emotional distress by turning to religion. These participants stated that their religious practices provided a source of comfort and hope, which alleviated some of the emotional distress they faced. They explained that praying allowed them to perceive their caregiving duties, as well as the health of their recipients, from a more comforting angle. For instance, Caregiver 1 described how turning to religion has allowed her to rationalize her mother's declining health: "Going back to my faith really helped me get through the lowest points. I remember asking how God could let my mother fall to dementia when she had been nothing but kind her entire life, but later I came to see this as a sign. Caring for her made me more sympathetic to other people and their suffering." In summary, the coping mechanisms employed by participants often reflected some dimension of their cultural heritage. For instance, some participants stated that they would remind themselves of their filial responsibility when they felt overwhelmed by their caregiving duties. This cultural expectation served as a source of motivation. Furthermore, religion was used as a coping mechanism by participants of Christian denominations. This use of religion often allowed participants to rationalize and accept that several stressors associated with their caregiving were beyond their locus of control.
Theme 4: desired changes to the status quo
This theme addresses the systemic changes which participants believed would have a positive impact on their role as CEs. The desired changes can be grouped into three categories: (i) greater access to culturally sensitive support for their caregiving duties, (ii) greater access to financial support, and (iii) greater understanding from others.
Greater access to culturally sensitive support
Six participants were unsatisfied with their current level of access to services and programs offering culturally sensitive support with their caregiving duties. The necessity of such programs was emphasized by two points. As mentioned, all care recipients were reportedly unable to communicate in English. Secondly, 4 participants stated that although they were more comfortable communicating in English than their care recipients, they still had difficulty understanding English and would rather communicate in Korean whenever possible. These circumstances illustrate the importance of culturally sensitive programs in this study. Some participants attributed this insufficient level of access to the lack of programs offering culturally sensitive care services. Furthermore, participants tended to question the quality of the few programs which do exist, many of which operate informally. Caregiver 2 discussed both points in the following excerpt: "It's hard enough to find nursing programs with availabilities. Needing one that can provide services in Korean narrows our options. Then, of the nursing programs that qualify, we need to make sure they're actually good. It feels like it's downright impossible. There are a few informal programs we found, but they're just so sketchy. I wouldn't be surprised if they were unlicensed, you know, doing it under the table." Other participants explained that most programs offering culturally sensitive services seemed to be affiliated with local churches and religious organizations, which posed a barrier for those identifying as irreligious or from other religious denominations. Caregiver 9, who describes herself as an atheist, stated: "Religion needs to be removed from healthcare organizations that want to serve the Korean-Canadian community in Toronto. Like that religious part and God was not really helpful to me, and was actually alienating." Caregiver 9 identified a second barrier to accessing culturally sensitive support. She explained that when she first started caregiving for her mother, she struggled to find any information on the availability of culturally sensitive programs relevant to her needs. She explained that the lack of access to information on such programs was a significant source of stress, causing her to feel further isolated and burdened as a result of her situation: "There's no centralized way of getting information. It still runs how the Korean-Canadian community ran in the 70s and 60s, which is basically just word-of-mouth. Not even the internet wasn't really helpful. All the websites were dead or gibberish. As someone who wasn't involved in the Korean-Canadian community, it was impossible to find anything."
Greater access to financial support
Three participants stated that they would like to reduce their work commitments to focus on their caregiving duties, but lacked the financial means to do so. These participants stated that receiving financial support would not only allow them to devote more time to their caregiving duties, but would also allow them to provide better quality care to their care recipients. These perspectives are discussed in the following excerpt by Caregiver 4: "I don't think I could afford quitting my job to focus on taking care of things at home. We live fairly comfortably now, but that's with me bringing in most of the income. My husband also works, though I don't think his income alone would be enough to support us if I quit my job to focus on my parents. Not to mention the cost of supporting our children and my parents. It's scary, since if anything happens, a three month unpaid leave from work would be devastating. I don't know what we could do on our own."
Greater understanding from others
Seven participants stated that it would be easier for them to maintain their work-life balance if those around them were more understanding of their status as CEs. Some participants stated that they would want this greater understanding to come from their workplace, especially from their employers. For instance, Caregiver 5 noted that his employer struggled to understand his situation: "It's like management is trying to make me feel guilty for taking time off to help my parents. I don't like being guilted into picking one over the other, you know? My work or my parents …. I wish they tried to understand my situation better. I'm not trying to cheat the system. I'm not enjoying my days off. I'm going home to my dying mother." Other participants stated that they wished to receive a greater amount of understanding from their personal social networks. Caregiver 7 explained how a lack of understanding made it difficult to confide in others, especially friends and family in the Korean-Canadian community, about the frustrations resulting from her caregiving: "There is, I think, a cultural stance against asking for favours to avoid burdening them. The downside of this is that there is this stigma around even talking about the frustrations I face as a caregiver, so it can be hard to sit down with people I know in the community and say 'you don't need to do anything, I just need someone to listen.'" In summary, participants proposed several changes to the status quo which they believed would alleviate some of the stressors they faced as CEs. Many of the changes proposed by participants reflected the differing expectations of Western and Korean cultural attitudes with respect to care. Participants felt that they were caught in the middle of these two perspectives, often wanting to increase their caregiving in accordance with Korean cultural values but being unable to do so due to societal constraints reflecting Western cultural values.
Discussion
The study findings suggest that participants perceived their caregiving duties in a way that reflected their social and cultural heritage as part of the Korean-Canadian community. As all participants have resided in Canada for at least 16 years, it should come as no surprise that their perspectives on caregiving appeared to be a blend of Korean and Western cultural attitudes. Participants generally believed that Korean cultural attitudes were more family-oriented and placed a greater emphasis on one's direct involvement in care provision. On the other hand, participants generally believed that Western cultural attitudes were focused on individual motivations, placing less emphasis on one's direct involvement in care provision. The data indicate that the differing expectations of these cultural attitudes were often a source of conflict for participants. This conflict tended to take one of two forms: intrapersonal or interpersonal. In the context of this study, 'intrapersonal conflicts' refer to how a CE's actions may have affected their own behaviour, whereas 'interpersonal conflicts' refer to how a CE's actions may have affected another individual's behaviour or vice versa [17].
With respect to intrapersonal conflicts, participants wished to dedicate more time and effort to their care provision but were unable to do so without reducing their current quality of life. This was due to social and economic stressors, including: the cost of living, a lack of workplace accommodations, and fear of judgement by supervisors and coworkers. Participants perceived these pressures to be common in Canadian society and believed them to be a reflection of Western cultural attitudes on care, characterized as having less of an expectation for individuals to directly care for family and friends. Participants believed that this particular perspective was unaccommodating given Korean cultural attitudes, in which there was a greater expectation for individuals to directly care for family and friends. Participants noted that the aforementioned stressors made it difficult for them to increase their level of caregiving without compromising their economic, physical, and mental wellbeing. As a result of these constraints, the actual level of caregiving displayed by participants did not necessarily reflect their own beliefs and attitudes of what ought to be. This may mark a significant difference between the experiences of Korean-Canadian CEs and Korean CEs residing in their cultural homeland.
With respect to interpersonal conflicts, younger participants, who typically described themselves as assimilated into Canadian society, faced tensions with members of the Korean-Canadian community who had a stronger adherence to traditional cultural values. These conflicts were the result of disagreements over the extent to which participants should directly involve themselves in the care received by their care recipients. While younger participants were typically willing to access external assistance in the form of interpreters, nursing homes, etc., these resources were viewed as being less acceptable by older members of the Korean-Canadian community. Similar attitudes have been documented among caregivers from other cultural groups emphasizing filial piety. A study by Donovan and Williams revealed that Vietnamese-Canadian caregivers "did not believe that care recipients in hospitals and nursing homes would receive the level of care needed, especially compared with what they could provide at home" [18]. This statement mirrors the preference observed among older members of the Korean-Canadian community, in which formal caregiving was deemed to be less acceptable than informal caregiving.
Cultural differences also influenced the type of caregiving provided by participants. All but one of the participants stated they were providing care for their parents, who were described as comparatively less assimilated into Canadian society. As a result, participants often had to mediate between Korean and Canadian culture in their caregiving role. This was most notable in the context of language, as most care recipients were unable to communicate in English. Participants were responsible for scheduling appointments and speaking with medical staff, thereby allowing their care recipient to access formal healthcare services despite language barriers. Although care recipients heavily relied on their caregivers to access the Canadian healthcare system, there was sometimes a cultural divide between them. This was notable among Canadian-born participants who were caring for parents who had emigrated from Korea. These participants tended to be less comfortable communicating in Korean than English, casting doubt on their ability to relay healthcare information in a nuanced manner. Previous studies have documented the effects of language barriers on immigrant access to healthcare. For instance, a study by Sethi et al. (2017) found that language barriers were a major stressor for Mandarin-speaking CEs [19]. The study noted that Mandarin-speaking care recipients were often unable to independently access healthcare services outside of their respective communities due to language barriers, thus adding to the burden experienced by their caregivers. Similarly, another study by Sethi et al. (2018) also identified intergenerational differences as a source of stress and emotional conflict for CEs [20]. The intergenerational differences identified in Sethi et al's (2018) study extended beyond the role of language, including factors such as food preparation, etc. [20] Gender also had a significant role in shaping the caregiving experiences of participants. Of the 9 participants recruited into the study, 6 identified themselves as female and 3 identified themselves as male. There seemed to be a greater expectation for female children, as opposed to their male siblings, to take on the responsibility of caring for their parents. For instance, 3 of the 5 female participants with experience caregiving for a parent stated that they had male siblings, whereas the remaining 2 were the only children in the family. These participants shared the sentiment that the responsibility of caring for their parents "naturally fell to them" rather than their male siblings. Furthermore, 1 of these 3 participants stated that in addition to caring for her own mother, she was also expected to provide care for her husband's parents. In contrast, all 3 male participants stated that they had male siblings, with 2 of them reporting that they were the eldest child. In the 1 remaining case, the participant in question shared his belief that it was unusual for him, as the youngest child, to be caring for his mother. It should also be noted that male participants were generally more hesitant, relative to their female counterparts, to reduce their commitments to their paid employment to focus on their caregiving. Similar findings were reported in a scoping review by Maynard et al., which proposed that this tendency may be a product of the gendered nature of the workforce [21]. 
It was stated that in "couple[s] faced with work-care conflicts, the partner earning less and with less job security will assume the caring role", which, in the context of the current gendered distribution of workforce participation and wage, increased the likelihood of women in a household sacrificing their employment [21].
Furthermore, prior studies have examined religious differences as a factor preventing immigrant caregivers from accessing support outside their respective communities [22]. However, there is a lack of literature examining religious differences as a factor preventing caregivers from accessing support within their communities. In response to the sociodemographic questionnaires issued as part of this study, 55.56% of participants identified themselves as Protestant, 11.11% as Catholic, and 33.33% as having no religion. The religious makeup of the study participants broadly reflects the data collected from the 2001 Canadian census, where 51% of Korean-Canadians identified themselves as Protestant, 25% as Catholic, and 20% as having no religion [23]. The dominance of Protestant Christians in the Korean-Canadian community is reflected by the affiliation of many culturally sensitive support programs in the GTHA with Protestant Christian churches and community centres. As mentioned, this particular observation was also noted by participants of the study. Non-religious participants reportedly felt alienated by the influence and incorporation of religion in the services provided by these programs. These participants stated that they would prefer to attend secular programs offering the same services, but were unable to do so because of their limited availability.
Participants also faced challenges commonly experienced by CEs in general, regardless of their social or cultural heritage. For instance, participants often struggled to simultaneously manage their responsibilities both as caregivers and as employees. Due to limitations in time, energy, and resources, participants often felt that they had to reduce their commitments to one of these areas of responsibility in order to allocate more attention to the other. When participants chose to distance themselves from their caregiving in order to focus on their paid employment, they often felt guilt, emotional distress, and judgment from their social networks. In contrast, when participants chose to distance themselves from their paid employment to focus on their caregiving, consequences included early retirement, extended leaves, and shifts from full-time to part-time work. Participants noted that their decision to choose one area of responsibility for the other tended to be triggered by a lack of accommodation from their workplaces, which offered them little flexibility and left them in precarious situations. This was especially problematic when participants encountered sudden and unpredictable events associated with caregiving, such as the onset of a medical emergency. Coupled with a lack of workplace accommodations, such events often caused participants to realize that they could not continue to manage both areas of responsibility at a sustainable level. These assessments are corroborated by conclusions drawn from prior literature. For instance, in a scoping review by Ireson et al., it was stated that a "lack of employer support has health and financial consequences for caregiver-employees, such as missed workdays, early retirements and reduced productivity" [24]. The authors also advocated for the implementation of policies which could render workplaces more accommodating to CEs by taking into account the unique challenges they face. A systematic literature review by Plöthner et al. identified many of the same factors when assessing challenges commonly encountered by CEs [25].
Implications for research and practice
The study findings may be used to develop strategies aimed at supporting Korean-Canadian CEs. For instance, an effort could be made to increase the availability of care support programs, such as respite day and home care programs, offering services in both English and Korean. This may allow for culturally sensitive communication of information to caregivers and their care recipients.
It may also be beneficial to increase the availability of culturally sensitive services (e.g. patient navigators, translators, etc.) within the healthcare system, especially those catering to traditional Korean cultural attitudes emphasizing the role of caregivers in care provision. This could increase the degree to which Korean-Canadian CEs feel comfortable accessing formal support. Furthermore, efforts could be made to increase the availability of secular programs offering culturally sensitive caregiving support. This may reduce the degree of alienation felt by non-religious and non-Protestant members of the Korean-Canadian community.
Finally, strategies could be implemented to render workplaces more accommodating to the needs of employees who are simultaneously caregivers. This may include employers being exposed to informational resources explaining the challenges faced by CEs in general, as well as the challenges unique to CEs of a particular cultural heritage (see ghw.mcmaster.ca). This could promote a greater degree of understanding from employers with respect to the particular circumstances of their employees.
In future research on the experiences of CEs in the Korean diaspora, it may be valuable to see how the experiences of Korean immigrants living in other countries compare to the findings in this study. We anticipate that the study findings will be largely applicable for Korean CEs residing in countries with similar social, cultural, and economic contexts as Canada, such as Australia, as we have seen with the research from the United States. That being said, these findings may be less applicable for Korean CEs residing in developing, or least-developed economies with a non-Western cultural context.
Limitations
The scope of this study is not without limitations. To begin, the external validity of this study, as with most qualitative research, may be undermined by its small sample size. The study findings may thus not be generalizable to Korean-Canadian CEs beyond the participants of this study. Furthermore, as indicated by the results of the sociodemographic questionnaires, all participants in the study have resided in Canada for at least 16 years. This may not be representative of the general Korean-Canadian community, considering how South Korea represented the tenth-largest source of immigration to Canada in 2016 alone [26]. It should also be noted that the overwhelming majority of participants reported that they were caring for parents, and thus may not represent the circumstances of Korean-Canadian CEs providing care for siblings, friends, etc. Moreover, as caregiving is a gendered experience, it should be acknowledged that all study participants who disclosed their gender identity and sexual orientation stated that they were cisgender and heterosexual respectively. The findings of this study may not reflect the needs and perspectives of individuals of non-binary gender identities and non-heterosexual orientations. In order to develop a greater and more relevant insight into the experiences of Korean-Canadian CEs, it would be beneficial for future studies to recruit a larger sample size of participants from a greater diversity of circumstances. Finally, while this study used a qualitative approach to obtain a nuanced understanding of the experiences and perspectives of Korean-Canadian CEs, future studies may use quantitative measures in addition to qualitative ones so as to gain a more precise understanding of their experiences and burdens.
Conclusion
This study examined the perspectives of Korean-Canadian CEs regarding their dual role as both paid employees and informal caregivers. The findings suggest that in addition to the challenges commonly faced by CEs in general, CEs in the Korean-Canadian community tend to encounter challenges unique to their social and cultural heritage. The study findings have important implications for formal program planners in the Canadian healthcare system, as well as informal program planners in the Korean-Canadian community. As both communities seek to address the challenges of an aging population, further support should be extended to informal family caregivers. Programs aiming to support CEs of a particular social and cultural heritage must be cognizant of the unique challenges they face and, in so doing, provide them with the most effective and culturally sensitive solutions possible. | 2021-09-23T13:50:36.096Z | 2021-09-23T00:00:00.000 | {
"year": 2021,
"sha1": "026e72d4306920b4b0320a87c6d4200201c4cdf7",
"oa_license": "CCBY",
"oa_url": "https://bmcpublichealth.biomedcentral.com/track/pdf/10.1186/s12889-021-11812-7",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "026e72d4306920b4b0320a87c6d4200201c4cdf7",
"s2fieldsofstudy": [
"Linguistics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119640647 | pes2o/s2orc | v3-fos-license | On Copson's inequalities for $0<p<1$
Let $(\lambda_n)_{n \geq 1}$ be a non-negative sequence with $\lambda_1>0$ and let $\Lambda_n=\sum^n_{i=1}\lambda_i$. We study the following Copson inequality for $0<p<1$, $L>p$: \begin{align*} \sum^{\infty}_{n=1}\left (\frac 1{\Lambda_n} \sum^{\infty}_{k=n}\lambda_k x_k \right )^p \geq \left ( \frac {p}{L-p}\right )^p \sum^{\infty}_{n=1}x^p_n. \end{align*} We find conditions on $\lambda_n$ such that the above inequality is valid with the constant being best possible.
Introduction
Let $p > 0$ and $x = (x_n)_{n \geq 1}$ be a positive sequence. Let $(\lambda_n)_{n \geq 1}$ be a positive sequence and let $\Lambda_n=\sum^n_{i=1}\lambda_i$. The above two inequalities are equivalent (see [10]) and the constants are best possible. When $\lambda_k = 1$, $k \geq 1$ and $c = p$, inequality (1.1) becomes the celebrated Hardy inequality ([12, Theorem 326]). It is noted in [12] that the constant $p^p$ in (1.4) may not be best possible, and the best constant for $0 < p \leq 1/3$ was shown by Levin and Stečkin [13, Theorem 61] to be indeed $(p/(1-p))^p$. In [8], it is shown that the constant $(p/(1-p))^p$ stays best possible for all $0 < p \leq 0.346$. It is further shown in [11] that the constant $(p/(1-p))^p$ is best possible when $p = 0.35$.
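For orientation, the Hardy-type inequality for $0<p<1$ that (1.4) refers to (the case $\lambda_k \equiv 1$ of the inequality studied here) can be written out as below; this display is a reconstruction inferred from the surrounding discussion rather than a verbatim quotation, so the exact form in the source may differ:
\begin{align*}
\sum^{\infty}_{n=1}\left(\frac 1n \sum^{\infty}_{k=n} x_k \right)^p \geq p^p \sum^{\infty}_{n=1}x^p_n, \qquad 0<p<1,
\end{align*}
with the Levin and Stečkin result replacing the constant $p^p$ by the larger value $(p/(1-p))^p$ when $0 < p \leq 1/3$.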
There exists an extensive and rich literature on extensions and generalizations of Copson's inequalities and Hardy's inequality (1.3) for $p > 1$. For recent developments in this direction, we refer the reader to the articles in [6–11] and the references therein. By contrast, the case $0 < p < 1$ is less well known, as can be seen by comparing inequalities (1.3) and (1.4). On the one hand, the constant in (1.3) is shown to be best possible for all $p > 1$. On the other hand, though it is known that the best constant that makes inequality (1.4) valid is $(p/(1-p))^p$ when $0 < p \leq 0.346$, it is shown in [8] that the constant $(p/(1-p))^p$ fails to be best possible when $1/2 \leq p < 1$, and the best constant in these cases remains unknown.
Our goal in this paper is to study the variation (1.5) of Copson's inequalities for $0 < p < 1$. It is an open problem to determine the best possible constant that makes inequality (1.5) valid in general. Our choice of presenting the constant in the form $(p/(L-p))^p$ in (1.5) is motivated by the study of the analogue case of inequality (1.5) when $p > 1$. We define the quantity $L_\lambda$ accordingly. A result of Cartlidge [3] shows that when $L_\lambda < p$ for $p > 1$, a corresponding inequality holds for all non-negative sequences $x$. We shall see in Theorem 1.3 below that the constant given in inequality (1.5) is indeed best possible for certain sequences $(\lambda_n)$ and certain ranges of $p$ when one replaces $L$ by $L_\lambda$ in (1.5). This includes the case concerning the classical inequality (1.4) (with $p^p$ replaced by $(p/(1-p))^p$ there).
Further, let $q < 0$ be the number satisfying $1/p + 1/q = 1$; we note that inequality (1.5) is equivalent to its dual version (1.8) (assuming that $x_n > 0$ for all $n$). The equivalence of the two inequalities can be easily established following the discussions in [8, Sect. 1].
Our main result gives a condition on $\lambda_n$ and $L$ such that inequalities (1.5) and (1.8) hold. For this purpose we define, for constants $p$ and $L$, the quantities $a_1(L, p)$ and $a_2(L, p)$ in (1.9). Our main result is the following statement.
Note that the values of $p$ are not given explicitly in (1.10), nor by the conditions $a_i(L_\lambda, p) \geq 0$, $i = 1, 2$. Thus, Theorem 1.1 is not readily applied in practice. For this reason, and with future applications in mind, we develop the following result.
then inequality (1.5) holds, with $L$ replaced by $L_\lambda$ there, for all non-negative sequences $x$.
Suppose that there exist positive constants $1/2 < L < 1$, $0 < M < 1$, $L + 2M < 1$ such that, for any integer $n \geq 1$, condition (1.14) holds. Then inequality (1.5) holds for all non-negative sequences $x$ when $p$ satisfies (1.15). We remark here that it is easy to see that the minimum on the right-hand side of (1.15) can take either value. We now apply Theorem 1.2 to study inequalities (1.11)-(1.12). As the case $\alpha = 1$ yields the classical inequality (1.4), we concentrate on the case $\alpha > 1$, and we deduce readily from Theorem 1.2 the following result. Theorem 1.3 Let $\alpha \geq 1$ and $p_{1/\alpha}$ be defined as in (1.13). Then inequality (1.11) holds for all non-negative sequences $x$ when $\alpha > 1$, $0 < p \leq p_{1/\alpha}$, and inequality (1.12) holds for all non-negative sequences $x$ when $\alpha \geq 2$, $0 < p \leq p_{1/\alpha}$. The constants are best possible in both cases.
Proof of Theorem 1.1
For the first assertion of Theorem 1.1, our goal is to find conditions on the $\lambda_n$ such that inequality (1.5) holds for $0 < p < 1$, $L > p$. It suffices to prove the inequality with the infinite sums replaced by finite sums from $n = 1$ to $N$ (and $k = n$ to $N$) for any integer $N \geq 1$. Note that, as in [6, Sect. 3], we have a corresponding estimate for any positive sequence $w = (w_n)$. Suppose we can find a sequence $w = (w_n)$ of positive terms such that a suitable bound holds for any integer $n \geq 1$.
Then the desired inequality follows. Now we make a change of variables, $w_i \to \lambda_i w_i$, to recast the above inequality accordingly. We then define our sequence $w = (w_n)$ to satisfy $w_1 = 1$, and we inductively obtain an expression for $w_n$ for $n \geq 2$. This allows us to deduce a lower bound valid for all $n \geq 1$. We now prove the second assertion of Theorem 1.1. We set $x = \lambda_n/\Lambda_n$, $y = \lambda_{n+1}/\Lambda_{n+1}$ to recast inequality (1.10) in terms of $x^{1/(1-p)} y^{p/(1-p)}$.
To facilitate also the proof of Theorem 1.2 below, we proceed by taking the condition (1.14) into consideration to assume that L is a constant such that 1/y ≤ 1/x + L + Mx, where M ≥ 0 is another constant. As the function t → (t -1) (1+p)/(1-p) t -p/(1-p) is an increasing function of t ≥ 1, we deduce that it suffices to prove the above inequality for 1/y = 1/x + L + Mx, which is equivalent to showing that f L,M,p (x) ≥ 0 for 0 < x ≤ 1, where Suppose that (1.6) is valid and L λ ≥ 1. In this case we set L = L λ and M = 0 so that it suffices to show that f L,0,p (x) ≥ 0. Calculation shows that Suppose that 0 < p ≤ 1/3. We want to show that g L,p (x) ≥ 0 for 0 < x ≤ 1. We first note that we have We may now assume that Otherwise, we have trivially g L,p (x) ≥ 0. We then estimate (1 + (L -1)x) It then follows that g L,p (x) ≥ u L,p (x), where Suppose first that We then deduce that We regard the last expression above as a linear function of x to see that its derivative with respect to x is As we have 0 < p ≤ 1/3 and L ≥ 1, we deduce that It follows from this that the last expression in (2.5) is minimized at x = 0 with corresponding value being u L,p (0). On the other hand, if the inequality in (2.4) does not hold, then one checks that u L,p (x) is a quadratic function of x with negative leading coefficient, hence is minimized at x = 0 or x = 1. Thus, in either case, we conclude that u L,p (x) ≥ min{u L,p (0), u L,p (1)} for 0 ≤ x ≤ 1. One checks that When we regard the above expression as a function of L, it is readily seen that u L,p (0) is convex in L such that We thus deduce that u L,p (0) ≥ 0 when 0 < p ≤ 1/3. On the other hand, we have u L,p (1) = a 1 (L, p) ≥ 0 by our assumption, where a 1 (L, p) is defined in (1.9). It follows that g L,p (x) ≥ u L,p (x) ≥ 0, hence f L,0,p (x) ≥ 0 for 0 < x ≤ 1. As f L,0,p (0) = f L,0,p (0) = 0, we then deduce that f L,0,p (x) ≥ 0 and this completes the proof for the case L λ ≥ 1 of the second assertion of Theorem 1.1.
To prove the case 0 < L λ < 1 of the second assertion of Theorem 1.1, we first note that One checks that It follows from this and the arithmetic-geometric mean inequality that Suppose that (1.6) is valid and 0 < L λ < 1. In this case we can also set L = L λ and M = 0 to see that inequality (2.7) becomes Calculation shows that One checks that v L,p (x) is a quadratic polynomial of x with a negative leading coefficient when L ≥ 2p. It follows that v L,p (x) ≥ min{v L,p (0), v L,p (1)} for 0 < x ≤ 1 and one checks that v L,p (0) = pu L,p (0), where u L,p (0) is defined in (2.6). Similar to our discussions for the case L > 1, one checks that u L,p (0) is convex in L such that
Proof of Theorem 1.2
First, we assume that (1.6) is valid and we set L = L λ in this case. It suffices to find a value of p such that a 2 (L, p) ≥ 0 by Theorem 1.1. Note that lim p→0 + a 2 (ap, p)/p < 0 when a > 1, it is therefore not possible to show a 2 (L, p) ≥ 0 by assuming that p ≤ L/a for any a > 1. We therefore seek to show a 2 (L, p) ≥ 0 for p ≤ L 2 /4. We first note that where the last inequality follows from the observation that the function p → 4 -5p -2 1-p is non-negative for 0 < p ≤ 1/4. This completes the proof for the first assertion of Theorem 1.2.
To prove the second assertion of Theorem 1.2, we see from the proof of Theorem 1.1 that it suffices to show the right-hand side expression of (2.7) is ≥ 1 for 0 < x ≤ 1. We simplify it to see that it is equivalent to showing that We assume that This implies that the function is a concave function of x, hence is minimized at x = 0 or 1. When x = 0, the above function takes the value 1. We further assume that the above function takes a value ≥ 1 when x = 1.
That is, We then deduce that We apply the above estimation and the estimation that 1 + 2Mx/L ≥ 1 in (3.1) to see that it suffices to show that, for 0 < x ≤ 1, We now assume that It is easy to see that v L,M,p (x) is a quadratic polynomial of x with negative leading coefficient The above inequality is certainly valid when L/p = 2. We may thus assume that L/p > 2 to see that the above inequality is a consequence of the following inequality: We apply the above estimates to see that inequality (3.5) is a consequence of the following inequality: L -2p -1p 2 (1 -L) L p ≥ (1 + p)(1 + L + M). (3.8) As inequality (3.7) implies inequalities (3.6) and (3.8), we first find values of p so that inequality (3.7) holds. To do so, we first simply inequality (3.7) by noting that L -2p -1p 2 (1 -L) ≥ 2L -1 -2p. | 2018-06-20T11:03:57.000Z | 2018-06-20T00:00:00.000 | {
"year": 2020,
"sha1": "f05af54dbef0d60eef674003228466205843f71f",
"oa_license": "CCBY",
"oa_url": "https://journalofinequalitiesandapplications.springeropen.com/track/pdf/10.1186/s13660-020-02339-3",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "45d1b94c783ea9718fe45236b380423b9f5df964",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
206957514 | pes2o/s2orc | v3-fos-license | Age-related macular degeneration in a randomized controlled trial of low-dose aspirin: Rationale and study design of the ASPREE-AMD study
Purpose Although aspirin therapy is used widely in older adults for prevention of cardiovascular disease, its impact on the incidence, progression and severity of age-related macular degeneration (AMD) is uncertain. The effect of low-dose aspirin on the course of AMD will be evaluated in this clinical trial. Design A sub-study of the ‘ASPirin in Reducing Events in the Elderly’ (ASPREE) trial, ASPREE-AMD is a 5-year follow-up double-blind, placebo-controlled, randomized trial of the effect of 100 mg daily aspirin on the course of AMD in 5000 subjects aged 70 years or older, with normal cognitive function and without cardiovascular disease at baseline. Non-mydriatic fundus photography will be performed at baseline, 3-year and 5-year follow-up to determine AMD status. Primary outcome measures The incidence and progression of AMD. Exploratory analyses will determine whether aspirin affects the risk of retinal hemorrhage in late AMD, and whether other factors, such as genotype, systemic disease, inflammatory biomarkers, influence the effect of aspirin on AMD. Conclusion The study findings will be of significant clinical and public interest due to a potential to identify a possible low cost therapy for preventing AMD worldwide and to determine risk/benefit balance of the aspirin usage by the AMD-affected elderly. The ASPREE-AMD study provides a unique opportunity to determine the effect of aspirin on AMD incidence and progression, by adding retinal imaging to an ongoing, large-scale primary prevention randomized clinical trial.
Introduction
Age-related macular degeneration (AMD) is a major cause of visual impairment and legal blindness amongst the elderly in developed countries [1–4]. Visual impairment from AMD leads to a loss of quality of life, with increased rates of depression, injury, social isolation and institutionalization. AMD is strongly associated with age, to the extent that more than 10% of adults aged over 80 are living with advanced AMD. Increasing life expectancy was estimated to double the number of people with this late-onset disease over the next two decades, with a substantial impact on quality of life and the costs of care [5–7]. The late forms of AMD lead to central vision loss as a result of neovascular (nAMD) complications or atrophic changes (geographic atrophy) in the retina. Anti-vascular endothelial growth factor (anti-VEGF) therapy has significantly improved the outcomes for nAMD, but there remains no proven treatment to specifically slow progression of geographic atrophy (GA). There is also no specific treatment that prevents progression from early or intermediate AMD to late AMD. Current recommendations, which are of limited efficacy, are centred upon the use of supplements, lifestyle and dietary advice [8–11]. As populations age, there is an imperative to delay the onset and progression of disability and chronic disease. Identifying an effective preventive agent for AMD, or one that can slow its progression, would have significant beneficial effects on quality of life as well as healthcare costs.
2. Rationale for the ASPREE-AMD study: a potential for aspirin to prevent or slow the AMD process
Inflammatory processes have been implicated in the pathogenesis of AMD and its progression, and AMD is considered by many to be a chronic, systemic inflammatory disease [12–16]. As such it is plausible that aspirin, via its anti-inflammatory actions, may play a role in both the prevention and slowing of progression to vision loss through a low-grade inflammatory process. This was the rationale for two previous sub-studies in large primary prevention trials of low dose aspirin that evaluated self-reported AMD incidence as a secondary outcome. In these trials, alternate-daily aspirin versus placebo was administered in a population of 22,071 US physicians aged 40–84 years, with 5 years of follow-up [17], and in a population of 39,876 women aged over 45 years, with 10 years of follow-up [18]. Both studies reported a similar effect size of low dose aspirin in reducing the relative risk of visually significant AMD by more than 20%. The studies used self-reported diagnoses, confirmed with medical record data, in order to reduce random misclassification. While both studies are suggestive of a beneficial effect of aspirin with respect to AMD, the reliance on self-reported diagnosis and the low significance of the results due to the low number of incident cases in the relatively young study populations limit the weight that can be given to this evidence.
Results from observational studies with respect to aspirin's influence on AMD incidence and prevalence have been inconsistent, ranging from generally positive ('harmful') associations with AMD prevalence or incidence, with emphasis on the increased frequency of sub-retinal or vitreous hemorrhages [19–24], to no association [25–32], to negative ('protective') association [33,34] (Table 1). Thus, the overall risk/benefit balance of aspirin in relation to incidence and prevalence of AMD is yet to be fully explored. In addition, if aspirin were to increase the risk of retinal or vitreous hemorrhage, this finding would have important implications when considering aspirin for widespread use in primary prevention. A growing need for a sufficiently powered randomized controlled trial to resolve the relationship between aspirin use and AMD was highlighted in several recent reviews and meta-analyses [23, 35–43]. The NIH-funded large randomized controlled trial, ASPirin in Reducing Events in the Elderly (ASPREE), designed to address the role of aspirin in primary prevention on disability- and dementia-free survival in older adults, provides the opportunity for a sizable sub-study to address this need [44]. Taking into account the likelihood that a number of actions of aspirin (anti-inflammatory, anti-platelet, etc.) are likely to be seen with 100 mg daily in ASPREE, its effect could be multifaceted. Thus, along with the tested possible preventive effect of suppressing the inflammatory process at earlier stages of AMD, aspirin's antiplatelet property might exacerbate the late AMD processes. This phenomenon will be closely looked into. By adding fundus photography to examine a sub-set of ASPREE participants, the ASPREE-AMD sub-study will determine the effect of low-dose aspirin on the course of AMD.
Study design
This study is a 5-year follow-up, double-blind, randomized placebo-controlled trial of daily 100 mg aspirin versus placebo on the incidence and progression of AMD, in a population of healthy Australians aged 70 years or older. ASPREE-AMD is a sub-study of the principal ASPREE trial.
Ethics statement
The ASPREE-AMD study has been approved by the relevant human research ethics committees, and ASPREE participants consented separately to retinal photography. Participants in two other sub-studies of ASPREE, which together include more than 900 ASPREE participants with retinal photographs taken with the same cameras at baseline and after 3 years of treatment with study medication, will also be included in the ASPREE-AMD sub-study. These two sub-studies are: (1) ENVISion (Aspirin for the prevention of cognitive decline in the Elderly: a Neuro-Vascular Imaging Study, a two-centre, randomized, double-blind, placebo-controlled trial of the effects of daily 100 mg enteric-coated aspirin on the rate of increase of magnetic resonance imaging (MRI)-based white matter hyperintensities (WMH) and silent brain infarctions (SBI), ACTRN12609000613202) [45]; (2) SNORE-ASA (Study of Neurocognitive Outcomes, Radiological and retinal Effects of Aspirin in Sleep Apnoea, a multicentre, randomized, double-blind, placebo-controlled trial of the effects of daily 100 mg enteric-coated aspirin on cognitive outcomes in the setting of sleep apnoea, in healthy older adults aged 70 and over, ACTRN 12612000891820).
The year 5 follow-up photography for these participants will be conducted as part of the ASPREE-AMD study.
The principal ASPREE trial
ASPREE is a multi-centre, randomized, double-blinded, placebo-controlled trial of daily 100 mg enteric-coated aspirin in 19,000+ healthy community-dwelling older adults in Australia and the US. Age eligibility is 65 years and over for African Americans and Hispanics in the USA, and 70 years and over for all others, including all in the ASPREE-AMD, ENVISion and SNORE-ASA trials [44]. ASPREE will determine whether 100 mg aspirin daily extends disability- and dementia-free survival in the elderly, and it is a primary prevention study. The ASPREE study methods have been described in detail elsewhere [44]. In brief, the majority of ASPREE Australian participants have been recruited through partnerships with general practitioner co-investigators. A minority has been recruited directly from the community. The primary endpoint of ASPREE is a composite of death or dementia (adjudicated according to the DSM-IV criteria) or persistent loss of one of the basic activities of daily living. Prespecified secondary endpoints include death, cardiovascular and cerebrovascular disease, cancer, cognitive impairment, depression, physical disability and clinically significant bleeding. Clinical endpoints are adjudicated by independent committees provided with de-identified clinical information about the event [44]. Inclusion criteria include men and women who were able to give informed consent and able to attend a study visit for an estimated period of five years. Exclusion criteria include a past history of cardiovascular event or established cardiovascular disease (including stroke, transient ischemic attack, myocardial infarction, unstable angina, coronary artery reperfusion procedures and bypass grafting, abdominal aortic aneurysm, cardiac failure), atrial fibrillation, dementia or score of <78 on Modified Mini-Mental State (3MS) examination, disability as defined by severe difficulty or inability to perform any of the 6 Katz Activities of Daily Living (ADLs) [46], a condition with a high current or recurrent risk of bleeding, anaemia, a condition likely to cause death within 5 years, current use of other antiplatelet or antithrombotic medication, current use of aspirin for secondary prevention, and uncontrolled hypertension.
Table 1 Studies of the association between regular aspirin intake and the course of age-related macular degeneration (AMD). (Columns: Details of study; Conclusion)
Case studies
el Baba, F. 1986 [25]: Retrospective case-control study: 50 cases with massive haemorrhages (15 on warfarin and 8 on aspirin) vs 50 cases with small AMD-related hemorrhages (2 on warfarin and 6 on aspirin); aspirin doses varied from 80 to 100 mg daily. Conclusion: Aspirin was not associated with massive hemorrhages in AMD, whilst warfarin was associated.
RCTs
[17]: RCT of alternate-day aspirin versus placebo in 22,071 US physicians aged 40–84 years, with 5 years of follow-up. Conclusion: Aspirin intake was not associated with self-reported AMD, RR 0.77; 95% CI 0.54–1.11. Additional testing for a potential beneficial effect in randomized trials of adequate size and duration is required; due to the relatively young age of the population, the detection of a 23% risk reduction was not statistically significant.
Christen, W. G. 2009 [18]: 10-year RCT of low-dose aspirin (100 mg every other day), US female health professionals; n = 39,876. Conclusion: Aspirin intake was not associated with self-reported AMD, hazard ratio 0.82; 95% CI 0.64–1.06.
ASPREE Investigator Group 2013 [44]: Australian-American RCT "ASPirin in Reducing Events in the Elderly" (ASPREE); n = 19,000, aged 65 years or above ("US minorities") and 70 years or above (non-"US minorities"); 100 mg of aspirin daily. Design: In contrast to other aspirin trials that have largely focused on one disease/group of diseases, ASPREE has a unique composite primary endpoint, 'disability-free survival', to capture the overall risk and benefit of aspirin, aiming to extend healthy independent lifespan. Conclusion: Pending results.
Meta-analyses
Kahawita, S. K. 2014 [41]: 2 cross-sectional, 1 population-based incidence and 1 cohort study; n = 10,292. Conclusion: High heterogeneity was emphasized. Aspirin use was associated with early ARMD; pooled OR 1.43 (95% CI 1.09–1.95).
Ye, J. 2014 [40]: 2 RCTs, 3 cohorts and 4 case-control studies; n = 177,683. Conclusion: Aspirin use in total was not associated with AMD in total. Neovascular AMD was associated with aspirin use: RR 1.59; 95% CI 1.09–2.31.
Zhu, W. 2013 [35]: 2 RCTs, 4 case-control and 4 cohort studies; n = 171,729. Conclusion: Aspirin use was not associated with increased risk for any ARMD (pooled RR 1.09, 95% CI 0.96–1.28), and it was so for either early or late AMD, in RCT, case-control or cohort studies.
Li, L. 2015 [23]: Ten studies.
Reviews
[37]: Analysis of findings and limitations of two RCTs, two population-based cross-sectional studies, one cohort and one case-control study. Conclusion: Current data on the association between aspirin use and AMD do not fulfil criteria to declare that a causal relationship exists. The criteria that are not met include consistency, sufficient strength of association and specificity.
Wu, Y. 2013 [36]: Aspirin and Age Related Macular Degeneration; the Possible Relationship. Review of multiple studies. Conclusion: Evidence from the epidemiological studies has been contradictory and no persuasive conclusions have been made. "The current results should be challenged and acknowledged by well-designed, large-scale and long term follow-up studies."
Christen, W. G. 2014 [42]: Aspirin and risk of Neovascular AMD. Review of multiple studies. Conclusion: The inherent limitations of observational studies preclude an interpretation of causality. Well-designed randomized trials of sufficient size and duration are needed to establish the risk/benefit ratio of aspirin use by individuals at low-to-moderate risk of cardiovascular disease.
Chong, E. W. 2014 [39]: Aspirin and risk of AMD. Review-Commentary. Conclusion: Discussed the controversy surrounding aspirin use and highlighted the ongoing randomized controlled trial ASPREE.
Abbreviations: AMD, age-related macular degeneration; RCT, randomized controlled trial; GA, geographic atrophy; CNV, choroidal neovascularisation; OR, odds ratio; RR, relative risk; CI, confidence interval.
Participants meeting initial ASPREE eligibility at a screening study visit underwent a four-week placebo run-in phase, and those with compliance equal to or greater than 80% were randomized. Randomization of study drug followed a block randomization procedure and was stratified by site and age (65–79 and 80+ years). An independent statistician generated the randomization list using the STATA 'ralloc' procedure. Participants were randomized to receive either 100 mg of enteric-coated aspirin or enteric-coated placebo, which are identical in appearance, in a ratio of 1:1. A 12-month supply of study medication was dispensed at trial entry and thereafter at each annual visit. Study participants, investigators and general practitioner co-investigators remain masked to treatment allocation.
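A stratified permuted-block allocation of the kind described above can be illustrated with a minimal sketch. This is not the STATA 'ralloc' procedure actually used in ASPREE; the block size, strata labels and seeds below are invented for illustration only.

```python
import random

def permuted_block_allocation(n_participants, block_size=4, arms=("aspirin", "placebo"), seed=0):
    """Generate a 1:1 allocation list from randomly permuted blocks."""
    assert block_size % len(arms) == 0, "block size must be a multiple of the number of arms"
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_participants:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)          # permute the treatment order within each block
        allocation.extend(block)
    return allocation[:n_participants]

# One independent list per stratum (site x age group), mirroring stratification by site and age.
strata = [("site_A", "65-79"), ("site_A", "80+"), ("site_B", "65-79"), ("site_B", "80+")]
allocation_lists = {s: permuted_block_allocation(100, seed=i) for i, s in enumerate(strata)}
print(allocation_lists[("site_A", "80+")][:8])
```

Permuted blocks keep the 1:1 ratio approximately balanced within each stratum at all times, which is why the procedure is stratified by site and age rather than applied to the whole cohort at once.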
All ASPREE participants have face-to-face study visits annually, with quarterly telephone contact in between visits. The 6-month phone call ascertains additional information relevant to study endpoints, including persistence of functional impairment.
Annual ASPREE data collected and assessments conducted include: demographics, cognitive function, physical function, quality of life, blood pressure, cardiovascular biomarkers, health behaviours and lifestyle (Table 2). Compliance with study medication is checked by annual pill count. Clinical endpoints of the study are being adjudicated and confirmed. The ASPREE study began in 2010 and completed recruitment in December 2014, with 16,703 participants in Australia and 2411 in the USA, and will conclude in 2017/2018.
The ASPREE-AMD trial
The ASPREE-AMD trial involves the acquisition of digital retinal images of both eyes at baseline, 3 and 5 years in ASPREE participants after randomization to aspirin or placebo.
All Inclusion criteria for the parent ASPREE study applied to this project. All consecutive randomized ASPREE participants at each centre who also gave informed consent for retinal photography and were able to attend a retinal photography visit were deemed eligible for entry into the ASPREE-AMD sub-study.
All Exclusion criteria for the parent ASPREE study also applied, with the additional criterion of the examiner being unable to view the macula without pharmaceutical dilation to take a retinal image (mainly due to either ocular media opacity or small and rigid pupil).
Participants with bilateral late AMD at baseline were still enrolled, but will be excluded from the analysis of AMD incidence or progression. These participants will be followed up for assessment of the potential worsening of the condition due to possible new or recurrent hemorrhage.
Retinal photography
Two digital, 45°, non-stereoscopic, colour retinal photographs of each eye, with one image centred on the fovea and one on the optic disc, are taken on one of nine non-mydriatic fundus cameras (Canon Inc., Tokyo, Japan), using Digital Health Care software (UK). Non-mydriatic digital retinal imaging has been proven to be a reliable method of AMD detection [47–49]. These fundus cameras are either (1) located permanently at four ASPREE study stationary centres in three Australian states, or (2) installed permanently in three specifically designed high-roofed clinic vehicles (Mercedes vans) operated from the Melbourne site to engage participants from remote regional and rural areas in the study, or (3) shipped on a regular basis in flight cases from Melbourne to the most distant areas, with trained research staff travelling to and assembling the cameras at the pre-organized sites. The use of several mobile photographic units allowed involvement of the rural and regional Australian population in this research.
Photographs are taken without pharmacological pupil dilation. The right eye is photographed first, with sufficient time (up to 5 min) allowed between the following photo shots for the pupils to recover from flash-induced constriction. Staff members were trained to assess image quality and re-capture if images are unsatisfactory. The images and participant identifiers are backed up on the portable hard discs and bulk-exported from each camera on a monthly basis. Prior to uploading the images to the ASPREE database, the batches of the exported images are converted into JPEG format and labelled (each image) with the participants' data entered into the computers linked to the fundus cameras, using for both procedures a custom-written script for automated bulk processing. Four pre-specified identifiers (participant ID, acrostic that consists of the combination of the shortened last and first names, date of imaging and DHC code) are used to match images with the ASPREE database records of the participants during bulk uploading of the images. The batches of images are initially screened for signs of clinically significant pathology requiring medical attention and if needed, the notification letters, automatically generated via the ASPREE database, are sent to the participants and their general practitioner. De-identified images are transferred to the ASPREE-AMD retinal image database housed on a secure server for detailed grading. Images are graded for AMD according to the Beckman classification by two independent, masked experienced graders [50]. Grading process closely follows the timing of image acquisition, aiming to complete the grading soon after completion of image collection. During side by side grading, the temporal sequence of photos will not be masked. In this study, the image labels include the date of photography as one of the identifiers important for data validation. Deleting the dates and relabelling the images would increase risk of errors, mostly due to the large scale of the study. However, the graders will at all times be independent of each other and masked to the allocation of study medications.
The graders assess the quality of the image (for focus and field placement), as well as the presence, size and location of the AMD-related lesions within a 6000 μm circular grid calibrated for size on the optic disc and centred on the fovea [47]. Grading is checked periodically for inter-grader agreement and intra-grader repeatability on random selections of images.
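Agreement checks of this kind are often summarised with Cohen's kappa; the sketch below is illustrative only, since the protocol does not specify which agreement statistic is used, and the grade labels and example ratings are invented.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical Beckman grades assigned independently by two masked graders to the same 10 images
grader_1 = ["none", "early", "early", "intermediate", "late", "none", "early", "intermediate", "none", "late"]
grader_2 = ["none", "early", "intermediate", "intermediate", "late", "none", "early", "early", "none", "late"]

kappa = cohen_kappa_score(grader_1, grader_2)
print(f"Inter-grader agreement (Cohen's kappa): {kappa:.2f}")
```

The same calculation applied to repeat gradings by a single grader gives an intra-grader repeatability estimate.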
Incident pathology or cases of progression will be confirmed via side-by-side comparison of baseline and follow-up images from the same participant and adjudicated by an ophthalmologist (RG) when required. After assessment of the baseline and follow-up retinal images for incidence, progression and severity of AMD, the effect of aspirin on the course of AMD will be analysed.
Adverse events
All adverse events and serious adverse events are reported according to good clinical practice guidelines and handled by the principal ASPREE study. An independent Data and Safety Monitoring Board (DSMB), established by the National Institute on Aging, monitors all ASPREE activities on a 6 monthly basis. Clinically significant retinal pathology at baseline or follow-up is reported back to the participant's primary care physician and to the participant. Participants with any bleeding disorder, including retinal hemorrhage, may be taken off study medication.
Acquisition of additional information on adverse events and anti-VEGF treatment
In the last decade, intravitreal anti-VEGF therapy for choroidal nAMD has been implemented in Australia. Once treated with anti-VEGF, signs of nAMD may not be visible on colour fundus images. Therefore, to improve the accuracy of diagnosis and correctly identify those who are receiving treatment with intravitreal ranibizumab and aflibercept between ASPREE-AMD baseline and follow-up time points, additional information will be obtained from two sources to ensure the development of nAMD is captured: participant self-reported adverse events validated through medical records and the National Medicare data on intravitreal ranibizumab and aflibercept prescriptions being approved for nAMD treatment during the study period. These data will not capture those participants who are being treated with bevacizumab, an off label drug used for nAMD. As ranibizumab and aflibercept are fully subsidized through the pharmaceutical benefits scheme (PBS) in Australia for treatment of subfoveal nAMD, treatment with bevacizumab is likely to be uncommon. As anti-VEGF treatment via the PBS is used now for other retinal conditions as well, the diagnosis of AMD will be validated via medical records and the collected data on adverse events.
The data on anti-VEGF medications for nAMD will be collected for all ASPREE participants (not just those in ASPREE-AMD), as consent for their retrospective information to be obtained from Medicare is included in the ASPREE protocol, and this will provide an additional opportunity to conduct analysis on the effect of aspirin on the incidence of nAMD on a sample which will be considerably larger than the sample photographed as part of ASPREE-AMD.
Definitions of AMD
The following Beckman risk categories of AMD will be used in the analysis [50]:
No apparent aging change: no drusen and no AMD pigmentation abnormalities.
Normal aging changes: only drupelets (small drusen <63 μm) and no AMD pigmentation abnormalities.
Early AMD: medium drusen (63 μm to <125 μm) with no AMD pigmentation abnormalities.
Intermediate AMD: medium drusen (63 μm to <125 μm) with AMD pigmentation abnormalities, or large drusen (≥125 μm).
Advanced AMD: neovascular AMD (nAMD) or geographic atrophy.
3.1.8. Primary outcome measures
1) Incident AMD. Any case that progresses from bilateral 'normal' or 'normal aging change' to any grade of AMD in at least one eye will be classified as incident AMD.
2) Progression of AMD. An increase in the AMD severity status from early or intermediate AMD in either eye will be classified as AMD progression. Regression of AMD stage will also be documented.
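The severity scale and outcome definitions above can be expressed as a simple per-eye decision rule. The sketch below is illustrative only: drusen sizes are in micrometres, and the handling of edge cases (for example, pigmentary change without drusen) is an assumption rather than part of the grading protocol.

```python
def beckman_category(max_drusen_um, amd_pigment_abnormality, neovascular=False, geographic_atrophy=False):
    """Assign a Beckman AMD severity category to a single eye (illustrative decision rule)."""
    if neovascular or geographic_atrophy:
        return "late AMD"
    if max_drusen_um >= 125 or (max_drusen_um >= 63 and amd_pigment_abnormality):
        return "intermediate AMD"
    if max_drusen_um >= 63:
        return "early AMD"
    if max_drusen_um > 0:
        return "normal aging changes"   # drupelets (<63 um) only
    return "no apparent aging change"

print(beckman_category(80, amd_pigment_abnormality=False))  # early AMD
print(beckman_category(80, amd_pigment_abnormality=True))   # intermediate AMD
print(beckman_category(40, amd_pigment_abnormality=False))  # normal aging changes
```

Incident AMD then corresponds to a participant moving from the first two categories in both eyes at baseline to any AMD grade in at least one eye at follow-up, and progression corresponds to a worsening of the per-eye category among those with early or intermediate AMD at baseline.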
Future genetic and inflammatory biomarker analyses
Another sub-study of the principal ASPREE study, the ASPREE Healthy Ageing Biobank (www.aspree.org), in parallel with ASPREE-AMD, collects, processes and stores components of blood and urine samples at baseline and at year 3 in the trial, with serum, EDTA plasma, sodium citrate plasma, red blood cells and buffy coat aliquots stored for future biomarker and genotyping analysis. Association studies will be conducted in relation to potential biomarker variables (inflammatory markers, AMD-related genetic polymorphisms) and AMD incidence and progression, as well as their possible influence on the effect of aspirin on the primary outcomes.
Sample size and study power
No single population-based study provided all relevant information that we required for sample size calculations, hence we used data from several studies.
Prevalence estimates: we used the results from the crosssectional European Eye Study, conducted on participants of similar ethnic origin and similar age (65 years or older), and also used digital images of the retina [51]. The study found that approximately half of the participants had no AMD, about one third had medium drusen (early AMD) and about 15% had large drusen (intermediate AMD).
From an expected sample of 5000 ASPREE-AMD participants with gradable images at baseline, excluding an estimated 2% with existing 'late AMD' at baseline and allowing for a 4% per annum attrition, the cohort will have 3995 participants at 5-year follow-up.
Incidence estimates: Based on the age-specific data (70–79 and 80+ years) from the population-based longitudinal Melbourne Visual Impairment Project (VIP), also conducted in Victoria, Australia, among an estimated 1998 participants with no AMD at baseline (estimated to be 50% of 3995 followed-up participants), the expected 5-year incidence of early and intermediate AMD (combined) is approximately 20% [52].
Progression estimates: (i) Among an estimated 1398 participants with early AMD (medium size drusen) at baseline (35% of 3995 participants), we expect 35% to have 5-year progression to intermediate AMD.
For this estimate we used the only available data, the AREDS study finding that medium-sized drusen (63–125 μm) progress in five years to large drusen (≥125 μm) at the rate of 20% if drusen are in one eye only and at the rate of 50% if drusen are in both eyes [50]. As there is no data on the proportion of unilateral and bilateral drusen in a population aged 70 years or older, we took an average of 35%. (ii) Among an estimated 1998 participants with either early AMD or intermediate AMD at baseline, we expect at least 4% to progress to late AMD, based on the published late AMD incidence rates amongst people 70 years or older in two longitudinal studies, the Melbourne VIP and Blue Mountains Eye Study (BMES), both conducted in the Australian population [52,53].
Based on these estimates, our study will provide 80% power with two-sided alpha of 0.05 to detect: (1) 24% reduction of early/ intermediate AMD incidence, (2) 20% reduction of progression from early to intermediate stage of disease and (3) 53% reduction of progression from early or intermediate stage to late stage.
The detectable 5-year changes between the placebo and aspirin treatment groups in incidence and progression of AMD are provided in Table 3. Competing risks of death and debilitating diseases that might cause differential survivorship in study arms will be considered in the analysis [54,55].
Our power calculation describes percent reductions that can be detected with 80% power based on the observed effects, which are innately weaker than the "true" effect that could exist if everyone remained on their randomized treatment. Therefore, naturally occurring incomplete compliance has been incorporated into the estimates. The reasons for non-compliance include the development of a non-fatal, non-disabling cardiovascular or cerebrovascular event necessitating aspirin therapy. Thus, the ASPREE protocol specifies an expectation that 5% per annum of placebo-group participants will initiate aspirin use and, similarly, that 5% of aspirin-group participants will cease taking study medication and not commence open-label aspirin use.
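The quoted detectable reductions can be reproduced approximately with a standard two-proportion power calculation. The sketch below is illustrative: it uses the normal-approximation machinery in statsmodels and the group sizes implied by the text, and it ignores attrition, non-compliance and competing risks.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

def detectable_rate(p_control, n_per_arm, power=0.80, alpha=0.05):
    """Smallest treatment-arm event rate distinguishable from p_control with the given power (two-sided)."""
    solver = NormalIndPower()
    p_treat = p_control
    while p_treat > 0.001:
        p_treat -= 0.001
        effect = proportion_effectsize(p_control, p_treat)   # Cohen's h
        if solver.power(effect_size=effect, nobs1=n_per_arm, alpha=alpha) >= power:
            break
    return p_treat

# ~1998 participants free of AMD at baseline, i.e. roughly 999 per arm, 20% control-arm incidence
p_detectable = detectable_rate(0.20, 999)
print(f"detectable incidence: {p_detectable:.3f} "
      f"({(0.20 - p_detectable) / 0.20:.0%} relative reduction)")
```

With these inputs the detectable treatment-arm incidence comes out at roughly 15%, consistent with the 24% relative reduction stated above for early/intermediate AMD incidence.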
Statistical analysis plan
The primary analyses will be conducted using the intention-to-treat principle, i.e. according to the group to which participants were randomized and without reference to their actual compliance with assigned treatment. We will use the grading data from the participants with baseline and 5-year images and apply log-binomial regression models to directly compare event rates between treatment groups, to assess the effect of aspirin on the outcomes: AMD incidence and progression. Each model will be applied to the relevant eligible participant subset (see Table 3 for expected numbers) and the models will include a binary covariate to indicate randomization to aspirin or placebo; the parameter for this covariate can be translated as the estimated rate ratio for aspirin.
Secondary analyses will apply the same models but with adjustment for age, sex and smoking status at baseline, and further analyses will also adjust for any variables predictive of AMD progression and found to be different between the two groups at baseline.
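A log-binomial model of the kind described (with or without adjustment covariates) might be fitted as in the following sketch; the data here are simulated and all variable names are assumptions, not study variables.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 4000
df = pd.DataFrame({"aspirin": rng.integers(0, 2, n)})
# Simulated 5-year incidence: 20% on placebo, true rate ratio 0.8 on aspirin
df["incident_amd"] = rng.binomial(1, np.where(df["aspirin"] == 1, 0.16, 0.20))

# Binomial family with a log link, so exp(coefficient) is a rate (risk) ratio
fit = smf.glm("incident_amd ~ aspirin", data=df,
              family=sm.families.Binomial(link=sm.families.links.Log())).fit()
rate_ratio = np.exp(fit.params["aspirin"])
ci_low, ci_high = np.exp(fit.conf_int().loc["aspirin"])
print(f"rate ratio for aspirin: {rate_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```

Adjustment terms (for example '+ age + sex + smoker') and treatment-by-subgroup interaction terms (for example 'aspirin:smoker') would be added to the same formula; log-binomial fits can fail to converge, in which case a Poisson model with robust standard errors is a commonly used fallback.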
An exploratory analysis will be undertaken to determine whether aspirin is associated with increased risk of retinal hemorrhages.
Given the large sample size, we anticipate that randomization will adequately balance baseline characteristics of participants between the two treatment groups. However, the use of aspirin may affect survivorship itself, which plays a major role in AMD statistics. Therefore, if the follow-up loss of retinal data due to death and disability is found to be unbalanced between the study arms, it will be included in the statistical model as a competing risk [55].
To assess sensitivity to participant dropout, the analyses will be repeated within a multiple imputation process, in which the imputation model will include 3-year image information, baseline characteristics and 5-year outcomes. Additionally, an analysis will be undertaken using baseline, 3-year and 5-year information from each participant in mixed effect models that are extensions of the log-binomial regression models, with the inclusion of a random effect for participant to allow for intra-person correlation in outcome over time.
Table 3 Detectable risk reduction (%) in 3995 participants with gradable images free of late AMD at baseline in cumulative 5-year incidence or progression of AMD.
A "per protocol" analysis will also be conducted for each outcome using the recorded data on study drug compliance and commencement of open-label aspirin use during follow-up, with the aim of estimating complier-averaged causal effects of aspirin. The results of both intention-to-treat and "per protocol" analyses will be reported.
Pre-specified sub-group analyses
Subgroup analyses will use interaction terms involving the randomization covariate to examine whether variations in systemic diseases and inflammatory biomarker status influence the effect of aspirin on AMD. Pre-specified subgroup analyses will be undertaken by age and smoking status:
a) Age below and above the study median: the balance of the AMD-related risks and benefits of aspirin may differ between age groups as a result of different rates of mortality, cardiovascular risk, cognitive decline, other disability and risk of adverse effects.
b) Smoking: current versus never or former smokers. Smoking is a major, well-proven external risk factor for AMD, and the effect of aspirin may differ between smokers and non-smokers.
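A hedged sketch of how such a subgroup (interaction) analysis might be coded is shown below, again with placeholder data and column names; the `aspirin * smoker` formula term expands to the main effects plus their interaction, whose coefficient tests whether the aspirin effect differs by smoking status.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Placeholder data: randomized arm, current-smoker indicator, 5-year outcome.
df = pd.DataFrame({
    "aspirin":   [1] * 4 + [0] * 4 + [1] * 4 + [0] * 4,
    "smoker":    [1] * 8 + [0] * 8,
    "amd_event": [1, 1, 0, 0,  1, 1, 1, 0,  1, 0, 0, 0,  1, 1, 0, 0],
})

# "aspirin * smoker" expands to aspirin + smoker + aspirin:smoker; the
# interaction coefficient tests whether the aspirin effect differs by
# smoking status, mirroring the pre-specified subgroup analysis.
fit = smf.glm(
    "amd_event ~ aspirin * smoker",
    data=df,
    family=sm.families.Binomial(link=sm.families.links.Log()),
).fit()
print(fit.summary())
```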
Reporting of aspirin effects will be stratified by any covariate found to hold an interaction with the randomization variable.
In the future, we will also examine common genetic variants associated with AMD to determine whether the effect of aspirin is influenced by genotype.
Discussion
In addition to its anti-inflammatory, antipyretic and analgesic qualities, aspirin, as a drug proven for the secondary prevention of occlusive cardiovascular events, has become the world's most widely used pharmaceutical drug, particularly among older adults. AMD is the most common cause of vision impairment in people over the age of 50 years in our community and it has profound effects on vision and quality of life. The ASPREE-AMD randomized clinical trial provides a unique opportunity to examine whether long-term low-dose aspirin influences the incidence or progression of AMD.
Aspirin is established for the management of acute cardiovascular diseases (CVD) and their secondary prevention [56–59]. The role of aspirin in primary prevention of CVD is less defined and is currently under further investigation [44,54,60–64]. Nevertheless, it is widely used by older adults, with 20%–50% of older persons in the USA, without cardiovascular disease, being regular users of aspirin [65,66]. The role of low-dose aspirin in cancer prevention and management is also under investigation [65,67–70].
A number of studies have been undertaken to establish the association of aspirin with the incidence and progression of AMD, but the results have been inconsistent. Self-reported data from two large randomized controlled trials suggest a beneficial effect of aspirin on AMD [17,18]. Recently, considerable publicity was given to the results of cross-sectional and cohort studies, which reported that aspirin exacerbated the AMD process and contributed to blindness [20–22]. In the latest report from the Blue Mountains Eye Study (BMES), the 15-year cumulative nAMD incidence was 9.3% in aspirin users and 3.7% in nonusers. Users were defined as those who took aspirin (doses were not recorded) once or more per week in the year before baseline. Adjusted for age, sex, smoking, history of cardiovascular disease, systolic blood pressure, and body mass index, nAMD was associated with regular use of aspirin (OR 2.46; 95% CI, 1.25–4.83). In this study, aspirin use was not updated at follow-up examinations and systematic survival bias could have affected the results, as 46% of participants from the original BMES cohort were not available for the 15-year follow-up. Aspirin users may have had prolonged survival, allowing them to develop AMD. Competing risks of death were not adjusted for in the BMES study. Thus, the risk/benefit profile of aspirin use with respect to AMD remains unresolved. These issues are important given that aspirin is already the world's most widely used therapeutic agent, being taken regularly by more than 100 million individuals [71].
The principal ASPREE trial will be informative on the use of aspirin for primary prevention of death and of physical and cognitive disability in an elderly population, whilst ASPREE-AMD will clarify whether aspirin is efficacious as primary prevention for AMD and useful in slowing its progression. Additionally, it will allow the question of a possible increased risk of retinal hemorrhage associated with nAMD to be addressed. The ASPREE-AMD study differs from other studies in that it is a randomized controlled trial, with photo-documented detailed AMD assessment and a large sample size in at-risk, relatively healthy participants aged 70 years or older. The detailed records of exposure, which include checking pill-taking routines at every 3-monthly phone call and counting untaken pills in returned containers, add weight to the study's merit.
As part of ASPREE, other health outcomes will be captured, which can be used when interpreting the ASPREE-AMD results.
A weakness of this study is the reliance upon only nonstereoscopic colour fundus photography to diagnose AMD, without the added benefit of multimodal imaging, such as optical coherence tomography, auto-fluorescence and infrared imaging that would allow for more thorough phenotyping. In particular, our ability to detect reticular pseudo-drusen, a risk factor for progression to late AMD, is limited [72–74]. However, the randomization should ensure equal distribution of this limitation across both groups. Another weakness of reliance upon colour imaging is that nAMD may not be detectable on colour images due to the use of anti-VEGF treatment, which can lead to underestimation of the number of nAMD cases. This will be mitigated by including information collected through registered adverse events in ASPREE, validated through medical records, and also from linking the ASPREE-AMD data to Australian Federal Government Medicare electronic records on the use of the anti-VEGF intravitreal injections.
Conclusion
The results of the study are likely to be of substantial clinical significance and public health importance, given the potential for the ASPREE-AMD sub-study to identify a possible low-cost therapy for preventing or slowing the progression of AMD worldwide. The large size of the trial will also provide information that may indicate sub-groups of people who could benefit more, or less, than the overall population. Similarly, the trial will establish how the presence of various common co-morbidities influences the risk/benefit of aspirin.
Bayer Schering Pharma provided aspirin and matching placebo. The ASPREE-AMD study has been supported by the NHMRC [grant #1051625], the NIH through the National Eye Institute [grant #RO1-1R01EY026890-01], the Phyllis Connor Memorial Trust and the Eric Ormond Baker Charitable Trust.
CERA receives Operational Infrastructure Support from the Victorian Government. RG is supported by the NHMRC Research Fellowship (#1103013).
The funders had no role in study design, data collection, decision to publish or preparation of this manuscript.
Other acknowledgements: The investigators acknowledge the work of all ASPREE retinal photographers and field research staff members who conduct study visits and collect data from ASPREE participants. The investigators also acknowledge the valued contribution of the ASPREE participants and the support from their general practitioners. | 2018-04-03T02:40:33.585Z | 2017-03-27T00:00:00.000 | {
"year": 2017,
"sha1": "5dee9a99759d5ef93935e3ff36da5701cbc516e4",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.conctc.2017.03.005",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d22d9f1fa0452b0279f2954a766f3fcb9ec2c3c3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
269771453 | pes2o/s2orc | v3-fos-license | Transoral laser exoscopic surgery of the larynx: state of the art and comparison with traditional transoral laser microsurgery
SUMMARY Objective To evaluate the efficacy of transoral laser exoscopic surgery (TOLES) in a unicentric series of patients affected by benign and malignant glottic and supraglottic lesions, and compare outcomes with those of transoral laser microsurgery (TOLMS). Methods To demonstrate the non-inferiority of TOLES in terms of operative time, margin status and complication rates, we compared outcomes of 93 patients treated by TOLES between July 2021 and July 2023 with those of a match-paired group of 107 historical patients treated by TOLMS. To perform a multiparametric ergonomic evaluation of TOLES vs TOLMS, we used observational methods for biomechanical overload risk assessment and wearable technologies, comparing 15 procedures with TOLES vs a matched group of 13 surgeries performed with TOLMS by the same surgeon. Results No significant differences were found in terms of surgical duration, positive margins, or complications between TOLES and TOLMS. Ergonomics assessment by inertial measurement units and electromyographic surface electrodes demonstrated a reduced biomechanical overload with TOLES compared to TOLMS. Conclusions The many advantages of TOLES, such as its superior didactic value, better digital control of light even through small-bored laryngoscopes, improved binocular vision, and increased surgical performance with 3- or 4-hand techniques, are difficult to quantify. In contrast, its non-inferiority in terms of oncological results and better ergonomics compared to TOLMS are demonstrated herein.
Introduction
In the last two decades, technological advancements in the field of minimally-invasive laryngeal surgery have led to the development of new devices and instruments aimed at improving laryngeal visualisation during transoral procedures and enhancing surgical manoeuverability inside the narrow space of the operative laryngoscope. Especially for removal of premalignant and early-intermediate tumours of the glottis, supraglottis, and hypopharynx, a number of different techniques have been used, ranging from traditional transoral laser microsurgery (TOLMS) 1 to transoral ultrasonic surgery (TOUSS) 2, transoral robotic surgery (TORS) [3][4] and, more recently, transoral laser exoscopic surgery (TOLES) 5, which represents the target of the present comparative study. TOLES is based on a novel visual technology known as "exoscopy", which is driven by recent advancements in endoscopic cameras. The term "exoscopy" refers to the use of an external digital device designed to improve the visualisation of the surgical field by delivering high-definition imagery, strong illumination and excellent magnification 5. Its use in laryngeal surgery was first conceived in 2011 by the Storz company. However, at that time, the image of the surgical field was presented in only two dimensions, thus leading to a significant loss of image depth and surgical limitations in the accomplishment of fine, minimally-invasive endolaryngeal manoeuvers 6. Subsequently, in 2017, the VITOM 3D exoscope system (Karl Storz, Tuttlingen, Germany) was released, which, in contrast, allows for a three-dimensional (3D) perception of object volume and structural depth, enabling more precise control of fine movements, and enhancing hand-eye coordination by using dedicated glasses that allow the surgical field to be seen on a wide screen [7][8][9]. This technology was first applied to limited surgical series, mainly in the fields of neurosurgery [10][11] and urology 12. In otorhinolaryngology, an increasing number of studies have demonstrated a growing interest in exoscopic surgery, mainly in otoneurosurgery 13, oropharyngeal surgery 14, and reconstructive microvascular techniques 15, showing various advantages over the traditional gold standard, i.e., the operative microscope. The known advantages of exoscopic surgery include improved ergonomics, reduced operator fatigue, the ability to provide high-quality, collaborative visualisation for the entire surgical team with the possibility to perform a 3- or 4-hand technique, and the possibility of adding real-time filtered wavelengths 5. The aim of this study is to evaluate the efficacy of TOLES in a large unicentric series of patients affected by benign and malignant glottic and supraglottic lesions, comparing outcomes with those of the gold standard TOLMS. In particular, we focused on two aspects: 1) to establish the non-inferiority of TOLES in the treatment of laryngeal diseases, in terms of operative time, margin status and complication rates; and 2) to perform a multiparametric ergonomic evaluation of TOLES to quantify its improvement in comparison to TOLMS, using observational methods for biomechanical overload risk assessment and wearable technologies.
Materials and methods
A prospective analysis of consecutive patients treated with TOLES between July 20, 2021 and July 20, 2023 by a single surgeon (C.P.) was conducted at the Department of Otorhinolaryngology - Head and Neck Surgery, University of Brescia, Italy. Surgical procedures were performed with a VITOM 3D coupled with a Lumenis Ultrapulse Encore 60 (Santa Clara, California, USA) with superpulse delivery in continuous mode (1 to 5 Watts) and an Acuspot 712 micro-manipulator (270 μm spot size). The holding system was an ARTip Cruise robotic arm (Karl Storz, Tuttlingen, Germany). Informed consent was collected and signed by every patient in the study. Inclusion criteria were: a) age older than 18 years; b) benign or neoplastic laryngeal lesions amenable to transoral resection. Exclusion criteria were: a) patients unfit for transoral surgery because of difficult/impossible laryngeal exposure (as preoperatively assessed by the Laryngoscore 16 and/or its simplified version, the mini-Laryngoscore 17); b) simple biopsy procedures. Laryngeal flexible fibreoptic endoscopy was always performed before surgery, both by white light and narrow band imaging (NBI). Surgery was performed under general anaesthesia. After placing the patient in the Boyce-Jackson position, laryngeal exposure was achieved using different kinds of operative laryngoscopes according to the site of the lesion (supraglottic vs glottic) and degree of exposure 16. Depending on the pre- and intraoperative evaluation of the laryngeal disease to be treated, different surgical procedures were performed: a) excision of benign lesions; b) different types of endoscopic cordectomies according to the European Laryngological Society (ELS) classification 18; c) endoscopic supraglottic laryngectomies according to the ELS classification 19; d) other functional procedures (i.e., posterior cordotomy, lysis of synechiae). Demographics, tumour characteristics (histological grading and pathological TNM classification), and treatment-related characteristics (type of surgery performed, type of laryngoscope used, operating time, surgical margin status and complications) were prospectively collected into an anonymous database. Concerning histological margins, R0 was defined as a distance from the lesion > 1 mm, R0 close as a distance from the lesion < 1 mm, and R1 when the surgical margin was involved by at least carcinoma in situ. Variables included in the analysis were expressed in terms of median, interquartile range (IQR), range of values, and percentages. The main demographics (age at surgery, gender) and pathological features (margin status) were compared using the chi-square and Student's t tests, as appropriate; p values < 0.05 (two-tailed) were considered statistically significant.
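As a rough illustration of the comparisons described above (not the study's actual code or data), the snippet below applies a chi-square test to a 2 x 2 gender table and an unpaired Student's t-test to simulated age samples using SciPy; all numbers are placeholders.

```python
import numpy as np
from scipy import stats

# Hypothetical summaries (placeholder numbers, not the study data).
# Gender distribution: rows = TOLES / TOLMS, columns = male / female.
gender_table = np.array([[80, 13],
                         [90, 17]])
chi2, p_gender, dof, expected = stats.chi2_contingency(gender_table)

# Age at surgery in the two groups (simulated placeholder samples).
rng = np.random.default_rng(0)
age_toles = rng.normal(63, 10, 93)
age_tolms = rng.normal(67, 10, 107)
t_stat, p_age = stats.ttest_ind(age_toles, age_tolms)

print(f"Chi-square p (gender): {p_gender:.3f}")
print(f"Student's t-test p (age): {p_age:.3f}")
```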
Comparisons
In the first part of the study, patients treated by TOLES between July 20, 2021 and July 20, 2023 (N = 93) were compared with a match-paired group of patients treated by TOLMS between April 15, 2019 and July 15, 2021 (N = 107) to demonstrate the non-inferiority of TOLES vs TOLMS in terms of operating time, surgical margins, and complication rate.
In the second part of the study, a subgroup of 15 patients treated by TOLES between July 20, 2021 and July 20, 2023 was compared with 13 patients treated in the same time frame by the same surgeon (C.P.) by CO2 TOLMS, with similar diseases and procedures of similar length, to assess the ergonomics of the two surgical tools (exoscope vs microscope). In particular, these two approaches were subjected to biomechanical overload risk assessment with comparative application of several observational methods proposed by the international literature [20][21][22][23][24][25]. Each surgery was divided into elementary operations to highlight any dysergonomics present in terms of force engagement, repetitive movements and maintenance of incongruous postures. The duration of the dysergonomic operations was summed to define the total duration of biomechanical overload for each work shift, to allow subsequent risk assessment using multiparametric analysis. This was made possible by direct observation and analysis of video footage of the surgeries. During the execution of the same interventions, in order to evaluate the discomfort of the postures and the muscular fatigue of the surgeon, specific wearable devices including two inertial measurement units (IMUs, WaveTrack Inertial System, Cometa System) and 8 probes for surface electromyography (EMG, Mini Wave Infinity, Cometa System) were used. IMUs were placed on the forehead with an elastic headband and at C7 level via adhesive tape, respectively, in order to acquire the movements of the head with respect to the trunk (Fig. 1). The EMG probes were placed on the right (R) and left (L) sternocleidomastoid, R and L cervical splenius, R and L upper trapezius, and R and L anterior deltoid muscles, representing the muscular structures mainly involved during transoral laryngeal surgery (Fig. 2). A maximal voluntary contraction procedure was defined and used to set the 100% muscular activation level and to normalise each EMG acquisition with respect to it. The protocol allowed estimation of head flexion/extension and rotation, and of variations in head posture during surgical interventions. In addition, we were able to assess muscle activation levels, number and duration of activations, and muscle fatigue through median frequency analysis.
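For readers unfamiliar with median frequency analysis, the following is a minimal sketch of how an EMG fatigue index of this kind can be computed (a downward drift of the median frequency over successive windows indicates fatigue); the sampling rate, window length, MVC reference and the synthetic signal are all assumptions, not parameters of the Cometa systems used here.

```python
import numpy as np
from scipy.signal import welch

def median_frequency(segment, fs):
    """Frequency splitting the EMG power spectrum into two equal halves;
    a downward drift across successive windows is a standard fatigue index."""
    freqs, psd = welch(segment, fs=fs, nperseg=min(1024, len(segment)))
    cumulative = np.cumsum(psd)
    return freqs[np.searchsorted(cumulative, cumulative[-1] / 2)]

# Placeholder signal: 60 s of surface EMG at an assumed 2 kHz sampling rate.
fs = 2000
rng = np.random.default_rng(1)
emg = rng.normal(0, 1, fs * 60)

# Amplitude normalisation to the maximal voluntary contraction (MVC) reference
# recorded during calibration (placeholder value here).
mvc_amplitude = 1.0
mean_percent_mvc = 100 * np.mean(np.abs(emg)) / mvc_amplitude

# Median frequency tracked in consecutive 5-s windows.
window = fs * 5
mdf = [median_frequency(emg[i:i + window], fs)
       for i in range(0, len(emg) - window + 1, window)]

print(f"Mean activation: {mean_percent_mvc:.1f}% MVC")
print("Median frequency per window (Hz):", [round(f, 1) for f in mdf])
```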
Patients
The cohort treated by TOLES consisted of 93 patients (Group A), of whom 80 (86%) were males and 13 (14%) females. Mean age at diagnosis was 63.3 years, with a median of 65.2 (range, 25-86). Of these, 15 (Subgroup A) were also studied with ergonomics evaluation (12 males, 3 females; mean age 61.2 years, median 64.2, range 25-78) and compared with 13 patients (Subgroup B) treated in the same period by CO2 TOLMS (11 males, 2 females; mean age 62 years, median 63.8, range 30-80). In the cohort treated by TOLMS (Group B), mean age at diagnosis was 67 years, with a median of 69.4 (range, 31-97). No significant differences in terms of age and gender were observed between Group A (TOLES) and Group B (TOLMS), or between Subgroup A (TOLES) and Subgroup B (TOLMS).
Lesion characteristics
In both Groups A (TOLES) and B (TOLMS), most surgical procedures were performed to treat oncologic diseases, which included both malignant and pre-malignant lesions. These lesions represented 73.2% of patients treated by TOLES and 67.3% of those treated by TOLMS. In detail, 55 malignant (59.2%) and 13 premalignant lesions (14%) were treated by TOLES, while 62 malignant (58%) and 10 premalignant (9.3%) were treated by TOLMS. According to the 8th Edition of the TNM classification, there was a predominance of glottic pT1a in both cohorts, 25 (45.5%) and 43 (69.4%) in Group A and Group B, respectively. Only 60 patients (25 in the TOLES group and 35 in the TOLMS group) had a benign lesion. In both groups, neoplastic lesions were predominantly located in the glottic region, 55 (94.6%) and 59 (95.1%) in the TOLES and TOLMS cohorts, respectively. Only 3 patients in each group had supraglottic tumours. Further details are provided in Table I.
Surgery
None of the surgical procedures performed by TOLES required the support of the operative microscope. According to the ELS classification of cordectomy 18, different types of cordectomy were performed, with Type II being the most common. Further details on surgeries are reported in Table II. In Groups A and B, only one complication (postoperative bleeding) was reported in each group. In both cases, the complication was subsequently addressed through a second transoral procedure for revision. Regarding laryngeal exposure, a large-bored operative laryngoscope was used for 74 patients (79.5%) treated by TOLES. For 17 patients (18.2%), a small-bored operative laryngoscope for difficult laryngeal exposure was needed. A laryngoscope designed for supraglottic lesions was applied in 2 patients (2.3%). In the TOLMS cohort, 83 (77.5%) and 22 (20.5%) patients were exposed with large- and small-bored laryngoscopes, respectively. As for TOLES, a laryngoscope designed for supraglottic lesions was applied in 2 patients (2%). TOLES had a mean operative time of 58 minutes with a standard deviation (SD) of 30.6 minutes. The median time was 55 minutes. The shortest surgical procedure lasted 14 minutes, while the longest was 165 minutes for a transoral Type IIIa supraglottic horizontal laryngectomy 19. For TOLMS, the mean operative time was 56 minutes with an SD of 37.7 minutes. The median time was 45 minutes. The shortest surgery lasted 15 minutes, while the longest was 220 minutes for a transoral Type Vabcd cordectomy 18. The duration of surgical procedures performed by TOLES was comparable to that of procedures carried out by TOLMS (p = 0.684).
Surgical margins
As shown in Figure 3, 22% of patients treated by TOLES had positive surgical margins, whereas positive margins were found in 19% of cases for TOLMS. These differences were not statistically significant (p = 0.690).
Biomechanical overload
In Subgroup A, the mean time for TOLES was 43.8 minutes (SD = 17.6), and median time 40 minutes. The shortest procedure lasted 30 minutes (excision of Reinke's oedema) and the longest 90 minutes (Type V cordectomy). Conversely, in Subgroup B, the mean time for TOLMS was 34.3 minutes (SD = 18.3), while the median time was 30 minutes. The shortest procedure lasted 15 minutes (Type II cordectomy) and the longest one lasted 70 minutes (Type I supraglottic laryngectomy). There was no significant difference between groups for duration of surgery (p = 0.174).
Table III shows the incongruous posture maintenance times, measured in seconds, for both TOLES and TOLMS and the corresponding percentages relative to the total duration of the intervention. The results show that in TOLMS interventions, biomechanical overload was greater in terms of maintaining dysergonomic positions for the shoulder, wrist, and hand-finger region. The entire hand-wrist complex, in particular, is often employed in pinch actions in both types of approaches.
The risk assessment of biomechanical overload for the operator engaged in the two types of intervention revealed a significant difference in the dwell times of incongruous cervical spine postures, with much longer times during TOLMS, as confirmed by application of the OREGE method 20. This is because TOLES makes it possible to maintain a head-up posture.
Objective measurement of risk factors
Analysis of the data obtained from the objective acquisitions confirmed what emerged from the observational risk assessment, showing significant differences between the two surgical approaches in their effect on head posture.
For the kinematic data collected by inertial sensing, describing the variability of joint angles (i.e., deviation from the mean value), only the range of lateral tilt of the cervical spine was significantly greater for TOLMS (p = 0.023), while the range of flexion-extension was reduced in TOLES, with only a trend towards statistical significance (Fig. 4).
The results of the EMG evaluations showed that during TOLMS the number of muscle activations per minute lasting longer than 5 seconds and of greater intensity than the background noise was greater for 3 of the 4 muscle groups considered: sternocleidomastoid, trapezius, and posterior cervical muscles (Fig. 5). In contrast, the average duration of muscle activations was shorter in TOLES interventions for the sternocleidomastoid, trapezius, and posterior cervical muscles (Fig. 6).
Discussion
The recent literature has explored a number of possible surgical applications of the exoscope, primarily in the field of neurosurgery, where this technology is already a reliable alternative to the traditional operating microscope for various spinal and brain procedures 10,11. The field of otorhinolaryngology has also seen an increasing number of studies utilising this device. Notably, these investigations have explored the potential of exoscopy in otologic 13, oropharyngeal 14, oral, and parotid surgery 26, as well as for reconstructive microvascular techniques 15. These studies
have already demonstrated several advantages over the traditional gold standard, i.e. the operative microscope. Furthermore, this technology finds application in transoral laser-assisted laryngeal surgery, where it facilitates minimally-invasive procedures thanks to a combination of shared high-quality and magnified images with 3D visual perception. The initial experience of Carlucci et al. in 2012 highlighted the feasibility of this technique, even though it was still limited to 2D exoscopic systems, which had the shortcoming of representing a step back in comparison to the 3D perception of the surgeon using the operative microscope 6. Subsequent studies, such as those by Crosetti et al. 14 and Carobbio et al. 7, demonstrated the possibility of coupling a 3D exoscope with a CO2 free-beam laser micromanipulator, thus providing further evidence of the possible advantages of exoscopic technology. Finally, De Virgilio et al. 8 reported their first preclinical experience with the VITOM 3D mounted on an ARTip cruise robotic arm, emphasising the improved visualisation and ease of use during transoral laryngeal surgery. At that point, TOLES was ready to be applied to larger series to test its specific advantages, comparing them objectively with the well-known advantages of TOLMS, still considered the gold standard in this field of application. The literature highlights the significant advantages of VITOM 3D exoscopic technology compared to traditional instruments. These advantages include, but are probably not limited to, high-resolution 3D visualisation with exceptional illumination under digital control, the possibility to shift intraoperatively to filtered lights (Image1 S, Karl Storz, Tuttlingen, Germany) without the need to change the visualisation tool, improved didactic capabilities, the possibility to apply a 3- or 4-hand technique, increased manoeuverability, and superior ergonomics with reduced fatigue. This study represents the first large unicentric series addressing some of the above-mentioned issues and attempting to compare them objectively with the TOLMS gold standard. In particular, the assessment of the risk of biomechanical overload for the operator engaged in TOLMS vs TOLES showed a significant difference in the incongruous posture dwell times of the cervical spine, which were found to be much lower during TOLES than during TOLMS. The data obtained from the observational risk assessment were confirmed by the analysis of objective acquisitions, which showed differences between the two surgical approaches that actually affect head posture and the development of muscular fatigue of the upper limbs, although similar patterns of muscle activation were demonstrated. All applied risk assessment methods (observational and objective) agree in showing a reduction in the risk of biomechanical overload to which the surgeon is exposed during TOLES compared with TOLMS. With the limits represented by the number of acquisitions obtained by objective assessment of risk factors, kinematic analysis showed greater angular deviations of the head during TOLMS. The increased lateral deviations of the joint angles can be explained by the need to bend the cervical spine laterally to interact with the operating room staff, due to the obstruction caused by the microscope. Moreover, the head position during TOLMS resulted in a more frequently "fixed" posture due to the forced alignment of the surgeon's eyes with the microscope eye-pieces and laryngoscope proximal opening. In contrast,
during TOLES the monitor is placed in front of the surgeon, who is thus able to view the surgical field unobstructed, keeping a relaxed seated position with a horizontal gaze and aligning the exoscope with the proximal opening of the laryngoscope, thus maintaining a "head-up" position. The results of EMG evaluations showed that during TOLMS the number of muscle activations per minute lasting longer than 5 seconds and of greater intensity than the background noise was greater for 3 of the 4 muscle groups considered: sternocleidomastoid, trapezius, and posterior cervical. This finding also appears to be justified by the different arrangement of the instrumentation and consequent differences in movement between the two methods. In contrast, the average duration of muscle activations is shorter in TOLES for the sternocleidomastoid, trapezius, and posterior cervical muscles, with significantly higher muscle activation and fatigue of the right trapezius during TOLMS. The EMG results also show differences between the right- and left-sided muscles, mainly concerning the trapezius and deltoid muscles, which can be explained by the different use of the upper limbs (the surgeon on whom the measurements were taken is right-handed) and the asymmetrical arrangement of the surgical field (the instruments were placed to the right of the surgeon). In contrast, EMG data for the posterior cervical muscles, which have a predominantly postural role, are difficult to interpret because their activation signal was of similar intensity to that of background noise. TOLES technology provides further benefits: it presents high-definition images on an external monitor, enabling the entire surgical team to closely observe the procedure and creating a valuable learning opportunity for nurses and assistants, particularly for those in training. This feature leads to improved workflow, better collaboration among team members, and enhanced educational opportunities in academic settings. Even though such advantages are difficult to quantify and demonstrate in an objective manner, the overall impression of residents and young specialists watching a procedure performed by TOLES confirms this. Moreover, exoscopic systems can switch between white and filtered light during surgery, significantly improving delineation of tumour margins, especially in cases of upper aero-digestive tract malignancies. Even though the vast majority of the literature concerning the use of filtered lights in transoral laser surgery of the larynx is focused on NBI 27, it is already well accepted that this correlates with improved precision in surgical resection of tumours, especially when dealing with difficult conditions such as persistent/recurrent disease after (chemo)radiation [28][29][30]. Moreover, unlike traditional microscopes, which result in loss of binocular vision when using small-bored laryngoscopes, the 3D exoscope eliminates this issue thanks to the reduced distance between its two optical systems compared to the larger distance between the microscope eye-pieces. The possibility of better 3D visualisation even through a small laryngoscope, together with the increased illumination under digital control, allows the surgeon to push the limits of operability of lesions with suboptimal laryngeal exposure, without increasing the rate of positive surgical margins 31. The time required for equipment setup and procedure execution is similar when comparing procedures performed by TOLMS and TOLES, as confirmed by the present series. Similar outcomes have
been reported in other areas such as parotid surgery 26, microvascular reconstructive surgery 15, and cochlear implant placement 13. In the context of surgical treatment of malignant lesions, the focus must remain on the oncologic radicality of resection, with achievement of negative margins on histological examination. In our case series, there were no significant differences in terms of positive margins between patients treated with TOLES and those managed with TOLMS. Future larger series from other centres will allow additional comparisons.
Conclusions
Exoscopic surgery has progressively evolved, and TOLES represents one of the most promising applications of this novel technology to the field of transoral resection of laryngeal lesions. The accumulating evidence of its advantages, including improved ergonomics, educational value, and the addition of filtered light, as well as its non-inferiority in comparison to the gold standard TOLMS for benign and malignant laryngeal pathologies, renders exoscopy a promising alternative to traditional microscopic techniques.
Figure 4. Results of kinematic analysis of cervical spine postures: analysis of variability of head angles relative to the trunk measured by sensors in the two surgical approaches (blue bars for TOLES, orange bars for TOLMS).
Figure 5. Number of muscle activations per minute lasting longer than 5 seconds and of greater intensity than the background noise (blue bars for TOLES, orange bars for TOLMS).
Figure 6. Average duration of muscle activations (blue bars for TOLES, orange bars for TOLMS).
Table I. Characteristics of lesions.
Table II. Types of surgery performed.
Table III. Incongruous posture maintenance times for both TOLES and TOLMS and corresponding percentages compared to the total duration of intervention. | 2024-05-16T06:17:55.347Z | 2024-05-01T00:00:00.000 | {
"year": 2024,
"sha1": "60d0d1aad4ffd33c97cc1a5c0e50254ed2c41184",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.14639/0392-100x-suppl.1-44-2024-n2850",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b909248ec0369399466d7dd1bef5573f06c102ac",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
246429235 | pes2o/s2orc | v3-fos-license | Knowledge-Driven Approaches to Create the MTox700+ Metabolite Panel for Predicting Toxicity
Abstract Endogenous metabolite levels describe the molecular phenotype that is most downstream from chemical exposure. Consequently, quantitative changes in metabolite levels have the potential to predict mode-of-action and adversity, with regulatory toxicology predicated on the latter. However, toxicity-related metabolic biomarker resources remain highly fragmented and incomplete. Although development of the S1500+ gene biomarker panel has accelerated the application of transcriptomics to toxicology, a similar initiative for metabolic biomarkers is lacking. Our aim was to define a publicly available metabolic biomarker panel, equivalent to S1500+, capable of predicting pathway perturbations and/or adverse outcomes. We conducted a systematic review of multiple toxicological resources, yielding 189 proposed metabolic biomarkers from existing assays (BASF, Bowes-44, and Tox21), 342 biomarkers from databases (Adverse Outcome Pathway Wiki, Comparative Toxicogenomics Database, QIAGEN Ingenuity Pathway Analysis, and Toxin and Toxin-Target Database), and 435 biomarkers from the literature. Evidence mapping across all 8 resources generated a panel of 722 metabolic biomarkers for toxicology (MTox700+), of which 462 (64%) are associated with molecular pathways and 575 (80%) with adverse outcomes. Comparing MTox700+ and S1500+ revealed that 418 (58%) metabolic biomarkers associate with pathways shared across both panels, with further metabolites mapping to unique pathways. Metabolite reference standards are commercially available for 646 (90%) of the panel metabolites, and assays exist for 578 (80%) of these biomarkers. This study has generated a publicly available metabolic biomarker panel for toxicology, which through its future laboratory deployment, is intended to help build foundational knowledge to support the generation of molecular mechanistic data for chemical hazard assessment.
The measurement of endogenous metabolite levels to predict toxicity, whether by untargeted metabolomics or targeted metabolite assays, has received increasing attention over the last decade (Johnson et al., 2016;Kamp et al., 2012;Ramirez et al., 2013;Sperber et al., 2019;Viant et al., 2019). Metabolic measurements describe the most downstream molecular phenotype, providing insights into a substance's mode-of-action (MoA) and, critically, biomarker profiles that are strongly associated with adverse (or apical) endpoints upon which regulatory toxicology is predicated (Hines et al., 2010;Taylor et al., 2018). Some individual metabolic biomarkers are already measured as part of international regulatory test guidelines, such as triiodothyronine and thyroxine hormones, to predict thyroid toxicity in rodent repeated-dose 90-day studies (OECD 2018). Other metabolic biomarkers, discovered via untargeted metabolomics, have been deployed in targeted screening assays, including ornithine and cystine for predicting developmental toxicity (Zurlinden et al., 2020). The regulatory application of a broader panel of more than 200 metabolic biomarkers, predictive of multiple MoAs, has been extensively demonstrated by BASF, in particular for category formation to support read-across (Mattes et al., 2014;Sperber et al., 2019;Van Ravenzwaay et al., 2015). Although these examples collectively demonstrate the value of metabolic biomarkers in regulatory toxicology, their implementation remains limited. This is in part because toxicity-related metabolic biomarker resources for human health remain highly fragmented and incomplete.
In 2013, the U.S. National Toxicology Program (NTP) launched an initiative that utilized data- and knowledge-driven approaches to create a human transcriptomics biomarker panel, the S1500+ targeted gene set, to enable cost-effective, high-throughput measurements that are predictive of pathway perturbations (Mav et al., 2018). By establishing this gene biomarker panel on the TempO-Seq gene expression platform (Yeakley et al., 2017), the application of targeted transcriptome profiling in toxicology has increased substantially and rapidly. In contrast, no equivalent study has been reported that interrogates information from multiple resources to derive a panel of metabolic biomarkers that have the potential to predict pathway perturbations and/or adversity. Yet, there are several justifications for creating such a panel, not least that it could circumvent the challenge of metabolite identification that plagues untargeted metabolomics studies in toxicology. In addition, it could improve harmonization of the analytical approaches used and accelerate the generation of informatics resources that describe levels of identified metabolites in the context of predicting toxicity endpoints. Such resources are sorely lacking in metabolomics compared with transcriptomics (Igarashi et al., 2015;Lamb et al., 2006;Richard et al., 2016), arguably another factor underlying the limited implementation of metabolomics into regulatory toxicology. Defining robust mechanistic associations between metabolic biomarkers and adverse outcomes (AOs) would increase the acceptance of this New Approach Methodology into hazard assessment frameworks.
The overall aim of this work was to define for the first time a metabolic biomarker panel for toxicology, similar to the definition of the S1500+ gene biomarker panel. This was achieved by mining multiple toxicological resources, including existing multiplexed molecular assays, databases, and the literature, to identify a panel of human health-relevant metabolites associated with disease, toxicity, and other AOs in humans. The first objective was to create a universal list of detectable human metabolites ("metabolite master list [MML]"), derived from the Human Metabolome Database (HMDB), for facile and rigorous filtering of metabolites through all subsequent phases in the project. Next, multiple toxicological resources containing information on metabolic biomarkers were identified, and data were extracted from each of these, including 3 multiplexed assays, 4 databases, and the published literature. To maximize confidence in the predictivity of these biomarkers and hence their utility in regulatory decision making, further information was gathered to assign one or more disease and/or AOs to each metabolite. To help provide guidance on the context of use of the metabolic biomarkers, the type(s) of samples in which the biomarkers have previously been measured was collected. Furthermore, pathway sources were interrogated to link the biomarkers to molecular mechanisms, allowing pathway complementarity to the S1500+ gene biomarker panel to be assessed. The availability of analytical assays and reference standards for each metabolic biomarker was investigated to assess the community's capability to measure the biomarkers routinely. This first version of the proposed metabolic biomarker panel for predictive toxicology is termed MTox700+, which is made available at https://michabo.co.uk/resources/mtox (version 1, updated on 12/01/2022).
Creation of MML From HMDB and MetaboLights
An "MML" of detectable human-relevant metabolites was created based on the HMDB (Wishart et al., 2007;version 4.0 [Wishart et al., 2018], release date-July 9, 2018) and MetaboLights data repository (Haug et al., 2013;downloaded on February 29, 2020; identifiers were converted from ChEBI to HMDB using The Chemical Translation Service-CTS [Wohlgemuth et al., 2010]). The MML was created for multiple reasons: to ensure consistency in naming metabolites throughout the study; to ensure consistency in filtering across each of the individual metabolite resource lists; to facilitate removal of all drugs, solvents, and other exposure-related chemicals from these lists; and to assess whether metabolites are analytically detectable according to HMDB (version 4.0)/ MetaboLights, as only detectable metabolites were listed in the MML. To remove several unwanted groups of chemicals from the MML, various types of ontology were used to create ontology filters, which were applied to the combined HMDB/MetaboLights MML as described in Figure 1. These ontology filters used both the HMDB 4.0 hierarchical ontology and the chemical ontology from ClassyFire (Djoumbou Feunang et al., 2016). First, drug metabolites were removed using the "biological role" HMDB ontology filter. Then drugs, personal care products, cosmetics, and laboratory chemicals were removed using the "industrial application" HMDB ontology filter. Environmental pollutants/contaminants were removed using the "environmental role" HMDB ontology filter. In addition, chemical ontology filters were applied to remove any remaining drug and food exposure-related chemical groups, which comprised the removal of inorganics, organometallic compounds, alkaloids and derivatives, hydrocarbons, organic 1,3-dipolar compounds, organic polymers, organohalogen compounds, biphenyls, naphthalenes, fluorenes, phenanthrenes and derivatives, tetralins, organic dithiophosphoric and thiophosphoric acids, oxepanes, isothiocyanates, sulfoxides, flavonoids, isoflavonoids, phenylpropanoic acids, and diarylheptanoids.
Extraction, Filtering, and Merging of Metabolite Resource Lists to Create Proposed Metabolic Biomarker Panel-MTox700+
Eight existing toxicological resources were selected for interrogation, including 3 multiplexed assays, 4 databases, and the published literature. A short description of each resource, together with a justification for its inclusion, is presented below. For each resource, a list of metabolites was either directly extracted or the resource was searched manually, filtering was then applied to ensure only high-data quality was retained, and a metabolite resource list was produced; see Figure 2 for an overview of the steps.
Multiplexed assays. "BASF" developed a metabolic biomarker panel for toxicology studies of rat, which has been used to create the MetaMap Tox database containing the responses of the plasma metabolome to more than 1000 chemicals (Sperber et al., 2019). The database associates changes in metabolite levels with rodent toxicity outcomes and hence the metabolic biomarker panel has high relevance to the current study. The BASF assay comprises 202 metabolic biomarkers (most are identified, some remain unknown) and was imported from Sperber et al. (2019). Where possible, HMDB IDs were assigned to metabolites using CTS.
" " is a pharmacological biomarker panel originally developed by a consortium of pharmaceutical companies (Bowes et al., 2012) and now commercially available via Eurofins. The 44 targets represent a minimal panel that provides a broad early assessment of potential human health hazards in drug development. It was selected for this study due to its high toxicological relevance and the availability of the assay. The panel currently comprises 47 measured endpoints. All gene biomarkers were removed and HMDB IDs were assigned to the metabolite entries using CTS.
"Toxicology in the 21st Century (Tox21)" program is a joint initiative between 3 U.S. agencies, the National Center for Advancing Translational Sciences, NTP, and the Food and Drug Administration (Thomas et al., 2018). Tox21 aims to improve toxicity assessment methods and provide a rapid and robust assessment of a chemical and its potential human health effects. Due to the high human toxicological relevance of the biomarkers measured in the Tox21 assays, they were considered for inclusion in this study. The Tox21 assay list was examined for metabolites, all gene biomarkers were removed, and the remaining entries containing reference publications were curated. Metabolites were then assigned HMDB IDs using CTS.
Databases. "Adverse Outcome Pathway (AOP) Wiki" is an opensource toxicological database, managed by the OECD, that structures data around a framework connecting a molecular initiating event to an AO (Ankley et al., 2010). The AOP Wiki organizes the knowledge of toxicological perturbations into base unitskey events (KEs) and KE relationships-extensively describing the underlying biological and experimental information. Currently, there are more than 260 AOPs in this knowledge base. Data within the AOP Wiki were selected for this study due to its focus on toxicological perturbations and the linkage of molecular KEs to AOs. A manual search for metabolites within all of the AOP KEs was performed. HMDB IDs were assigned to metabolites using CTS.
"Comparative Toxicogenomics Database (CTD)" is an opensource database, funded by the U.S. National Institute of Environmental Health Sciences, that focuses on the environmental exposures affecting human health (Davis et al., 2019;Mattingly et al., 2003). Although CTD primarily emphasizes the associations of gene biomarkers with chemicals and disease, it also contains some metabolic biomarker-chemical connections; hence, it is a further valuable resource for the current study. A matrix defining the toxicity-associated biomarkers, including genes, proteins, drugs, and metabolites, was exported from the CTD database. All gene and protein biomarkers were removed, as were all entries for which the detected "stressor" (drug/toxin) is the "marker." HMDB IDs were assigned to the remaining metabolites using CTS.
"QIAGEN Ingenuity Pathway Analysis (IPA, QIAGEN Inc., https://digitalinsights.qiagen.com/qiagen-ipa, last accessed February 2, 2022)" is a commercial knowledgebase that was created by compiling a vast amount of information on molecular mechanisms associated with disease (primarily) and toxicology, in human, mouse, and rat (Kr€ amer et al., 2014). It encompasses over 7.8 million total findings and approximately 90 000 curated publicly available datasets, which can be used to predict potential therapeutic or toxicity targets, and drugs acting on those targets. With over 16 850 metabolites (including endogenous metabolites and xenobiotics), and containing a variety of metabolite associations with diseases, biological functions, and/or pathways, IPA is particularly effective at deriving toxicity pathway-associated metabolites and therefore of considerable value to this study. First, IPA was interrogated for any humanrelevant metabolites that were associated with exposure to "Substances of Very High Concern" (SVHC; list obtained from European Chemical Agency's website on August 8, 2019). An SVHC is a substance of particular concern for human health, including carcinogens, mutagens, reproductive toxicants, and chemicals that are persistent, bioaccumulative, and toxic. This involved searching for the SVHCs and submitting the findings to the "Pathway builder" module to assign any molecular associations. Second, a search to find relevant metabolites associated with IPA's toxicity pathways was conducted. The metabolite lists from both strategies were then refined by keeping only "endogenous chemicals." "Toxin and Toxin-Target Database (T3DB, or Toxic Exposome Database)" is an open resource developed by The Metabolomics Innovation Center, Canada, that focuses on providing mechanisms of toxicity and target molecules for each toxin (Lim et al., 2010). It is closely linked to the HMDB, Small Molecule Pathway Database (SMPDB), and PathBank DB. The T3DB contains 42 374 toxin-toxin target associations (accessed Wishart et al., 2007) and MetaboLights data repository (Haug et al., 2013). The third to fifth boxes represent the filters that were used to refine the metabolite list based on HMDB ontology, and in the sixth box further filters based on chemical ontology were applied. This process yielded the MML of detectable human-relevant metabolites.
on January 6, 2020). Due to its large and broad information content (chemical information, ontologies, origin, pathways, exposure, health effects, etc.), and despite regarding metabolites as "toxins," T3DB contains considerable relevant data on metabolites and therefore was selected for inclusion in this study. The initial matrix with abundant toxicological information was sourced from T3DB "toxin" matrix file. Metabolites with HMDB IDs (which were already present in this matrix) were selected for further curation.
Published literature. To provide a broader strategy to identify metabolic biomarkers of toxicity beyond those derived from the 3 multiplexed assays and 4 database resources, an extensive search of the published literature was undertaken. This strategy also helped to ensure the latest scientific discoveries were included in the proposed metabolic biomarker panel. With our focus on biomarkers for human health, publication abstracts in the PubMed repository (Sayers et al. 2022) were curated using Abstract Sifter (Baker et al., 2017). Abstract Sifter is a Microsoft Excel-based application that was developed by the U.S. Environmental Protection Agency to enhance existing search capabilities of PubMed. It allows keyword searching, effective organization and visualization of publication lists, and ranking of the processed references. Our structured search was performed using the query "metabolite and toxicity and biomarker." All publications meeting these search criteria, but which contained only gene or protein biomarkers, provided no evidence of substance toxicity, or were nonmammalian, were considered as false positives and therefore rejected. Publications that contained metabolic biomarkers of exposure (such as drugs and drug metabolites) were also rejected. If the abstract referred to nonspecific or partial metabolite names, the full text of the publication was then examined. The metabolite resource list from the published literature was manually prepared using the results from the Abstract Sifter. Finally, HMDB IDs were assigned to the metabolites using CTS.
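Identifier conversion with CTS, which recurs throughout these extraction steps, can be scripted against its REST interface; the sketch below is an assumption-laden illustration (the URL pattern, source/target database names and response structure should be checked against the current CTS documentation rather than taken as given).

```python
from urllib.parse import quote
import requests

CTS_BASE = "https://cts.fiehnlab.ucdavis.edu/rest/convert"  # assumed endpoint

def to_hmdb(source_db, identifier):
    """Convert one identifier to HMDB ID(s) via the CTS REST interface.
    The URL pattern, database names and response structure are assumptions
    to be checked against the current CTS documentation."""
    url = f"{CTS_BASE}/{quote(source_db)}/{quote('Human Metabolome Database')}/{quote(identifier)}"
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    payload = response.json()
    # CTS typically returns a list of {"searchTerm": ..., "results": [...]} records.
    return payload[0].get("results", []) if payload else []

# Usage sketch (requires network access):
# print(to_hmdb("KEGG", "C00037"))        # e.g. glycine
# print(to_hmdb("ChEBI", "CHEBI:15428"))
```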
Following the export of the 8 individual resource lists described above, each list was filtered against the MML to ensure it contained only biological, human-relevant metabolites with unique HMDB IDs assigned to each metabolite (Figure 2). Any metabolites that were found to occur in a resource list, but not in the MML, were manually re-inspected before deciding whether to accept or remove them from the resource list. This was necessary because some biologically relevant metabolites were listed as "undetected" in HMDB, but are "analytically detectable" according to the other toxicological resources that were examined (assays, databases, or publications), and these metabolites were therefore retained in this study. Furthermore, all metabolite entries were manually reviewed to identify any remaining errors, ie, to ensure that all metabolites were of biological origin by ensuring all drugs, drug metabolites, environmental pollutants/contaminants, and laboratory chemicals were removed. In addition, due to varying levels of confidence in metabolite identification, we derived metabolite identification levels (where possible) from each of the resources, as based upon the Metabolomics Standards Initiative (MSI) guidelines (Sumner et al., 2007): MSI level 1, identified compounds; MSI level 2, putatively annotated compounds; MSI level 3, putatively characterized compound classes; and MSI level 4, unknown compounds. The type of sample (eg, type of tissue, biofluid, and/or cells) in which each of the metabolites was measured (where the information was available) was also extracted. The next step was to merge all 8 resource lists to produce the proposed metabolic biomarker panel for toxicology, MTox700+. This was achieved through the use of the HMDB IDs that were assigned to all metabolites in all of the resource lists.
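A minimal sketch of this merging step, using hypothetical per-resource tables keyed on HMDB IDs, is shown below; pivoting to one row per metabolite with a presence flag per resource also yields the resource count later used in the coverage score.

```python
import pandas as pd

# Hypothetical per-resource lists, each already filtered against the MML and
# keyed on HMDB IDs (values are placeholders).
basf = pd.DataFrame({"hmdb_id": ["HMDB0000123", "HMDB0000456"], "resource": "BASF"})
ctd = pd.DataFrame({"hmdb_id": ["HMDB0000456", "HMDB0000789"], "resource": "CTD"})
literature = pd.DataFrame({"hmdb_id": ["HMDB0000123"], "resource": "Literature"})

# Concatenate and pivot to one row per metabolite, one presence flag per
# resource; the row sum is the resource count used later for coverage scoring.
merged = (pd.concat([basf, ctd, literature])
            .assign(present=1)
            .pivot_table(index="hmdb_id", columns="resource",
                         values="present", fill_value=0))
merged["resource_count"] = merged.sum(axis=1)
print(merged)
```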
Associating Disease and/or AOs With Each Metabolite in the Proposed MTox700+ Panel
Associations between each metabolite in the proposed MTox700+ panel and disease and/or AOs were derived using further data extracted from 4 AO sources: HMDB, IPA, AOP Wiki, and publications. Specifically, HMDB (version 4.0) contained metabolite-disease associations that were derived from the Online Mendelian Inheritance in Man (OMIM) database (Hamosh et al., 2000). Using IPA, a "metabolomics core (enrichment) analysis" was conducted employing all of the metabolites in the MTox700+ panel as input, in order to extract "disease and biological function" associations for each metabolite. The AOP Wiki was manually searched to extract any relevant associations between metabolic KEs and AOs. Publications were examined in Abstract Sifter, and AOs associated with the publications containing metabolites of interest were extracted manually.
Associating Molecular Pathways With Each Metabolite in the Proposed MTox700+ Panel
In IPA, similar to the disease workflow, all of the MTox700+ panel metabolites were used as input and a "metabolomics core (enrichment) analysis" was conducted to extract pathway associations for each metabolite.
KEGG and Reactome were selected due to their rich pathway content and S1500+ panel compatibility. "KEGG" (by Kanehisa Laboratories, Japan) is an open-source pathway database for understanding high-level functions and utilities of biological systems (cells, organisms, and ecosystems) from molecular-level information. To extract pathway associations from KEGG, identifiers from the MTox700+ panel were first converted from HMDB to KEGG using CTS (Wohlgemuth et al., 2010). The metabolites were then submitted to KEGG Mapper and pathway-metabolite associations were derived from the database.
"Reactome" (developed by Ontario Institute for Cancer Research, New York University School of Medicine, European Molecular Biology Laboratory's European Bioinformatics Institute, and Oregon Health & Science University) is an open source, manually curated, and peer-reviewed pathway database. It provides bioinformatics tools for the visualization, interpretation, and analysis of pathway knowledge to support basic and clinical research using omics data. Using Reactome, all of the pathway-metabolite associations were extracted, and identifiers were converted from ChEBI to HMDB using CTS. The metabolites with associations were then filtered against the MTox700þ panel metabolites yielding panel-specific pathwaymetabolite associations.
"PathBank" (closely connected to SMPDB and HMDB, by The Metabolomics Innovation Centre, Canada) is an open-source interactive pathway database with more than 100 000 pathways. Its primary focus is metabolomics, and it therefore contains a set of unique pathways not found in other databases. Using PathBank, all the pathway-metabolite associations were imported, and the metabolites with associations were filtered against the MTox700þ panel yielding panel-specific pathwaymetabolite associations.
For reliability of the interrogated pathways and consistency with the Tox21 S1500þ methodologies, after obtaining pathway association data from all 4 pathway sources, each pathway list was reviewed and only pathways with 3 or more participating metabolites (termed "reliable" molecular pathways) were retained. The list of metabolites associated with these reliable pathways was collated and described in the results.
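As a minimal sketch of the "reliable pathway" filter described above (assuming pathway-metabolite associations are available as (pathway, HMDB ID) pairs already restricted to panel metabolites; the variable names and the toy data are illustrative):

```python
from collections import defaultdict

def reliable_pathways(associations, min_metabolites=3):
    """Keep only pathways with >= min_metabolites participating panel metabolites."""
    members = defaultdict(set)
    for pathway, hmdb_id in associations:
        members[pathway].add(hmdb_id)
    reliable = {p: ids for p, ids in members.items() if len(ids) >= min_metabolites}
    covered_metabolites = set().union(*reliable.values()) if reliable else set()
    return reliable, covered_metabolites

# Toy example (not real pathway content):
assoc = [("Glycolysis", "HMDB0000122"), ("Glycolysis", "HMDB0000243"),
         ("Glycolysis", "HMDB0000190"), ("Toy pathway", "HMDB0000001")]
paths, mets = reliable_pathways(assoc)
print(sorted(paths))   # ['Glycolysis'] -- 'Toy pathway' has fewer than 3 metabolites
```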
Furthermore, the reliable molecular pathways (associated with metabolic biomarkers from the KEGG and Reactome databases) were compared with the pathways associated with the S1500þ gene biomarker panel (Mav et al., 2018) to determine the overlap of pathways between the MTox700þ and S1500þ biomarker panels.
Associating Each Metabolite in the Proposed MTox700+ Panel With Analytical Assays and Reference Standards
The MTox700+ panel metabolites were assigned an analytical assay type based upon information from several resources: all 3 multiplexed assay resources (BASF, Bowes-44, and Tox21) and the CTD "exposure events" file; nonanalytical methods such as questionnaires, predictions, and computational analyses were then removed from the final assay list. Analytical assay types were also extracted from publications, which were manually curated in Abstract Sifter (Baker et al., 2017); the full text was reviewed if the assay was not provided in the abstract. All assays were then merged into a single list, providing an overview of how each panel metabolite can be measured.
The availability of reference standards was first assessed by checking against 2 large metabolomics libraries-IROA Technologies Mass Spectrometry Metabolite Library of Standards and MetaSci COMPLETE Metabolite Library, followed by a manual search for the remaining metabolites from other vendors including ABI Chem, Avanti, CaymanChem, Enzo Life Sciences, MedChemExpress, MolPort, Sigma Aldrich (Merck), TargetMol, and Thermo Fisher.
Ranking the Importance of Metabolites in the Proposed MTox700+ Panel
To aid the ranking of metabolites in the MTox700+ panel based on their perceived importance in toxicological responses, several criteria were developed. Three separate scores were assigned to each metabolite in the panel based upon each of 3 properties: (1) total coverage in toxicological resources, AO sources, and pathway sources, (2) pathway consistency with the S1500+ gene panel, and (3) measurement feasibility.
The first ranking score for each metabolite was based on its "total coverage in toxicological resources, AO sources, and pathway sources", considering whether the metabolite featured in the original 8 resources (described in the Extraction, Filtering, and Merging of Metabolite Resource Lists to Create Proposed Metabolic Biomarker Panel-MTox700+ section; score of 1-8), whether the metabolite was present in the sources of AO information (described in the Associating Disease and/or AOs With Each Metabolite in the Proposed MTox700+ Panel section; score of 0-4), and whether the metabolite was present in the pathway sources (described in the Associating Molecular Pathways With Each Metabolite in the Proposed MTox700+ Panel section; score of 0-4). The total coverage score was calculated as: Score (total coverage %) = [(toxicological resource count/8 × 100) + (AO source count/4 × 100) + (pathway source count/4 × 100)]/3. Finally, the total coverage score was categorized into 3 levels-low (score below 33%), medium (score 33-66%), and high (score above 66%).
The second ranking score for each metabolite described the "pathway consistency with the S1500þ gene panel," either as consistent (score of 1) or not (0), with consistency defined as the metabolite being present in a reliable molecular pathway (see definition in Associating Molecular Pathways With Each Metabolite in the Proposed MTox700þ Panel section), where that pathway is also measured by genes in the S1500þ panel.
Finally, the third ranking score was based on "measurement feasibility," based on the availability of an analytical assay (score of 1, if available) and a reference standard (score of 1, if available); total score of 0-2.
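The three ranking scores can be computed independently per metabolite. The sketch below follows the formulas and thresholds given above; the per-metabolite count structure is an assumption made for illustration.

```python
def total_coverage_score(resource_count, ao_count, pathway_count):
    """Total coverage (%) over 8 toxicological resources, 4 AO sources, 4 pathway sources."""
    score = (resource_count / 8 * 100 + ao_count / 4 * 100 + pathway_count / 4 * 100) / 3
    if score < 33:
        level = "low"
    elif score <= 66:
        level = "medium"
    else:
        level = "high"
    return score, level

def s1500_consistency_score(on_reliable_pathway_shared_with_s1500):
    """1 if the metabolite lies on a reliable pathway also covered by S1500+ genes, else 0."""
    return 1 if on_reliable_pathway_shared_with_s1500 else 0

def feasibility_score(has_assay, has_reference_standard):
    """0-2: one point for an available analytical assay, one for a reference standard."""
    return int(has_assay) + int(has_reference_standard)

# Example: a metabolite found in 3 resources, 2 AO sources and 1 pathway source,
# lying on a shared pathway, with an assay and a purchasable standard.
print(total_coverage_score(3, 2, 1))   # (37.5, 'medium')
print(s1500_consistency_score(True))   # 1
print(feasibility_score(True, True))   # 2
```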
Creation of MML From HMDB and MetaboLights
Multiple international data resources utilize their own identifiers for individual metabolites. Since the principal objective of this work was to integrate data from multiple toxicological resources, we first established a core list of consistently named metabolites against which we could map each of the metabolite resource lists of proposed biomarkers. The HMDB was selected as the primary source of metabolites and their identifiers for the master list as it is the most extensive human-relevant metabolite resource internationally. This was complemented by metabolites reported in the European Bioinformatics Institute's MetaboLights database, the most extensive repository of experimental metabolomics data in Europe.
The HMDB unfiltered metabolite list contained 9052 detectable (quantified or nonquantified) metabolites. In addition, the MetaboLights database was imported to ensure that any (newly) detected metabolites missing from the HMDB were included, yielding 822 metabolites that could be assigned HMDB IDs (converted from ChEBI IDs using CTS). Of these, 121 were not present in the 9052 HMDB metabolite list, hence were added to form a master list with 9173 metabolites. After ontology filtering (Figure 1, boxes 3-6), 8658 metabolites remained (42 of those lacked any ontology terms, but were retained), forming the final MML, all with HMDB IDs (see Supplementary Material 1 MML and resource lists, tab "MML").
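A compact sketch of the MML construction described above, assuming the HMDB export and the CTS-converted MetaboLights list are available as tables with an 'hmdb_id' column (file names, column names and the single-term ontology field are simplifying assumptions):

```python
import pandas as pd

hmdb = pd.read_csv("hmdb_detected.csv")              # 9052 detected metabolites
metabolights = pd.read_csv("metabolights_hmdb.csv")  # 822 metabolites, ChEBI->HMDB via CTS

# Add only the MetaboLights entries whose HMDB ID is not already present (121 in the study).
new_ids = ~metabolights["hmdb_id"].isin(hmdb["hmdb_id"])
master = pd.concat([hmdb, metabolights[new_ids]], ignore_index=True)

# Ontology filtering (Figure 1, boxes 3-6): drop drugs, pollutants and laboratory
# chemicals, but retain entries that lack any ontology term.
EXCLUDE = {"drug", "pollutant", "contaminant", "laboratory chemical"}
has_ontology = master["ontology_term"].notna()
keep = ~has_ontology | ~master["ontology_term"].str.lower().isin(EXCLUDE)
mml = master[keep].reset_index(drop=True)
print(len(master), "->", len(mml), "metabolites in the MML")
```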
Creation of Metabolite Resource Lists and Proposed Metabolic Biomarker Panel-MTox700+
To create the metabolite resource lists of proposed biomarkers, multiple existing toxicological resources were interrogated including 3 multiplexed assays-BASF, Bowes-44, and Tox21, 4 databases-AOP Wiki, CTD, IPA, and T3DB, and the published literature, as introduced in the Materials and Methods section.
Curation of the BASF assay yielded a total of 202 metabolite entries, of which 21 were labeled as "unknown" in Sperber et al. (2019), and a further 27 metabolites lacked a sufficiently specific name to allow their identification, eg, "phosphatidylcholine No. 02" and "TAG (C16:0, C16:1)." Of the remaining 154 metabolite names, 147 were each assigned a unique HMDB ID, whereas 7 of the metabolite names were each assigned multiple possible HMDB IDs. This was necessary as the unsaturated bond configurations for these 7 lipids were unknown; hence, all possible HMDB IDs were retained for ease of filtering and comparison of lists. However, in the figures below, only the unique 154 (147 þ 7) metabolites are presented ( Figure 4A; see Supplementary Material 1, tab "BASF"). Importing the biomarkers from the Bowes-44 assay resulted in a list of 10 metabolites, all with HMDB IDs ( Figure 4A; see Supplementary Material 1, tab "Bowes-44"). Curation of the Tox21 assays yielded a total of 37 metabolite entries, of which 35 were assigned HMDB IDs; 2 entries lacked specificity and could not be identified ( Figure 4A; see Supplementary Material 1, tab "Tox21").
Next, the 4 database resources were interrogated. After performing a manual search of the AOP Wiki, 40 metabolites were extracted and all were assigned HMDB IDs ( Figure 4A; see Supplementary Material 1, tab "AOP Wiki"). A total of 601 metabolite entries were extracted from the CTD, of which 287 were assigned HMDB IDs ( Figure 4A; see Supplementary Material 1, tab "CTD"). Of the large number of rejected entries majority were xenobiotics, mainly biphenyls, diphenylethers, diphenylmethanes, phthalates, naphthalenes, halogenated, and inorganic compounds. Furthermore, 2 strategies were used to extract relevant metabolite information from the IPA database (described in Extraction, Filtering, and Merging of Metabolite Resource Lists to Create Proposed Metabolic Biomarker Panel-MTox700þ section). The first strategy comprised searching for the SVHCs and submitting the findings to the "Pathway builder" module to assign any molecular associations. Of the 203 SVHC that were searched for by CAS number in the IPA knowledgebase, 71 were present. Of these, IPA's "Pathway builder" module discovered endogenous metabolic associations for 22 of them, resulting in 68 SVHC-associated metabolites. The second strategy involved inspecting 26 of IPA's toxicity pathways and extracting the pathway-associated metabolites. Employing the second search approach resulted in 17 of IPA's toxicity pathways with metabolite associations, corresponding to 77 metabolites. The lists from both search approaches were combined to produce a single IPA resource list of 130 unique metabolites, of which 108 metabolites had HMDB IDs ( Figure 4A; see Supplementary Material 1, tab "IPA"). Finally, for the T3DB database, a total of 3533 small molecule entries were identified of which 1119 had HMDB IDs ( Figure 4A; see Supplementary Material 1, tab "T3DB"). The majority of the rejected small molecule entries arose from xenobiotics.
The initial query search of published literature using the Abstract Sifter returned 935 papers that were published between 1983 and 2020, and each abstract was examined manually for information on metabolic biomarkers. When the abstract was not sufficiently clear, the publications were studied in greater detail. From the 935 papers examined, 83 (published between 1991 and 2020) were retained. Curation of the retained publications yielded 511 metabolite entries, of which 439 were metabolites with HMDB IDs ( Figure 4A; see Supplementary Material 1, tab "Publications").
Having compiled the 8 metabolite resource lists, all with HMDB IDs, the next step was to filter these against the HMDB/ MetaboLights MML. In conjunction with manual inspecting and filtering, this ensured that the proposed biomarkers were human-relevant and that all drugs and xenobiotics were removed. Metabolites from the resource lists that were not present in the MML were individually reviewed to determine their origin, with only human-relevant metabolites retained. Figure 4B shows the number of potential metabolic biomarkers of toxicity, after filtering, for each of the 8 metabolite resource lists.
All 8 resource lists were then combined to produce the proposed MTox700þ panel for toxicology, which comprised a total of 722 unique metabolites (see details in Ranking the Importance of Metabolites in the Proposed MTox700þ Panel section). The metabolite resource lists shared common metabolites, although the extent of overlap was relatively low ( Figure 4C) with 525 (73%) metabolites being derived from a single resource. This highlights the importance of combining information from multiple toxicological resources, as reported here for the first time.
Associating Disease and/or AOs With Each Proposed Metabolic Biomarker in the MTox700+ Panel
To maximize the confidence in the predictivity of the biomarkers, it was important to attempt to assign 1 or more disease and/or AOs to each of the proposed 722 metabolites. According to information extracted from HMDB, 492 of the proposed metabolic biomarkers are associated with at least 1 disease, based on the disorder classification from OMIM (see Supplementary Material 2 AOs and Pathways, tab "Disease AO ToxFunction"). Interrogating IPA revealed that 178 of the proposed biomarkers are linked to 1 or more "toxicity functions." A manual search of the AOP Wiki showed that 33 of the proposed metabolic biomarkers (or molecular KEs in this case) are associated with 37 AOs. In addition, curation of publications revealed that 208 of the proposed biomarkers are linked to 1 or more AOs.
In total, 80% (578 out of 722) of the proposed metabolic biomarkers are associated with at least 1 disease or AO, with 8 of the proposed biomarkers having a recognized adverse phenotype in all 4 AO sources ( Figure 5). Of the 578 metabolites with AO associations, 453 were linked to multiple AOs (with 70% of those metabolites being associated with 10 or fewer AOs), and 125 metabolites were linked to a single AO.
Associating Molecular Pathways With Each Proposed Metabolic Biomarker in the MTox700+ Panel
To increase the informative value of the biomarkers, it was attempted to assign 1 or more reliable molecular pathways to each of the proposed 722 metabolites (see Supplementary Material 2 AOs and Pathways). Interrogating IPA revealed that 263 of the metabolites are linked to 1 or more canonical pathways. Examining KEGG revealed that 401 of the proposed metabolic biomarkers are associated with at least 1 canonical pathway. The data extracted from Reactome demonstrated that 301 proposed biomarkers are linked to 1 or more pathways, and at least 1 PathBank pathway was associated with 342 of the metabolites. In total, 64% (465 out of 722) of the proposed metabolic biomarkers are associated with at least 1 molecular pathway, with 192 of these participating in pathways from all 4 pathway sources ( Figure 6).
Next, we sought to address the question whether the measurement of the proposed MTox700þ panel would add value to an experiment that is already applying the S1500þ gene panel. The reliable molecular pathways associated with both panels (MTox700þ and S1500þ) were compared, first based on KEGG pathways and then those in Reactome (see Supplementary Material 2, tab "S1500þ pathways"). Metabolite and geneassociated pathways exhibited a moderate overlap in KEGG, with 80 out of 186 S1500þ associated pathways also identified as reliable metabolite-associated pathways ( Figure 7A). Of the remaining 106 molecular pathways in S1500þ, 53 are gene specific, ie, are not metabolic pathways and therefore do not include any metabolites (a further 37 pathways are associated with metabolic biomarker metabolites; however, these are not "reliable" metabolite-associated pathways, ie, did not meet the minimum threshold of 3 metabolites per pathway). Metabolite and gene-associated pathways overlapped to a lesser extent in Reactome, with 215 out of 674 S1500þ associated pathways also identified as reliable metabolite-associated pathways ( Figure 7B). Of the remaining 459 molecular pathways in S1500þ, 156 are gene-specific (216 pathways are also associated with metabolic biomarker panel metabolites, but these metabolic biomarkers are not a part of "reliable" metaboliteassociated pathways). Considering the findings from the KEGG and Reactome databases together, the 80 KEGG pathways represented on both molecular panels include 375 MTox700þ panel metabolites, and the 215 common Reactome pathways encompass 277 MTox700þ panel metabolites. In total, 58% (420 unique metabolites out of 722) of the proposed metabolic biomarkers are associated with 295 molecular pathways included in the S1500þ panel. Of particular note is that measurement of the proposed MTox700þ panel would also provide information on many reliable molecular pathways that are not measured by the S1500þ panel, specifically 93 additional pathways in KEGG and 540 pathways in Reactome (see Supplementary Material 2, tab "S1500þ pathways").
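The MTox700+/S1500+ comparison described above reduces to set operations between the reliable metabolite-associated pathways and the gene-associated pathways; a minimal sketch (pathway name sets are placeholders, not the actual panel content):

```python
def compare_panels(reliable_metabolite_pathways, s1500_gene_pathways):
    shared = reliable_metabolite_pathways & s1500_gene_pathways
    metabolite_only = reliable_metabolite_pathways - s1500_gene_pathways
    gene_only = s1500_gene_pathways - reliable_metabolite_pathways
    return shared, metabolite_only, gene_only

# Toy example:
shared, m_only, g_only = compare_panels({"Pyruvate metabolism", "Sphingolipid metabolism"},
                                        {"Pyruvate metabolism", "p53 signaling"})
print(shared)   # {'Pyruvate metabolism'}
```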
Associating Analytical Assays and Reference Standards With Each Proposed Metabolic Biomarker in the MTox700+ Panel
To maximize the likelihood that the proposed MTox700þ panel can be implemented into regulatory testing, the availability of analytical assays and reference standards was assessed (see Supplementary Material 3 Ranked MTox700 panel). Reference standards can serve 2 purposes: first, they are required to achieve the highest level of confidence in metabolite identification, so-called MSI level 1 (Sumner et al., 2007); and they are required if the absolute quantification of metabolites is sought. The levels of analytical confidence in both the identification and quantification of each metabolite measured in metabolomics or targeted metabolite assay will need to be reported according to the OECD Metabolomics Reporting Framework (Harrill et al., 2021). Assay types were sourced from BASF, Bowes-44, Tox21, CTD, and multiple publications. Considering these sources, the majority of metabolites (432) have previously been measured using LC-MS, 93 metabolites detected using GC-MS, a further 164 metabolites were measured using either LC-MS or GC-MS (it was not clarified by BASF), 56 metabolites using NMR spectroscopy, and a further 77 metabolites have been measured using other analytical methods. Many metabolites will be detectable across multiple assays. In summary, 80% (578 out of 722) of the panel metabolites are measurable using at least 1 assay, with LC-MS being the most applicable. Based on our search criteria, reference standards are available for 90% (649 out of 722) of the panel metabolites.
Metabolic Biomarker Ranking
Several factors were considered to create a biomarker ranking system that prioritizes metabolites based on the relevance and reliability of each biomarker in the MTox700þ panel. These included the amount of existing information collected from multiple toxicological resources, AO sources, and pathway sources that indicated a metabolite was already used as a biomarker in toxicology; pathway consistency with the S1500þ gene panel; and practical considerations for measuring a metabolite in the laboratory (see Supplementary Material 3).
The first ranking took into account each metabolite's total coverage of 8 toxicological resources, 4 AO sources, and 4 pathway sources. The results indicated that 406 metabolites had limited existing information (scoring below 33%), generally being present in just 1 toxicological resource (typically from recent publications) and containing scarce information from AO sources and/or pathway sources. Higher coverage of resources, AO, and pathway sources was observed for 255 metabolites (medium score of 33-66%), and 61 metabolites scored highly with > 66% coverage in the resources, AO, and pathway sources.
The second ranking considered pathway consistency with S1500þ gene panel, resulting in 420 proposed metabolic biomarkers meeting the criteria.
The third ranking was derived based on the measurement feasibility for each metabolite, including assay and reference standard availability. Both of these were available for the majority of metabolites (515 of 722), with a further 197 metabolites associated with either an assay or a reference standard.
DISCUSSION
Although the publicly available S1500+ human biomarker panel has helped to drive the application of transcriptomics to predict pathway perturbations (Mav et al., 2018), no equivalent initiative has been reported to develop a metabolic biomarker panel. Yet, the need is great, as it is metabolomics that is capable of measuring downstream molecular phenotypes that more closely relate to adversity. To date, the only metabolic biomarker panel for toxicology is commercial, developed by BASF as a cornerstone of their MetaMapTox database that describes rodent responses to more than a thousand test chemicals (Van Ravenzwaay et al., 2015). The success of BASF's metabolite panel and database, applied to predict a substance's MoA and for category formation to support read-across, is evidenced by multiple publications (eg, Sperber et al., 2019). Another toxicology resource, the AOP Wiki, is freely available and hosts some metabolic KE-based AOPs, eg, AOP162 "Enhanced hepatic clearance of thyroid hormones leading to thyroid follicular cell adenomas and carcinomas in the rat and mouse" (Dellarco et al., 2006); however, this resource is still relatively small and few metabolic KEs have been documented. QIAGEN IPA is notable as it contains multiple metabolite associations with pathways and AOs, though it is primarily a biomedical database. Most molecular toxicology resources remain gene/protein focused, with only a few featuring metabolites, highlighting the importance of the information assimilated in this study. Here, a metabolic biomarker panel for toxicology has been proposed by combining knowledge from multiple toxicological resources-including existing multiplexed molecular assays, databases, and the literature.
Several challenges were encountered while developing MTox700þ, particularly the lack of consistency in classifying and naming both metabolites and metabolic pathways. First, metabolite repositories rarely delineate between drugs, dietaryderived metabolites, other metabolites arising from environmental exposure (eg, xenobiotics), and endogenous metabolites, all of which can be part of the detectable human metabolome; some resources refer to these simply as "chemicals," lacking important subclassifications. Even where attempts have been made to define an appropriate ontology (eg, in HMDB; Wishart et al., 2018), there is no single filter that allows, eg, clear distinction between drugs and some endogenous metabolites that are sometimes labeled as drugs (eg, the amino acids L-arginine and L-tryptophan, and some vitamins and hormones). Therefore, metabolites had to be manually curated to resolve these apparent conflicts. The second challenge in assimilating multiple metabolic resources was the lack of standardized names and/or identifiers for metabolites. Although this problem can be alleviated using translation tools (van Iersel et al., 2010;Wohlgemuth et al., 2010), sometimes these tools do not recognize metabolite names/identifiers leading to manual translation of the identifiers. For metabolomics to grow as a tool for assessing chemical hazards, it will be important that study authors define metabolite names and identifiers, as recently proposed in the OECD Metabolomics Reporting Framework (Harrill et al., 2021). A further difficulty encountered was incomplete metabolic names, mainly for lipids, making it impossible to identify some potential biomarkers. Similar to the difficulties encountered for individual metabolites, inconsistent molecular pathway ontologies were also a major issue when working with multiple pathway sources. Despite some pathways bearing the same name in most pathway databases (eg, glycolysis/gluconeogenesis, pyruvate metabolism, sphingolipid metabolism, etc.), the pathway ontologies remain largely inconsistent, ie, similar pathways (in terms of contents and biological function) can have alternative pathway names, additional members/reactions between members, and/or be separated into multiple subpathways. Hence, there is a substantial need to standardize pathway ontology across databases, or minimally to describe the mapping between these resources.
Another challenge arose during the initial searches for the information now contained within the MTox700+ panel due to the relative sparsity of metabolomics data and knowledge in toxicology. For example, in contrast to the well-documented associations of metabolic biomarkers with disease outcomes and/or disease-related molecular pathways (Wishart et al., 2021), metabolite associations with toxicity-specific AOs and pathways are surprisingly rare and were available only from the AOP Wiki, IPA, and some publications. Furthermore, this study revealed that the intersection of metabolites between toxicological resources is low; eg, of the 435 proposed metabolic biomarkers derived from published literature, 273 (63.6%) were not included as putative biomarkers in any of the multiplexed assays or in the database resources. This highlights the considerable importance of recent publications as a source of putative biomarkers, although the depth of investigation of a biomarker in a single publication is typically less rigorous than for biomarkers in the already-established assay panels. This in turn highlights how AO predictions derived from applying the MTox700+ panel could potentially be misinterpreted. Specifically, metabolites that have been extensively researched, such as ATP and cholesterol, are linked to multiple AOs and can serve as more universal biomarkers. However, less studied metabolites that are currently associated with a single AO can be misinterpreted as being highly specific markers linked to an adversity, yet upon further investigation these may also prove to be universal markers. This lack of knowledge could be addressed by the metabolomics community targeting such biomarkers in the MTox700+ panel in future toxicology studies. It is also important to note that a single metabolite is not an adequate representative of an AO or a pathway, as it will almost certainly participate in a number of AOs or pathways; hence, only a defined combination of metabolites is likely to be specific for an MoA and thus better suited to determine adversity.
Similar to the strategy employed here, the 2 main drivers for gene selection in the S1500þ panel were toxicological/pathological relevance and pathway representation (Mav et al., 2018). An important question still to be addressed in the emerging applications of omics technologies to regulatory toxicology is which approach(es) can deliver the minimal mechanistic information required to enable regulatory decision making, eg, whether a combination of upstream transcriptomics and downstream metabolomics is required to define a chemical's MoA and/or adversity. To support this ongoing discussion, the relationship between the S1500þ and MTox700þ panels was investigated. In addition to multiple overlapping reliable molecular pathways, which could be used in a weight-of-evidence approach to identify the MoA, subsets of both genes and metabolites each participate in unique pathways, suggesting a complementarity of the 2 molecular panels. Similar to the importance of new data for better defining the associations between metabolites and AOs, the generation of new multi-omics datasets that measure both molecular panels will help to inform the community on the relative contributions of transcriptomics and metabolomics to identifying and characterizing hazards.
Although the practical deployment of the MTox700þ panel in toxicology studies is a logical next step, this is not without challenges. As introduced above, a reference standard is required to identify each metabolite with the highest level of confidence (Sumner et al., 2007). Currently, this is not possible as only 646 of the metabolites in the panel are commercially available. Also, not all 722 metabolites will be detectable in all sample types, with subsets of the full panel applicable to different tissues and/or biofluids. For instance, metabolites extracted from the BASF multiplex assay were derived from studies on rat plasma and therefore will primarily be applicable to this sample type. Metabolites derived from the AOP Wiki, CTD, T3DB, and the published literature are applicable to a wider variety of sample types (tissues, biofluids, and/or cells), while the metabolites measured using the Bowes-44 and Tox21 multiplexed assays are of greatest relevance to in vitro cell lines.
The toxicological application of the panel may also dictate which subset of metabolites to measure, for instance, hazard identification may prioritize the measurement of metabolites associated with AOs. Irrespective of this granularity, we strongly advocate that the community attempts to measure as many of the MTox700þ metabolites as possible, to identify them confidently, and report their relative quantitative changes in response to chemical exposure, as this will increase the metabolic knowledge associated with the panel and increase its ability to predict downstream biological effects. In particular, quantitative metabolic measurements will be required to distinguish adaptive changes from adverse effects, consistent with the concept of a quantitative AOP (Conolly et al., 2017). By ranking the MTox700þ panel metabolites, it was determined that 316 metabolites (with medium or high total coverage score) have substantial toxicological relevance, and 498 metabolites have an analytical assay and a reference standard available; hence, the barrier to the community adopting at least part of the panel is relatively low and could realize a step-change in the field.
In conclusion, to facilitate the application of metabolomics data in regulatory toxicology, multiple existing toxicological resources have been interrogated-including multiplexed assays, databases, and published literature-to propose a panel of metabolic biomarkers that have the potential to predict MoA and adversity. The creation of the human-relevant MML, comprising 8658 metabolites, was an important step for enabling the management of individual metabolite lists. Selection and subsequent interrogation of the toxicological resources yielded 189 proposed metabolic biomarkers from 3 existing multiplexed assays (BASF, Bowes-44, and Tox21), 346 proposed biomarkers from 4 database resources (AOP Wiki, CTD, IPA, and T3DB), and 435 proposed biomarkers from the literature. Merging all 8 resources generated a list of 722 metabolites, representing a metabolic biomarker panel for toxicology-MTox700+, of which 578 (80%) are associated with a disease (or toxicity or AO) and 465 (64%) are associated with reliable molecular pathways. Assessing the pathway compatibility between the MTox700+ and S1500+ panels showed that 420 (58%) of the metabolic biomarkers are associated with shared reliable molecular pathways. Through the future use of this panel, it is possible that some metabolites may be removed (if they do not demonstrate sufficient predictivity) while further metabolic biomarkers will be added (discovered via untargeted or hybrid targeted-untargeted metabolomics), hence the MTox700+ panel is predicted to evolve over time. Here, we have launched this metabolic biomarker panel, with the intention to help build foundational knowledge to support the generation of molecular mechanistic data for chemical hazard assessments.
SUPPLEMENTARY DATA
Supplementary data are available at Toxicological Sciences online.
DECLARATION OF CONFLICTING INTERESTS
Professors Mark Viant and John Colbourne are employees of the University of Birmingham. They are also Founders and Directors of Michabo Health Science (MHS) Ltd., a spin-out company of the University of Birmingham. MHS also operates as a trading division of University of Birmingham Enterprise Ltd., a wholly owned subsidiary of the University of Birmingham. MHS provides scientific consultancy services in New Approach Methodologies (NAMs) specialising in 'omics technologies and computational toxicology. | 2022-02-01T06:23:06.253Z | 2022-01-30T00:00:00.000 | {
"year": 2022,
"sha1": "35708748eb1c5d7111b95a1e8fcef2bb9ede2b25",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/toxsci/advance-article-pdf/doi/10.1093/toxsci/kfac007/42540928/kfac007.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "83c5cbb33e11b74e12e822f8ff9ffa562fbd9c8a",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
251953324 | pes2o/s2orc | v3-fos-license | A new lattice Boltzmann scheme for linear elastic solids: periodic problems
We propose a new second-order accurate lattice Boltzmann scheme that solves the quasi-static equations of linear elasticity in two dimensions. In contrast to previous works, our formulation solves for a single distribution function with a standard velocity set and avoids any recourse to finite difference approximations. As a result, all computational benefits of the lattice Boltzmann method can be used to full capacity. The novel scheme is systematically derived using the asymptotic expansion technique and a detailed analysis of the leading-order error behavior is provided. As demonstrated by a linear stability analysis, the method is stable for a very large range of Poisson's ratios. We consider periodic problems to focus on the governing equations and rule out the influence of boundary conditions. The analytical derivations are verified by numerical experiments and convergence studies.
Introduction
The lattice Boltzmann method [1][2][3][4] is a numerical method that -in its native variant -is primarily used to solve fluid dynamics problems for nearly incompressible flows [5]. In contrast to alternative approaches such as the finite difference or the finite element method, the numerical scheme is not obtained by applying some local or non-local approximation to the exact differentials. Rather, the lattice Boltzmann method is motivated by a simplified microscopic model of a gas that is still capable of recovering macroscopic fluid behavior. This gas kinetic legacy comes with some advantageous properties, such as algorithmic simplicity and good scaling during parallelization. For these reasons, the lattice Boltzmann method is well suited to be run efficiently on modern massively parallel computing architectures [6].
Within a more general interpretation, the method can be viewed from a purely numerical standpoint without any connection to the kinetic theory of gases. This opens up the possibility to consider the lattice Boltzmann method with its already mentioned benefits as a numerical algorithm that can be utilized to find approximate solutions to certain classes of partial differential equations. In this spirit, lattice Boltzmann schemes solving the diffusion equation [7], shallow water equations [8], wave equation [9], Poisson equation [10], conservative phase-field equation [11] and many more problems have been proposed.
Initial ideas aimed at solving the linear elastodynamic equations with equal longitudinal and transverse wave speeds are reported by Marconi & Chopard (2003) [12]. In this work, a simple lattice Boltzmann scheme solving the wave equation is combined with a velocity-Verlet-type time integration to track the evolution of the displacement field. The distribution function is hereby physically interpreted as the interaction force between neighboring particles and an energy criterion allows to "switch off" bonds that have been loaded beyond a certain threshold. Only qualitative results are shown for a few fracture and fragmentation test cases. The approach by O' Brien et al. (2012) [13] solves the wave equations in 2D and 3D using extended velocity sets, but is also limited to equal wave speeds. In order to stabilize the method against oscillations, finite difference schemes are combined with flux limiters known from conventional fluid dynamics methods. A further extension to arbitrary Poisson's ratios and thus different speeds for the transverse and longitudinal waves is suggested by Murthy et al. (2018) [14]. However, the constitutive relation assumes hypoelastic material behavior. To solve this problem so-called crystallographic lattices are used in 3D, but the approach still relies on a finite difference approximation to calculate the local volume change. In contrast to the previous works, a mesh study with a quantitative error analysis is provided, which reveals less than first-order convergence under grid refinement. Schlüter et al. (2018) [15] propose a method to solve the elastodynamic problem under anti-plane shear deformation, which significantly simplifies the equations. The resulting scalar wave equation is solved using the lattice Boltzmann scheme presented in [9]. Furthermore, their contribution introduces formulations to handle Dirichlet-and Neumann-type boundary conditions so that physically relevant problems can be solved as well. Their method is extended to solve the general equations of elastodynamics [16] by applying a split into two wave equations (in 2D) governing the evolution of the dilation and rotation component of the displacement field. In order to combine the results of the two wave equations -each solved by a lattice Boltzmann scheme -finite difference approximations for the gradient and rotation operator are used. Along with an extension towards non mesh-conforming boundary formulations, qualitatively reasonable results for the tension test of a plate with hole are obtained.
In contrast to the previous works, Yin et al. (2016) [17] solve the quasi-static equations of linear elasticity using the lattice Boltzmann method. Because the numerical method can be viewed as an explicit scheme in time, it is unable to solve elliptic problems. For this reason, they propose to extend the target equation by a time-dependent damping term so that the lattice Boltzmann method can iterate on this modified problem until steady state is reached. Following their approach, the solution for the vector-valued displacement field is obtained using multiple distribution functions, with each distribution function governing the evolution of one component of the displacement field. For the evolution of each distribution, the divergence of the displacement field is required as input for the local equilibrium. Although not explicitly stated in the work, it is assumed that this quantity is computed using a finite difference stencil. The contribution introduces two schemes for 2D and 3D problems, and convergence studies with simple numerical test cases demonstrate approximately second-order accuracy of the method. Although numerical examples with Dirichlet-type boundary conditions are presented, no information on the boundary formulation is provided.
The main drawbacks of the previously cited studies can be summarized as follows:
• All studies use finite difference stencils either to compute spatial gradients or to perform time integration. At first glance this combination of the lattice Boltzmann method with finite difference approximations may not seem problematic; however, the high computational efficiency achieved by optimized implementations [18][19][20] relies on the very beneficial memory access pattern resulting from the native lattice Boltzmann method. Moreover, the overall order of accuracy is determined by the least accurate step, so that the use of inexpensive low-order finite difference approximations leads to a reduction of the total accuracy. For these reasons, it is generally preferred to avoid computations involving finite difference stencils.
• In some studies [13,14], the elastodynamic equations are recovered using extended velocity sets, which involve more than only next-neighbor communication. This is expected to reduce the achievable computational efficiency of the implementation. Additionally, when trying to enforce physically consistent boundary conditions, the larger velocity stencils most likely pose significant challenges. Potentially for this reason, the examples shown in [13,14] involve only periodic domains.
• In [17,16], multiple distribution functions must be solved for to determine the vector-valued solution field. However, for improved numerical efficiency and reduction of memory requirements, solving for a single distribution function is of paramount importance.
In summary, so far no lattice Boltzmann scheme can accurately solve the dynamic or quasi-static equations of linear elasticity without recourse to finite difference approximations and/or extended velocity sets or multiple distribution functions.
In this work, we design a novel second-order consistent lattice Boltzmann scheme solving the quasi-static equations of linear elasticity, which uses a single distribution function with a standard velocity set and does not resort to any finite difference approximations. We demonstrate that the method is stable and accurate for a large range of Poisson's ratios. Within the scope of the present work, we focus on the approximation of the governing equations in the bulk and rule out the influence of boundary conditions by considering periodic problems. All derivations and numerical examples are carried out in 2D in order to keep the expressions manageable. This manuscript is organized as follows. In the following Section 2, the target equation of quasi-static linear elasticity with the extension by a damping term is introduced. The derivation of the novel lattice Boltzmann scheme is outlined in Section 3, followed by an investigation of the leading-order error in Section 4 and a stability analysis in Section 5. Section 6 describes how initial conditions and periodic boundary conditions are handled. The final Section 7 shows the results of numerical convergence studies with the purpose of verifying the analytical derivations in the previous sections using the method of manufactured solutions [21].
Target problem of linear elasticity
This section provides a brief introduction to the equations of linear elasticity in 2D. In a next step, the quasi-static equations are endowed with a damping term so that the modified problem can be solved by the lattice Boltzmann method. Finally, we present a consistent non-dimensionalization of the problem.
Modified quasi-static linear elasticity in 2D
The target equation of 2D linear elasticity under the quasi-static assumption is obtained by combining three ingredients:
• the linear momentum balance equation, where σ denotes the second-order Cauchy stress tensor and b the external body load;
• the linear elastic isotropic constitutive law, where ε denotes the second-order infinitesimal strain tensor, λ and µ are known as Lamé parameters (µ is also called shear modulus), and I is the second-order unit tensor; and
• the linear kinematic relations, where u is the displacement field.
All three equations are valid in each point of the domain Ω ⊂ R 2 . Note that we assume this domain to lie in 2D space and not to be a 2D manifold embedded in 3D space, as more common in the mechanics literature, e. g. with the plane strain or plane stress assumption. Accordingly, the relations of the Lamé parameters to Young's modulus E and Poisson's ratio ν read as follows where we also introduced the bulk modulus K. The 2D assumption is not critical for the derivation of the numerical scheme, which can be carried out for the plane strain and plane stress cases in an analogous fashion; it is chosen in this paper as it leads to the most straightforward relations. It is clear from Eq. (4) 3 that in this case incompressible material behavior is obtained for ν = 1. The combination of Eqs. (1)-(3) delivers the governing equation in the primary variable u also known as the Navier-Cauchy equation. As proposed by [17], the elliptic equation (5) needs to be extended by some time-dependent damping term to be solved by the lattice Boltzmann scheme. To this end, Eq. (5) is replaced by the following time-dependent problem: Differently from [17], a damping constant κ is also introduced for dimensional consistency and to obtain a more straightforward control over the temporal evolution of the solution. Because of this extension, the domain on which the new relation is defined comprises an open set in space Ω and a time segment [0, t f ] with final time t f . Assuming that the body load is constant in time, the solution to Eq. (6) evolves towards a steady state as t f → ∞. Once steady state is reached, u naturally fulfills static equilibrium, i. e. Eq. (5). In practice, t f is chosen large enough such that ∂ t u < tol in some norm and for a given tolerance tol.
In this initial work, Eq. (6) is solved on a periodic domain. For simplicity, it is assumed that the periodicity is aligned with the primary lattice directions, spanned by the unit vectors e x and e y . The periodic lengths are assumed to be L x and L y so that the solution only needs to be computed on a L x × L y subset of R 2 . Therefore, the solution to Eq. (6) is subject to the additional constraint Finally, Eq. (6) is furnished with the following initial condition:
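The display equations referenced above as Eqs. (1)-(3) and (5)-(8) are not reproduced in this extracted text; the following LaTeX block restates them in the standard form implied by the surrounding definitions. The symbol u_0 for the prescribed initial displacement field is an assumption of this reconstruction.

```latex
% Quasi-static linear elasticity (Eqs. (1)-(3)):
\nabla \cdot \boldsymbol{\sigma} + \boldsymbol{b} = \boldsymbol{0}, \qquad
\boldsymbol{\sigma} = \lambda\,\mathrm{tr}(\boldsymbol{\varepsilon})\,\boldsymbol{I} + 2\mu\,\boldsymbol{\varepsilon}, \qquad
\boldsymbol{\varepsilon} = \tfrac{1}{2}\left(\nabla \boldsymbol{u} + (\nabla \boldsymbol{u})^{\mathsf T}\right).

% Navier-Cauchy equation (Eq. (5)) and its damped extension (Eq. (6)):
(\lambda + \mu)\,\nabla(\nabla \cdot \boldsymbol{u}) + \mu\,\Delta \boldsymbol{u} + \boldsymbol{b} = \boldsymbol{0}, \qquad
\kappa\,\partial_t \boldsymbol{u} = (\lambda + \mu)\,\nabla(\nabla \cdot \boldsymbol{u}) + \mu\,\Delta \boldsymbol{u} + \boldsymbol{b}.

% Periodicity constraint (Eq. (7)) and initial condition (Eq. (8)):
\boldsymbol{u}(\boldsymbol{x} + L_x \boldsymbol{e}_x, t) = \boldsymbol{u}(\boldsymbol{x} + L_y \boldsymbol{e}_y, t) = \boldsymbol{u}(\boldsymbol{x}, t), \qquad
\boldsymbol{u}(\boldsymbol{x}, t = 0) = \boldsymbol{u}_0(\boldsymbol{x}).
```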
Non-dimensionalization
For the interpretation of the simulation results independently from the physical units, it is customary to state the governing equation in dimensionless form by scaling with the characteristic length L, time T and mass M, leading to the respective dimensionless quantities (denoted here by a tilde). Note that as a result of the division by the damping coefficient κ the explicit appearance of the reference mass M is removed. Note also that the displacement field is normalized with the length scale U, and because the problem is linear this scaling factor can be chosen arbitrarily. Introducing the above non-dimensionalization into the governing equation, the periodicity constraint and the initial conditions, i. e. Eqs. (6)-(8), yields the dimensionless problem, where L̃_x = L_x/L and L̃_y = L_y/L and the differential operators are dimensionless as well. The dimensionless Cauchy stress σ̃ is then easily obtained by evaluating the constitutive law (2) with the dimensionless Lamé parameters and dimensionless strains.
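As the display equations (9)-(12) are not reproduced here, the following is one consistent reading of the stated scalings (obtained by dividing Eq. (6) by κU/T); the exact grouping in the original may differ.

```latex
\tilde{\boldsymbol{x}} = \frac{\boldsymbol{x}}{L}, \quad
\tilde{t} = \frac{t}{T}, \quad
\tilde{\boldsymbol{u}} = \frac{\boldsymbol{u}}{U}, \quad
\tilde{\lambda} = \frac{\lambda\, T}{\kappa L^{2}}, \quad
\tilde{\mu} = \frac{\mu\, T}{\kappa L^{2}}, \quad
\tilde{\boldsymbol{b}} = \frac{\boldsymbol{b}\, T}{\kappa U},

\partial_{\tilde t}\, \tilde{\boldsymbol{u}}
  = (\tilde{\lambda} + \tilde{\mu})\, \tilde\nabla (\tilde\nabla \cdot \tilde{\boldsymbol{u}})
  + \tilde{\mu}\, \tilde\Delta \tilde{\boldsymbol{u}} + \tilde{\boldsymbol{b}},
\qquad \tilde{\boldsymbol{x}} \in \tilde\Omega, \ \tilde t \in [0, \tilde t_f].
```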
Lattice Boltzmann scheme for linear elasticity
In this section, the lattice Boltzmann scheme used to solve the target equation is derived. For this purpose, some notation conventions and basic definitions are introduced. After defining the structure of the numerical scheme, the asymptotic analysis technique is applied to understand how the target equation of linear elasticity can be solved by the scheme. In particular, we carry out a consistent derivation of the relaxation rates and equilibrium moments, which determine the governing equation being solved by the method. Finally, it is shown how the approximate Cauchy stress field can be retrieved from the numerical algorithm with minimal post-processing effort.
Basic definitions and notation conventions
The primary solution quantity of the lattice Boltzmann method is the distribution function f , thus indicating the connection with the Boltzmann equation (see [5] for an exhaustive review). f is a dimensionless function of space x, time t and microscopic velocity ξ, i. e. f = f (x, t, ξ) 1 . In the discretized setting of the numerical method however, the distribution function is retained only at a finite number of points in velocity space c i j and the distribution function evaluated at the discrete velocity c i j times some corresponding weight W i j is referred to as the population f i j , i. e.
Moreover, the domain of interest in 2D is discretized by a square lattice with a uniform grid spacing ∆x, whereas the time interval of interest is discretized with a uniform time step ∆t. Adopting the notation of [22], the two indices i and j denote the unit velocity components in x-direction and in y-direction, respectively; in other words, with the introduction of the scalar lattice speed c = ∆x/∆t, the microscopic velocity of the population f i j is defined as c i j = ice x + jce y . Figure 1 illustrates the most widely used set of microscopic velocities in 2D, which is known as D2Q9 [23] as it includes nine velocities on a 2D lattice, i. e. i, j ∈ {−1, 0, 1}. Note that the velocity of the so-called rest population c 00 = 0 is not visible in this depiction. This is also the set of velocities which will be used in this paper. Hence, the lattice Boltzmann method computes these nine populations f i j , i, j ∈ {−1, 0, 1} at each point of the lattice for each time step. The computation of the temporal evolution of the populations involves the following two stages in alternation: the collision stage, where the populations f i j locally interact with each other (resulting in a change of their values), and the streaming stage, where each population f i j travels in the direction determined by its microscopic velocity c i j (see Figure 2). Both stages will be constructed and described in the following sections. Note that the definition of the discrete set of microscopic velocities on the lattice permits an exact shift of the populations in the streaming stage from one point of the lattice to another without the need for interpolation. For a given time, once the populations are known at each point of the lattice, it is possible to compute their so-called raw countable moments as follows 2 These are by construction dimensionless. Later on, we will identify some of these moments with the dimensionless solution fields appearing in the governing Eq. (10) [22,24]. The so-called order of the moment m ab is given by a + b.
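For reference, the population and raw countable moment definitions alluded to above (Eqs. (14) and (15)) take the following standard form, consistent with the weighting by W_ij and with the summation over i^a j^b used later in the asymptotic analysis:

```latex
f_{ij}(\boldsymbol{x}, t) = W_{ij}\, f(\boldsymbol{x}, t, \boldsymbol{c}_{ij}), \qquad
m_{ab}(\boldsymbol{x}, t) = \sum_{i=-1}^{1}\,\sum_{j=-1}^{1} i^{a}\, j^{b}\, f_{ij}(\boldsymbol{x}, t),
\qquad a, b \in \mathbb{N}_0 .
```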
It is important to note that the size of the velocity set and thus the number of populations f i j governs the number of independent moments that can be obtained. For example, in the case of the D2Q9 velocity set where i, j ∈ {−1, 0, 1}, it can be easily verified that nine independent raw countable moments exist: {m 00 , m 10 , m 01 , m 11 , m 20 , m 02 , m 12 , m 21 , m 22 }, 1 In the Boltzmann equation, f represents the probability of finding a particle at position x featuring the velocity ξ at a given instant of time t. In practice, normalization of the probability distribution to unity is usually omitted such that, strictly speaking, f is a probability density times an arbitrary constant. 2 These are the discrete and dimensionless counterparts of the statistical moments of the distribution function f , defined as integrals over the velocity space. as the computation of all other higher-order moments using Eq. (15) leads to expressions that are identical to the ones contained in the set shown above, e. g. m 30 = m 10 or m 14 = m 12 . Because these moments are identified with the macroscopic variables of the physical problem being solved by the method, the size of the velocity set limits how complex the target problem can be. On the other hand, increasing the velocity set, as in the case of the extended velocity sets mentioned in the introduction, comes with drastically increased memory requirements and increased computational cost. Consequently, the velocity set should be chosen only as large as necessary for a given application. For the present study, as we will see later, the standard D2Q9 [23] velocity set provides enough independent moments to solve the target problem, i. e. Eqs. (10)- (12). A significant advantage of this standard velocity set is that it involves only next-neighbor node communication, thus enabling optimized implementation strategies [18][19][20]. In Section 3.3 it will be shown that the rest population f 00 can be removed for the present application, which leads to a further reduction in memory requirements and computational cost.
Structure of the lattice Boltzmann method
In this section, we describe the structure of the lattice Boltzmann method used in this work, including the two computational stages of collision and streaming. We employ the standard algorithm [25,2] along with the so-called multiple-relaxation-time (MRT) collision operator [26]. The resulting structure is general and can be used to solve different target problems, but some quantities which appear within this structure will be determined in Section 3.3 in such a way that the numerical scheme solves the target equation (10) with the desired values of the material parameters and of the applied body loads. The possibility to adjust independently both material parameters of linear elasticity is the reason for the choice of the MRT collision operator.
Collision
As introduced earlier, at the collision stage the populations f i j locally interact with each other, hence post-collision populations are computed from the pre-collision ones. As follows, we outline the structure of this computation using a variant of the MRT collision operator [26].
As the name indicates, the MRT collision operator involves multiple relaxation times (or equivalently relaxation rates -quantities to be introduced shortly). This allows to model a physical behavior involving multiple independent problem parameters, such as shear and bulk modulus in the case of linear isotropic elasticity. A needed preliminary step is that the populations are first converted into a suitable set of moments. With respect to the standard MRT scheme introduced in [26], this paper uses a different moment set C based on the raw countable moments (see Eq. (15)) as defined below C = {m α | α ∈ I = {00, 11, s, d, 12, 21, 22}}, where I denotes the corresponding index set. In the moment set the elements with numerical indices are the already introduced raw countable moments, whereas the definition of the moments m s and m d will be provided below. For each of the moments in C the collision rule reads where m * α denotes the post-collision value of m α . This expression implies a relaxation of each moment in C towards its local equilibrium value m eq α , where the rate of this relaxation is controlled by ω α . As will become clear in Section 3.3, the governing equation being solved by the method depends on the choice of the relaxation rates ω α and of the local equilibrium moments m eq α ; in particular, the material parameters of the target problem can be controlled by adjusting the relaxation rates ω α appropriately. In Section 3.3.2, we will derive the concrete expressions of m eq α and ω α to obtain the target equation (10) with the desired values of the material parameters.
Note that the collision is performed at each lattice node for each time step and involves only local information, which is one of the advantageous properties of the method. To fully characterize the collision stage and to explain why the moments m 01 , m 10 , m 02 and m 20 are missing in C whereas "new" indices s and d appear, a few more details are discussed in the following. Bared and collision moments. The second remark involves some additional moment definitions, useful for the following developments. Apart from the pre-and post-collision moments, the so-called bared momentsm α and collision moments Ω α with α ∈ I are defined as followsm with inverse relations These two types of moments are mainly introduced for conceptual purposes, especially during the asymptotic analysis outlined in Section 3.3, and do not explicitly appear during the actual simulations. The bared moments can be interpreted as the intermediate state of the moments during collision and some of them will be later identified with solution quantities of the target problem. On the other hand, the collision moments are mainly employed to enable the recursive definition of the asymptotic analysis (see Section 3.3). For the following developments, it is useful to transform the collision rule in Eq. (17) into a version that only involves bared and collision moments using Eqs. (21), (22). The result reads where the relaxation time τ α has been defined as with inverse relation Note that all efficient lattice Boltzmann implementations avoid storing both the pre-and post-collision populations so that it is not advisable to compute the bared moments by their definition in Eq. (19). Moreover, collision moments are never computed, so also Eq. (23) cannot be used to compute the bared moments. Instead, combining the two versions of the collision rule given in Eqs. (17) and (23) yields This relation enables to compute the bared moments based only on the pre-collision moments and their equilibrium values, which are both readily accessible during computations.
First-order moments. Lastly, the special role that the first-order moments m 10 and m 01 play during collision is discussed. Since, as will be shown in Section 3.3, these two moments are identified with the primary variables of the governing equation the scheme is solving for (i. e. with the two components of the displacement field), they are not contained in C, because they are not relaxed towards some local equilibrium state. However, during the collision step a forcing is applied to these moments in order to include the body load term of the target problem (see Eq. (10)).
In accordance with the interpretation of the lattice Boltzmann method as a Strang splitting scheme, the application of the external forcing is split equally into two sub-steps of the collision step [27]. The first half of the forcing is applied to m 10 and m 01 as given by Eqs. (27) and (28) to obtain the bared first-order moments, see Figure 3. Here g x and g y represent the x- and y-components of a forcing term that correctly applies the body load appearing in the target equation. The consistent expressions for them will be derived in Section 3.3. Note that, as will be found in Section 3.3, some of the local equilibrium moments depend on the first-order moments; these local equilibrium moments are computed using the bared first-order moments from (27) and (28), and not m 10 and m 01 (see also Figure 3 and Algorithm 1). After all other moments have undergone the collision as given by Eq. (17), the second half of the forcing term is applied to the bared first-order moments to obtain the post-collision values.
Post-collision populations. Once all post-collision moments are known, the post-collision populations f * i j , which are required for the next streaming step, are computed by solving the corresponding moment relations for f * i j .
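To make the collision stage concrete, the following Python sketch carries out one moment-space collision at a single lattice node along the lines described above. It is an illustrative sketch, not the paper's implementation: the local equilibrium moments and the concrete forcing expressions are only derived in Section 3.3 and are therefore left as inputs, the equal split of the forcing into two halves of magnitude g/2 is an assumed reading of the text, and the dictionary-based data layout is chosen purely for readability.

```python
def collide_node(moments, equilibrium_fn, omegas, g_x, g_y):
    """One MRT collision at a single node, in moment space (illustrative sketch).

    moments        : dict of pre-collision moments ('m00', 'm10', 'm01', 'm11',
                     'ms', 'md', 'm12', 'm21', 'm22')
    equilibrium_fn : callable returning the dict of local equilibrium moments for the
                     relaxed set C, given the bared first-order moments (cf. Table 1)
    omegas         : dict of relaxation rates for the relaxed set C
    g_x, g_y       : forcing components representing the body load
    """
    post = dict(moments)

    # First half of the forcing yields the bared first-order moments.
    m10_bar = moments["m10"] + 0.5 * g_x
    m01_bar = moments["m01"] + 0.5 * g_y

    # Local equilibria are evaluated with the bared first-order moments.
    m_eq = equilibrium_fn(m10_bar, m01_bar)

    # Relax every moment in C towards its equilibrium: m* = m - omega * (m - m_eq).
    for key in ("m00", "m11", "ms", "md", "m12", "m21", "m22"):
        post[key] = moments[key] - omegas[key] * (moments[key] - m_eq[key])

    # Second half of the forcing gives the post-collision first-order moments.
    post["m10"] = m10_bar + 0.5 * g_x
    post["m01"] = m01_bar + 0.5 * g_y
    return post
```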
Streaming
The second stage of the lattice Boltzmann algorithm is the streaming step, see also Algorithm 1. It simply involves propagating each post-collision population f * i j , resulting from the collision step, to a neighboring node, where the direction of propagation is given by the microscopic velocity c i j , i. e. f i j (x + c i j ∆t, t + ∆t) = f * i j (x, t). Because the grid spacing and time step are related to each other by the microscopic lattice speed c = ∆x/∆t, streaming implies that the populations "jump" from one lattice node to another, which is one of the reasons for the algorithmic simplicity of the method.
Algorithm 1 (one time step of the lattice Boltzmann scheme).
Input: populations and forcing at the current time step, f_ij(x, t), g_x(x, t) and g_y(x, t).
Output: populations at the next time step, f_ij(x, t + ∆t), and the numerical solution of the target problem at the current time step.
For n = 1 to nNodes (loop over all nodes): compute the local equilibrium moments (see Table 1), perform the collision of the moments including the split application of the forcing, and transform back to the post-collision populations; then stream all post-collision populations to their neighboring nodes.
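A schematic driver loop corresponding to Algorithm 1 might look as follows. Here collide, stream and displacement are placeholder callables standing in for the stages described above, and the convergence check on the displacement field is an illustrative choice rather than the paper's stopping criterion.

```python
import numpy as np

def run_to_steady_state(f, collide, stream, displacement, tol=1e-8, max_steps=100_000):
    """Schematic pseudo-time loop: collide locally, stream globally, and stop once
    the displacement field (first-order moments) no longer changes."""
    u_old = displacement(f)
    for step in range(1, max_steps + 1):
        f = stream(collide(f))          # one lattice Boltzmann time step
        u_new = displacement(f)
        if np.max(np.abs(u_new - u_old)) < tol:
            return f, step
        u_old = u_new
    return f, max_steps

# Trivial usage example with dummy operators acting on a small population array.
dummy = lambda f: f
f0 = np.zeros((3, 3, 8, 8))
_, steps = run_to_steady_state(f0, dummy, dummy, displacement=lambda f: f.sum(axis=(0, 1)))
print(steps)  # converges immediately for the dummy operators
```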
Construction of the scheme using asymptotic analysis
The main goal of this section is the systematic determination of the equilibrium moments m eq α , the relaxation rates ω α , and the forcing components g x and g y , such that the general lattice Boltzmann scheme in Section 3.2 (see Figure 3 and Algorithm 1) solves the target equation (10) with the desired values of material parameters and body loads. In order to analyze the lattice Boltzmann scheme of Section 3.2, we apply the asymptotic expansion technique [28][29][30], more specifically, the variant presented in [24]. First, the derivation of the asymptotic expansion from [24] is briefly outlined. Then we discuss the identification of the equilibrium moments, relaxation rates and forcing terms for the specific target problem of linear elasticity.
Derivation of the recursive asymptotic expansion
In the following, we provide an outline of the main steps needed to obtain the recursive asymptotic expansion that will be used in the remainder of this work. The presentation is mainly adopted from [24], with slight modifications that do not affect the final expression. The starting point of the asymptotic expansion is the streaming expression in Eq. (32), shifted backwards in time by half a time step (Eq. (33)). Next, it is assumed that the populations at pre- and post-collision state are regular enough so that we can perform the Taylor series expansion of Eq. (33) around (x, t), leading to the series in Eq. (34), which involves a summation over indices m, n, l ∈ N_0. In order to facilitate the physical interpretability of the asymptotic expansion, this expression is transformed into moment space by pre-multiplying it with i^a j^b and performing the summation over i and j. Additionally, the bared and collision moments defined in Eqs. (19) and (20) are introduced for more concise notation. The result of these algebraic modifications is given in Eq. (35). Subsequently, the expression is fully nondimensionalized by scaling the time and spatial derivatives with the reference length L and reference time T, which were introduced in Section 2. In order to obtain only a single smallness parameter ε > 0 that needs to be considered during the asymptotic expansion, a relation between ∆t and ∆x = c∆t is introduced. Because the target equation has a similar structure as the diffusion equation, the uniform grid spacing ∆x and the time step ∆t are related according to the so-called diffusive scaling, i. e. ε² ∼ (∆x)² ∼ ∆t [31]. Accordingly, the grid spacing and time step are expressed as ∆x = εL and ∆t = ε²T in Eqs. (36) and (37). It will be shown at the end of this section that the diffusive scaling leads to the property that the relaxation rates governing the physical parameters of the target equation do not scale with the smallness parameter. Thus, the limiting process, i. e. ε → 0, with constant relaxation rates does not alter the physical problem being solved by the method. Substitution of Eqs. (36)-(37) into Eq. (35) yields the intermediate result in Eq. (38). Next, a regular expansion ansatz involving the same smallness parameter ε is assumed to hold for the populations at pre- and post-collision state, which naturally also applies to all types of moments through their definitions (see Eqs. (15), (31), (19) and (20)). Clearly, all higher-order expansion coefficients vanish during the limiting process ε → 0. Therefore, if the method is consistent, the zeroth-order expansion coefficient, i. e. that for q = 0, contains the exact solution and all higher-order terms constitute the numerical error. In order to increase the consistency order to, e. g., second order, the expansion coefficient with q = 1 needs to vanish, so that the error scales with ε². Thus, the asymptotic expansion derived in this subsection will be used to establish consistency, and the following Section 4 will identify conditions so that the first-order error contributions vanish. Introducing the regular expansion ansatz into Eq. (38) results in Eq. (40), which now involves a summation over m, n, l, q ∈ N_0. Because ε > 0 is arbitrary, Eq. (40) holds only if the expressions vanish individually at each order in the smallness parameter. Therefore, in order to investigate the relation at some arbitrary order r, the summation in Eq. (40) is evaluated with the additional condition 2m + n + l + q = r.
Continuing with the expression obtained by this evaluation, the final expression is established by isolating the rth-order expansion coefficient of the collision moment on the other side of the equation and requiring m + n + l ≠ 0 under the sum [24]; the result is Eq. (41). Combining Eq. (41) with the collision rule in Eq. (23), rewritten for each expansion order q, we obtain a recursive definition for Ω^(r)_ab with arbitrary a, b, r ∈ N_0 that terminates with Ω^(0)_ab = 0. The final expression obtained by this recursive definition involves only equilibrium moments m^(q)eq_α and relaxation times τ_α, which will be chosen in the next subsection such that the target Eq. (10) is solved by the method in the bulk.
In order to properly carry out the collisions of the second-order moments that are split into their spherical and deviatoric parts, the following additional step is employed: whenever m̄^(q)_20 or m̄^(q)_02 appears with arbitrary q, the moment is replaced by its volumetric and deviatoric components using the relations in Eq. (18). Afterwards, the collision rule in Eq. (23) is applied, followed by a back-transformation into raw moments. With this additional step, Eq. (41) can also be applied to the split second-order moments.
Identification of the equilibrium moments, relaxation rates and forcing terms
An important design step during the construction of a new lattice Boltzmann scheme is the identification of some of the moments with physical quantities of the target equation. In this case the components of the dimensionless displacement field ũ are identified with the first-order moments. Because there are always as many first-order moments as there are space dimensions considered by the lattice Boltzmann velocity set, all components of the vector-valued displacement field solution can be recovered using a single distribution function, which was one of the major goals outlined in the introduction. Combining this choice with the asymptotic expansion of the lattice Boltzmann scheme leads to the identifications m̄^(q)_10 = u^(q)_x and m̄^(q)_01 = u^(q)_y in Eqs. (42) and (43), where u^(q)_x and u^(q)_y denote the qth-order expansion coefficients of the approximate dimensionless displacement field components (for which we do not use the superposed ~ symbol to avoid overloading the notation). Note that the bared first-order moments, as introduced in Eqs. (27) and (28), are used for the identification.
In the following, using the asymptotic expansion, we determine the generic form of the governing equations of the first-order moments that results from the lattice Boltzmann method with the MRT collision operator. In a subsequent step, conditions for the equilibrium moments and relaxation rates are identified so that the target Eq. (10) is recovered. This is demonstrated here by investigating the behavior of the x-component of the approximate dimensionless displacement field as an example. To this end, the recursive definition of the asymptotic expansion (Eq. (41)) is evaluated with a = 1 and b = 0. An analogous derivation can be performed for the y-component as well; however, this does not yield any new information and is therefore not shown here. Evaluating Eq. (41) at the zeroth and first order, i. e. r = 0 and r = 1, yields the relations in Eqs. (44) and (45). These relations do not include any quantity involved in the target equations, and the only guidance they provide in designing the method is that they have to be fulfilled. This is the case if Ω^(0)_10 = Ω^(1)_10 = 0, which requires that the equilibrium second-order moments are constant to zeroth order in ε.
The next higher-order expression for the first-order moment yields an expression describing the leading-order equivalent partial differential equation being solved by the generic scheme. To this end, the recursion is evaluated with r = 2. Furthermore, the left-hand side of Eq. (41) is expanded using Eqs. (27) and (29).
Note that Ω^(r)_ab is always under our control, as it represents the difference between post- and pre-collision moments. The resulting expression, obtained using the relationships in Appendix A, is given in Eq. (47), where the first-order moments have already been replaced using Eqs. (42) and (43).
Comparing the governing equation of the leading-order expansion coefficient with the x-component of the target immediately reveals a few requirements that the equilibrium moments need to satisfy: • Firstly, the third-order equilibrium moments need to be proportional to the displacement solution. This is achieved by setting them proportional to the displacement components, where the proportionality constant θ has been introduced. As will be discussed later, this constant is typically set to a specific value to obtain isotropic behavior of the lattice.
• Secondly, the first spatial derivatives of the second-order equilibrium moments m^(1)eq_s, m^(1)eq_d and m^(1)eq_11 need to vanish, because no such derivatives appear in the target equation. The most straightforward solution to achieve this is setting m^eq_s = m^eq_d = m^eq_11 = 0.
Note that this choice also trivially satisfies the relation in Eq. (45). Introducing these choices into Eq. (47) leads to a simplified expression; comparing this expression with Eq. (48) yields the conditions on the relaxation times given in Eqs. (53)-(55). • Finally, the forcing term of the lattice Boltzmann scheme has to be set as g^(2)_x = LU^(-1) b̃_x.
Performing the same steps starting with Eq. (41) and r = 2, a = 0, b = 1 leads to analogous results for the equilibrium moments and relaxation rates, and to g^(2)_y = LU^(-1) b̃_y. Altogether, the previous analysis shows that, with the above choices of equilibrium moments, relaxation times and forcing terms, the leading-order expansion coefficient u^(0)_x solves the target equation in the bulk of the domain as stated in Eq. (10).
A few final remarks are pointed out concerning the yet unspecified equilibrium moments and relaxation rates, as well as the parameter θ, as follows: • The zeroth-order moment m 00 does not influence the leading-order physics so that its evolution does not need to be tracked. As a result, the so-called rest population f 00 can be removed, which reduces the memory requirements of the scheme by 1/9.
• The equilibrium value of the fourth-order moment m_22 is yet unknown, because it has no impact on the leading-order solution. The following Section 4 will show that a second-order consistent scheme is achieved by choosing m^eq_22 = 0. • A straightforward and robust choice for all relaxation rates that are not involved in Eq. (47), and hence do not affect the physics of the solution, is to set ω_12 = ω_21 = ω_22 = 1 [22], leading to τ_12 = τ_21 = τ_22 = 1/2. By Eq. (17), this means that the related moments are set to their equilibrium value during each collision.
• It can be shown [32] that there exist rotations which leave the velocity set invariant and which map the second-order moments m_11 and m_d onto each other.
Therefore, these two moments represent exchangeable physical quantities and should hence, by physical intuition, relax with the same rate. As a result, ω 11 = ω d and equivalently τ 11 = τ d need to hold, which requires setting θ = 1/3. However, this condition is not strictly necessary, and an equivalent behavior of the target equations to leading order can be obtained by a different choice of θ ∈ (0, 1). In this case the relaxation rates ω 11 and ω d need to take on distinct values as given by Eqs. (53) and (54). For the remainder of the work, the standard value is assumed, i. e. θ = 1/3.
A summary of the equilibrium moments and relaxation times for all moments involved in the collision is provided in Table 1. To conclude this section, let us discuss an important consequence of the choice of diffusive scaling using Eqs. (53)-(55). Since all three equations lead to analogous conclusions, we will consider Eq. (53). Let us transform the right-hand side of Eq. (53) back into physical units. To this end, the reference length L and reference time T are related to the lattice spacing ∆x and time step size ∆t through Eqs. (36) and (37), respectively, which yields Eq. (57). Comparing the first and the last terms in Eq. (57) shows that the material parameter in physical units, µ/κ, is unaffected by the smallness parameter ε for a constant relaxation time τ_11. This implies that diffusive scaling keeps the target physical problem being solved unchanged during the limiting process ε → 0 with constant relaxation rates. This property is of paramount importance because it largely simplifies the asymptotic analysis, which would otherwise require an asymptotic expansion of τ_α such that Eq. (23) would no longer be valid in this simple form.
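As a small numerical illustration of the diffusive scaling quoted above (∆x = εL, ∆t = ε²T), the snippet below shows that refining ε reduces the grid spacing linearly but the time step quadratically.

```python
L, T = 1.0, 1.0          # reference length and time
for eps in (0.1, 0.05, 0.025):
    dx, dt = eps * L, eps**2 * T   # diffusive scaling: dx ~ eps, dt ~ eps^2
    print(f"eps={eps:6.3f}  dx={dx:7.4f}  dt={dt:9.6f}")
```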
Cauchy stress solution
Apart from the displacement field, there is generally great interest in the Cauchy stress field. In the following, we demonstrate that the stress components are directly retrieved from the bared second-order moments, with no need for further post-processing calculations. Using the asymptotic analysis in Section 3.3, the expansion of the second-order moments takes the form given in Eq. (60). Note that the bared second-order moments are efficiently calculated with Eq. (26). A comparison of Eq. (60) with the Cauchy stress relation in Eq. (13), together with the condition for the relaxation rate in Eq. (53), reveals the identification of the dimensionless shear stress with the bared moment m̄_11 (Eq. (61)). In Section 4 it will be shown that the method can be made second-order consistent in the displacement field solution. This means that u^(1)_x and u^(1)_y vanish, and by Eq. (60) second-order consistency for the numerical approximation of the shear stress σ̃_xy is established as well (Eq. (62)). Following analogous steps for the other Cauchy stress components yields analogous identifications.
Altogether, this section showed that the components of the Cauchy stress field can be obtained from simple algebraic transformations involving quantities already present during the collision stage of the algorithm (see Algorithm 1). Another important result is that the numerical solution of the Cauchy stress components is second-order consistent if the displacement solution is second-order consistent as well. In comparison, using the finite element method as discretization scheme for the linear elasticity equation (and assuming sufficient regularity of the exact solution), a linear polynomial ansatz for the approximate solution leads to second-order consistency for the displacement field but only first-order consistency for the stress components.
It is now evident that the spherical-deviatoric decomposition of the dimensionless stress tensor can be expressed in terms of m̄_s and m̄_d, which explains their designation as spherical and deviatoric moments, respectively. A summary of all dimensionless solution fields obtained from the lattice Boltzmann scheme is provided in Table 2.
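To illustrate that the stress recovery is a purely algebraic post-processing of the bared second-order moments, a hypothetical sketch is given below. The proportionality factors c_shear, c_sph and c_dev stand in for the constants fixed by Eq. (61) and the analogous relations for the other components, together with the back-scaling to physical units, none of which are reproduced in this excerpt; the split of the normal stresses into a spherical part and a deviatoric difference is likewise an illustrative assumption.

```python
def stress_from_moments(m11_bar, ms_bar, md_bar, c_shear, c_sph, c_dev):
    """Hypothetical sketch: Cauchy stress components from bared second-order moments.

    The factors c_shear, c_sph and c_dev are inputs here because the exact
    expressions are not reproduced in this excerpt.
    """
    sigma_xy = c_shear * m11_bar     # shear stress from the bared moment m_11
    sigma_sph = c_sph * ms_bar       # spherical (mean normal stress) part
    sigma_dev = c_dev * md_bar       # deviatoric normal-stress difference (assumed sigma_xx - sigma_yy)
    sigma_xx = sigma_sph + 0.5 * sigma_dev
    sigma_yy = sigma_sph - 0.5 * sigma_dev
    return sigma_xx, sigma_yy, sigma_xy

print(stress_from_moments(0.01, -0.02, 0.005, c_shear=1.0, c_sph=1.0, c_dev=1.0))
```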
Leading-order error investigation
The asymptotic expansion technique can be used not only to design the lattice Boltzmann scheme so that it solves the equations of linear elasticity; by continuing the expansion to higher orders, conditions to obtain a higher-order consistent method can also be identified. This section presents conditions to remove the first-order error contribution so that a second-order accurate method is achieved. As a result, the next higher-order contribution of the expansion takes on the role of the leading-order error. Because this error term can generally not be removed, its influence on the solution accuracy is investigated and measures to keep this error small are introduced.
So far we used the asymptotic expansion terms up to r = 2. Continuing the expansion to the next order, i. e. computing Eq. (41) with r = 3 and a = 1, b = 0, yields Eq. (67), where all relations for the equilibrium moments identified in Section 3.3 have already been introduced. Comparing this equation with Eq. (52) reveals that the error coefficient u^(1)_x also solves the target problem, only with a different body force term. As indicated by Eq. (68), this term involves derivatives of the yet unspecified equilibrium moment m^(0)eq_22 and two constants that depend on the relaxation times τ_α. In order to obtain a second-order consistent method, the first-order expansion coefficient u^(1)_x needs to vanish. This is achieved if the governing Eq. (67) admits a null solution, which requires that the body force term is zero and that the problem is furnished with zero initial and boundary conditions. The former requirement can be easily fulfilled: the r^(3)_x term in Eq. (68) is zero if the fourth-order equilibrium moment m^eq_22 is some arbitrary constant at zeroth order in ε. One possible way to achieve this is setting m^eq_22 = 0. To satisfy the latter requirement, the initialization needs to be second-order consistent, which will be discussed in Section 6. The periodic boundary conditions, also introduced in Section 6, do not influence the consistency order of the method in the bulk. As a result, it can be summarized that the linear elasticity problem of Eq. (67) on a periodic domain with zero initial condition and no body load admits the zero solution for u^(1)_x. Performing the same analysis for the y-component of the displacement solution leads to identical conclusions.
Assuming that the first-order expansion coefficients have been successfully set to zero, the next higher-order coefficients u^(2)_x and u^(2)_y constitute the leading-order error. The structure and properties of the governing equation for u^(2)_x are obtained by evaluating Eq. (41) with r = 4, a = 1, b = 0, which results in Eq. (69)
with the body force term r^(4)_x given in Eq. (70). The analysis of the equation governing the behavior of u^(2)_y leads to analogous results and will not be explicitly shown. Note that in the expression above all time derivatives of u^(0)_x and u^(0)_y have been replaced using the corresponding governing equations (see Eq. (52) for the x-component), which is also the origin of the forcing terms g^(2)_x and g^(2)_y. All coefficients C_1 . . . C_5 and D_1 . . . D_3 depend on the relaxation times τ_α and the parameter θ (see Appendix B for the expressions). Comparing Eq. (69) with Eq. (52) shows that the governing equation of the second-order error contribution has the same structure as the target problem being solved by the method. This time the body force term r^(4)_x is composed of two types of contributions: the fourth-order spatial derivatives of the leading-order solution u^(0)_x, u^(0)_y, and the second-order spatial derivatives of the leading-order coefficients of the forcing terms g^(2)_x, g^(2)_y, which we know to be related to the physical body load components of the target problem, b_x and b_y (see Section 3.3).
The latter contribution can be removed. Indeed, b_x and b_y are known and we assume the analytical expressions of their derivatives to be available, so that the terms in r^(4)_x and r^(4)_y containing g^(2)_x and g^(2)_y can be compensated for by extending the x-component of the forcing term with a higher-order correction of the form −ε⁴ [D_1(τ_α, θ) ∂²_x̃ g^(2)_x + D_2(τ_α, θ) ∂_x̃ ∂_ỹ g^(2)_y + D_3(τ_α, θ) ∂²_ỹ g^(2)_x] (Eq. (71)), and analogously for the y-component. On the other hand, the displacement field solution can take an arbitrary form and is obviously unknown. Therefore, the first terms in r^(4)_x can be removed only if C_i = 0, i = 1 . . . 5. As outlined in Section 3.3.2, out of all the relaxation times τ_α, α ∈ I, the ones governing the relaxation of the second-order moments (τ_11, τ_s and τ_d) are adjusted to match the dimensionless material parameters of the target problem, and the parameter θ is fixed at its standard value. As will be shown below, the combination of the discretization parameters ∆x, ∆t and the artificial damping coefficient κ leaves one degree of freedom in the choice of the aforementioned relaxation times. Moving on to the relaxation times of the third-order moments, it can be shown with a similar reasoning as for τ_11 and τ_d (see Section 3.3.2) that the third-order moments cannot be considered to be independent. Therefore, the corresponding relaxation times τ_12 and τ_21 need to be the same. Summarizing, this leaves in total three independent parameters: one coming from τ_s, τ_d and τ_11, another one from τ_12 = τ_21, and lastly τ_22. Accordingly, it cannot be expected that all independent conditions C_i = 0, i = 1 . . . 5 are satisfied by some combination of the remaining free parameters, which would be necessary to achieve r^(4)_x = 0 and r^(4)_y = 0. Thus, there is no way to achieve higher than second-order consistency, and the terms r^(4)_x and r^(4)_y have a pivotal influence on the behavior of the numerical error in the bulk.
To illustrate this point further, consider the following decomposition of the numerical approximation to the displacement solution obtained by the lattice Boltzmann scheme: ū_x = u^(0)_x + ε² u^(2)_x + higher-order terms (recall that u^(1)_x = 0). Note that an analogous relation is obtained for the y-component as well. In order to reduce the leading-order error contribution ε² u^(2)_x, there exist two possibilities: 1. Decrease ε, i. e. perform a mesh and time step refinement obeying the diffusive scaling assumption. 2. Reduce the magnitude of the solutions u^(2)_x and u^(2)_y of the leading-order error governing equation (see Eq. (69) for the x-component).
It is important to keep in mind that reducing the numerical error with the first option comes with a considerable increase in computational effort. Therefore, it is highly advisable to initially exploit any possibility to reduce the error according to the second option. Accordingly, this option is investigated in some more detail in the following.
For periodic problems, the magnitude of the solutions u^(2)_x and u^(2)_y of the linear governing equations (see Eq. (69) for the x-component) is proportional to the magnitude of the body force terms r^(4)_x and r^(4)_y. Therefore, r^(4)_x and r^(4)_y should be made as small as possible. This is in turn realized for arbitrary solutions u^(0)_x and u^(0)_y if C_i → 0, i = 1, . . . , 5 (see Eq. (70) for the x-component).
For this initial investigation we decided to keep the search space over which the C_i are minimized fairly manageable. This is achieved by fixing all higher-order relaxation times τ_12 = τ_21 and τ_22 at their standard values (see Table 1). As a result, only the relaxation times τ_11, τ_s and τ_d, which are used to adjust the dimensionless parameters μ̃ and K̃, remain as independent parameters in the expressions for C_1 . . . C_5.
In order to get a general idea of how the body load in the governing equation for the leading-order error can be reduced, |C_1| . . . |C_5| are plotted in Figure 4 against the dimensionless Young's modulus Ẽ and the Poisson's ratio ν (which are related to μ̃ and K̃ by the expressions in Eq. (4), which also apply to the dimensionless quantities). The dimensionless Young's modulus is related to the physical one through Eq. (74), which involves the ratio between the time step size ∆t and the grid spacing ∆x as well as the damping constant κ; see also Eqs. (36) and (37). Eq. (74) shows that, for a given physical Young's modulus E, this ratio and the damping constant κ can be adjusted so that a specific dimensionless Young's modulus Ẽ is obtained that leads to small |C_i|. Changing this ratio is equivalent to moving along horizontal lines in the contour plots of Figure 4. As an example, and assuming a fixed grid spacing ∆x, moving to the left in the plot corresponds to choosing smaller time steps and/or increased damping of the problem. The visualization of the constants appearing in r^(4)_x and r^(4)_y shows that for each Poisson's ratio there exist parameter intervals of the dimensionless Young's modulus for which their values become small, and thus we expect the leading-order error contribution to the numerical solution to be small as well. Further note that for some constants in Figure 4 an increase in magnitude is observed for decreasing Ẽ. This implies that, in general, stronger damping or equivalently smaller time steps do not necessarily improve the numerical accuracy of the steady-state solution. This can be traced back to the fact that a different time step size changes the relaxation rates of the second-order moments, but leaves all other relaxation rates unaffected. In combination with the constant grid spacing this also violates the diffusive scaling on which the convergence property of the numerical scheme hinges. Similar observations in the context of fluid dynamics with the acoustic scaling and using the MRT collision operator have already been made in [33].
A final simplification step assumes that all fourth-order derivatives of the solution components u^(0)_x and u^(0)_y are of similar magnitude. Under this assumption, a combined advantageous ratio of the discretization parameters ∆x, ∆t and κ can be estimated by minimizing the root sum squared of the constants C_i. If the influence of the leading-order error due to the body load is to be considered as well, the constants D_i are also added. This in turn relies on the assumption that the spatial second-order derivatives of the forcing term are of similar magnitude as the fourth-order derivatives of the solution. The resulting error estimates R_1 and R_2 (the root sum squared of the C_i, and of the C_i together with the D_i, respectively) are given in Eqs. (75) and (76). Figure 5 visualizes the influence of the dimensionless material parameters Ẽ and ν on the qualitative estimate of the leading-order error. The direct comparison of the two contour plots showing R_1 and R_2 indicates that an error reduction of approximately one order of magnitude can be achieved by the partial removal of the leading-order error due to the body force in r^(4)_x and r^(4)_y. Numerical experiments carried out in Section 7 will confirm this prediction. Once again, the visualization of the combined constants shows that for each value of the Poisson's ratio there appears to be an advantageous interval of values for the dimensionless Young's modulus, which can be obtained by properly adjusting the relation between the discretization parameters in Eq. (74).
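The estimate-based selection of a discretization can be sketched as a simple scan over candidate dimensionless Young's moduli, as below. The callables C_funcs and D_funcs are hypothetical placeholders for the Appendix B expressions of C_1 . . . C_5 and D_1 . . . D_3, which are not reproduced here; the dummy coefficient functions exist only to make the sketch executable.

```python
import numpy as np

def scan_error_estimate(C_funcs, D_funcs, E_values, nu):
    """Placeholder scan: evaluate R1 = sqrt(sum C_i^2) and R2 (including the D_i)
    over candidate dimensionless Young's moduli and return the minimizers.
    C_funcs and D_funcs are hypothetical callables (E, nu) -> float."""
    R1 = np.array([np.sqrt(sum(C(E, nu) ** 2 for C in C_funcs)) for E in E_values])
    R2 = np.array([np.sqrt(sum(C(E, nu) ** 2 for C in C_funcs)
                           + sum(D(E, nu) ** 2 for D in D_funcs)) for E in E_values])
    return E_values[np.argmin(R1)], E_values[np.argmin(R2)]

# Dummy coefficient functions for illustration only (not the paper's expressions).
C_dummy = [lambda E, nu, k=k: (E - 0.1 * k) for k in range(1, 6)]
D_dummy = [lambda E, nu, k=k: 0.1 * (E - 0.05 * k) for k in range(1, 4)]
E_grid = np.linspace(0.01, 1.0, 200)
print(scan_error_estimate(C_dummy, D_dummy, E_grid, nu=0.8))
```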
For nearly incompressible material behavior -corresponding to ν → 1 in 2D -the optimal range of Young's moduli becomes relatively narrow and the smallest achievable value for R 1 increases. This in turn leads to larger expansion coefficients u (2) x and u (2) y . Recalling the two possibilities of reducing the numerical error of the method, this observation shows that only a slight improvement following the strategy of the second option can be achieved. As a result, the numerical error of the method needs to be primarily reduced by the first option. Because this implies using a finer discretization, this can also be interpreted as decreasing numerical efficiency of the method for increasing Poisson's ratios. Numerical experiments in Section 7 will confirm this prediction as well.
Linear stability analysis
In the following, we investigate the stability properties of the novel lattice Boltzmann scheme. The stability analysis follows similar steps as in previous studies that all perform a linear von Neumann stability analysis [34][35][36]. In contrast to these works, which study the stability properties of various lattice Boltzmann algorithms for fluid mechanics, no linearization about some homogeneous reference state is required here because the present method is already linear. In brief, the stability analysis tests the attenuation or amplification properties for a given set of monochromatic planar waves by numerically computing the spectral radius of the linear operator that describes the lattice Boltzmann scheme [37]. If the spectral radius becomes larger than 1, some modes of the distribution function grow during each iteration step and will eventually lead to an unbounded numerical solution, i. e. instability.
For the stability analysis, the populations are grouped in a vector-valued quantity f , whose components f i , i = 1, ..., 9 correspond to the populations f kl , k, l ∈ {−1, 0, 1} used so far. The mapping between the two-index notation used so far and the new single-index notation is described in Table 3 and follows the convention used e. g. in [25] for the D2Q9 stencil. The same one-index notation is used in the following for the microscopic velocities c i . Note that, for the present scheme, the rest population f 00 can be removed as it has no influence on the behavior of the method. Accordingly, the stability analysis is performed on the resulting D2Q8 stencil.
Similarly, we introduce the vector m of the moments participating in the collision, whose components m i , i = 1, ..., 8 correspond to the moments m α used so far as given in Table 4.
The populations are mapped to the moments by the transformation matrix M, i. e. m_i = Σ_{j=1}^{8} M_ij f_j. M is easily constructed using the moment definition of Eq. (15) along with Eq. (18). In moment space, the collision involves the diagonal matrix Λ = diag(ω_α), ∀α ∈ {10, 01, 11, s, d, 12, 21, 22}, with ω_10 = ω_01 = 0. Naturally, the transformation back to the post-collision populations is carried out using the inverse of the transformation matrix, i. e. M^(-1). Lastly, the equilibrium moments are constructed from the populations by applying to the population vector the matrix M^eq, whose components are given in Eq. (77). Using these definitions, one iteration step of the lattice Boltzmann method can be described through a linear operator as in Eq. (78), where the linear collision operator A_ij has been introduced. The properties of the corresponding matrix can be investigated using linear algebra, giving immediate information on the stability of the method. As in [36,37], the amplification or attenuation properties of the lattice Boltzmann scheme are investigated by assuming that each population in f has the form of a monochromatic planar wave with wave vector k = [k_x k_y]^T, complex frequency and amplitude a_i, where ı is the imaginary unit. Note that for the stability analysis c = ∆x = ∆t = 1 is assumed without loss of generality, because this can be interpreted as a re-scaling of the wave vector k and the complex frequency. Introducing this ansatz into Eq. (78) leads to an eigenvalue problem for a matrix L obtained from the linear operator of one iteration step. Instability is encountered if some mode of f is amplified during each iteration. Accordingly, the following condition needs to hold for a stable method: the spectral radius ρ(L) must satisfy sup ρ(L) ≤ 1 over all admissible wave vectors.
The condition above is approximated by sampling through the set of admissible wave vectors and numerically computing the largest eigenvalue modulus of L. For this purpose, the wave vectors in 2D are parameterized as k = k̄ [cos ϕ, sin ϕ]^T, where k̄ is the magnitude of the wave vector and ϕ the angle between the wave vector and the x-coordinate direction. The ranges of the two parameters follow from symmetry considerations [35,36]. In the long wave length limit, i. e. k̄ → 0, the classical result for the relaxation rates, 0 < ω_α < 2, is retrieved in order to guarantee stability.
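A sketch of this numerical stability check is given below. The function build_L is a hypothetical placeholder for the construction of the amplification matrix (which depends on the chosen material and relaxation parameters and is not reproduced here), and the sampled angular range is an assumption of this sketch; the paper derives the parameter ranges from symmetry considerations.

```python
import numpy as np

def is_linearly_stable(build_L, n_k=50, n_phi=5, tol=1e-10):
    """Sample wave vectors k = kbar * [cos(phi), sin(phi)] and check sup_k rho(L(k)) <= 1.

    build_L(kx, ky) is a hypothetical callable returning the complex amplification
    matrix (8x8 for the D2Q8 stencil) for fixed material and relaxation parameters."""
    for phi in np.linspace(0.0, np.pi / 4.0, n_phi):   # angular range assumed here
        for kbar in np.linspace(0.0, np.pi, n_k):        # magnitude sampled in [0, pi]
            kx, ky = kbar * np.cos(phi), kbar * np.sin(phi)
            rho = np.max(np.abs(np.linalg.eigvals(build_L(kx, ky))))
            if rho > 1.0 + tol:
                return False
    return True

# Dummy operator (a contraction) just to make the sketch executable.
print(is_linearly_stable(lambda kx, ky: 0.9 * np.eye(8, dtype=complex)))
```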
For the case of arbitrary wave vectors, it has been observed that, for the present method, instability is always first encountered for axis-aligned wave vectors, i. e. ϕ = 0. For this reason, only 5 points are sampled from the domain of definition of ϕ. In order to sufficiently resolve the space of possible wave vectors, the magnitude k̄ is evaluated at 50 points in [0, π]. This procedure is repeated for a grid of values covering the parameter space spanned by the dimensionless Young's modulus and the Poisson's ratio. For all other parameters of the collision operator, the standard values as in Table 1 are set. The left side of Figure 6 shows the stability region for all admissible Poisson's ratios and the range of values for the dimensionless Young's modulus that is relevant for the simulations. The analysis indicates that there exists a region of unstable behavior for small values of the dimensionless Young's modulus and negative or small positive Poisson's ratio. Recalling the leading-order error analysis of the previous section, it becomes apparent that the onset of instability poses a practical limit only for sufficiently small Poisson's ratios, approximately ν < −0.3. For larger Poisson's ratios the predicted beneficial combination of discretization parameters as demonstrated in Figure 5 lies in the stable region. However, for cases with ν < −0.3 the onset of instability inhibits the use of a beneficial discretization to reduce the leading-order error influence. As a result, problems with ν < −0.3 can only be solved with reduced accuracy or require significantly finer meshes to achieve the same accuracy. However, negative values of ν, while thermodynamically possible, are irrelevant for most practical purposes.
The right-hand side of Figure 6 shows another region of instability encountered for nearly incompressible material behavior. However, this instability occurs only for approximately ν > 0.996, for which the leading-order error analysis already predicts very large errors. Therefore, this region poses no practical limitation on the range of Poisson's ratios the method can handle. Altogether, the combined results of the stability analysis and the leading-order error investigation predict that the novel method is accurate and stable for approximately ν ∈ [−0.3, 0.95].
Periodic boundary and initial conditions
The application of physically consistent, accurate and stable boundary conditions for the lattice Boltzmann method is a challenging task, because the physical conditions need to be translated into expressions for the populations. For this reason, the influence of boundary conditions is not considered in this initial contribution and only problems with periodic boundary conditions are solved. To enforce periodicity, a simple formulation from the literature can be directly applied as will be outlined in the following.
Because the problem is solved by advancing in pseudo-time, an initial condition for the populations needs to be specified as well. For this purpose, the popular initialization at local equilibrium is investigated using the asymptotic expansion of Section 3.3.
Periodic boundary
For the periodic problem, axis-aligned rectangular domains of size L x ×L y are considered. The respective boundary conditions are realized by copying the outgoing populations to the opposite end of the domain [4]. In order to avoid duplicating the nodes on the boundaries that are connected by periodicity, the node lattice is offset by half a grid spacing from the physical domain boundary.
Initial condition
A popular method to enforce the initial condition ũ = ũ_0 (Eq. (11)) is to initialize all populations f_ij at equilibrium. This is equivalent to prescribing the local equilibrium value for all moments (see Table 1), followed by a back-transformation into populations as done during the collision stage. The respective initialization of all moments is given by Eq. (85):

m_α(x, 0) = m^eq_α(ũ_0(x))   ∀α ∈ {10, 01, 11, s, d, 12, 21, 22}.   (85)

In order to investigate the consistency order of Eq. (85), the asymptotic expansion of all moments is computed up to first order using Eq. (41) and the results in Section 3.3.
Introducing these results into Eq. (85) reveals that second-order consistency with u^(1) = 0 can only be achieved with this initialization if ∇ũ_0 = 0. Because all numerical examples in Section 7 start from ũ_0 = 0, this simplified initial condition is sufficient.
Numerical verification
This section presents a few numerical examples. These serve the purpose of verifying the analytical derivations of the previous sections and thus of assessing the performance of the new scheme. To measure the error in the displacement and the stress solution, some grid norm definitions are introduced in the first subsection. In order to allow for as general conclusions as possible, the accuracy of the method is investigated using manufactured solutions. This concept is briefly introduced in the following subsection. Using the method of manufactured solutions, a series of convergence studies are carried out with the aim of verifying the results of the asymptotic expansion. By considering only periodic problems, the findings of the leading-order error and stability analysis of Section 4 can be numerically explored in great detail.
Grid norm definitions
Before defining the grid norms, the numerical (discretization) error in the displacement and the stress solution is introduced. As outlined in Section 3.3, the approximate displacement solution is obtained from the first-order moments, which are defined on the set of all lattice node positions {x_i | x_i ∈ Ω}, i = 1, . . . , N, where N is the total number of grid nodes contained in the problem domain Ω. Keeping in mind that the present method advances in pseudo-time steps t_j ∈ [0, t_f] until steady state is achieved within a specified tolerance, the numerical solution also depends on time. Provided that the exact solution is available at each lattice node, û_i = û(x_i), the nodal error in the displacement field solution e^u_i(t_j) is defined as the difference between the numerical and the exact displacement at node i and time t_j. Note that the numerical approximation of the solution, obtained in dimensionless form, is transformed back into physical units for the error evaluation.
In order to simplify the grid norm computation of the error in the stress solution, the independent stress components are regrouped into a vector. As derived in Section 3.4, the components of the approximate Cauchy stress are identified with the bared second-order moments, which defines the nodal stress error accordingly. Once again, the numerical result is transformed into physical units (see Section 2.2 for the scaling factors). Here σ̂_i denotes the exact stress solution at lattice node i. As a next step, two grid norms are introduced to obtain global measures of the previously defined error functions. The norm definitions derive from the continuous L2(Ω) and L∞(Ω) norms by applying numerical quadrature at the grid nodes. Because the norm computation involves a summation only over all nodes in space, the error measure remains a function of time that indicates the convergence of the method towards the static equilibrium solution. Using the example of the displacement error, the L2 grid norm is the square root of the sum over all nodes of (∆x)² |e^u_i(t_j)|²_2, and the Linf grid norm is the maximum over all nodes of |e^u_i(t_j)|_∞, where | · |_2 and | · |_∞ denote the Euclidean norm and the maximum norm of a vector (a small computational sketch follows the list below). The following observations apply: • ∆x is the uniform grid spacing of the lattice and scales proportionally with the smallness parameter of the asymptotic expansion in Section 3.3, i. e. ∆x ∼ ε.
• Because the scheme is designed to determine the solution at static equilibrium (Eq. (5)), the grid norm of the error is considered at final time t f unless stated otherwise.
• In order to provide easily interpretable results, in the following we report the relative error, i. e. the absolute error divided by the L2 grid norm of the exact solution, e. g. ‖e^u‖_∞ / ‖û‖_2.
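The sketch below evaluates the relative L2 and Linf grid norms of a nodal displacement error, following the reconstructed definitions above; the (∆x)² quadrature weight in two dimensions and the array layout are assumptions of this sketch.

```python
import numpy as np

def relative_grid_errors(u_num, u_exact, dx):
    """Relative L2 and Linf grid norms of the nodal displacement error.

    u_num, u_exact: arrays of shape (N, 2) holding the displacement vector at each
    of the N lattice nodes; dx: uniform grid spacing.  The (dx**2) quadrature weight
    in the L2 norm is the 2D reconstruction assumed in the text above."""
    err = u_num - u_exact
    l2_err = np.sqrt(np.sum(dx**2 * np.sum(err**2, axis=1)))
    l2_ref = np.sqrt(np.sum(dx**2 * np.sum(u_exact**2, axis=1)))
    linf_err = np.max(np.abs(err))
    return l2_err / l2_ref, linf_err / l2_ref

# Tiny usage example on a two-node "grid".
u_ex = np.array([[1.0, 0.0], [0.0, 1.0]])
u_nu = u_ex + 1e-3
print(relative_grid_errors(u_nu, u_ex, dx=0.5))
```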
Method of manufactured solutions
In the following, we briefly outline the method of manufactured solutions [21], which can generate almost arbitrary exact solutions to be used for the numerical verification. As a starting point for the method, the desired exact solution û is freely chosen, but needs to be sufficiently differentiable so that it can be a solution to the problem. The next step involves computing the matching source term b so that û actually solves the governing equation in conjunction with the source term. In the present context (see Eq. (6)) this implies computing the body load according to Eq. (98). For the case of quasi-static linear elasticity, time-independent solutions are assumed, so that the first term on the right-hand side of Eq. (98) vanishes. Within the context of periodic problems, the manufactured solution ansatz needs to satisfy two additional constraints. The first condition is given by Eq. (7). The second requirement comes from the fact that the solution of the periodic problem conserves the mean value of the initialization. Because all examples in this section are initialized with u_0 = 0, the manufactured solution needs to have zero mean.
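A small symbolic sketch of the manufactured-solution workflow is given below. It assumes the static balance b = −∇·σ(û) for plane isotropic linear elasticity with Lamé parameters λ and µ, which is a standard form and not necessarily the exact constitutive convention of the paper (whose target equation additionally carries a damping term that vanishes for time-independent û); the chosen displacement field mirrors the periodic, zero-mean example used later in the text.

```python
import sympy as sp

x, y, lam, mu = sp.symbols("x y lambda mu", real=True)

# Chosen smooth, zero-mean, periodic manufactured displacement field on [0, 1]^2.
ux = sp.Rational(9, 10000) * (sp.cos(2 * sp.pi * x) + sp.sin(2 * sp.pi * y))
uy = sp.Rational(7, 10000) * (sp.sin(2 * sp.pi * x) + sp.cos(2 * sp.pi * y))

# Plane isotropic linear elasticity (small strain), static balance: b = -div(sigma).
exx, eyy = sp.diff(ux, x), sp.diff(uy, y)
exy = sp.Rational(1, 2) * (sp.diff(ux, y) + sp.diff(uy, x))
sxx = lam * (exx + eyy) + 2 * mu * exx
syy = lam * (exx + eyy) + 2 * mu * eyy
sxy = 2 * mu * exy

bx = -(sp.diff(sxx, x) + sp.diff(sxy, y))
by = -(sp.diff(sxy, x) + sp.diff(syy, y))
print(sp.simplify(bx), sp.simplify(by), sep="\n")
```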
With the body load and initial condition defined, the numerical solution is computed and can be compared against the exact analytical solution using the grid norm definitions introduced before.
Numerical examples
In this section, a selection of numerical examples is presented that solve the periodic problem of Eqs. (10)-(12) for a given body load generated through the method of manufactured solutions. The first example demonstrates the second-order convergence property of the method in both the L2 and Linf norm for the displacement and the stress solution. The second example showcases the improvement of the accuracy of the method if the leading-order error contribution due to the body load is compensated for. This is followed by an example that is designed in such a way that fourth-order consistency is numerically observed if the analytical derivations in Section 4 are correct. Furthermore, the effectiveness of the error estimate to identify a beneficial ratio of grid spacing and time step size is investigated by numerical parameter studies that probe the actual error of the method for various combinations of discretization parameters. Subsequently, the analytically predicted location of the stability boundary in material parameter space is verified for a few testing points and finally the results obtained with the novel lattice Boltzmann scheme are compared with linear finite element results.
Convergence study of the standard scheme
As a first example, the periodic problem is solved with the standard numerical scheme, i. e. the scheme that does not compensate for the leading-order error due to the body load (see Section 4). In this case the material parameters are chosen as ν = 0.8 and Ẽ = 0.11 to obtain a small leading-order error contribution based on the investigation in Section 4. The manufactured solution is given by Eqs. (99) and (100), with x̃ = x/L and ỹ = y/L as introduced in Section 2.2. The final time t_f is set sufficiently large so that steady state is reached. An example of the convergence of the numerical solution towards steady state over the pseudo-time steps is provided by Figure 7. It shows the evolution of the relative error of the displacement and the stress solution in the L2 and the Linf norm. Note that this simulation is performed on the coarsest mesh of the following convergence study with ∆x = 0.05L, for which relative errors of approximately 3-4% in the Linf norm are obtained for the given discretization. Steady state is reached after roughly 10³ time steps, and the stress field converges faster than the displacement solution. Despite the large number of time steps, the total computing time of this example is still very low: ca. 1.6 s on a low-end notebook with a non-optimized implementation.
Using the already described problem definition, a grid convergence study is carried out. As predicted by the asymptotic analysis in Sections 3.3, 3.4 and 4, the results in Figure 8 indicate second-order convergence for both the displacement and the stress solution for decreasing smallness parameter ε = ∆x/L. For these results the total number of grid nodes is varied between 20² and 100² nodes. Notably, a second-order convergence rate is also achieved in the Linf norm for both solution quantities, which confirms the analytical derivations with high accuracy.
Convergence study of the improved scheme
As suggested in Section 4, the portion of the leading-order error due to the body load can be compensated for by a higher-order correction of the forcing term. With this modification, the optimal value of the dimensionless Young's modulus predicted by the estimate of Section 4 is slightly different. Therefore, the simulation is performed for ν = 0.8 and Ẽ = 0.085. Besides this adjustment, the same problem as for the previous case is solved. Figure 9 shows the results for the scheme with compensation of the leading-order error due to the body load. Once again, approximately second-order convergence is observed in the numerical experiments, because the leading-order error is only partially removed. However, comparing the results of the convergence studies for the two variants (Figures 8 and 9) shows a reduction of the errors by roughly one order of magnitude. This improvement was already concluded from the derivations in Section 4 and specifically from Figure 5. In summary, a significant improvement of the numerical accuracy can be expected from this partial compensation of the leading-order error, which requires only minimal extra computational effort. The next convergence study on periodic domains aims at constructing a special case in which, according to the asymptotic expansion, fourth-order convergence is expected, and at verifying this expectation numerically. To this end, an analytical solution with vanishing mixed spatial derivatives is chosen as follows:

û_x/U = 9 · 10⁻⁴ (cos(2πx̃) + sin(2πỹ))   (101)
û_y/U = 7 · 10⁻⁴ (sin(2πx̃) + cos(2πỹ))   (102)
As a result, only the two terms with the constants C_1 and C_5 influence the leading-order error of both u^(2)_x and u^(2)_y, as can be seen from Eq. (69) for the x-component. Therefore, only two requirements (C_1 = 0 and C_5 = 0) need to be satisfied to remove the r^(4)_x and r^(4)_y terms, which is less than the number of free parameters in the MRT collision operator (see Section 4), so that a complete removal of r^(4)_x and r^(4)_y is theoretically possible. For this example the relaxation times τ_12 = τ_21 and τ_22 have been appropriately chosen to achieve this and are reported in Appendix C. Furthermore, the asymptotic analysis also shows that the next higher-order error contribution vanishes if u^(1)_x = u^(1)_y = 0 (not demonstrated here). This is fulfilled by the second-order consistency of the method, so that a fourth-order accurate method can be expected. Figure 10 shows the numerically obtained fourth-order convergence in the displacement field solution in both norms, which is a strong confirmation of the analytical derivations in Section 4. It is also observed that the leading-order error correction for the displacement solution does not significantly affect the accuracy of the numerical stress solution.
Unfortunately, this fourth-order consistent scheme requires a special structure of the exact solution, specifically that all mixed fourth-order spatial derivatives vanish. Furthermore, the higher-order relaxation rates required to let C_1 and C_5 vanish can be shown to violate the stability condition of Eq. (84) for a large region in the space of admissible combinations of Ẽ and ν. Therefore, this higher-order accurate method is only applicable to a restricted class of problems.
Leading-order error behavior for different combinations of discretization parameters
During the convergence studies with the second-order consistent schemes, the magnitude of the leading-order error contribution was already reduced by appropriately choosing Ẽ (which, for given E and lattice spacing, amounts to choosing κ or ∆t). The purpose of the following study is to investigate the accuracy and reliability of the leading-order error estimate R_1 introduced in Eq. (75) in Section 4. To this end, parameter studies are carried out for several values of the Poisson's ratio, by varying the time scaling in order to realize different values for the dimensionless Young's modulus. This is equivalent to moving along horizontal lines in the contour plot of Figure 5. Further note that all simulations are performed using the scheme with compensation of the leading-order error due to the body load. Using the manufactured solution of Eqs. (99) and (100), the numerical error of the displacement solution is evaluated in the L2 norm for a range of different combinations of discretization parameters and compared with the error estimate. The value of the error estimate R_1 (see Eq. (75)) and the actual numerical error do not match in general, because the latter is (to leading order) the solution to Eq. (69), whereas the former is computed from the coefficients appearing in the body load term of the same governing equation. However, it is expected that the qualitative shapes of the error graphs agree with each other. Specifically, the discretization for which the estimate predicts the smallest leading-order error contribution should lie in the vicinity of the value for which the actual numerical error has its minimum.
This comparison is presented in Figure 11 for four different values of the Poisson's ratio. The examples demonstrate a satisfactory qualitative agreement between the error estimate R_1 (see Eq. (75)) and the actual error. Comparing the results for the different Poisson's ratios shows that the estimate predicts slightly too large values of the dimensionless Young's modulus for large Poisson's ratios and vice versa. When comparing the smallest error observed during the parameter studies with the numerical error obtained for the discretization predicted by the estimate, it appears that, in the worst case, the error is 2 to 3 times larger than the actual minimum. However, taking into account the high sensitivity of the numerical error with respect to different values of the dimensionless Young's modulus, this deviation is still acceptable. In summary, it can be concluded that the error estimate from Section 4 provides adequate guidance for a beneficial discretization for a broad range of Poisson's ratios.
Additionally, the results in Figure 11 confirm another observation from the leading-order error analysis, which predicted that the minimal achievable error increases when moving closer to incompressible material behavior. Comparing the numerical errors for the cases with ν = 0.1 and ν = 0.9 shows that when choosing the best-possible combination of discretization parameters in each case, the error for ν = 0.9 is roughly one order of magnitude larger. As a result, finer meshes are required to achieve the same solution accuracy with larger Poisson's ratios.
Stability boundaries in the material parameter space
The next numerical study investigates the stability bounds of Section 5. To this end, several examples with material parameters close to the stability boundary are simulated. The examples are grouped into pairs, where for each pair one combination of discretization parameters lies in the stable domain and the other in the unstable regime. For the material parameters used in this investigation see Figure 6.
In order to ensure that the theoretically predicted onset of instability is actually triggered during the numerical experiments, a manufactured solution with a dense frequency spectrum is selected. Accordingly, the target solution is a 2D Gaussian hill located in the center of the problem domain [0, L] × [0, L], given by Eqs. (103) and (104). The constant offset ensures that the solution has zero mean. In the equations above, erf is the error function and the values of the standard deviations aligned with the coordinate directions, σ_i, i = 1 . . . 4, are listed in Table 5. The numerical examples in Figure 12 demonstrate that the semi-analytically determined stability boundary is very accurately recovered by the given examples. The first two pairs of simulations (subfigures a), b), d) and e)) show that a high number of iterations is needed to actually observe the unstable behavior. In contrast, the last pair of simulations, which is carried out for nearly incompressible material behavior, displays a significantly faster onset of the instability once the theoretically predicted stability boundary is crossed. This differing behavior may result from the fact that the dimensionless material parameters are chosen in close vicinity to the stability boundary (see Figure 6) and that the onset of instability might not be as sharp as in theory, or not at exactly the predicted location, because of numerical round-off or spectral filtering due to the finite mesh resolution [37].
Of the three considered pairs of material parameters, the stability region of the method poses an effective limitation only for ν = −0.4. For this Poisson's ratio the leading-order error investigation suggests a good value of Ẽ to be 0.082, which lies well beyond the stability boundary that has been numerically verified in Figure 12. Note that this deterioration of the numerically achievable accuracy even increases for smaller values of the Poisson's ratio, because the advantageous choice of Ẽ from the leading-order error analysis moves further into the unstable regime shown in Figure 6.
Comparison with linear finite element results
The final numerical study compares the results obtained by the improved lattice Boltzmann method (see Section 7.3.2) with simulations using standard bilinear quadrilateral finite elements. For this comparison the more challenging solution of Eqs. (103) and (104) is considered. This manufactured solution results in a body load which is fairly concentrated in the interior of the domain, so that the problem can be considered to be approximately periodic. It is therefore acceptable to solve this problem with the lattice Boltzmann scheme using periodic boundary conditions (see Section 6). For the finite element analysis it is more straightforward to employ Dirichlet-type boundary conditions that prescribe the analytical solution on the whole domain boundary.
For both methods a convergence study is carried out using a structured mesh with element size ∆x = εL in the case of the finite element analysis and a regular lattice with the same dimensionless grid spacing ε for the lattice Boltzmann method. For the convergence study shown in Figure 13, the number of (pseudo-)time steps to obtain the static equilibrium solution with the lattice Boltzmann method ranges between 5 400 and 60 000 steps and follows the diffusive scaling assumption, i. e. ∆t = ε²T (see Section 3.3).
The results of the convergence studies in Figure 13 reveal similar error behavior in the L2 norm for the displacement solution. Overall, slightly smaller errors in the displacement solution can be observed for the lattice Boltzmann method. However, as expected from the discussion in Section 3.4, the L2 errors in the Cauchy stress approximation decrease one order faster for the lattice Boltzmann scheme because of the higher consistency order, and are also significantly smaller. In summary, when comparing the accuracy on analogous discretizations, the lattice Boltzmann method performs similarly well in the displacement solution, but significantly better for the Cauchy stress.
Figure 13: Comparison of the L2 errors using the improved lattice Boltzmann scheme and the finite element method.
Conclusion
We developed a novel lattice Boltzmann scheme to solve the quasi-static equations of linear elasticity. Because of the explicit time stepping inherent to the method, the target equation is extended by a damping term. Accordingly, the static equilibrium solution is obtained at the end of a transient phase that starts from a given initial condition. The proposed method is devised using the generalized multiple relaxation time collision operator as a starting point, which allows the two material parameters of linear isotropic elasticity to be adjusted independently. In contrast to previous works, three important properties of the method are established: 1. Only a single distribution function is required to determine the vector-valued solution field; 2. Only a standard velocity set without rest population is necessary, i. e. the D2Q8 set in 2D; 3. The scheme involves no finite difference approximations.
As a result, the computational efficiency and algorithmic simplicity of the native lattice Boltzmann method are fully retained for this new application.
Using the asymptotic expansion technique, second-order consistency is demonstrated analytically, and a leading-order error analysis provides effective guidance on how to adjust the numerical parameters of grid spacing, time step size and damping coefficient for improved accuracy. A stability analysis of the linear operator reveals a very broad range of Poisson's ratios that can be handled by the method. All analytical results are in excellent agreement with numerical results obtained with the method of manufactured solutions.
This initial contribution was limited only to the case of periodic problems in order to establish a thorough understanding of the method in the bulk. In a forthcoming contribution, the scheme will be furnished with suitable boundary formulations to be able to simulate physically relevant problems with Dirichlet-and Neumann-type boundaries.
With respect to established numerical methods for the solution of the linear elasticity equation, the lattice Boltzmann method proposed in this paper has drawbacks (mainly the need for time stepping to recover the quasi-static solution at steady state) but also advantages (second-order consistency also in the stresses, algorithmic simplicity and good scaling in parallelization). Based on the first results obtained in this paper, we believe that the method holds potential for further development. On the other hand, the present contribution may serve as an initial step and foundation for future investigations to solve more complex (nonlinear and/or dynamic) problems. Finally, in the context of multi-physics problems -such as for the simulation of additive manufacturing processes -it is anticipated that the handling of all physics (e. g. solid and fluid mechanics) within a single computational framework comes with significant advantages over the often used alternative of coupling separate codes based on different methods. | 2022-09-01T06:41:29.455Z | 2022-08-31T00:00:00.000 | {
"year": 2022,
"sha1": "eb5546c6a30da932f73e271767aaf252ff805ca3",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "0e25b2f197b71f843182cf3597d584e52040c31d",
"s2fieldsofstudy": [
"Physics",
"Engineering",
"Materials Science"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science",
"Physics"
]
} |
55677497 | pes2o/s2orc | v3-fos-license | Symmetry-broken dissipative exchange flows in thin-film ferromagnets with in-plane anisotropy
Planar ferromagnetic channels have been shown to theoretically support a long-range ordered and coherently precessing state where the balance between local spin injection at one edge and damping along the channel establishes a dissipative exchange flow, sometimes referred to as a spin superfluid. However, realistic materials exhibit in-plane anisotropy, which breaks the axial symmetry assumed in current theoretical models. Here, we study dissipative exchange flows in a ferromagnet with in-plane anisotropy from a dispersive hydrodynamic perspective. Through the analysis of a boundary value problem for a damped sine-Gordon equation, dissipative exchange flows in a ferromagnetic channel can be excited above a spin current threshold that depends on material parameters and the length of the channel. Symmetry-broken dissipative exchange flows display harmonic overtones that redshift the fundamental precessional frequency and lead to a reduced spin pumping efficiency when compared to their symmetric counterpart. Micromagnetic simulations are used to verify that the analytical results are qualitatively accurate, even in the presence of nonlocal dipole fields. Simulations also confirm that dissipative exchange flows can be driven by spin transfer torque in a finite-sized region. These results delineate the important material parameters that must be optimized for the excitation of dissipative exchange flows in realistic systems.
INTRODUCTION
Spin current utilized as a source to excite magnetization dynamics has attracted significant research efforts in the past few years [1]. In contrast to charge currents, spin currents describe the spatial flow of electron angular momentum in the form of quantum mechanical spin. Spin currents can be generated by a variety of means. For example, pure spin currents arise by charge-spin transduction in materials with strong spin-orbit coupling [1,2] as electrons of a given spin predominantly flow in a specific direction, leading to spin accumulation at the materials' edges. Utilizing this effect, current-induced magnetization dynamics have been demonstrated in devices based on a metallic / magnetic material bilayer [3][4][5][6][7][8].
However, spin current transport in metals is limited by the spin diffusion length, typically on the order of hundreds of nanometers.
An alternative perspective is gained by recognizing that spin current is the Onsager reciprocal of spin precession [9]. Spin precession excited by means of spin currents has been experimentally demonstrated as the generation of small-amplitude spin waves in bilayers [1,5,6,8]. Spin waves are typically defined as a perturbation about a uniform magnetization state whose coherence and energy are lost by scattering events that populate the dispersion relation and couple to lattice vibrations when in a thermal bath. This implies that the spin wave amplitude decays exponentially [10] and, consequently, spin current transport in magnetic materials is limited by a spin wave propagation length that is inversely proportional to the magnetic damping.
Recent theoretical works have shown that magnetic materials support a fundamentally different magnetization state exhibiting a spatially homogenous precessional frequency that can pump dc spin currents into a suitable reservoir, such as an adjacent nonmagnetic metal.
In their more general manifestation, planar magnetic materials in the conservative limit (α = 0) support extended uniform hydrodynamic states (UHSs) [11,12] whereby the magnetization undergoes a spatial, large-amplitude rotation about the normal-to-plane axis. A notable feature of UHSs is that the magnetization is textured, i.e., non-collinear in neighboring sites, and establishes an equilibrium exchange flow [13] that can be analytically described by a homogeneous fluid velocity u, schematically shown in Fig. 1(a). We emphasize that UHSs are different from spin waves in which the former are nonlinear, spatially textured magnetization states while the latter are small-amplitude, linear perturbations of a magnetization state. Furthermore, UHSs are topologically protected by the in-plane magnetization's phase winding and concomitant large cone angle. This topological protection also gives rise to peculiar effects such as broken Galilean invariance [11] and the shedding of topology-conserving vortex-antivortex pairs from an impenetrable obstacle [12].
For the case of a finite-sized magnetic material, a canonical theoretical model is an effective one-dimensional, planar ferromagnetic channel subject to a non-equilibrium spin accumulation, or spin injection, at one edge. A solution to this model is a large-amplitude magnetization state exhibiting a spatially homogeneous precessional frequency and algebraic decay of fluid velocity as a result of damping [14][15][16][17], schematically depicted in Fig. 1(b). This solution is sometimes called a spin superfluid, a term originally proposed by Sonin [15], who was motivated by the fact that the order parameter for an easy plane ferromagnet is topologically identical to that for a superfluid. Such a magnetization state is similar to a UHS as it describes a large-amplitude, textured magnetization and a homogeneous precessional frequency. However, a notable difference is that the fluid velocity or, equivalently, the exchange flow is dissipated by damping along the channel. We refer to this state as a dissipative exchange flow, whereby a textured magnetization state exhibiting a well-defined precessional frequency is sustained by the balance between spin injection (forcing) and damping (dissipation). The use of the terminology dissipative exchange flow is motivated by other steady state excitations in magnetic materials, such as propagating and localized modes [18][19][20][21] and dissipative droplets [22][23][24][25][26][27] that are established by a local balance between forcing and dissipation. In contrast, the salient feature of dissipative exchange flows is that the balance is nonlocal; i.e., spin injection is established at the edge while dissipation occurs along the entire length of the channel. The main implication of the previous statement is that dissipative exchange flows can, in principle, be established in an arbitrarily long channel at the expense of the magnitude of the spatially homogeneous precessional frequency [15,17].
It is important to emphasize that the precessional frequency can pump dc spin current into a suitable reservoir at the unforced edge of the channel, or any other location along the channel, and with equal efficiency everywhere. This defining property of the dissipative exchange flow constitutes a novel feature that may be useful for spintronic applications.
The theoretical studies on dissipative exchange flows to date have made a crucial assumption: axial symmetry. This assumption breaks down, for example, in realistic materials whose crystal structure establishes a magnetocrystalline anisotropy or in ferromagnetic channels (nanowires) whose shape will induce an effective in-plane anisotropy. From an energetic perspective, domain walls are favored by in-plane anisotropy [28], in which case a train of domain walls with identical chirality or a soliton lattice will ensue [15], which can be interpreted as a symmetry-broken UHS. However, the excitation of a dissipative exchange flow upon spin injection in materials with in-plane anisotropy remains an open question. Within the linear, weak anisotropy regime, it has been speculated that symmetry-breaking terms are detrimental to dissipative exchange flows and would establish a minimum or threshold spin current density for their excitation [14,16,17]. Here, we provide a quantitative description of the onset and characteristic features of symmetry-broken dissipative exchange flows in ferromagnetic channels with in-plane anisotropy.
In this paper, we demonstrate the nature of hydrodynamic states in ferromagnetic materials with in-plane anisotropy. In the particular case of a ferromagnetic channel subject to spin injection, two characteristic features emerge. First, a critical spin injection threshold must be overcome to excite dissipative exchange flows, which we quantify in terms of the channel length and magnetic material parameters. Second, dissipative exchange flows exhibit hydrodynamic oscillations, described by a damped sine-Gordon equation. This implies that the precessional frequency develops harmonic overtones that reduce the efficiency of dc spin current pumped into an adjacent spin reservoir relative to a planar ferromagnetic channel. The dissipative exchange flow features mentioned above are also observed in the presence of nonlocal dipole fields by micromagnetic simulations. Moreover, we show that the spin injection threshold for a dissipative exchange flow can be exceeded by spin transfer torque [29] from a finite-sized contact region, taking advantage of the contact-to-nanowire area ratio. These results establish design parameters and constraints that must be taken into account to pursue an experimental demonstration of dissipative exchange flows in ferromagnetic materials.
The paper is organized as follows: In Sec. II, we derive the dispersive hydrodynamic formulation for a symmetry-broken ferromagnet and show the relevant scalings to reduce the model to a damped sine-Gordon equation. In Sec. III, the existence of hydrodynamic states is explored for an unforced, extended thin film using periodic traveling wave solutions of the undamped sine-Gordon equation. The particular case of a channel subject to injection is studied in Sec. IV both analytically and numerically. In Sec. V we perform micromagnetic simulations as a proof of concept. Finally, we provide a discussion on the hydrodynamic interpretation of the phenomena and concluding remarks in Sec. VI.
ANALYTICAL MODEL
Magnetization dynamics in ferromagnetic materials can be described by the Landau-Lifshitz equation with an effective field, given in Eqs. (1) and (2). In order to capture the full nonlinearity and exchange dispersion of Eqs. (1) and (2) in an analytically tractable representation, it is possible to map the magnetization vector to hydrodynamic variables. In particular, we implement the transformation n = m_z and u = −∇Φ, where Φ is the in-plane magnetization angle, n is the longitudinal spin density, and u is the fluid velocity. The fluid velocity plays the role of the texture's wavevector, suggesting an intimate relationship to the exchange length that typically scales the dispersion relation of small-amplitude spin waves [10].
Introducing the hydrodynamic variables into Eqs. (1) and (2), we obtain the dispersive hydrodynamic (DH) formulation of magnetization dynamics [11] with the addition of in-plane anisotropy. Considering a one-dimensional channel elongated in the x̂ direction, such that the one-dimensional fluid velocity is u = u · x̂ = −∂_x Φ, the resulting DH equations are Eqs. (3a) and (3b). We emphasize that these equations represent an exact transformation of Eqs. (1) and (2).
The change in density is driven by the flux, the first term on the right-hand side of Eq. (3a). This dimensionless flux is identical to the equilibrium spin current density that results from non-collinear magnetic moments (u ≠ 0) [30].
If we consider a small but non-zero in-plane anisotropy in Eqs. (3a) and (3b), 0 < h_an ≪ 1, it is possible to introduce a slow time T, a long-wavelength coordinate X = √(h_an) x, and a rescaled small density N = n/√(h_an) to approximate Eq. (3b) to leading order by N = ∂_T Φ and Eq. (3a) by the damped sine-Gordon equation, Eq. (5). Interestingly, it is possible to quench the effect of anisotropy in this limit when the relative angle between the anisotropy and the fluid velocity is 45 degrees, i.e., k_x = k_y. More generally, we here consider the anisotropy to be finite and aligned with the fluid velocity, i.e., k_x = 1 and k_y = 0. Different anisotropy geometries simply lead to a rescaling of time, space, and damping. Other symmetry breaking terms such as a small in-plane field may be introduced in Eq. (2) and interpreted hydrodynamically in a fashion similar to what has been presented above.
HYDRODYNAMIC STATES IN SYMMETRY-BROKEN FERROMAGNETS
We first study the existence of hydrodynamic-type solutions to Eqs. (3a) and (3b) by analyzing the conservative limit, α = 0. In the case of axially symmetric, planar ferromagnets, both static, spin density waves (SDWs) and dynamic, uniform hydrodynamic states (UHSs) parametrized by a constant density and fluid velocity are supported [11]. The trigonometric terms arising from in-plane anisotropy in Eqs. (3a) and (3b) modify the SDWs and UHSs.
We can use Eq. (5) and, e.g., Ref. 31 to obtain approximate, traveling wave solutions in the co-moving coordinate ξ = x − vt, where v is the velocity. The conservative (α = 0), dynamic solution of Eqs. (3a) and (3b) for weak anisotropy (0 < h_an ≪ 1) can be approximately expressed as Eq. (6), together with n ∼ v√(h_an) ∂_x Φ, where sn is a Jacobi elliptic function, ξ_0 sets the initial phase, and 0 < m < 1 is a parameter that determines the form of the solution. In Equation (6), ΔΦ determines the precession orientation (positive is anti-clockwise), λ = 2K(m)√(m|1 − v²|/h_an) is the oscillation wavelength, K(m) is the complete elliptic integral of the first kind, Ω̄ is the frequency, and N̄ and Ū are the mean density and fluid velocity, respectively. The nonlinear dispersion relation Ω̄ = −N̄ in Eq. (7) agrees with that of the axially symmetric UHS in an averaged sense [11]. This identification implies that the symmetric UHS velocity, v_UHS, is identical to the symmetry-broken UHS velocity, v. When Ω̄ = N̄ = v = 0, this static solution represents a symmetry-broken SDW whose symmetric counterpart was studied in Ref. 11. The above analysis demonstrates that hydrodynamic states exist in materials with in-plane anisotropy, featuring hydrodynamic oscillations that agree with axially symmetric UHSs and SDWs in an averaged sense.
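As a numerical illustration (not reproduced from the original analysis), the following Python sketch evaluates the standard librational solution of the pendulum-type equation Φ'' + sin Φ = 0, to which periodic traveling waves of the sine-Gordon equation reduce up to constant shifts and rescalings of ξ; the specific scalings involving h_an and v that enter Eq. (6) are assumptions here rather than the paper's expressions.

```python
import numpy as np
from scipy.special import ellipj, ellipk

def phi_librational(xi, m):
    """Oscillatory (librational) solution of phi'' + sin(phi) = 0:
    sin(phi/2) = sqrt(m) * sn(xi, m), with period 4*K(m) in xi."""
    sn, _, _, _ = ellipj(xi, m)
    return 2.0 * np.arcsin(np.sqrt(m) * sn)

m = 0.6                                   # elliptic parameter, 0 < m < 1
period = 4.0 * ellipk(m)
xi = np.linspace(0.0, 2.0 * period, 400)  # two oscillation periods
phi = phi_librational(xi, m)

print("oscillation amplitude:", phi.max())   # equals 2*arcsin(sqrt(m))
print("period 4K(m):", period)
```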
CHANNEL
We now consider a channel of length L subject to spin injection polarized along the ẑ direction at the left edge of the channel. It is critical to find a hydrodynamic representation for spin injection. In general, this is achieved by adding a spin-transfer torque (STT) term to the right-hand side of Eq. (1) in the form [29] τ = −µ m × (m × ẑ). In the case of an isotropic planar ferromagnet, h_an = 0, and under appropriate long channel and weak injection approximations, Eqs. (3a) and (3b) subject to Eqs. (9a) and (9b) (equivalently Eq. (5) with k_x = k_y) yield the approximate dissipative exchange flow solution [15,17], where the subscript "s" indicates an axially symmetric solution with the fluid velocity u_s, fluid density n_s, and precessional frequency Ω_s. Dissipative exchange flows exhibit a uniform precessional frequency for any nonzero input flow ū. Notably, the dispersion relation of a symmetry-broken UHS is maintained but the wave velocity is space dependent, v_s(x) = −n_s/u_s(x). The linear decay of fluid velocity u_s along the channel manifests as a spatial increase of the in-plane magnetization wavelength, see Fig. 1(b). It is important to emphasize that the balance between the edge input flow and dissipation along the channel that sustains dissipative exchange flows manifests in the precessional frequency as the ratio ū/αL.
For the nonzero but small anisotropy regime, 0 < h an ≪ 1, we study the approximate damped sine-Gordon Eq. (5) subject to Neumann boundary conditions modeling the ferromagnetic channel in the low frequency, long wavelength regime.
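A minimal explicit finite-difference sketch of such a boundary value problem is given below. The precise form of Eq. (5) and of the injection boundary conditions is not reproduced in this text, so the normalization Φ_TT + α Φ_T = Φ_XX − sin Φ, with ∂_X Φ = −ū at the injection edge and a free (zero-derivative) condition at the far edge, is an assumption made here purely for illustration.

```python
import numpy as np

# Assumed normalization of the damped sine-Gordon boundary value problem:
#   Phi_TT + alpha * Phi_T = Phi_XX - sin(Phi),
#   dPhi/dX = -u_bar at X = 0 (spin injection), dPhi/dX = 0 at X = L.
L, nx, alpha, u_bar = 40.0, 400, 0.05, 0.6
dx = L / nx
dt = 0.5 * dx                        # explicit scheme, CFL-limited time step
phi = np.zeros(nx + 1)               # initial condition: uniform in-plane state
phi_old = phi.copy()

def laplacian_neumann(f):
    """Second difference with ghost points enforcing the Neumann conditions."""
    left_ghost = f[1] + 2.0 * dx * u_bar     # dPhi/dX(0) = -u_bar
    right_ghost = f[-2]                      # dPhi/dX(L) = 0
    out = np.empty_like(f)
    out[1:-1] = f[2:] - 2.0 * f[1:-1] + f[:-2]
    out[0] = f[1] - 2.0 * f[0] + left_ghost
    out[-1] = right_ghost - 2.0 * f[-1] + f[-2]
    return out / dx ** 2

for step in range(50000):
    acc = laplacian_neumann(phi) - np.sin(phi) - alpha * (phi - phi_old) / dt
    phi_old, phi = phi, 2.0 * phi - phi_old + dt ** 2 * acc

u = -np.gradient(phi, dx)            # fluid velocity along the channel
omega = (phi - phi_old) / dt         # instantaneous precessional frequency
print("edge frequency:", omega[-1], "mean fluid velocity:", u.mean())
```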
Equation (5) is solved numerically for the channel; a representative solution, which displays spatial hydrodynamic oscillations, is shown in Fig. 3(a), bottom panel. However, the entire solution coherently precesses at a fixed, fundamental frequency, exhibiting higher harmonic content due to nonlinearity. This implies that the oscillations will also manifest spectrally.
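One way to quantify this spectral signature, sketched below with a synthetic time trace, is to Fourier transform the in-plane magnetization exp(iΦ(t)) at a fixed position and measure how much of the spectral power resides in the fundamental line; the trace and parameter values are illustrative and are not simulation output from this work.

```python
import numpy as np

def harmonic_content(phi_t, dt):
    """Spectrum of the in-plane magnetization exp(i*Phi(t)): returns the dominant
    (fundamental) precession frequency and the fraction of power it carries."""
    spec = np.abs(np.fft.fft(np.exp(1j * phi_t))) ** 2
    freqs = np.fft.fftfreq(len(phi_t), dt)
    k0 = np.argmax(spec)
    return abs(freqs[k0]), spec[k0] / spec.sum()

# Synthetic trace: uniform precession at frequency 0.3 (arbitrary units) plus a
# phase modulation at the same frequency, mimicking harmonic overtones.
dt = 0.01
t = np.arange(0.0, 200.0, dt)
phi_t = 2 * np.pi * 0.3 * t + 0.5 * np.sin(2 * np.pi * 0.3 * t)

f0, fraction = harmonic_content(phi_t, dt)
print(f"fundamental frequency ~ {f0:.2f}, fraction of power in fundamental ~ {fraction:.2f}")
```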
The above simulations expose a competition between the different length scales that exist in the system, namely, the domain wall length scale, the channel length, and the dissipative exchange flow wavelength, proportional to l_dw, L, and ū⁻¹, respectively. Animated versions of the dissipative exchange flows shown in Fig. 3 can be found in the supplemental videos 1 and 2, respectively.
The threshold (or critical) input flow, u_c, can be determined from the existence of the static solution, Eqs. (12a) and (12b). In general, the critical input flow can be found numerically by solving the transcendental equation (12b), which identifies the maximum allowed ū for a given channel length L. This is shown by the solid blue curve in Fig. 4(a). The critical input flow exhibits two asymptotic limits, for which we show both the dimensionless critical input flow u_c and its conversion to a dimensional equilibrium spin current density Q_s,c in SI units, J/m², under the assumption that n is small. These limiting behaviors are shown in Fig. 4(a). It is important to recall that these estimates indicate the amount of angular momentum necessary to tilt the in-plane magnetization towards the hard axis. These high spin current density thresholds can be partially mitigated by working with shorter channels, as shown in Fig. 4(a). Alternatively, utilizing a finite-sized region placed on top of the channel to induce STT makes it possible to effectively achieve such high spin current densities by inducing magnetization precession with experimentally moderate charge current densities.
Above threshold, a symmetry-broken oscillatory dissipative exchange flow is established, as shown in Fig. 4(c). Recalling that magnetization precession can pump spin current to an adjacent metallic reservoir, it is possible to define a spin pumping efficiency, η, in terms of the input spin current density Q̄_s and the pumped spin current density Q_s,p; the latter could be determined by inverse spin Hall measurements [1] from a neighboring heavy metal, in which case different boundary conditions must be considered depending on the location of the spin reservoir. The efficiencies for the curves in Fig. 4(c) are shown in Fig. 4(d).
Note that the efficiency can approach unity. This is because we define the efficiency relative to the axially symmetric solution that already takes into account the balance between spin injection and damping in establishing the steady state magnetization precession.
MICROMAGNETIC SIMULATIONS IN A SYMMETRY-BROKEN FERROMAGNET
The above analytical results can be validated by micromagnetic simulations utilizing a local dipole field. However, we note that in-plane anisotropy can arise from the shape of an elongated channel by considering nonlocal dipole fields that are not incorporated in the analytical framework studied above. As a proof of concept for the validity of our local dipole field results, we run micromagnetic simulations in MuMax3 [33] for a Py channel of dimensions 2500 nm × 100 nm × 1 nm. Spin injection is modeled as a symmetric STT [29] impinging on a 1.2 nm × 100 nm area located on the top left edge of the channel [16]. As mentioned above, STT induces magnetization precession and, therefore, parametrizes the spin injection µ in Eq. (8). In order to micromagnetically model the charge to spin current density transduction, we perform simulations with local dipole field (axially symmetric
DISCUSSION AND CONCLUSION
We have shown that dissipative exchange flows exist in ferromagnetic channels with inplane anisotropy, i.e., broken axial symmetry. We quantitatively determined the injection threshold as a function of material parameters, corresponding to that necessary for a spin-current-driven tilt of the magnetization along the hard axis. For spin injection above threshold, oscillatory solutions are obtained, whereby the magnetization temporal precession exhibits higher harmonic content that reduces the spin pumping efficiency when compared to the axially symmetric case.
The dispersive hydrodynamic formulation allows us to draw an analogy for the dissipative exchange flows described above with hydrodynamics. Spin injection can be viewed as fluid flow injected from a nozzle into a pipe. In-plane anisotropy acts in two different ways: first, as a lift-check valve at the exit of the nozzle, establishing a velocity (or pressure) barrier, and second, as a periodic corrugation in the pipe that leads to an oscillatory fluid density and velocity. However, this analogy is limited as a fluid interpretation disregards the peculiarities of magnetization dynamics. Notably, the flow experiences constant deceleration due to damping while maintaining the density independently of in-plane anisotropy; see Fig. 2(c).
Interestingly, the hydrodynamic interpretation of magnetization dynamics here is described by the phase Φ or exchange flow velocity potential (u = −∇Φ), which admits the nonzero precessional frequency Ω = ∂ t Φ as an observable that is determined by the magnetic analog of Bernoulli's equation [11]. This is in contrast to classical fluids where the velocity potential is obtained from the fluid velocity under the premise of irrotational flow and is not a physical observable.
Finally, micromagnetic simulations qualitatively agree with the analytical results in the presence of nonlocal dipole fields which will inevitably exist at the channel's edges. We further showed numerically that spin transfer torque from a finite-sized region placed on | 2018-12-11T02:37:57.149Z | 2017-07-14T00:00:00.000 | {
"year": 2017,
"sha1": "f7b08c79a69747e444e65ad18b580de668165a0e",
"oa_license": "publisher-specific, author manuscript",
"oa_url": "https://link.aps.org/accepted/10.1103/PhysRevB.96.134434",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "c998fd1b9905f73827d84475f7d42301d78d0dd4",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
233265856 | pes2o/s2orc | v3-fos-license | Design of high efficiency achromatic metalens with large operation bandwidth using bilayer architecture
Achromatic metalens composed of arrays of subwavelength nanostructures with spatially varying geometries is attractive for a number of optical applications. However, the limited degree of freedom in the single layer achromatic metasurface design makes it difficult to simultaneously guarantee sufficient phase dispersion and high diffraction efficiency, which restricts the achromatic bandwidth and efficiency of the metalens. Here we propose and demonstrate a high efficiency achromatic metalens with diffraction-limited focusing capability at wavelengths ranging from 1000 nm to 1700 nm. The metalens comprises two stacked nanopillar metasurfaces, by which the required focusing phase and dispersion compensation can be controlled independently. As a result, in addition to the large achromatic bandwidth, the averaged focusing efficiency of the bilayer metalens is higher than 64% in the near-infrared region. Our design opens up the possibility to obtain the required phase dispersion and efficiency simultaneously, which is of great significance to design broadband metasurface-based optical devices.
Introduction
Dispersion, as one of the most fundamental properties of optical materials, leads to a spatial separation of different wavelengths. As a result, conventional refractive optical components, such as glass lenses, always have chromatic aberration. Although such chromatic aberration can be used for spectrometry, it will significantly degrade the image quality in many imaging-related applications. For the correction of chromatic aberration in multi-wavelength applications, the traditional strategy is to integrate different dispersive materials to form an apochromatic or super-achromatic lens system 1 . Nevertheless, this approach inevitably adds weight, complexity and cost to optical devices, which further restricts their applications in some ultracompact systems.
In recent years, the metasurface, which is comprised of arrays of subwavelength nanoscatterers with spatially varying geometries, has shown an excellent ability to shape the electromagnetic field at will by manipulating its amplitude, phase, and polarization 2−12 . Ultra-compact planar architecture, ease of fabrication and high diffraction efficiency make metasurfaces excellent candidates for various applications such as holograms 13−19 , metalenses 20−22 , and polarimeters 23,24 . However, metasurface-based functional elements are classified as diffractive devices, which possess severe chromatic aberrations that limit their broadband optical operation. In this context, several pioneering studies have been reported aiming to eliminate the chromatic aberrations of metalenses 25−39 . For example, metasurface unit-cells designed for several discrete operation wavelengths can be multiplexed or stacked to achieve multi-wavelength, narrow-band achromatic metalenses 25−31 . In addition, based on integrated-resonant unit elements to compensate the phase dispersion, broadband achromatic metalenses have also been demonstrated 32−40 . Nevertheless, the limited degree of freedom in the single layer achromatic metasurface design makes it difficult to simultaneously guarantee sufficient phase dispersion and diffraction efficiency. As a result, the operation bandwidth and averaged efficiency of the single layer achromatic metalens are typically smaller than 40% relative to the central wavelength and lower than 50%, respectively.
In this work, we propose a new approach to design a high efficiency achromatic metalens with large operation bandwidth based on a stacked bilayer architecture. In contrast to the conventional single layer metasurface, the two layers of nanostructures in the bilayer configuration are designed to manipulate the phase profile and the phase dispersion respectively, which would significantly improve the operation bandwidth and efficiency while giving more options for the design. As a proof-of-concept demonstration, we design a Si bilayer metalens working in the near-infrared region, achieving a large continuous achromatic wavelength range from 1000 nm to 1700 nm, about 52% operation bandwidth relative to the central wavelength. In addition, the bilayer Si metalens has an averaged diffraction efficiency of about 80% in the near-infrared region, and the focusing efficiency at the central wavelength of 1350 nm reaches up to 75%. Our design opens up the possibility to overcome the challenge of improving phase dispersion and efficiency at the same time, which is of great significance to the broadband applications of meta-devices.
Design of the bilayer achromatic metalens
To realize a high-efficiency achromatic metalens, we use a tightly spaced bilayer metasurface architecture, as shown in Fig. 1. For the design of the top layer, the geometric phase is employed to impose a phase profile on transmitted waves. The generated phase profile depends only on the orientation of a waveplate-like birefringent rectangular nanopillar and is therefore insensitive to the wavelength. The bottom layer of the device, composed of cylindrical nanopillars with different diameters, imposes the propagation phase on incident light. The propagation phase modulation is a function of operating wavelength and can provide the appropriate phase dispersion to compensate the phase difference between various working wavelengths. It should be noted that the propagation phase modulation also introduces an additional phase profile that would affect the convergence of the incident light. Therefore, the required focusing phase profile consists of the geometric phase modulation of the top layer and the propagation phase modulation of the bottom layer. For the case of operating wavelength λ, the phase profile of the whole achromatic metalens can thus be expressed as the sum of the geometric phase provided by the top layer and the propagation phase provided by the bottom layer, whereas the phase dispersion is determined only by the propagation phase modulation of the bottom layer. The additional phase profile introduced by the bottom layer should be kept at a relatively low value by optimizing the structure parameters, so that the overall phase profile is mainly determined by the geometric phase modulation of the top layer. As a result, in contrast to the single layer achromatic metalens, the proposed bilayer metasurface architecture allows simultaneous realization of a broadband response with a large bandwidth and an improved efficiency due to its capability of providing the phase profile and phase dispersion independently.
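A minimal numerical sketch of this decomposition is given below. It uses the standard hyperbolic focusing phase of an ideal lens and the geometric (Pancharatnam-Berry) relation θ = φ/2 for circularly polarized light; since the exact expressions used in the paper are not reproduced in this text, the split shown here and all parameter values are illustrative assumptions.

```python
import numpy as np

def focusing_phase(r, wavelength, f):
    """Ideal hyperbolic focusing phase profile (radians) at radius r."""
    return -2.0 * np.pi / wavelength * (np.sqrt(r ** 2 + f ** 2) - f)

# Illustrative parameters matching the text: diameter 100 um, focal length 340 um.
f = 340e-6
r = np.linspace(0.0, 50e-6, 101)
wavelengths = np.array([1000e-9, 1350e-9, 1700e-9])
lambda_ref = 1350e-9

# Top layer: geometric (Pancharatnam-Berry) phase, wavelength independent;
# for circularly polarized light the nanopillar rotation angle is half the phase.
phi_geom = focusing_phase(r, lambda_ref, f)
theta = phi_geom / 2.0

# Bottom layer: wavelength-dependent compensation that the propagation phase
# of the cylindrical nanopillars must supply at each radius.
phi_comp = np.array([focusing_phase(r, lam, f) - phi_geom for lam in wavelengths])
print("max |compensation phase| (rad) at each wavelength:", np.abs(phi_comp).max(axis=1))
```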
As a proof-of-concept demonstration, here we use Si nanopillar arrays to design an achromatic bilayer metalens working in the near-infrared region. As shown in Fig. 1, each unit cell of the bilayer metalens contains two Si nanopillars. As mentioned before, the top rectangular nanopillar is designed to provide the basic phase by tailoring the orientation of the structure, and the bottom cylindrical nanopillar is designed to provide phase dispersion by tailoring the diameter of the structure, which can ensure large phase dispersion compensation and high efficiency. Considering the feasibility of nanofabrication, the bottom nanopillar is immersed in a polymer photoresist SU8 while the top one is exposed to the air, with a gap between them of g = 400 nm. The heights of the top and bottom nanopillars are h 1 = 850 nm and h 2 = 1500 nm, respectively, with the same square lattice constant P = 500 nm. For incident circularly polarized light, as high cross-polarization conversion efficiency is essential for the realization of efficient geometric phase modulation, we select optimized structural parameters of D x = 420 nm and D y = 190 nm for the long and short axis lengths of the top rectangular nanopillar. As shown in Fig. 2(a), the calculated cross-polarization conversion efficiency of the rectangular nanopillar is relatively high across the near-infrared wavelength range from 1000 nm to 1700 nm except at two resonant positions. As a result, by changing the orientation angle of the rectangular nanopillar, it can readily provide 0−2π geometric phase modulation.
On the other side, for the bottom cylindrical nanopillar, Figs. 2(b) and 2(c) respectively illustrate the calculated transmission coefficient and propagation phase as a function of its diameter d in the near-infrared wavelength region. From Fig. 2(b), it can be clearly seen that the cylindrical nanopillars with different diameters maintain a high transmission coefficient over the broad spectral range, which is a prerequisite for achieving a high efficiency lens. Besides the high transmission coefficient, the Si cylindrical nanopillars with different diameters also exhibit smooth and large phase modulation coverage at wavelengths ranging from 1000 nm to 1700 nm (Fig. 2(c)). From these results, eleven Si cylindrical nanopillars with different diameters are selected to constitute the metasurface; the phase spectra of four of these structures are shown in Fig. 2(d). Structures with smooth and linear phase spectra need to be chosen to achieve achromatic operation, and the slope of the phase spectrum intuitively reflects the magnitude of the phase dispersion. The above method is able to provide sufficient phase dispersion compensation for the design of a broadband achromatic metalens. In addition, due to the employment of Si with its high refractive index, the coupling effect between neighboring nanopillars is very weak, and thus each nanopillar can be regarded as an isolated waveguide. This makes the phase design for each nanopillar element remain accurate even when they are arranged in a square lattice to form the metalens.
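The selection of nanopillar diameters from such a simulated phase library can be illustrated with the following sketch, in which a candidate is chosen by least-squares matching of its phase spectrum to the required compensation, up to a constant offset that can be absorbed by the geometric phase of the top layer. The library values here are synthetic stand-ins, not the simulated data of Fig. 2, and the matching criterion is an assumption for illustration.

```python
import numpy as np

wavelengths = np.linspace(1000e-9, 1700e-9, 8)

# Synthetic phase library: phase spectra (rad) of candidate cylindrical nanopillars
# versus wavelength, one row per diameter (stand-ins for full-wave simulation data).
diameters = np.linspace(130e-9, 380e-9, 11)
library = np.array([2 * np.pi * d / wavelengths * 3.5 for d in diameters])

def best_pillar(target_phase):
    """Pick the library element whose dispersion best matches the target,
    allowing a constant offset that the top layer can absorb."""
    residuals = []
    for spectrum in library:
        offset = np.mean(target_phase - spectrum)
        residuals.append(np.sum((target_phase - spectrum - offset) ** 2))
    return int(np.argmin(residuals))

# Example target: compensation phase required at one radial position of the lens.
target = 2 * np.pi * 250e-9 / wavelengths * 3.5 + 0.3
idx = best_pillar(target)
print(f"chosen diameter: {diameters[idx] * 1e9:.0f} nm")
```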
Based on the above approach, we design and demonstrate a bilayer broadband achromatic metalens with D = 100 μm and f = 340 μm working in the near-infrared wavelength region. The top Si rectangular nanopillars have the same lengths of the long and short sides, D x = 420 nm and D y = 190 nm, but different orientation angles θ. The bottom Si cylindrical nanopillars are designed with diameters ranging from 130 nm to 380 nm. The ideal phase profile is given in Fig. 3(a); it can theoretically achieve perfect focusing and is mainly provided by the top layer. As expected, a different phase dispersion compensation φ add (λ, x, y) can be obtained at the different wavelengths. Fig. 3(b) reveals the normalized phase compensation profile of the bottom layer at the different wavelengths, while it also introduces an additional focusing phase. To mitigate the effect of the propagation phase on convergence, we expect φ add (λ, x, y) to be as small as possible under the condition of satisfying the phase dispersion, and thus the optimized phase profile of the top layer becomes that shown in Fig. 3(c).
Characterization of bilayer achromatic metalens
To intuitively exhibit the achromatic characteristics, the bilayer achromatic metalens and a single layer chromatic metalens with the same diameter and focal length are numerically investigated with right-handed circularly polarized light. Figs. 4(a) and 4(b) show their intensity profiles simulated in the x-z plane over a wavelength region from 1000 nm to 1700 nm, at a step of 100 nm. In contrast to the chromatic metalens, which exhibits a large focal length shift with increasing incident wavelength, similar to a Fresnel lens (Fig. 4(a)), the bilayer achromatic metalens converges the incident light with a similar focal length at all of these wavelengths (Fig. 4(b)). Figs. 4(c) and 4(d) show the simulated focal spot and the corresponding cross-section of the intensity profile at the focal plane, respectively. It can be clearly seen that the focal spot has a circularly symmetric shape and the cross-section exhibits an Airy disk distribution with low side lobes, which demonstrates the good lensing quality of the designed bilayer achromatic metalens.
To quantitatively assess the performance of the bilayer metalens, we also evaluate its efficiency in addition to the achromatic bandwidth: the metalens has high diffraction and focusing efficiency, as shown in Fig. 5(c). The focusing efficiency is defined as the ratio of the light intensity from the focal spot to the light intensity of the incident beam. Due to the employment of low loss Si as the constituent material and the optimization of the geometry parameters of the nanopillar structures, the bilayer metalens has averaged diffraction and focusing efficiencies of about 81% and 64%, respectively, over the investigated wavelength range, and the focusing efficiency can reach up to 78% around the wavelength of 1300 nm. The focusing efficiency decreases quickly at wavelengths longer than 1400 nm, which is mainly attributed to the lower polarization conversion efficiency of the nanopillar. Moreover, although a similar efficiency of an achromatic metalens has been reported in previous work 40 , that device has a much larger diameter, while our design is more feasible for practical imaging applications. These results imply that the bilayer achromatic metalens proposed here has good performance in correcting chromatic aberration over a continuous wide range of near-infrared wavelengths. Our method can also be used to design high efficiency achromatic metalenses with large diameter or high numerical aperture. Finally, considering the alignment tolerance between the two metasurface layers during the fabrication of the bilayer metalens, we further investigate the influence of structural deviations on the lensing quality. Figs. 6(a)-6(d) show the simulated focal intensity profiles of the bilayer metalens with perfect alignment and with misalignments of 500 nm (one unit cell), 1 μm (two unit cells) and 1.5 μm (three unit cells) between the two metasurface layers at three different wavelengths. It can be clearly seen that all the cases have similar focal intensity profiles and the focal lengths are almost the same. This originates from the fact that the required focusing phase is provided by the top metasurface layer while the bottom metasurface layer mainly provides dispersion compensation. Therefore, the proposed bilayer achromatic metalens has a robust tolerance for nanofabrication.
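For reference, one common way to evaluate such a focusing efficiency from a simulated focal-plane intensity map is sketched below. The integration radius (here three times the spot size) and the synthetic Gaussian spot are assumptions for illustration only and do not reproduce the simulations of this work.

```python
import numpy as np

def focusing_efficiency(intensity, dx, incident_power, radius):
    """Fraction of the incident power contained within `radius` of the focal spot."""
    ny, nx = intensity.shape
    y, x = np.indices((ny, nx))
    iy, ix = np.unravel_index(np.argmax(intensity), intensity.shape)
    mask = ((x - ix) ** 2 + (y - iy) ** 2) * dx ** 2 <= radius ** 2
    return np.sum(intensity[mask]) * dx ** 2 / incident_power

# Synthetic focal-plane intensity map (Gaussian stand-in for the simulated spot).
dx = 50e-9
grid = (np.arange(400) - 200) * dx
X, Y = np.meshgrid(grid, grid)
w0 = 1.5e-6
intensity = np.exp(-2.0 * (X ** 2 + Y ** 2) / w0 ** 2)

# Pretend that 80% of the incident power reaches the focal plane in this example.
incident_power = 1.25 * np.sum(intensity) * dx ** 2
print(f"focusing efficiency ~ {focusing_efficiency(intensity, dx, incident_power, 3 * w0):.2f}")
```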
Conclusion
In summary, we propose a new approach to design a high efficiency achromatic metalens with large operation bandwidth by the modulation of dispersion using a bilayer architecture. Two stacked metasurface layers are designed to separately provide the required focusing phase profile and dispersion compensation. As a proof-of-concept demonstration, by using Si rectangular and cylindrical nanopillar arrays, we design a broadband achromatic metalens with 700 nm operation bandwidth and 64% averaged focusing efficiency working in the near-infrared region. Compared with conventional achromatic multilevel diffractive elements 41 , our device provides a general approach for achieving multifunctional, high pixel density achromatic optics. This method solves the problem of the mutual constraint between the required phase dispersion and efficiency in a single layer metasurface and opens up the possibility to design multifunctional broadband meta-devices. | 2021-04-17T02:53:35.705Z | 2021-01-27T00:00:00.000 | {
"year": 2021,
"sha1": "978de65626fd08fbe82ee0ab12b04486c03ce6c2",
"oa_license": "CCBY",
"oa_url": "https://www.oejournal.org/data/article/export-pdf?id=60a46a6c99d8812b7585fe44",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "94381aba753b412ed7198ab4917953d5b970023b",
"s2fieldsofstudy": [
"Physics",
"Engineering",
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
18250300 | pes2o/s2orc | v3-fos-license | Chemically Modified Interleukin-6 Aptamer Inhibits Development of Collagen-Induced Arthritis in Cynomolgus Monkeys
Interleukin-6 (IL-6) is a potent mediator of inflammatory and immune responses, and a validated target for therapeutic intervention of inflammatory diseases. Previous studies have shown that SL1026, a slow off-rate modified aptamer (SOMAmer) antagonist of IL-6, neutralizes IL-6 signaling in vitro. In the present study, we show that SL1026 delays the onset and reduces the severity of rheumatoid symptoms in a collagen-induced arthritis model in cynomolgus monkeys. SL1026 (1 and 10 mg/kg), administered q.i.d., delayed the progression of arthritis and the concomitant increase in serum IL-6 levels compared to the untreated control group. Furthermore, SL1026 inhibited IL-6-induced STAT3 phosphorylation ex vivo in T lymphocytes from human blood and IL-6-induced C-reactive protein and serum amyloid A production in human primary hepatocytes. Importantly, SOMAmer treatment did not elicit an immune response, as evidenced by the absence of anti-SOMAmer antibodies in plasma of treated monkeys. These results demonstrate that SOMAmer antagonists of IL-6 may be attractive agents for the treatment of IL-6-mediated diseases, including rheumatoid arthritis.
Introduction
Rheumatoid arthritis (RA) is an autoimmune inflammatory disease associated with persistent synovitis and progressive joint damage [1,2]. Although the causes of RA are not fully understood, proinflammatory cytokines, such as tumor necrosis factor-alpha, interleukin-1 (IL-1) and interleukin-6 (IL-6), are known to be involved in the progression of this disease [3][4][5][6]. Constitutive overproduction of IL-6 is observed in the synovial fluid, bone marrow, and serum of patients with RA [7][8][9][10][11]. IL-6 activity in synovial fluid is greater than in serum [8], indicating that IL-6 is generated from activated and/or inflamed cells in articular cavities and is subsequently released into serum. The abnormally high concentration of IL-6 exacerbates disease progression, and normalization of serum IL-6 levels is an effective treatment for this disease [12,13].
There is no cure for RA, and current treatments are designed to slow progression of the disease. First-line therapies for RA include nonsteroidal anti-inflammatory drugs and small-molecule disease-modifying antirheumatic agents such as methotrexate; however, there is a growing role for biological agents, including tocilizumab, a humanized anti-IL-6 receptor antibody [14] that blocks IL-6 signaling. Tocilizumab is an approved drug for treatment of RA and other diseases mediated by IL-6, such as Castleman's disease, juvenile idiopathic arthritis, and Crohn's disease [14][15][16].
We previously reported the discovery and optimization of SL1025, a single-stranded DNA slow off-rate modified aptamer (SOMAmer) that binds with high affinity to human (K d = 0.2 nM) and monkey (K d = 2.5 nM) IL-6 and inhibits IL-6-dependent cell signaling pathways [29]. Similar to traditional aptamers, SOMAmers are selected in vitro from large random libraries, but are uniformly functionalized with hydrophobic moieties (eg, benzyl-, 2-naphthyl-, or 3-indolylcarboxamide) at the 5-position of uridine through a carboxamide linker [30]. These hydrophobic groups can participate in interactions with target molecules as well as form novel intramolecular secondary and tertiary structural motifs [31,32]. In addition to improved affinities, which are comparable to those of antibodies, SOMAmer technology offers several advantages over traditional aptamers, including enhanced nuclease resistance and greater selection success rates [33].
SL1025 is a 32 nucleotide sequence with ten hydrophobic modifications (eight benzyl, one naphthylmethyl and one phenylethyl), as well as six 2′-methoxy ribose modifications to further enhance nuclease stability (Fig. 1A). Analysis of the crystal structure of SL1025 in a complex with IL-6 revealed that the majority of the IL-6 contact surfaces for both IL-6R and gp130 are occluded by SL1025 in the complex [31] (Fig. 1B, C). Furthermore, nearly all of the hydrophobic modifications are clustered on one side of SL1025 and make direct contact with IL-6.
SL1026 is a PEGylated form of SL1025 and has similar affinity (K d = 0.2 nM) for human IL-6 and similar inhibition activity for human IL-6 [29] and monkey IL-6 (data not shown). In this study, we report that SL1026 can delay the progression of RA in a nonhuman primate collagen-induced arthritis (CIA) model. We also show that SL1026 can inhibit STAT3 phosphorylation in human T lymphocytes as well as C-reactive protein (CRP) and serum amyloid A (SAA) protein production in human primary hepatocytes. Taken together, these results show that SL1026 is a potent antagonist of the IL-6 signaling pathway and represents a potential new drug candidate for the treatment of IL-6-mediated diseases, including RA.
Reagents
SL1026 was prepared with a hexylamine modification at the 5′ terminus by solid phase synthesis at Agilent Technologies (Boulder, CO) as described previously [29]. Polyethylene glycol (PEG) (branched 2 × 20 kDa NHS ester; JenKem Technology, Plano, TX) was conjugated to the terminal amine using standard methods. SOMAmer concentrations for all studies were calculated using only the mass of the DNA component (excluding the mass of the PEG component). SL1026 for animal studies was tested for bacterial endotoxin contamination by the Limulus amebocyte lysate method [34] and determined to be below the lower limit of assay detection (<0.009 EU/mg). Human recombinant IL-6 was purchased from Millipore, Inc. (Billerica, MA). Tocilizumab (Actemra® 200 mg) was manufactured by Genentech, Inc. (San Francisco, CA).
FIG. 1. SL1025 occludes binding sites of IL-6R and gp130. (A) Sequences of SL1025 and SL1026 with 5-dU modifications indicated (Z = benzyl, P = naphthyl, E = phenethyl) and 2′-methoxy positions highlighted gray. SL1026 is comprised of SL1025 with a 40 kDa polyethylene glycol (PEG) conjugated to its 5′ terminus. Both sequences have a 3′ inverted dT (idT). (B) X-ray crystal structure of the IL-6:SL1025 complex in a cartoon and transparent surface rendering representation (PDB ID: 4NI9) [31]. IL-6 is colored blue, and SL1025 is colored green. Hydrophobic modifications in SL1025 that make direct contact with IL-6 are highlighted green. (C) X-ray crystal structure of the IL-6:IL-6R:gp130 complex in a cartoon and transparent surface rendering representation (PDB ID: 1P9M) [67]. IL-6 is colored blue, IL-6R is colored brown, and gp130 is colored pink.
Measurement of STAT3 inhibition
Measurement of STAT3 phosphorylation in T lymphocytes was conducted as described previously [35]. Whole blood was collected from the antecubital vein of 10 healthy Japanese human volunteers. SL1026 or tocilizumab was pre-equilibrated with human recombinant IL-6 (10 μg/mL, 0.4 μM) and incubated with 200 μL of human blood for 20 min at 37°C. After removing the red blood cells with fluorescence activated cell sorting (FACS) lysing solution (BD Biosciences, San Diego, CA), the cells were suspended in 1 mL of methanol to permeabilize the cell membrane. Methanol was removed and the cells were then resuspended in 100 μL of FACS buffer (phosphate buffered saline [PBS] containing 2% fetal bovine serum) containing 10% Alexa-Fluor 488-conjugated anti-p-STAT3 antibody (BD Biosciences), 0.2% PE-conjugated anti-CD3 antibody (BD Biosciences), and 0.2% PerCP-conjugated anti-CD4 antibody (BD Biosciences). After incubation for 1 h on ice, the cells were resuspended in 700 μL of FACS buffer. Samples were analyzed using a FACS Calibur Flow Cytometer (Becton Dickinson and Company, Tokyo, Japan), and the mean fluorescence intensity of the cells was analyzed using CellQuest software version 3.3 (Becton Dickinson and Company). Protocols were approved by the Ethics Committee of the Otsuka Pharmaceutical Company and conducted in accordance with the guidelines for human experimentation established by the Declaration of Helsinki. Each subject provided written informed consent to participate in the study.
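The percent inhibition plotted in Fig. 2 can be computed from such mean fluorescence intensities; since the text specifies only that values are relative to the no-IL-6 and no-inhibitor controls, the normalization used in the minimal sketch below, as well as the values, are illustrative assumptions.

```python
import numpy as np

def percent_inhibition(mfi_sample, mfi_il6, mfi_no_il6):
    """Inhibition of the IL-6-induced p-STAT3 signal, normalized between the
    IL-6-stimulated (0% inhibition) and unstimulated (100% inhibition) controls."""
    return 100.0 * (mfi_il6 - mfi_sample) / (mfi_il6 - mfi_no_il6)

# Synthetic mean fluorescence intensities for illustration (increasing inhibitor dose).
mfi_no_il6, mfi_il6 = 120.0, 980.0
mfi_with_inhibitor = np.array([950.0, 700.0, 400.0, 180.0, 130.0])
print(percent_inhibition(mfi_with_inhibitor, mfi_il6, mfi_no_il6))
```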
Measurement of CRP and SAA
Human primary hepatocytes (KAC Co., Ltd., Kyoto, Japan) were seeded into 96-well plates at a concentration of 2.8 × 10⁴ cells/well with incubation medium (KAC Co., Ltd.). The following day, fresh medium was added containing human recombinant IL-6 (10 ng/mL) pre-equilibrated with SL1026 or tocilizumab. Twenty-four hours later, supernatants were collected and the concentrations of CRP and SAA were determined by ELISA. CRP was measured with a CircuLex High-Sensitivity CRP ELISA Kit (CircuLex Co., Ltd., Nagano, Japan), and SAA was measured with an Invitrogen Hu SAA ELISA Kit (Thermo Fisher Scientific, Inc., Waltham, MA).
Animal care
Twenty-four female cynomolgus monkeys (Macaca fascicularis), aged 3-5 years, were obtained from Guangxi Grandforest Scientific Primate Company, Ltd. (Guangxi, China). Twelve monkeys were used for the pharmacokinetic study and 12 were used for the CIA model study. Animals were housed individually at a temperature of 26°C ± 3°C and relative humidity of 55% ± 20%. Monkey chow (HF Primate 5K91 12G 5K9J; Purina Mills, LLC) was provided at ~108 g/day and tap water was provided ad libitum from an automatic supply system (Edstrom Industries, Inc., Waterford, WI). Studies were performed by Shin Nippon Biomedical Laboratories, Ltd. (Kagoshima, Japan) in accordance with standards published by the National Research Council (Guide for the Care and Use of Laboratory Animals, NIH OACU) of the National Institutes of Health Policy on Human Care and Use of Laboratory Animals. In accordance with these standards, every effort was made to ensure that the animals were free of pain and discomfort.
Pharmacokinetic study
SL1026 was formulated in a vehicle consisting of 10 mM phosphate buffer (pH 7) containing 5 mM MgCl 2 , 135 mM NaCl, and 0.05% (w/v) Polysorbate 20. SL1026 was administered by bolus injection into the cephalic vein. Twelve animals were assigned to 3 dose groups (n = 4 per group): 1, 10, and 30 mg/kg. Blood was collected from the femoral vein in K 2 EDTA vacutainers (BD Biosciences) at 0.083, 0.25, 0.5, 1, 2, 4, 6, 8, 12, 24, 48, and 72 h after dose administration. SL1026 concentrations in plasma were measured by the dual hybridization method [36]. Briefly, a capture probe with a 3′ amine was designed to hybridize with 17 bases on the 5′ terminus of SL1026, and a detection probe with a 5′ fluorescein isothiocyanate (FITC) label was designed to hybridize with 15 bases on the 3′ terminus of SL1026. The capture probe was immobilized in a 96-well activated plate (Sumitomo Bakelite, Tokyo, Japan) and the plate was washed and blocked. The detection probe was incubated with the plasma samples for 15 min at 80°C to allow annealing to SL1026. Plasma samples were then added to the capture probe plate and incubated for 2 h at 38°C to allow annealing to SL1026. After washing the plate, horseradish peroxidase (HRP)-conjugated anti-FITC antibody (Southern Biotechnology Associates, Inc., Birmingham, AL) was added to each well and incubated for 2 h at room temperature. After washing the plate, HRP substrate solution (0.5 mg/mL 3,3′,5,5′-tetramethylbenzidine [TMB], 0.33 mM EDTA-2K, 0.2% acetic acid, 25% diethylformamide) was added to each well and incubated for 7-15 min at room temperature. Reactions were stopped by the addition of 0.5 M H 2 SO 4 and absorbance was measured. Blank and standard samples were prepared for each plate and a calibration curve was used to determine the SL1026 concentration in each plasma sample. Pharmacokinetic parameters were determined by noncompartmental analyses using WinNonlin software (version 5.2.1; Pharsight Corp., St. Louis, MO).
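The noncompartmental calculations carried out by such software can be illustrated with the following minimal sketch: the terminal half-life is taken from a log-linear fit of the last time points, AUC is computed by the trapezoidal rule with extrapolation to infinity, and clearance is dose divided by AUC. The concentration-time values below are synthetic and are not the study data.

```python
import numpy as np

def nca(times, conc, dose_ug_per_kg, n_terminal=3):
    """Basic noncompartmental parameters from a plasma concentration-time profile."""
    # Terminal elimination rate constant from a log-linear fit of the last points.
    slope, _ = np.polyfit(times[-n_terminal:], np.log(conc[-n_terminal:]), 1)
    lambda_z = -slope
    t_half = np.log(2.0) / lambda_z
    # AUC by the linear trapezoidal rule, extrapolated to infinity.
    auc_last = np.sum(np.diff(times) * (conc[1:] + conc[:-1]) / 2.0)
    auc_inf = auc_last + conc[-1] / lambda_z            # ug*h/mL
    clearance = dose_ug_per_kg / auc_inf                # mL/h/kg
    return {"t_half_h": t_half, "AUC_inf": auc_inf, "CL_mL_per_h_per_kg": clearance}

# Synthetic biexponential profile (time in h, concentration in ug/mL), 1 mg/kg dose.
times = np.array([0.083, 0.25, 0.5, 1, 2, 4, 6, 8, 12, 24, 48, 72], dtype=float)
conc = 15.0 * np.exp(-0.8 * times) + 2.0 * np.exp(-0.05 * times)
print(nca(times, conc, dose_ug_per_kg=1000.0))
```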
CIA study
Twelve cynomolgus monkeys were assigned to 3 groups (4 animals per group): control and two SL1026-treated groups (1 and 10 mg/kg/dose). Arthritis was induced by collagen treatment as described previously [37]. Bovine type II collagen (K41S type2 collagen, 0.4% solution) was purchased from Collagen Research Center (Tokyo, Japan), diluted to 4 mg/mL and then mixed with an equal volume of complete Freund's adjuvant (BD Biosciences). Monkeys were immunized on Study Day 0 by intradermal 2 mL injections into the back. Animals received a booster 3 weeks later (day 21) by the same procedure. SL1026 was formulated as described above and intravenous bolus doses (1 mL/kg) were administered into the cephalic vein every 6 h for 11 days starting on the first day of immunization. The control group received vehicle alone following the same dosing schedule. Arthritis scores and general condition scores of monkeys were observed and recorded once a week for 5 weeks, at 6, 12, 20, 27, and 34 days after the first immunization. Clinical assessment was performed according to established methods, which were modified in consideration of joint function [37]. Arthritis scores were evaluated by monitoring the degree of swelling and rigidity at the metacarpophalangeal, proximal interphalangeal, and distal interphalangeal joints, and of the wrist, ankle, elbow, and knee (total 64 joints). Each joint was assessed according to the following evaluation criteria: Score 0, no abnormality; Score 1, swelling not visible, but can be determined by touch; Score 2, swelling slightly visible and can be confirmed by touch; Score 3, swelling clearly visible, but joint can be completely flexed; Score 4, swelling clearly visible, but joint cannot be completely flexed; Score 5, rigid joints. The arthritis score for each animal was designated as the total score of individual joints. General condition scores were evaluated by monitoring behavior and movement of the monkeys. Each monkey was assessed according to the following evaluation criteria: Score 0, No abnormality; Score 1, Difficulty in hanging from the bars of the home cage by the fingers; Score 2, Inability to hang from the bars of the home cage by the fingers (using wrist); Score 3, Movement only by using forelimbs or hindlimbs; Score 4, Crouching; Score 5, Abnormal body position. Arthritis and general condition scores were determined by investigators blind to treatment assignment.
Measurement of SL1026, IL-6, and anti-SL1026 antibody in monkey plasma
Blood was drawn from the femoral vein of each monkey. Samples were processed into plasma using K 2 EDTA for the measurement of SL1026 concentration, or using heparin sodium (Ajinomoto Pharmaceutical Co. Ltd., Tokyo, Japan) for the measurement of IL-6 and anti-SL1026 antibody. SL1026 concentration in plasma was measured by the dual hybridization method as described above. IL-6 concentration in the plasma was measured using a Quantikine human IL-6 ELISA Kit (R&D Systems, Inc., Minneapolis, MN). SL1026 did not interfere with the measurement of IL-6 with this kit (data not shown). Anti-SL1026 antibody in the plasma was measured by ELISA. Briefly, an immunoplate (Sumitomo Bakelite) was coated with 50 pmol/well of the DNA component or with 10 pmol/well of the PEG component of SL1026 according to the manufacturer's instructions. Monkey plasma (1,000-fold diluted) was added to each well and incubated for 2 h at room temperature with shaking (200 rpm). After washing the wells with PBS containing 0.05% Polysorbate, horseradish peroxidase-conjugated anti-human IgG+IgM+IgA (H&L) (Biodesign, Saco, ME) was diluted 40,000-fold in PBS containing Polysorbate and added to the wells for 1 h at room temperature with shaking (200 rpm). After washing, TMB substrate (Thermo Fisher Scientific) was added, and the colorimetric reaction was measured using an EMax plate reader (Molecular Devices, Tokyo, Japan) at 450 nm. Normal cynomolgus monkey plasma served as a negative control, while an anti-DNA antibody (Millipore) and an anti-PEG antibody (Epitomics, Burlingame, CA) served as positive controls.
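ELISA readouts of this kind are commonly back-calculated against a fitted standard curve. The sketch below uses a four-parameter logistic model, which is an assumption here since the study does not specify the curve model, and the standard concentrations and optical densities are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ec50, hill):
    """Four-parameter logistic curve commonly used for ELISA calibration."""
    return bottom + (top - bottom) / (1.0 + (x / ec50) ** (-hill))

# Synthetic standard curve: known concentrations vs. absorbance at 450 nm.
std_conc = np.array([3.1, 6.3, 12.5, 25.0, 50.0, 100.0, 200.0, 400.0])
std_od = four_pl(std_conc, 0.05, 2.4, 60.0, 1.2) \
    + np.random.default_rng(1).normal(0.0, 0.02, std_conc.size)

params, _ = curve_fit(four_pl, std_conc, std_od, p0=[0.05, 2.5, 50.0, 1.0],
                      bounds=([0.0, 0.0, 1e-3, 0.1], [1.0, 5.0, 1e4, 5.0]))

def conc_from_od(od, p):
    """Invert the fitted 4PL curve to back-calculate a sample concentration."""
    bottom, top, ec50, hill = p
    return ec50 * ((top - bottom) / (od - bottom) - 1.0) ** (-1.0 / hill)

print("back-calculated concentration for OD 1.0:", conc_from_od(1.0, params))
```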
Statistics
Graphical presentations and calculations were carried out using Microsoft Excel 2003 (SP1; Microsoft Co., Redmond, WA). Statistical analyses were performed using SAS System for Windows (release 9.1 and 9.3; SAS Institute, Inc., Cary, NC). For the ex vivo test using human lymphocytes, the Dunnett's test and unpaired t-test were conducted. For the monkey studies, the mixed effect model for repeated measures method was used for the comparison of clinical scores. Two-tailed P < 0.05 was considered significant. For the comparison of IL-6 concentrations on day 34, the Kruskal-Wallis test was performed comparing treatment groups with control, followed by a Dunn's post test corrected for multiplicity of comparison.
Inhibition of IL-6-induced CRP and SAA production in human primary hepatocytes
To evaluate the inhibitory effect of SL1026 on IL-6-induced production of CRP and SAA, we performed an ex vivo assay using human primary hepatocytes (Fig. 3). CRP and SAA concentrations in supernatants from hepatocytes were 1.8 and 9.4 ng/mL (mean, n = 3), respectively. Treatment of cells with IL-6 increased the CRP concentration ~2-fold and SAA more than 10-fold compared to nonstimulated cells. Similar to tocilizumab, SL1026 showed dose-dependent inhibition of IL-6-induced production of CRP and SAA.
Pharmacokinetics of SL1026
To establish a dose regimen for the CIA study, a plasma pharmacokinetic evaluation was performed in female cynomolgus monkeys following a single 1, 10, or 30 mg/kg intravenous bolus dose (n = 4 per group). Mean extrapolated maximum SL1026 plasma concentrations (C 0 ) were approximately dose linear over the 30-fold dose range studied (Table 1). Plasma concentration-time curves showed a biphasic decline (Fig. 4) with mean terminal (t½β) half-lives of 5.33, 164, and 51.8 h for the 1, 10, and 30 mg/kg dose groups, respectively (Table 1). As assessed by plasma area under the concentration-time curves (AUC), the increase of total SL1026 exposure with dose was greater than dose-proportional, indicating saturation of plasma clearance. Thus, over the dose range studied, plasma clearance values declined 4-fold from 79.1 mL/h/kg for the low-dose group to 19.9 mL/h/kg for the high-dose group.
FIG. 2. SL1026 inhibits IL-6-induced STAT3 phosphorylation in human lymphocytes. Cells were induced with IL-6 and STAT3 phosphorylation was determined by FACS using a fluorescent anti-p-STAT3 antibody. Percent inhibition values (relative to no IL-6 and no inhibitor control samples) are plotted as the mean ± SEM of 10 samples at each concentration. A statistically significant increase of inhibition was observed compared to the no-inhibitor control (0.0 ± 2.8%) for all SL1026 and tocilizumab groups (Dunnett's test, two-tailed, P < 0.01).
Reduction of arthritis symptoms in monkeys treated with SL1026
Monkeys received a q.i.d. administration of either 0, 1, or 10 mg/kg SL1026 (0, 4, or 40 mg/kg/day), with the schedule of collagen treatment, plasma collection, and arthritis score assessment indicated in Fig. 5A. All monkeys in the untreated control group (0 mg/kg SL1026) began to show clinical signs of arthritis on day 13, with an arthritis score of 0.3 ± 0.3 (mean ± SEM). This score continued to increase throughout the study and reached 93.3 ± 22.8 on day 34 (Fig. 5B). In contrast, no clinical signs of arthritis were detected until day 20 in monkeys treated with either 1 or 10 mg/kg SL1026. On day 34, arthritis scores for SL1026-treated animals were 68.5 ± 10.7 and 41.0 ± 14.6 for the 1 and 10 mg/kg dose groups, respectively. The reduced arthritis score of the 10 mg/kg dose group on day 34 was significantly different than the control group (P < 0.05). The general condition score for the untreated control monkeys also continued to worsen throughout the study, reaching a value of 3.3 ± 0.5 (mean ± SEM) on day 34 (Fig. 5C), compared to 1.5 ± 0.5 and 1.5 ± 0.6 for the 1 and 10 mg/kg SL1026 treatment groups, respectively. The improved general condition score for both treatment groups on day 34 was significantly different than the control groups (P < 0.05). Overall, SL1026 dose-dependent improvements in both arthritis score and general condition score were observed in this study.
(Table 1) Values were calculated from three or four monkeys in each group.
(Fig. 4) Values are plotted as the mean ± SD of four monkeys. Values measured during the first 8 h after administration are shown in the inset plot.
Plasma SL1026 concentration in monkeys with CIA
Plasma SL1026 concentrations were measured throughout the CIA study and are shown in Table 2. The concentrations of SL1026 5 min after the first 1 or 10 mg/kg dose were 19.4 and 184 µg/mL (1.6 and 15.4 µM), respectively, reflecting the dose-dependent initial exposure on day 0. Peak plasma concentrations measured on days 7 and 11 were similar to those on day 0. To determine the trough plasma concentrations, plasma samples were collected immediately before the first SL1026 administration of day 1 (5th dose) and day 6 (25th dose). SL1026 trough concentrations in the 10 mg/kg group were significantly greater than predicted by the single-dose concentration-time profile. In both the 1 and 10 mg/kg groups, the mean plasma trough concentrations on day 6 were 2-3 times greater than on day 1 (0.018 µg/mL and 0.049 µg/mL on day 1 and day 6, respectively, for the 1 mg/kg group, and 12.7 µg/mL and 26.6 µg/mL on day 1 and day 6, respectively, for the 10 mg/kg group). Furthermore, SL1026 was still detectable in the plasma at 0.011 µg/mL (1 mg/kg group) and 0.082 µg/mL (10 mg/kg group) on day 13, ~48 h after the final administration.
Plasma IL-6 concentration in SL1026-treated monkeys
In control monkeys, median plasma concentrations of IL-6 remained at or below 37.8 pg/mL throughout the dosing period (days 0-11), but rapidly increased to 130 pg/mL on day 13 and then remained at or above 56.0 pg/mL through day 34 (Fig. 5D). In monkeys administered 10 mg/kg SL1026, median IL-6 concentrations dramatically increased to 347 pg/mL on day 1, but then steadily decreased to 4.7 pg/mL by day 13, and median values remained at or below 26.4 pg/mL through day 34. In monkeys administered 1 mg/kg SL1026, median IL-6 concentrations were similar to the control group during the treatment period and showed only a moderate increase from days 13-34, peaking at 51.1 pg/mL on day 27. On the final day of the study (day 34), median IL-6 concentrations in the SL1026 treatment groups were significantly different from the control group (P = 0.0263). After dosing, a trend toward reduced IL-6 concentrations was observed in SL1026-treated animals compared to control animals (P = 0.079 for 1 mg/kg SL1026 vs. control and P = 0.037 for 10 mg/kg SL1026 vs. control).

[Fig. 5 legend: (A) Animals were sensitized on days 0 and 21 with collagen and dosed with the slow off-rate modified aptamer (SOMAmer) every 6 h on days 0-11; plasma was collected and measurements of arthritis score and general condition score were made on the days indicated with a dot. Arthritis score (B) and general condition score (C) were evaluated at days 6, 13, 20, 27, and 34 after the first sensitization; values are plotted as the mean ± SEM (n = 4 in each group) for groups administered 0, 1, or 10 mg/kg SL1026 (plot symbols distinguish the groups). A statistically significant decrease of arthritis score was noted in the 10 mg/kg group (*P < 0.05 vs. 0 mg/kg group) and of general condition score in both the 1 and 10 mg/kg groups (*P < 0.05 vs. 0 mg/kg group) by a mixed-effect model for repeated measures in overall mean. (D) IL-6 concentration was measured in plasma samples collected at various times after the first immunization; values are plotted as the median and interquartile range (n = 4 in each group) for groups administered 0, 1, or 10 mg/kg SL1026.]
Plasma anti-SL1026 antibody titers in monkeys treated with SL1026

We screened for production of antibodies against the DNA and PEG components of SL1026. Compared to predose, no anti-SL1026 antibodies were detected for the DNA or PEG components of SL1026 over the 34-day experimental period (Fig. 6). The coating of the DNA and PEG components was confirmed with anti-DNA and anti-PEG antibodies. No signal was observed in this assay with normal monkey plasma (data not shown).
Discussion
The nonhuman primate CIA model is an established system for studying RA [37], and the therapeutic and preventive effects of several existing drugs, including tocilizumab, have been assessed in this model [38]. Animals display an autoimmune-mediated polyarthritis, synovitis, and erosion of cartilage and bone [37,39,40]. These symptoms begin to manifest at the clinical onset of arthritis [41], and are similar to human RA [42]. As in human RA, IL-6 is believed to be one of the important triggers in this monkey model, and is thought to play a key role in contributing to the severity of disease [43]. SL1026 treatment delayed the onset of arthritis in monkeys in this study and reduced the severity of symptoms. Not only did joint swelling and stiffness occur at a lower frequency in monkeys treated with SL1026 compared to the control monkeys, but also an overall improvement in general health condition was observed. Furthermore, SOMAmer treatment resulted in a sustained reduction in plasma IL-6 levels that corresponded precisely with the reduction in RA symptoms.
SL1026 administration began on the day animals received their first collagen immunization. Thus, SL1026 was given before the expected increase in serum IL-6, allowing SL1026 to access target tissues before the onset of inflammation. Results from the single-dose pharmacokinetic study in monkeys indicated an expected plasma concentration of 13 ng/mL (1 nM) at 6 h after administration of a 10 mg/kg dose (Fig. 4). Based on these results, a dosing regimen of four administrations per day was chosen to ensure a plasma concentration of SL1026 in excess of the in vitro IC 50 value (2.4 nM) [29] for nearly the entire dosing interval for the 10 mg/kg dose. SL1026 measurements in samples collected during the study (Table 2) indicated that SL1026 concentrations remained above the IL-6 concentration (less than 500 pg/mL, Fig. 5D) at both dosing concentrations.
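A quick way to sanity-check this kind of regimen choice is to simulate one dosing interval and compare the predicted trough against the IC50. The sketch below does this for an assumed biexponential decline; the kinetic parameters are illustrative, not fitted to the study data, and the molar conversion assumes an effective molar mass of roughly 13 kDa so that 13 ng/mL corresponds to about 1 nM, as stated above.

import numpy as np

MW_G_PER_MOL = 13_000.0   # assumed effective molar mass (so 13 ng/mL ~ 1 nM, per the text)
IC50_NM = 2.4             # reported in vitro IC50 for SL1026

def ng_ml_to_nM(conc_ng_ml, mw=MW_G_PER_MOL):
    """Convert a mass concentration in ng/mL to nmol/L."""
    return conc_ng_ml * 1000.0 / mw

def biexponential(t_h, a, alpha, b, beta):
    """Illustrative two-phase (distribution + elimination) decline, in ng/mL."""
    return a * np.exp(-alpha * t_h) + b * np.exp(-beta * t_h)

# Hypothetical single-dose parameters for a 10 mg/kg bolus (illustrative only)
t = np.linspace(0, 6, 61)                                   # one q6h dosing interval
conc_ng_ml = biexponential(t, a=2.0e5, alpha=2.0, b=40.0, beta=0.01)
conc_nM = ng_ml_to_nM(conc_ng_ml)

print(f"predicted 6-h trough: {conc_ng_ml[-1]:.1f} ng/mL ({conc_nM[-1]:.2f} nM)")
print("above IC50 throughout the interval:", bool(np.all(conc_nM > IC50_NM)))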
The trough concentrations of SL1026 were 0.018 µg/mL and 0.049 µg/mL in plasma collected 5 min before the 5th and 25th dose at 1 mg/kg, and 12.7 µg/mL and 26.6 µg/mL in plasma collected 5 min before the 5th and 25th dose at 10 mg/kg, ~1,000-fold greater than predicted by the single-dose concentration-time profile. This suggests that the clearance rate was decreasing after repeated doses, perhaps due to saturation of a clearance mechanism, allowing measurable concentrations of SL1026 (11 and 82 ng/mL in the low- and high-dose groups, respectively) to remain in the plasma on day 13, ~48 h after the final administration. Even if SL1026 concentrations dropped below the presumed pharmacologically effective level in plasma after the last administration, we expected SL1026 to accumulate in target tissues, such as articular cavities, by the enhanced permeation and retention effect [44-46]. Thus, pharmacologically effective concentrations of SL1026 might have persisted at the target tissues long after the final dose and contributed to the overall inhibition of the onset of arthritis in joints and restoration of the general health condition of the treated animals.
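The pattern described here — troughs rising with repeated dosing and exposure increasing more than dose-proportionally — is what a saturable elimination pathway produces. The toy simulation below shows this with a one-compartment model and Michaelis-Menten elimination; all parameter values are invented for illustration and are not fitted to SL1026 data.

import numpy as np

def simulate_mm(dose_mg_kg, n_doses, tau_h=6.0, vd_l_kg=0.05,
                vmax_mg_h_kg=0.5, km_mg_l=5.0, dt_h=0.05):
    """One-compartment IV-bolus model with Michaelis-Menten (saturable) elimination.

    Returns the simulated concentration trace (mg/L) and the number of time
    steps per dosing interval. All parameter values are illustrative.
    """
    steps_per_dose = round(tau_h / dt_h)
    conc = np.zeros(n_doses * steps_per_dose)
    c = 0.0
    for i in range(conc.size):
        if i % steps_per_dose == 0:            # bolus every tau_h
            c += dose_mg_kg / vd_l_kg
        # dC/dt = -(Vmax/Vd) * C / (Km + C): apparent clearance falls once C >> Km
        c -= (vmax_mg_h_kg / vd_l_kg) * c / (km_mg_l + c) * dt_h
        conc[i] = c
    return conc, steps_per_dose

conc, spd = simulate_mm(dose_mg_kg=10, n_doses=24)           # ~6 days of q6h dosing
troughs = [round(conc[k * spd - 1], 1) for k in (4, 8, 24)]  # end of days 1, 2 and 6
print("simulated troughs (mg/L):", troughs)                  # rise across days as elimination saturates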
Serum IL-6 levels reflect the normal endogenous production of IL-6 [47], and IL-6 is an important serum biomarker for treatments that exert their effects through IL-6 signal inhibition. The rate of IL-6 clearance from the blood is increased significantly upon binding IL-6R [48], and anti-IL-6R antibodies such as tocilizumab block this clearance mechanism, leading to a temporary increase in free IL-6 levels in serum of animals and humans [47,48]. While tocilizumab significantly reduces the disease activity of RA by effectively inhibiting IL-6 signal transduction and the subsequent inflammatory response [49,50], the increase in blood levels of free IL-6 may result in high IL-6 exposure to organs during drug treatment. Horai et al. reported that serum IL-6 concentrations in monkeys with CIA begin to rise at about 14 days after the first immunization and peak at 21-28 days [37]. This observation was recapitulated in our control group, but in the treatment groups, an increase in total serum IL-6 was observed during the drug administration period, particularly at the high dose (Fig. 5D). IL-6 concentrations returned to baseline in the treatment groups after administration of the final dose on day 11, and were consistently lower than in the control group from days 13-34, during the expected peak period. Notably, arthritis scores and general health scores in the treatment groups were lower than those in the control group during this same period. IL-6 levels in the 1 mg/kg dose group were only slightly greater than the control group during SL1026 dosing, but remained lower than the control group after dosing, providing an intermediate reduction in RA symptoms.
Because SL1026 is known to block binding of IL-6 to IL-6R, the IL-6 spike during SOMAmer administration may have been due to interference with receptor-dependent IL-6 elimination, as was observed with tocilizumab. Additionally, the rise in IL-6 levels during SOMAmer administration may be an indication of an on-target effect commonly seen with antibody drugs, where inhibitor binding alters the rates of target distribution and elimination, resulting in increased plasma target concentrations [51-53]. However, while total IL-6 levels increased during SL1026 administration, free IL-6 levels likely decreased, as the majority of serum IL-6 existed as a complex with the SOMAmer and was, therefore, inactive (IL-6 was not detected in plasma after depletion of SL1026:IL-6 complexes with anti-PEG antibody-coated beads, data not shown). This inhibition of IL-6 activity during the early stages of disease development led to the reduction in RA symptoms in both dose groups. We worried that the rise in IL-6 levels during SOMAmer dosing could be due to activation of toll-like receptors (TLR) by SL1026, or to contaminating endotoxin. However, no evidence of TLR9 activation was observed in an in vitro cellular assay with as much as 230 µM SL1026 (data not shown), and the endotoxin level in the test material was below the detectable measurement limit (<0.009 EU/mg).
Activation of CD4+ lymphocytes has been observed in the earliest clinical stage of RA [54], and IL-6 induction of STAT3 phosphorylation in T lymphocytes is believed to be closely associated with RA [55-57]. SL1026 fully inhibited IL-6 signaling and STAT3 phosphorylation in isolated human T lymphocytes with potency comparable to tocilizumab. Additionally, IL-6 induces production of CRP and SAA by hepatocytes, and tocilizumab was previously shown to inhibit this activity by blocking the IL-6 signal transduction pathway [19]. Similar to tocilizumab, SL1026 exhibited dose-dependent inhibition of IL-6-induced production of CRP and SAA by isolated human primary hepatocytes. These ex vivo results further support the in vivo observations and indicate that the suppression of RA symptoms in the CIA model resulted directly or indirectly from IL-6 signal inhibition by SL1026. Due to the clinical success of tocilizumab, IL-6 signal blockade is considered to be a powerful therapeutic strategy for the treatment of RA. Many other biologics targeting IL-6 are in development [58,59], including other anti-IL-6R antibodies (such as sarilumab [60]), anti-IL-6 antibodies (such as sirukumab [61], siltuximab [62], clazakizumab [63], and olokizumab), and anti-gp130 antibodies and their fusion proteins [64]. All of these agents are antibodies or synthesized proteins, while SL1026 is a nucleic acid-based antagonist with certain advantages over antibody drugs in vivo. Antibody drugs can induce an immune response in patients after several administrations, whereby neutralizing antibodies are generated against the drug and generally weaken its efficacy [38,65,66]. This has not been observed for aptamer therapies to date, and in this study, anti-SL1026 antibodies were not detectable in any of the monkeys during the examination period. Furthermore, the aggressive dosing regimen of up to 10 mg/kg of SL1026 every 6 h for 11 days was well tolerated in all animals and no adverse effects were observed.
IL-6 is a multifunctional cytokine that promotes cell growth and differentiation and influences the expression of a variety of proteins. In addition to its role in inflammation, IL-6 is known to regulate tumor development, including initiation, promotion, malignant conversion, invasion, and metastasis, and a relationship between IL-6 expression and cancer pathology has been reported [67]. SL1026 was previously shown to inhibit the growth of several tumor cell lines in vitro [29], and has the potential to be an effective suppressor of tumor proliferation in vivo. These combined studies confirm that SL1026 is a potent antagonist of the IL-6 signaling pathway and represents a new class of drug candidate for the treatment of IL-6-mediated diseases including RA, inflammation, and cancer.
"year": 2016,
"sha1": "f31f5354a784ea28b79e93e3151256b4a36c61dc",
"oa_license": "CCBY",
"oa_url": "https://www.liebertpub.com/doi/pdf/10.1089/nat.2015.0567",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "f31f5354a784ea28b79e93e3151256b4a36c61dc",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Prevalence of anemia among Indigenous children in Latin America: a systematic review
ABSTRACT

OBJECTIVE: To describe the prevalence pattern of anemia among Indigenous children in Latin America. METHODS: PRISMA guidelines were followed. Records were identified from the databases PubMed, Google Scholar, and Lilacs by two independent researchers between May and June 2021. Studies were included if the following criteria were met: a) studied Indigenous people; b) was about children (from 0 to 12 years old); c) reported a prevalence estimate of anemia; d) had been conducted in any of the countries of Latin America; e) was published either in English, Portuguese, or Spanish; f) is a peer-reviewed article; and g) was published at any date. RESULTS: Out of 2,401 unique records retrieved, 42 articles met the inclusion criteria. A total of 39 different Indigenous communities were analyzed in the articles, and in 21 of them (54.0%) child anemia was a severe public health problem (prevalence ≥ 40%). Those communities were the Aymara (Bolivia); Aruak, Guaraní, Kamaiurá, Karapotó, Karibe, Kaxinanuá, Macro-Jê, Suruí, Terena, Xavante (Brazil); Cabécar (Costa Rica); Achuar, Aguaruna, Awajún, Urarina, Yomybato (Peru); Piaroa and Yucpa (Venezuela); and Quechua (Peru and Bolivia). Children below two years had the highest prevalence of anemia (between 16.2% and 86.1%). Among Indigenous people, risk factors for anemia include nutrition, poor living conditions, access to health services, racism, and discrimination. Bolivia and Guatemala are scarcely studied, despite having the highest proportion of Indigenous communities in Latin America. CONCLUSIONS: Anemia constitutes a poorly documented public health problem among Indigenous children in 21 Indigenous communities in Bolivia, Brazil, Colombia, Costa Rica, Ecuador, Guatemala, Mexico, and Peru. In all Indigenous communities included in this study, child anemia was an issue, especially in younger children.
INTRODUCTION
Anemia is a disorder in which the number of red blood cells is insufficient to meet the body's needs 1 . Iron deficiency is the most common cause of anemia, but other nutritional deficiencies, acute and chronic inflammation, parasites, and inherited or acquired diseases that affect hemoglobin synthesis and red blood cell production or survival can cause anemia 1 . It is the most common blood disorder in developing countries 2 and the health condition that affected the greatest number of people around the world (2.36 billion people) in 2015 3 . Multiple factors often contribute to the etiology of anemia, and sociodemographic conditions play a key role, especially in low-income countries 4 . For example, Leite et al. 5 , who studied Indigenous children in Brazil, documented higher risk of anemia for boys, children with lower maternal schooling, lower household socioeconomic status, poorer sanitary conditions, presence of maternal anemia, and anthropometric deficits.
Numerous factors, ranging from a lack of accurate and easily accessible information to the very nature of Indigenous identities in Latin America and the Caribbean, hinder the determination of the exact number of Indigenous people in that region 6 . To define Indigenous peoples in an efficient manner, the International Labor Organization (ILO) Convention 169 on Indigenous and Tribal Peoples in Independent Countries provides a definition that can be used to identify at least four dimensions within Indigenous peoples: recognition of identity, common origin, territoriality, and the linguistic and cultural dimension, which must be considered when establishing operational criteria 7 . Around 58 million Indigenous people live in Latin America, constituting 9.8% of the population 8 . The proportion of the population considered Indigenous in Latin America varies by country and ranges from 41.0% in Bolivia and Guatemala, to 0.5% in Brazil, and 0% in Uruguay 6 . In many countries of the region, Indigenous children are in a highly vulnerable situation, due to very high infant mortality rates, alarming levels of malnutrition in the context of food insecurity, precarious access to water, and high prevalence of diarrheal infections 9 . The situation has become a humanitarian crisis recognized by several national governments and this illustrates the dire state of Indigenous populations in many countries of Latin America. Until now, few studies from Latin America have considered the prevalence of anemia among Indigenous children. One is from Brazil 10 and another includes only four countries from the region (Brazil, Guatemala, Mexico, and Venezuela) 11 . Other studies included either Indigenous children without reporting values of prevalence of anemia 12,13 or without differentiating among Indigenous and non-Indigenous children [14][15][16][17][18][19][20] .
Since anemia is an indicator of both poor health and poor nutrition 2 , combining evidence from the literature about the prevalence of anemia in Indigenous children in Latin American countries can provide valuable data to governments and public health policies. Consequently, our objective was to describe the prevalence pattern of anemia among Indigenous children in Latin America.
Literature Search
PRISMA guidelines were followed, and the Systematic Review was registered in PROSPERO under the number CRD42022300601. Records were identified from the databases PubMed, Google Scholar, and Lilacs by two independent researchers between May and June 2021. The search strategy combined four main categories which correspond to the inclusion criteria (a) Indigenous people, b) children, c) anemia, and d) Latin America with the Boolean operator AND. Within the main categories we used MeSH terms (if available) and free text for different variations of the category topic, for instance, "children" OR "childhood", etc. The names of Latin American countries and the names of some representative Indigenous communities in the region were additionally included. Despite the controversies about what Latin America is and that such a position cannot be entirely correct 21 , in this study Latin America includes the following countries: Argentina, Belize, Bolivia, Brazil, Colombia, Chile, Costa Rica, Ecuador, El Salvador, French Guyana, Guyana, Guatemala, Honduras, Mexico, Nicaragua, Panamá, Paraguay, Peru, Suriname, Uruguay, and Venezuela. The spelling of countries and the Indigenous communities were varied according to differences in common use, for example "Peru" and "Perú". However, the different spelling of the words for other places such as Latin America, Central America, and South America was not included since the searches obtained with those search terms were equal; for example, search results with the terms "Latin America" and "América Latina" had the same results. The searches were run by combining the search terms with the Boolean operator "AND". The different terms within each of the four categories already mentioned were listed separated by the Boolean operator "OR". Search terms in Lilacs: 214 searches were made combining the search terms, as follows: 1. "Anemia" AND "child" AND each of the countries in Latin America; 2. "Anemia" AND "children" AND each of the countries in Latin America; 3. "Anemia" AND "child" AND each of the Indigenous communities; 4. "Anemia" AND "children" AND each of the Indigenous communities; and 5. "Anemia" AND "crianças" AND "indios".
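The combinatorial structure of this strategy (each anemia term AND each population term AND each country or community name) is straightforward to script. The sketch below shows how such query strings could be generated; the term lists are abbreviated examples, not the full lists used in the review.

from itertools import product

# Abbreviated example term lists; the review combined fuller lists with OR within
# each category and AND across categories.
anemia_terms = ['"Anemia"']
child_terms = ['"child"', '"children"', '"crianças"']
place_terms = ['"Peru"', '"Perú"', '"Bolivia"', '"Guatemala"', '"Xavante"', '"Aymara"']

queries = [" AND ".join(combo) for combo in product(anemia_terms, child_terms, place_terms)]
for q in queries[:5]:
    print(q)
print(f"{len(queries)} query strings generated")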
Inclusion and Exclusion Criteria
Studies were included if the following criteria were met: a) studied Indigenous people; b) was about children (from 0 to 12 years old); c) reported a prevalence estimate of anemia; d) had been conducted in any of the countries of Latin America; e) was published either in English, Portuguese, or Spanish; f) is a peer-reviewed article; and g) was published at any date. PubMed, Google Scholar, and Lilacs searches were carried out with the defined search terms. Studies that did not meet the previous criteria were not included in this systematic review. We sorted the search results in Google Scholar by relevance and screened the first 200 hits, followed by the next 200 and so forth, until we were unable to find any more relevant results. Title and abstract screening were performed, and, in some cases, the methods section of the article was screened to make sure that the inclusion criteria were met. Duplicates were excluded and a full list of relevant articles was created for data extraction.
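These criteria can also be expressed as a simple screening function applied to each record. A minimal sketch follows; the record structure is an assumption made for illustration, not a format used by the review.

LANGS = {"en", "pt", "es"}
LATIN_AMERICA = {"Argentina", "Belize", "Bolivia", "Brazil", "Colombia", "Chile",
                 "Costa Rica", "Ecuador", "El Salvador", "French Guyana", "Guyana",
                 "Guatemala", "Honduras", "Mexico", "Nicaragua", "Panamá", "Paraguay",
                 "Peru", "Suriname", "Uruguay", "Venezuela"}

def meets_inclusion_criteria(record):
    """Apply the stated criteria a)-g) to one screened record (assumed dict format)."""
    low, high = record["ages"]
    return (record["indigenous"]                        # a) Indigenous population studied
            and 0 <= low and high <= 12                 # b) children 0-12 years old
            and record["reports_prevalence"]            # c) anemia prevalence estimate reported
            and record["country"] in LATIN_AMERICA      # d) conducted in Latin America
            and record["language"] in LANGS             # e) English, Portuguese, or Spanish
            and record["peer_reviewed"])                # f) peer-reviewed (g: any date)

print(meets_inclusion_criteria({"indigenous": True, "ages": (0, 5),
                                "reports_prevalence": True, "country": "Peru",
                                "language": "es", "peer_reviewed": True}))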
Data Extraction and Quality Assessment and Analysis
Relevant information from selected studies was recorded in a data extraction sheet according to the following categories: Study (title, author, year, journal), geographical location (country, region), type of study, study objective, dataset, sampling technique (random or convenience), date of data collection, sample size, age group, exposure, outcome, statistical methods used, study results, study conclusions, study limitations, and study recommendations. Afterwards, the description of selected studies was assessed according to the STROBE statement 22 . Each item of the list was assigned one point: cross-sectional studies were evaluated over a total of 33 points and cohort studies over 35 points. For both types of studies, a threshold of 12 points was established to determine that they were of sufficient quality for consideration in this systematic review. The 12 points included: 1) gives a scientific background and rationale for the reported investigation; 2) states specific objectives, including any prespecified hypotheses; 3) describes the setting, locations, and relevant dates; 4) defines outcomes; 5) describes assessment methods; 6) shows the study size; 7) describes statistical methods; 8) gives the characteristics of study participants; 9) reports summary measures; 10) summarizes key results referring to study objectives; 11) gives a cautious overall interpretation of results; and 12) discuss the external validity of the study results. Thereafter, the systematic search and the study characteristics were analyzed. The descriptive analysis of the prevalence of anemia was grouped by country, by age range, and by Indigenous community.
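Scoring each study against the checklist then reduces to counting satisfied items and comparing the total against the threshold. A small sketch, using a hypothetical study rather than any article from the review:

THRESHOLD = 12  # minimum points required for inclusion, as stated above

def quality_score(satisfied_items, checklist):
    """One point per satisfied checklist item (33 items for cross-sectional
    studies, 35 for cohort studies in this review)."""
    return sum(1 for item in checklist if item in satisfied_items)

def include_study(satisfied_items, checklist, threshold=THRESHOLD):
    return quality_score(satisfied_items, checklist) >= threshold

# Hypothetical example: a cross-sectional study satisfying 18 of 33 items
checklist = [f"strobe_item_{i}" for i in range(1, 34)]
satisfied = set(checklist[:18])
print(quality_score(satisfied, checklist), include_study(satisfied, checklist))  # 18 True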
Literature Search
The initial search yielded 174 records (Figure 1). Out of these, 42 articles met the eligibility criteria 5,23-63 and were included in the systematic review. A second reviewer independently replicated the searches. Scores ranged from 13 to 21 points for cross-sectional studies, and the two cohort studies received scores of 21 and 35. All articles scored above the established threshold of 12 and thus were judged of good quality for inclusion in the subsequent analyses. Most articles included more than one sample taken to measure the prevalence of anemia. Therefore, Table 1 shows the different samples taken in all the studies, numbered from 1 to 133.
General Characteristics of the Studies
Selected articles were written in one of the following three languages: English, Portuguese, or Spanish. The sample sizes diverged widely across the studies. The smallest sample size was 36 47 , whereas the largest sample sizes were from studies including secondary data from national databases; for example, the highest number was from the National Maternal and Child Health Surveys in Guatemala (n = 5,735) 48 . Age ranges varied greatly: some studies considered children only within a narrow age group, e.g., 6-24 months 52 , whereas other studies included broader age ranges, e.g., 0-17 years old 58 . Most studies followed the recommendations of the World Health Organization to diagnose anemia among children (for children from 6 to 59 months: < 11.0 g/dL; for children from 5 to 11 years old: < 11.5 g/dL; and for children from 12 to 14 years old: < 12.0 g/dL) 1 . Two studies used other diagnostic thresholds 55,63 (Table 1).
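The WHO hemoglobin cut-offs cited above map directly onto a simple age-dependent classification rule. A minimal sketch follows (thresholds in g/dL as given above; adjustments for altitude, which WHO also recommends and which matter for high-altitude Andean communities, are ignored here):

def is_anemic(age_months, hb_g_dl):
    """Classify anemia from hemoglobin using the WHO cut-offs cited in the text.

    6-59 months: Hb < 11.0 g/dL; 5-11 years: < 11.5 g/dL; 12-14 years: < 12.0 g/dL.
    Ages outside those bands return None (no cut-off given in the text).
    """
    if 6 <= age_months < 60:
        return hb_g_dl < 11.0
    if 60 <= age_months < 144:
        return hb_g_dl < 11.5
    if 144 <= age_months < 180:
        return hb_g_dl < 12.0
    return None

print(is_anemic(18, 10.4))   # True  (toddler below the 11.0 g/dL cut-off)
print(is_anemic(96, 11.8))   # False (school-age child above 11.5 g/dL)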
Prevalence of Anemia by Country and Indigenous Community
Bolivia, Brazil, and Peru showed the highest overall prevalence of anemia (Figure 2; the data labels in the figure contain the sample numbers according to Table 1), with the highest values among Guaraní children from 6 to 11 months (88.9%) in five villages in the state of Rio de Janeiro (Sapukai, Parati-Mirim, Araponga, Sítio Rio Pequeno, and Mamanguá) and one village in the state of São Paulo (Boa Vista) in Brazil 28 . The lowest prevalence of anemia (0%) was observed in Bolivia among children between 11 and 12 years from the Aymara Indigenous community in Caranavi in the Taraco district situated on the shores of Lake Titicaca at a high altitude 24 . The second lowest prevalence of anemia including boys and girls (4.5%) was found in Chile, among Mapuche children from 8 to 14 months old from rural areas in the province of Cautín 37 . We found that the prevalence of anemia among Indigenous children has been reported in 10 countries in Latin America. However, almost one-third of the studies were from Brazil (13 studies). Also, from the 39 Indigenous communities included in this systematic review, only four were part of more than one study: the Aymara (2 studies, in Bolivia) 24 , the Quechua (Bolivia, Colombia, and Peru), and the Karapotó and Xavante (Brazil). Prevalence of anemia among children is a severe public health problem in 54% (21 out of 39) of the Indigenous communities in Latin America included in this systematic review (prevalence ≥ 40%) 4 . The Indigenous communities that showed prevalence of anemia above 40% were the Aymara (Bolivia); Aruak, Guaraní, Kamaiurá, Karapotó, Karibe, Kaxinanuá, Macro-Jê, Suruí, Terena, and Xavante (Brazil); Cabécar (Costa Rica); Achuar, Aguaruna, Awajún, Urarina, and Yomybato (Peru); Piaroa and Yucpa (Venezuela); and Quechua (Peru and Bolivia).

Figure 3. Prevalence of anemia by Indigenous community.
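The classification used throughout this section — treating anemia in a community as a severe public health problem when prevalence reaches 40% — is easy to reproduce from a table of samples. A small sketch over made-up rows (not the review's Table 1), under one plausible reading in which a community is flagged if any of its reported estimates reaches the threshold:

from collections import defaultdict

# Made-up sample rows: (community, country, prevalence_percent)
samples = [
    ("Guaraní", "Brazil", 88.9),
    ("Mapuche", "Chile", 4.5),
    ("Aymara", "Bolivia", 0.0),
    ("Aymara", "Bolivia", 55.0),
]

by_community = defaultdict(list)
for community, country, prevalence in samples:
    by_community[(community, country)].append(prevalence)

for (community, country), values in by_community.items():
    severe = max(values) >= 40.0   # threshold for a severe public health problem
    print(f"{community} ({country}): max {max(values):.1f}% -> severe problem: {severe}")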
DISCUSSION
This systematic review revealed an alarming situation regarding anemia in Indigenous communities in Latin America. Data obtained from Brazil confirm the gravity of the situation in the Indigenous population, with extremely high prevalence rates among children as described by Lício, Fávaro and Chaves 10 , who allege that such information is essential to contribute to health care priorities in these communities.
We found that the prevalence of anemia among Indigenous children has been reported in 10 countries in Latin America. However, almost one-third of the studies were from Brazil (13 studies). As Licio et al. 10 stated, since 2001, studies investigating the occurrence of anemia in Indigenous populations increased significantly in Brazil, occupying an important space in the health status and social inequalities debate. Indigenous communities in Brazil correspond to 0.5% of the entire population (820,000 inhabitants), whereas, for example, 41% of the country population in both Guatemala (5,880,000) and Bolivia (4,120,000) is considered Indigenous 6 . Only one study from Guatemala and two from Bolivia could be included in this systematic review. However, we reported seven studies from Mexico and five from Peru which are the two countries with the highest numbers of Indigenous people in Latin America, namely 16,830,000 (40.3%) and 7,600,000 (18.2%) from a total Indigenous population of 41,810,000 6 . In addition, from the 39 Indigenous communities included in this systematic review, only four of them appeared in more than one study: Aymara in Bolivia; Quechua in Bolivia, Colombia, and Peru; Karapotó and Xavante in Brazil. These results suggest that the data on specific Indigenous communities is scarce. Therefore, a consistent, collaborative approach is crucial to deliver medical research and care of Indigenous communities 66 .
As stated above, numerous factors hinder the determination of the exact number of Indigenous people in Latin America; however, considering the existence of approximately 780 Indigenous groups 6 , this means that this systematic review covered 5.0% of the Indigenous communities in the region. Therefore, studies about the situation of specific Indigenous communities in the region are lacking. We found studies about the prevalence of anemia among children but without making any distinction between non-Indigenous and Indigenous children: for example, a study by Vázquez et al. in 2019 14 showed that the overall prevalence of anemia in children from Latin America in general is 28.56%, and Mujica et al. 16 found anemia prevalences from 4.0% to no more than 61.3% in children under six years of age based on 2014 data in Latin America and the Caribbean. Several researchers acknowledge that they have underrepresented Indigenous populations in Latin America, with significant underrepresentation of those groups in their research 17 . We also found that studies including national databases present lower values of anemia prevalence among children compared with the studies specifically targeting Indigenous communities. Using data at the national level provides reliable information only for large geographic domains, thus it can provide information at the national level for the central government, but has less reliability for regional governments and is useless for local governments. Thus, governments and private organizations in Latin American countries should be aware that national databases could be masking extreme prevalences of anemia in vulnerable populations.
Furthermore, recent studies are lacking. Data for studies in all countries, except for Brazil, was outdated. Our findings show that the most recent data collection on prevalence of anemia from Indigenous communities was done in 2017, followed by 2016 and 2013. All other studies collected data before 2013, which means that currently (2021) recent data on the prevalence of anemia among Indigenous children in Latin America is awfully scarce. Moreover, reports covering a wide range of years are lacking. Since almost all the studies included in this systematic review were cross-sectional, possible causes of the prevalence of anemia cannot be determined. On the other hand, the lack of studies with longer-term follow-ups of specific Indigenous communities hinders monitoring of the course of child anemia and comparing anemia between different countries and regions.
Age groups: Our results indicate that the youngest children showed the highest prevalence of anemia. The highest values were observed in the age groups between 6 and 35 months. Older children also showed high prevalence of anemia, but not as high as the youngest. For example, particularly among the Aruak people in Brazil, a very high number of children between six and 23 months old show moderate or severe anemia when compared with those between 24 and 59 months 32 . In the Peruvian Amazon, anemia was more prevalent in the 0 to 5-year age group from the Achuar, Urarina, and Quichua Indigenous communities 58 . Another study 26 found that Xavante children's high rates of anemia reveal an ethnic disparity between them and the Brazilian population in general, suggesting the causes of anemia are largely dependent on complex, variable relationships between socioeconomic, sociodemographic, and biological factors. In general, Indigenous peoples in Latin America have very limited access to health care services, often lacking geographical access, which means the closest health ward is far away from their communities. Moreover, they usually face obstacles in accessing health services due to mistrust generated by a history of racism and structural discrimination 67 .
Besides, specific nutritional causes can be considered as risk factors for high prevalence of anemia. Franco et al. 37 suggested that maternal milk might have a protective biologic role in preventing iron deficiency anemia among the Mapuche Indigenous community in Chile. Furthermore, Mujica et al. 68 showed recently that the low prevalence of anemia in Chile is very likely a result of the iron-fortified milk provided by the National Complementary Feeding Program. The Aguarunas in Peru are also an example in which anemia prevalence is associated with their diet, based primarily on cassava and bananas, with little animal protein 60 . On the other hand, in the case of the Yucpa Indigenous community in Venezuela, the high frequency of anemia without nutritional cause is attributed to the prevalence of infectious diseases such as hepatitis, parasites, and skin and respiratory tract infections 63 .
Prevention: Khambalia et al. 11 identified iron deficiency, malaria, and helminth infections as the three main causes of anemia among Indigenous populations, which can be addressed by a combination of interventions, such as fortification of staple foods with iron and other micronutrients, iron supplements targeted to risk groups, use of insecticide-treated materials and bed nets, deworming (anthelminthics) in risk groups, and prevention and treatment of malaria. Anemia can also be prevented and controlled by fully immunizing children; treating communicable diseases; managing obstetric complications, particularly excessive bleeding; and using modern family planning methods. For the first six months of a child's life, breastfeeding must be the sole basis for feeding, iron-rich foods should be given as a supplement, and sanitation facilities must be improved 11 .
Public policies: Several health programs and policies have been implemented in Latin American countries in the past two decades to generally address anemia as a public health problem 14 . However, Indigenous communities continue experiencing difficult situations. According to CEPAL 9 , Guatemala has the highest proportion of Indigenous population living in municipalities with high or critical vulnerability (77.9%); Colombia (65.8%) follows in the list, then Mexico (38.8%), Peru (33%), and finally Chile (20.9%). Beyond this variability, a common pattern is the inequality affecting Indigenous peoples, with the widest gaps found in Colombia and Mexico. Although inequalities are also systematic within municipalities, the vulnerability index among Indigenous populations seems to always be higher than that estimated for non-Indigenous populations 9 . The Indigenous peoples of Latin America and the Caribbean continue to experience political, social, and economic marginalization that has relegated them to conditions of poverty and extreme poverty 67 . This is mainly expressed in the adoption of sectoral laws that subordinate Indigenous rights to business and state interests, and in implementation gaps, particularly in delimitating, demarcating, and titling lands 7 .
Indigenous communities should be seriously considered in national programs to reduce anemia since, as Vázquez et al. 14 said, anemia prevalence was reduced in Latin American countries only by national programs that covered a wide geographical area, were well monitored, and were extended over time. According to the United Nations Office for the Coordination of Humanitarian Affairs 67 , the purchasing power of Indigenous peoples to access basic commodities, including food, has even been recently decreasing due to quarantine and social mobility limitation measures dictated by governments to contain the SARS-CoV-2 pandemic, thus increasing the risk of food insecurity, particularly in areas where subsistence activities based on traditional land-based livelihoods are not an option. Despite the state responses aimed at containing and mitigating the impact of COVID-19 on Indigenous peoples and the 285 social protection measures adopted in Latin America, the state responses to date have not been tailored to the needs of Indigenous people 67 . In addition, the traditional territories of 108 Indigenous peoples in Latin America straddle national boundaries, which means that cooperation and cohesive strategies between governments are needed to design policies to tackle child anemia as a public health problem.
Strengths and Limitations of the Study
The main strength of this study is that it is a systematic review, covering a large geographical area where Indigenous populations are common. Also, it recovers data about specific Indigenous communities and national databases. It provides detailed information about the different samples from all the studies about the prevalence of anemia among Indigenous children performed in the last 35 years in Latin America. Nevertheless, it has some limitations. First, we included only articles that explicitly mentioned any Indigenous community in Latin America. Other articles that did not mention that they were studying Indigenous populations or that Indigenous people were part of the overall study sample may have been overlooked. Second, mainly cross-sectional studies were available, which disallows the establishment of causal relationships. Third, as designed, the National Surveys do not provide information about specific Indigenous ethnic groups. Fourth, this systematic review only included data about the prevalence of anemia among Indigenous children. Combining these data with other studies about malnutrition is important to better understand the relationship between the prevalence of anemia and malnutrition in Indigenous children in Latin America. Fifth, limitations in the evaluation of the quality of articles in this systematic review are possible. The STROBE statement was only a guideline for reporting observational studies, not precisely a quality assessment tool. Therefore, we did not exactly assess for the quality of the selected articles, we just gave scores to a description of each selected study.
Recommendations
We recommend further studies update and extend the overview on anemia as a public health problem in Indigenous communities in Latin America. Besides, studies about the prevalence of anemia among Indigenous children should be combined with other studies about malnutrition to raise more information about how to prevent child anemia. Nutrition programs directed to Indigenous communities should be adjusted to the way of living of these populations without disturbing their cultural traditions. Also, exploring social acceptance, community willingness, and participation of Indigenous populations in nutrition programs is important. Finally, assessing the Indigenous community leaders' ability to get involved in research and nutrition programs is necessary.
CONCLUSIONS
Anemia constitutes a severe and poorly documented public health problem among Indigenous children in 21 Indigenous communities in Bolivia, Brazil, Colombia, Costa Rica, Ecuador, Guatemala, Mexico, and Peru. All Indigenous communities included in this study showed child anemia as a public health problem, especially in younger children.
"year": 2022,
"sha1": "677593c5807dd53bbae5f9582c2f183fb496fb10",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ScienceParsePlus",
"pdf_hash": "290efe437f00cae1b92cfa44702015be138f6028",
"s2fieldsofstudy": [
"Political Science",
"Medicine"
],
"extfieldsofstudy": []
} |
Viability of fibroblasts from "Curraleiro Pé Duro" cattle after different cryopreservation protocols
In the present work, it was possible to verify that there is no significant difference in the cryopreservation protocols tested for cells from the ear tissue of Curraleiro Pé Duro cattle, which maintained both their viability and their capacity for growth and confluence at good levels, even in cells from different passages, showing their ability to resist cryopreservation protocols and to be used as a source of genetic material in cryobanks for research and for use in assisted reproduction biotechniques, even if coming from protocols that used simpler equipment with lower cost.
The breed Curraleiro Pé-Duro was named after research revealed the genetic equality of the Curraleiro and the Pé-Duro cattle (Carvalho et al., 2012). It is one of the most important cattle breeds in the history of the northeastern hinterland and the São Francisco valley, present for almost 500 years in the Northeast of Brazil (Santiago, 1975). The conservation of native breeds directly reflects on the environment and culture, making up a genetic, historical, and cultural heritage. Crossing between breeds to create new strains can mask the real potential of a native breed (Fioravanti, 2015), and in a breed with a relatively small number of individuals the loss of genetic material can occur even with the death of a single individual, reducing the gene pool and losing genes important for the maintenance of the species (Martins et al., 2007).
For the conservation of genetic material and biodiversity, it is necessary to develop and improve techniques, such as the cultivation of somatic cells that can be cryopreserved and stored in cryobanks to later be used in reproductive biotechnologies (Wani and Hong, 2018). Briefly, somatic cells are diploid cells of the animal's body, such as fibroblasts, that is, all cells that have a complete gene set, therefore, all cells of the individual except for reproductive cells (Strachan and Read, 2012). In general, somatic samples for cell cryopreservation are obtained from skin-derived tissues and cultured in vitro in the form of fibroblasts, which are stored in cryobanks and can be used for biotechnologies such as cloning (Mestre-Citrinovitz et al., 2016).
Cryopreservation reduces cell metabolism, allowing functional and structural aspects to be maintained for long periods in liquid nitrogen and enabling the future use of samples in reproductive biotechniques such as cloning and somatic cell nuclear transfer (Costa et al., 2016).

Germplasm banks or cryobanks are a highly viable alternative for the maintenance of biological and genetic diversity (Silva et al., 2012). Therefore, the objective of this work is to evaluate different cryopreservation protocols for maintaining the cell viability of fibroblasts isolated from the Curraleiro Pé Duro cattle breed.
The experiment was carried out in the city of Teresina-PI, at the Laboratory of Animal Reproduction Biotechnology (LBRA), at the Center for Agricultural Sciences of UFPI. For this purpose, a healthy breeding animal of the Curraleiro Pé-Duro breed was used, kept on pasture with complementary feeding and with access to water and mineral salt ad libitum. The animal underwent clinical evaluation as well as additional evaluation.
The procedures were registered in the National System for the Management of Genetic Heritage and Associated Traditional Knowledge (SISGEN) under registration number AD94098 and approved by the Ethics Committee on Animal Experimentation of the Federal University of Piauí (CEEA-UFPI), registered under number 673/21.
After the initial evaluations, the animal was placed in a restraint chute, where the ear was cleaned and anesthetized and a skin sample of about 1 cm² was taken and transported in a sterile tube containing PBS solution plus antibiotic, in a thermal box with monitored temperature, reaching the laboratory within 2 hours.
In the laboratory, the sample was dissected, removing cartilage, fat, hair, and other remaining tissue. The skin fragment was then chopped into smaller fragments of approximately 2 mm². These new fragments were placed in dry, sterile 35 mm Petri dishes, with a total of 5 fragments per plate, and 4 mL of Dulbecco MEM culture medium (DMEM Cell Culture Medium, Vitrocell Embriolife, Brazil) was added. The plates were incubated in a controlled atmosphere of 5% carbon dioxide (CO2) in air, at 39 ºC and high humidity, in a CO2 incubator (HF151UV, Heal Force, China), where they remained in culture for one week.

On D7, the biopsy fragments were removed; each plate was washed with 1 mL of DMEM culture medium and then another 3 mL of the same medium was added. The plates again remained in culture for one week.

On D14, the cultures were transferred to culture flasks: all culture medium was removed from the plates, the cells were detached with Trypsin/EDTA (Trypsin 0.05x, LGC Biotecnologia, Brazil) for 5 min at 39 ºC and centrifuged at 200 g for 5 minutes; after centrifugation, the supernatant was removed, and the pellet was resuspended in 1 mL of DMEM medium and transferred to two culture flasks, which remained in the incubator under the same atmospheric conditions. On D21, when the cells had reached confluence and begun to slow their mitotic activity, the first subculture was performed: all culture medium was removed from the flasks, and the cells were trypsinized, centrifuged, resuspended, and transferred to two new culture flasks to be incubated for another 7 days. On D28, D35, D42 and D49, the 2nd, 3rd, 4th and 5th passages were performed, respectively, using the same protocol.
Cryopreservation was performed with cells from the 4th and 5th passages: during the subculture process, after centrifugation, half of the sample was loaded into 0.25 mL straws. A small aliquot of 10 µL was taken for cell concentration counting in a Neubauer chamber. Cells were loaded into the straws (1.0 × 10^6 cells per straw) with culture medium plus 10% DMSO. After the straws were sealed and identified, they were subjected to 3 cryopreservation protocols. In Treatment 1 (T1), the samples went through a freezing curve by storing them in a freezer (Frost Free, Electronic 280, Brastemp, Brazil) at -20 °C for 24 h, followed by immersion in liquid nitrogen (NL2). In Treatment 2 (T2), the straws underwent a freezing curve by storage in a -80 °C freezer (CL 347-86v, Cold Lab, Brazil) for 24 h, followed by immersion in NL2. In Treatment 3 (T3), an automated machine (TK 3000, TK Tecnologia, Brazil) was used with a cryopreservation curve normally used for embryos: the straws with cells were placed in a chamber previously stabilized at -6 °C, where ice crystallization was induced, then passed through a negative ramp of -0.5 °C/min until reaching -32 °C, where they were held for 5 min before immersion in NL2. The cells were stored for approximately 10 days in the cryobank and then thawed by removing the straws from the cryogenic cylinders and immersing them for 20 s in a water bath previously heated to 38 °C.
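The T3 programmable curve is simple to express as a time-temperature profile (stabilize at -6 °C, ramp at -0.5 °C/min to -32 °C, hold 5 min, then plunge into liquid nitrogen). The sketch below generates the setpoints such a controller would follow; the initial hold time at -6 °C is an assumed placeholder, since the text only says the chamber was pre-stabilized at that temperature.

def t3_freezing_profile(start_c=-6.0, end_c=-32.0, ramp_c_per_min=-0.5,
                        initial_hold_min=5.0, final_hold_min=5.0, step_min=1.0):
    """Return (time_min, setpoint) pairs for the T3 programmable freezing curve.

    initial_hold_min is an assumption; the -0.5 °C/min ramp and the 5-min hold
    at -32 °C follow the protocol described above.
    """
    profile, t, temp = [], 0.0, start_c
    while t < initial_hold_min:                 # pre-stabilized hold at -6 °C
        profile.append((t, start_c)); t += step_min
    while temp > end_c:                         # linear ramp down to -32 °C
        profile.append((t, temp)); temp += ramp_c_per_min * step_min; t += step_min
    for _ in range(int(final_hold_min / step_min) + 1):   # 5-min hold at -32 °C
        profile.append((t, end_c)); t += step_min
    profile.append((t, "plunge into LN2"))
    return profile

for time_min, setpoint in t3_freezing_profile()[::10]:
    print(f"{time_min:5.1f} min -> {setpoint}")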
During all cell passages of the experiment, morphological evaluations were carried out under a binocular microscope (BX41, Olympus), observing cell size, appearance, opacity, shape, and adhesion patterns. The viability analysis was performed with Trypan Blue dye (0.4% Trypan Blue in PBS, LGC Biotecnologia, Brazil). Post-cryopreservation growth capacity, confluence, and morphology were assessed by culturing the cells for an additional 7 days after thawing. In the different analyses, cell confluence and cell viability data were analyzed by ANOVA and the means compared by Tukey's test (5% probability).
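The comparison described here (one-way ANOVA across the protocols followed by Tukey's test at 5%) can be reproduced with standard statistics libraries. A sketch using invented viability percentages, not the study's data:

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Invented post-thaw viability percentages for the three protocols (not study data)
viability = {
    "T1_-20C": [70.1, 72.5, 68.9, 71.3],
    "T2_-80C": [73.0, 71.8, 74.2, 72.6],
    "T3_prog": [72.4, 70.9, 73.5, 71.7],
}

f_stat, p_value = stats.f_oneway(*viability.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}")

values = np.concatenate(list(viability.values()))
groups = np.repeat(list(viability.keys()), [len(v) for v in viability.values()])
print(pairwise_tukeyhsd(values, groups, alpha=0.05))   # Tukey HSD at 5% probability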
All cells had their morphology monitored during growth, presenting a fusiform shape accompanied by cellular extensions, with a round, large and centralized nucleus in a well-filled cytoplasm. The Curraleiro Pé Duro cattle fibroblasts derived from ear biopsy had a uniform growth, presenting a regular confluence during the culture days, reaching 90% of confluence in approximately 7 days of culture.
After cryopreservation in 0.25 mL straws at a concentration of approximately 1x10^6 cells, the cells maintained the same morphological pattern as non-frozen fibroblasts, which reinforces earlier work showing that, at a concentration of 1x10^6 cells per straw, good results can be obtained for cryopreservation of ear-derived cells in cattle when compared to other concentrations such as 3x10^6 and 5x10^6 (Urio, 2012). It was also possible to observe a statistically non-significant decrease in confluence capacity, expressed as a percentage, in the Curraleiro Pé Duro cells after freezing (83.33 ± 5.16) when compared to the same cells pre-freezing (96.67 ± 5.77), which may slightly affect cell viability: Munhoz and Costa (2012) found that the best viability index of cryopreserved cattle cells was reached when cell confluence was closest to 100% (73.6% cell viability), while cells with 70 to 80% confluence showed 69.9% cell viability.
Curraleiro Pé Duro somatic cells maintained good levels of cell viability (Table 1) in both treatments that used freezers (T1 and T2), as assessed by Trypan Blue staining, corroborating the results of Urio (2012), who used 10% DMSO and 0.25 mL straws containing cattle somatic cells and observed a cell viability of 72.9% ± 11.7. At the same time, the treatment using a freezing machine with automatic temperature control (T3) proved to be as efficient as the other methods, as shown in the work by Cetinkaya and Arat (2011), who found good viability of somatic cells frozen with automatic curves of -0.5, 1 and 2 °C/min. No statistical difference was observed in the viability and confluence of 4th and 5th passage cells (Table 2); thus, both passages were viable for cryopreservation. According to Garfield (2010), the use of well-differentiated cells from the 1st to the 4th passage preserves the normal structure and function of the cells, since this replication process is limited, in addition to reducing the risk of apoptosis in the generated embryos. The results of this work showed the possibility of cryopreserving fibroblasts from Curraleiro Pé Duro cattle even with less specialized equipment, which is an important step for several assisted reproductive biotechnologies aimed at the preservation of local and/or endangered breeds and species, as occurred in other studies, such as that of Srirattana et al. (2012), in which, through the use of somatic cells, interspecific cloning was performed with a gaur (Bos gaurus) as the donor of genetic material and a female domestic cow (Bos taurus) as the recipient.
In the present work, it was possible to verify that there is no significant difference in the cryopreservation protocols tested for cells from the ear tissue of Curraleiro Pé Duro cattle, which maintained both their viability and their capacity for growth and confluence at good levels, even in cells from different passages, showing their ability to resist cryopreservation protocols and to be used as a source of genetic material in cryobanks for research and for use in assisted reproduction biotechniques, even if coming from protocols that used simpler equipment with lower cost.
"year": 2022,
"sha1": "70ab6571547e7847e71f1bf5ca2ef8ff2e9ae953",
"oa_license": "CCBY",
"oa_url": "https://www.scielo.br/j/abmvz/a/qJsJfmQngPMvpJgJH364gqw/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "58c8c6aa2c7b0a906146f888382b79e9a03eb136",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": []
} |
Hemolytic uremic syndrome and hypertensive crisis post dengue hemorrhagic fever: a case report
Hemolytic-uremic syndrome (HUS) clinically manifests as acute renal failure, hemolytic anemia and thrombocytopenia. Acute renal failure with oliguria, hypertension, and proteinuria usually develops in affected patients.1,2 In children under 15 years of age, typical HUS occurs at a rate of 0.91 cases per 100,000 population.3 The initial onset of this disease usually happens in children below 3 years of age. Incidence is similar in boys and girls. Seasonal variation occurs, with HUS peaking in the summer and fall. In young children, spontaneous recovery is common. In adults, the probability of recovery is low when HUS is associated with severe hypertension.2
Hemolytic-uremic syndrome (HUS) clinically manifests as acute renal failure, hemolytic anemia and thrombocytopenia. Acute renal failure with oliguria, hypertension, and proteinuria usually develops in affected patients. 1,2 In children under 15 years of age, typical HUS occurs at a rate of 0.91 cases per 100,000 population. 3 The initial onset of this disease usually happens in children below 3 years of age. Incidence is similar in boys and girls. Seasonal variation occurs, with HUS peaking in the summer and fall. In young children, spontaneous recovery is common. In adults, the probability of recovery is low when HUS is associated with severe hypertension. 2 Damage to endothelial cells is the primary event in the pathogenesis of HUS. This damage can occur as a result of dengue virus infection. Deficiency of factor H, membrane cofactor protein, or factor I results in excessive complement deposition, which promotes the development of microthrombi in the kidneys and other tissues. 6 HUS is mostly associated with E. coli O157:H7 and classified into 2 main categories, depending on its association with Shiga-like toxin. Shiga-like toxin-associated HUS (Stx-HUS) is the classic, typical, primary or epidemic form of HUS. One-fourth of patients present without diarrhea (denoted as D-HUS). Most cases of D+HUS occur in epidemics, and are associated with contaminated food or water, swimming in contaminated water, contact with cattle, contaminated environments or person-to-person transmission. Acute renal failure occurs in 55-70% of patients, but as many as 70-85% of patients recover renal function. 1 Hypertensive crisis, with blood pressure 50% higher than the 95th percentile for age, height, and gender, is a complication of acute renal failure due to HUS. 7,8 Viral etiologies for HUS (atypical HUS), such as Portillo, Coxsackie, influenza, Epstein-Barr, rotavirus, and dengue, are rare (incidence rate <0.3%). 4 Supportive care is the mainstay of HUS therapy. Dialysis and renal transplantation are performed when necessary. The role of antimicrobial agents in the treatment of HUS is controversial. There is little evidence that antibiotics are beneficial. Furthermore, indirect evidence suggests that antimicrobials may be dangerous in many cases of E. coli O157:H7. There is also controversy on the benefits of therapy with fresh frozen plasma transfusion, glucocorticoids, heparin, thrombolytic agents, and prostacyclin. 1,9,11-14 We present a case of atypical HUS with acute renal failure and hypertensive crisis following a dengue fever infection. Supportive therapy resulted in a good outcome, without need for hemodialysis.
Case Report
An 8-year-old boy was referred to a government hospital with a 12-day history of sudden, high, continuous fever and vomiting 1-2 times daily. He had no history of seizures, shivering, cough, sneezing, diarrhea, purpuric rash or other bleeding. During the first 4 days of fever, he went to a private hospital for blood tests which showed haemoglobin (Hb) of 13.2 g/dl, leukocytes 2,700/mL, platelet count 116,000/ml, and a negative Widal test. He refused hospitalization. On the 7th day of fever, he had petechiae and abdominal pain. Blood tests showed a secondary dengue fever infection (positive dengue IgG and IgM), so he was hospitalized. He received intravenous Ringer's lactate solution, amoxicillin injection, dexamethasone injection, paracetamol, lansoprazole, ondansetron, and multivitamins. Amoxicillin was changed to ceftriaxone injection on the second day of hospitalization. His platelet count decreased to 47,000/ml, followed by a slow increase. On the fourth day of hospitalization, he had vomiting, oliguria, and hypertension (130/90 mmHg). His laboratory results were Hb 8.2 g/dl and platelet count 76,000/ml. Renal sonography suggested glomerulonephritis. His pediatrician changed ceftriaxone back to amoxicillin injection, and gave him IV vitamin C 1x100 mg, IV furosemide, and captopril 2x12.5 mg. On the fifth day of hospitalization, blood gas analysis (BGA) showed pH 7.47, pO2 106, pCO2 26, BE -5, HCO3 18.2, and FiO2 21%. Blood tests showed Hb 6.6 g/dl, leukocytes 13,500/mL, hematocrit (Ht) 19%, platelet count 95,000/ml, and blood smears revealed schistocytes and burr cells. The pediatrician diagnosed HUS and recommended hemodialysis; therefore, he was referred to Dr. Kariadi Hospital due to his government insurance coverage (Jamkesda Kodya Semarang).
The patient was assessed as having HUS with hypertensive crisis and was given supportive therapy, along with intravenous furosemide, ceftriaxone, calcium gluconate, ranitidine, sublingual nifedipine, roborantia, captopril, and allopurinol. After four days, his blood pressure returned towards normal (110/90 mmHg), his extremity edema disappeared, his lungs were clear and his liver was no longer palpable. He received a PRBC transfusion on day two, but no platelet concentrate, and his post-transfusion Hb was 14 g/dl with a platelet count of 81,000/ml. His discharge diagnosis was HUS with hypertension (95th-99th percentile) and severe myopia.
On his first follow-up visit (6 days following discharge), he had a desquamative post-drug eruption, and was treated with Urederm twice a day. By the end of six weeks, the dermatitis had improved and his blood pressure had returned to normal (100/70 mmHg), so the captopril was stopped. Laboratory findings revealed Hb 12.6 g/dl, leukocytes 10,200/ml, platelet count 471,000/ml, urea 49 mg/dl, creatinine 0.6 mg/dl, with normal electrolytes and urinalysis.
Discussion
This patient exhibited the triad of signs of HUS:5,6 (1) hemolytic anemia, in that he had a severely decreased Hb level along with hemolysis [burr cells, schistocytes] potentially triggered by dengue fever [IgG- and IgM-positive] and perhaps a nonspecific bacterial infection [leukocytosis, but no clear infection focus]; (2) thrombocytopenia; and (3) acute renal failure, with oliguria, azotemia [increased urea, creatinine], and decreased GFR. The pathogenesis of HUS is not fully understood. Damage to endothelial cells, occurring mostly in the renal arterioles and glomerular capillaries, is the central lesion in the pathogenesis of HUS. Proinflammatory and prothrombotic events, as well as changes in the coagulation system, along with dysfunctional endothelial cells, may result in organ damage. Endothelial cell damage may be caused by lipopolysaccharide (LPS) originating from E. coli/Shigella. Circulating LPS stimulates monocytes to release interleukin 1 (IL-1) and tumor necrosis factor alpha (TNF-α), which activate the coagulation cascade, producing fibrin.12-14 Thrombocytopenia is a primary manifestation in HUS, possibly caused by peripheral destruction of thrombocytes. Endothelial cell destruction is followed by fibrin formation inside arterioles and capillaries, causing narrowed lumina, decreased GFR, oliguria, azotemia, and disturbances in other biochemical processes in the body.7 Narrowed arterioles and glomerular capillaries may also be caused by increased endothelin levels in plasma. Endothelin is produced by endothelial cells and functions as a vasoconstrictor affecting renal blood flow, GFR, and blood pressure (BP). Fibrin, formed on endothelial cells within the renal microvasculature, destroys erythrocytes through a microangiopathic process as they move across the vessels. Hemolysis typically occurs suddenly, marked by a Hb level as low as 4 g/dl. Examination typically shows increased reticulocytes, with usually negative direct and indirect Coombs tests. By microscopic immunofluorescence, fibrin, fibronectin, IgM and C3 can be observed on capillary walls, as well as in the mesangium and subendothelial space.
The pathogenic links between viral infection and concomitant renal dysfunction are often difficult to establish. HUS caused by dengue fever infection is rare, except in dengue shock syndrome (DSS), which induces acute tubular necrosis. Various signs of acute tubular necrosis include IgG, IgM and/or C3 deposition and thickening of the glomerular basement membrane. Acute renal failure and multiple organ failure may also be a manifestation of rhabdomyolysis. The role of immune complexes in the development of renal failure in dengue infection is still unclear. Wiwanitkit, cited in Gulati et al, found the diameter of the dengue virus-immunoglobulin complex to be much smaller than the diameter of the glomerulus. Thus he postulated that the immune complex can be entrapped only if a previous glomerular lesion caused narrowing of the glomerular diameter. He concluded that the immune complex did not play a significant role in the pathogenesis of renal failure in dengue infection.15 Renal failure due to HUS was described in an isolated case report in which renal biopsy revealed thrombotic microangiopathy with glomerular and arteriolar microthrombi. Electron microscopy demonstrated the presence of microtubuloreticular structures, suggesting a viral infection. Acute renal failure could be partly mediated by tubular damage, which in turn could be mediated by the direct cytopathic effect of viral proteins and cytokine-induced injury.12 Renal biopsy was not performed on our patient.
In our subject, we found hemolytic anemia with Hb 6.5 g/dl (with schistocytes and burr cells), thrombocytopenia (platelet count reaching 75,000/ml), and normal coagulation (PT, PTT) but increased fibrinogen (together with increased acute-phase reactants such as CRP), positive dengue IgG and IgM, azotemia (serum urea 171 mg/dl and serum creatinine 4.49 mg/dl), decreased GFR, proteinuria, granular casts in the urine, and normal potassium. Hyperuricemia was due to acute renal failure, dehydration, and cell damage. Leukocytosis was present, but no bacteria were cultured from the blood. Stool culture was also negative.
Biopsy may be used to establish an HUS diagnosis; however, we did not perform a renal biopsy.
Peripheral smears for schistocytes and thrombocytopenia are similarly important for HUS diagnosis. The characteristic HUS pathologic findings are occlusive lesions of the arterioles and small arteries, as well as subsequent tissue microinfarctions.1,5 A fully developed vascular lesion consists of an amorphous-appearing, hyaline-like, thrombus-containing platelet aggregate and a small amount of fibrin that partially or fully occludes the involved small vessels.
Glomerular thrombotic microangiopathic lesions and cortical necrosis are the most frequent histologic findings in Stx-HUS, whereas arterial thrombotic microangiopathic lesions are the most frequent features in non-Stx-HUS.5 HUS is a self-limiting disease with spontaneous recovery, although strict monitoring and treatment of symptoms are important. Because HUS has highly variable clinical symptoms, supportive therapy (good nutrition, anti-hypertensive drugs, strict monitoring of fluids and electrolytes) is important for good outcomes. In our case, we gave supportive therapy (fluid and electrolyte balance, a uremic diet for nutrition, PRBC transfusion for hemolytic anemia, and anti-hypertensive therapy with furosemide, captopril and nifedipine).
In a previous meta-analysis, a higher risk of HUS was not associated with antibiotic administration.18 Our patient was given amoxicillin and ceftriaxone.
Indications for hemodialysis are clinical and laboratory in nature. Clinical indications include uremic syndrome (vomiting, seizures, unconsciousness), fluid overload (evidenced by cardiac failure, pulmonary edema, and/or hypertension), and metabolic acidosis (Kussmaul breathing). Laboratory indications include urea > 200 mg/dl, creatinine > 15 mg/dl, hyperkalemia (K+ > 7), and HCO3 < 12 mEq/l (a minimal check of these laboratory thresholds is sketched after this paragraph). Hemodialysis may improve HUS prognosis.7 Our patient did not receive hemodialysis, since he did not meet the above criteria, and his azotemia was improving with no evidence of uremic syndrome. Based on the available literature, patient management without dialysis is appropriate when the patient is passing urine (non-anuric), and the acid-base balance, serum electrolyte concentrations and fluid balance can be managed without dialysis.13 One complication of HUS is hypertension due to fluid overload or increased renin associated with renal vascular disturbance.1 Hypertension is defined as average systolic and/or diastolic BP above the 95th percentile for gender, age and height on more than 3 occasions.16 Hypertensive crisis (BP > 50% above the 95th percentile for age; practical definition for children > 6 years old: systolic BP > 180 mmHg or diastolic BP > 120 mmHg, or any stage of hypertension with encephalopathy, cardiac failure, or papilledema) is different from hypertensive urgency and emergency. Hypertensive urgency is defined as severe hypertension without evidence of end-organ involvement. Hypertensive emergency is defined as hypertension associated with end-organ dysfunction (brain, heart, eye, or kidney).17 Acute hypertension in our case may have been secondary to the HUS, with no target organ damage. In addition, fundoscopy revealed no hemorrhages, infarcts or papilledema.
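The check referred to above is sketched here in Python purely for illustration; the function name and the example values supplied for the patient are assumptions, and this is not clinical decision software.

def lab_indication_for_dialysis(urea_mg_dl, creatinine_mg_dl, potassium_meq_l, bicarbonate_meq_l):
    """Return True if any laboratory indication for hemodialysis quoted above is met."""
    return (urea_mg_dl > 200
            or creatinine_mg_dl > 15
            or potassium_meq_l > 7
            or bicarbonate_meq_l < 12)

# Worst recorded values for this patient: urea 171 mg/dl, creatinine 4.49 mg/dl,
# potassium reported as normal (4.5 assumed here), HCO3 18.2 mEq/l.
print(lab_indication_for_dialysis(171, 4.49, 4.5, 18.2))  # prints False

The result is consistent with the decision to manage the patient without dialysis, since none of the laboratory thresholds was reached.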
Nifedipine (a calcium channel blocker that reduces peripheral vascular resistance without affecting cardiac output) was administered sublingually to our patient. The nifedipine dosage for his hypertensive crisis was 0.1 mg/kg sublingually, increased by 0.1 mg/kg every 5 minutes (for the first 30 minutes), with a maximum of 10 mg/dose. The effect was rapid, occurring within 10 minutes. The patient also showed a good response to IV furosemide. The furosemide dose was 1 mg/kg IV twice daily before he was switched to oral furosemide after improvement.18 An ACE inhibitor (captopril 0.3-2 mg/kg every 8-12 h) was given to maintain blood pressure and to reduce proteinuria.
Patients with D-HUS typically have frequent relapses and a higher risk of progression to end-stage renal disease (ESRD),1 but our patient's prognosis was good, since he responded well to supportive therapy (without hemodialysis) and achieved a normal BP without prolonged use of anti-hypertensive drugs. He had normal urea and creatinine, and no further complications. | 2019-03-16T13:12:18.068Z | 2011-12-31T00:00:00.000 | {
"year": 2011,
"sha1": "74a425b05a285b0645ebd81f8c5ceea2bdbed33f",
"oa_license": "CCBYNCSA",
"oa_url": "https://paediatricaindonesiana.org/index.php/paediatrica-indonesiana/article/download/865/711",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "74a425b05a285b0645ebd81f8c5ceea2bdbed33f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
169879853 | pes2o/s2orc | v3-fos-license | Using ICT to Enhance the Management of the Natural & Cultural Heritage Resources of Old Oyo National Park for E-Tourism Development
This paper describes the conceptualization of Old Oyo National Park as seeking an intervention to improve collaboration and communication internally across functional and departmental boundaries through the use of ICT. Eighteen natural and cultural heritage resources of ecotourism value were identified in the Park, in addition to the nine arts and crafts practiced in the surrounding communities and eight annual cultural events. A detailed assessment of the level of development of the cultural and natural features in the Old Oyo National Park was carried out based on the three components of a tourist destination, viz. attraction, amenities and accessibility. A gridded map of the locations of these ecotourism features is also presented. The e-tourism system designed for the park has the basic components of a product catalog, shopping cart, checkout, payment gateway (payment processing network), customer account, internet merchant account and business account. Customers can browse the catalog of tourism products and add items to the shopping cart. The checkout component accepts credit card details and sends them to the payment gateway for authorization, with security provided by SSL. Funds are reserved against the customer's account and later transferred to the merchant account, and then to the business account of the park.
INTRODUCTION
The study of e-commerce in the tourism industry has emerged as a frontier area for information technology. E-commerce as defined in Turban, Lee, King & Chung, (2000); Bocij, Greasley and Hickie (2008) is the process of buying and selling or exchanging products, services and information via computer networks including the Internet. Tourism and e-commerce consists primarily of the distributing, buying, selling, marketing and servicing of products or services over electronic systems such as the Internet and other computer networks. It can sometimes involve electronic funds transfer (Donal, Michael & Hitesh, 2001), supply chain management, e-marketing, online marketing, online transaction processing, electronic data interchange (EDI), automated inventory management systems and automated data collection systems (Popescu Delia 2007). It typically uses electronic communication technology such as the Internet, extranet, e-mail, e-books, database, and mobile phones. The emergence of the Internet as a tool for the business-to-consumer aspect of e-commerce has far reaching ramifications. Most importantly, it has created opportunities for businesses to reach out to consumers in a very direct way and create electronic markets (Inge M. Kloppiing & Earl McKinney 2004).
Generally as presented in Deepthi (2008), the revolution in ICTs has profound implications for economic and social development. It has pervaded every aspect of human life whether it is health, education, economics, governance, entertainment etc. Dissemination, propagation and accessibility of these technologies are viewed to be integral to a country's development strategy. The most important benefit associated with the access to the new technologies is the increase in the supply of information. Secondly it reduces the cost of production. Thirdly it has overcome the constraints of distance and geography and fourthly it has led to more transparency (Deepthi 2008). The government of Nigeria is poised at integrating ICTs to all sectors and developmental activity. Tourism is one such potential areas.
Tourism is now considered the world's largest industry (Nigerian Tourism Development Master Plan, NTDMP, 2006). Besides export earnings, international tourism generates an increasingly significant share of government (national and local) tax revenues throughout the world. In addition, the development of tourism as a whole is usually accompanied by considerable investments in infrastructure such as airports, roads, water and sewage facilities, telecommunications and other public utilities. Such infrastructural improvements not only generate economic benefits for tourists but can also contribute to improving the living conditions of the local populations (Frederico, 2003). Old Oyo National Park, like any other national park in the world, is a protected area with abundant natural resources of immeasurable socio-economic, cultural and ecological value. Most protected areas in Nigeria are endowed with natural and cultural resources that, if developed, could support other tourism activities such as cultural tourism, heritage tourism, cultural heritage tourism, creative tourism, agrotourism, aquatourism and other ecological tourism activities like game viewing, bird watching, adventure/wilderness experience and sport fishing tourism (Ormsby and Mannle 2006). However, these values may prove elusive if the resources are handled with impunity or with an indiscriminate and nonchalant attitude. Oladeji et al. (2011) record that the ongoing effort at involving communities in the management of natural resources in protected areas will go a long way towards achieving sustainable management of these resources, thus ensuring that maximum benefits are derived. These desirable benefits could be further realized if information and communication technology (ICT) is employed in the management. Many research scholars (Ayodele 1988, Fadare 1989, Falade 1993, Adeyemo 1993, Afolayan et al. 1996, Adetoro 2002 and Alarape 2001) have carried out studies on Old Oyo National Park since its inception as the Upper Ogun Game Reserve and its designation as a National Park in 1991. Their findings have provided useful information on the ecological resources of the Park. The passage of time has therefore made it necessary to appraise, review, modify and update some of the management tools and data generated, in line with current global practices in natural resources management and ecotourism development. For the ecotourism potential of the Park to be maximized, there is a need for appropriate, reliable, detailed and accurate up-to-date data on the anthropological, anthropogenic, natural and historical cultural heritage resources, compiled using ICT. George and Reid (2005) describe culture as those physical, intangible, abstract, social and psychological aspects that have traditionally held deep significance, value and meaning to a community. There is therefore a need to develop a framework for using ICT to enhance the management of the natural and cultural heritage resources of Old Oyo National Park for ecotourism development. This involves the design and development of an e-commerce site and the presentation of gridded digital maps of the park using Global Positioning System and Geographic Information System techniques, to assist tourists to have virtual access to these resources from any part of the world (Fig. 2).
A Database of the anthropological, anthropogenic, natural, cultural, archeological and historical heritage resources in and around Old Oyo National Park is being developed and will be useful to update the Ecological Management Plan of the Park and develop Ecotourism Management Plan, which will be of tremendous economic benefits to the management of the park, the host communities, National Park Service, Nigerian Tourism Development Corporation and Nigerian Government.
Background literature on Old Oyo National Park
Old Oyo National Park is geographically located between North latitudes 8 0 10' and 9 0 05', and East longitudes 3 0 35' and 4 0 21', and centered on North latitude 8 0 36' 00'' and East longitude 3 0 57' 05''. Politically, the park lies in Oyo State in the Southwest of Nigeria and borders Kwara State in the Northeast. It is surrounded by ten (10) Local Government Areas in Oyo State namely: Atiba, Atisbo, Irepo(Kisi), Iseyin, Itesiwaju (Otu), Olorunsogo(Igbeti), Oorelope( Igboho), Orire( Ikoyi), Oyo West and Shaki East, and Kaima Local Government Area in Kwara State. Figure 1 shows the location of Old Oyo National Park with the adjourning communities. Old Oyo National Park covers a land area of approximately 2,512 square kilometers making it the fourth largest national park in Nigeria. There are three watersheds in Old Oyo national Park, that of River Ogun and its numerous tributaries, that of River Tessi and its tributaries and that of River Iwa and its tributaries. Ogun River flows southwards to the Atlantic Ocean. Several tributaries notably Oopo, Iwawa, Oowe and Owu flow southwestwards and southeastwards to join it before its exit from the park. The Tessi River flows northwards to the River Niger. Three main tributaries including River Soro join it before its exist from the park. The Iwa River flows northeastwards to the River Niger. The construction of a dam at Ikere Gorge on the Ogun River about 4km south of the park holds a very large body of water reaching up to 10km or more upstream of Rivers Owu, Ogun and Oowe. Otherwise, all the rivers and streams in the park are seasonal and cease to flow during the dry season. However, the major rivers break into pools some quite large, but the Ogun River maintains a very low discharge rate during this period.
Inspite of the deep entrenchment of Christianity and Islam, the traditional region is still adhered to by a sizeable number of people and both Christians and Muslims also respect this. Thus, various traditional cultural festivals are celebrated at various times of the year. These include Egungun, Sango, Ogun, Oya, Obatala, Oro, Asabari , Antele festivals e.t.c.
Figure 1: Map of Oyo State showing location of Old Oyo National Park and adjourning communities Source: Computed From 2008/2009 Field Survey
Detail assessments of the level of development of the cultural and natural features in the Park revealed information on their developmental status. This was done based on the three components of tourist destination, viz; attraction, amenities and accessibility. Except for Agbaku cave, Python cave and Kosomunu Hill that were least developed, all others features in Oyo-Ile were regarded as less developed and the obvious reason is because they lack amenities , not attractive although they were accessible only during the dry season. Ikerre Gorge Dam Lake, Ibuya pool were regarded as developing because they were attractive, accessible with limited amenities. There were also those that were completely in ruins with no clear sign/demarcation of the features, these are regarded as Not developed. These resources can be cartegorised into four ecotourism features with their respective elements namely Hydrological formations includes Ikere Gorge dam Lake, Ibuya pool, Agbaku river course, River Ogun , River Iwa, river Tessi and their tributaries, Hand dug well and Water reservoir; Geological formations includes Agbaku cave (Plate 1), Mejiro cave, Python cave (Plate 2), Yemeso hill (suitable for telescopic viewing and climbing expedition) and Kosomonu hill (Plate 4) literally referred to as compass since it assist the inhabitants in locating their destinations in the olden days especially during expedition; Historical/cultural formations like the relics of the old buildings, relics of the Palace, Mejiro grinding site , Outer and inner defense wall, Akeasan market beacon, relics of town hall (Plate 3), Aganju ( King's resting point). Wildlife resources (fauna and flora). These caves were now being inhabited by wild animals such as Bats (Agbaku cave, Plate 5), python (python cave, Plate 6), Lion ( Mejiro cave Plate 7). Relics of Koso and Sango Royal Dynasty was located at a place over 20km away from Oyo-Ile range of the Park in Igbeti. According to Dan (2000), Cultural heritage tourism is being regarded as the fastest growing segments of the tourism industry simply because of its ability to offer tourists unique products that cannot be found elsewhere. Oladeji and Akintola (2010) observed that cultural heritage tourism is a labour intensive industry and creates many job opportunities especially for young people and part time workers. These researchers opined that the most direct economic benefits are the improvement in employment and income. The results obtained from the appraisal of the fauna and flora resources composition of the Park revealed that species distribution and composition are experiencing decrease and are being threatened by the increasing rate of anthropogenic activities of the host communities around the Park (Oladeji et.al. 2012). This support the findings of Falade (1993) that from the information collected from the officials of the Parks surveyed revealed that the fauna population in the parks were declining fast, Alarape (2001) also recalled that the park for a very long period in the past evidently suffered from mismanagement through destructive activities such as hunting, cattle grazing, logging and uncontrolled burning.
Moreover the flora and fauna resources of the park have been largely depleted leading to extermination of some species (Oladeji et.al. 2012) for instance fifteen species of the thirty eight animals were sighted by Ayodele, (1988), fourteen species were sighted by Afolayan,1996 and fourteen species were sighted by Alarape, (2001). (See Plate 8and 9) One of the importance uses of Geographical Information System Technique is that it assists in Mapping where things are, thereby let you find places that have the features one is looking for and to see where to take action. The need to make the description of the natural and cultural heritage features location specific with the use of Geographical Information System Technique has necessitated the presentation of a gridded digital map of these features; this will assist tourists from any part of the World to virtually access the identified cultural and historic heritage resources in the Park and those in the adjourning communities. Fig. 2 presents a gridded Map of Historical Sites location of Old Oyo National Park and adjourning communities. Depending on the taste of the tourists, the rate payable per night in these hotels in Oyo was found to range from N 1, 400-N 8,100. The rate was found to be lower in Iseyin N 1,300 -N3,500 per night.
LITERATURE REVIEW ON E-COMMERCE
One of the most important characteristics of electronic commerce is the opportunity and promise it holds for tourism to extend their capabilities and grow. In Chulwon (2004) the study involves the collection of the secondary data regarding e-commerce for the tourism industry. The study considered the challenges and opportunities faced by the tourism industry. It covered e-commerce activities, benefits, barriers and key success factors. According to his result, the main benefits of e-commerce for tourism enterprises are 'providing easy access to information on tourism services,' 'providing better information on tourism services,' and 'providing convenience for customers'. The result also reveal many other benefits of e-commerce, such as 'creating new markets,' 'improving customer services,' 'establishing interactive relationships with customers', 'reducing operating cost', 'interacting with other business partners', and 'founding new business partners'. It is also suggested that there are a number of barriers for tourism in adopting e-commerce. These barriers include 'limited knowledge of available technology,' 'lack of awareness,' 'cost of initial investment,' 'lack of confidence in the benefits of e-commerce,' and 'cost of system maintenance.' These barriers also include 'shortage of skilled human resources,' and 'resistance to adoption of e-commerce.' In terms of market situation, one might also mention 'insufficient e-commerce infrastructure,' and 'small e-commerce market size'. In Byron & Gagliardi (2005) an insight as to why the Internet is yet to be fully exploited for its developmental value in a number of developing economies such as tourism is discussed. The two main factors for conducting successful e-commerce are 'security of the e-commerce system' and 'user-friendly Web interface', thus recognising that building customer trust and convenience for customers are essential to succeed. 'Top management support,' 'IT infrastructure,' and 'customer acceptance' were also considered as an important factors. E-commerce is expected to benefit economic development in several ways, first as noted in Deepthi (2008), e-commerce allows business to reach a global audience. In Africa, for example, the tourism and handicrafts industries are realizing their ability to deliver their product information directly to consumers. Tourist lodges, hotels, and governments across the continent can maintain sophisticated websites advertising their unique features, handling booking order, and promoting specials to interested consumers. Similarly, small manufacturers of traditional handicrafts are discovering how ICTs can assist the marketing and distribution of their wares. Secondly opportunities created by e-commerce and its predecessor technologies is that ICTs can create digital market places to manage supply chains and automate transaction, increasing efficiency and opening previously closed markets to firms in developing countries. Thirdly, e-commerce is improving the culture of business. There are now better intra-firm communications, cost savings procedures, and reductions in the inventory costs leading to better management. The salient information management economizing features of Internet technology as explained in Wheatley, Buhr, and DiPietre. (2001) are highlighted as follows.
a. The Internet as a communication technology is able to significantly reduce the costs associated with the paper work of organizing trade. b. Through its open architecture and possibly intelligent software agents, the costs of looking for and gathering information on possible trading partners, is reduced. c. The need to find suppliers or buyers of proper size is reduced d. In terms of bilateral relationships between producers and processors, video data and other electronic measurement/monitoring devices, when coupled with the electronic media of the Internet, will reduce the costs of monitoring. e. There is a lowered informational cost of tracing the flow of products through the production system, i.e., from producer to processor. f. One could also raise the question as to whether the informational capacities of the Internet could substitute for storage and/or product inventories Milgrom, and John (1988)
E-Commerce for Tourism Design
The World Wide Web (web) is a client/server application layer on top of the internet that provides simple standard protocols for naming, linking and accessing virtually everything on the internet (Williams et. al 2003). The internet provides a set of interconnected networks for individuals and business to complete transaction electronically (Valacich and Schneider, 2010).
The key technological infrastructure components of e-commerce include the web server hardware platform with the appropriate software. The web server must run on an operating system and, in addition, each e-commerce website must have web server software to perform fundamental services, which may include security and identification, retrieval and sending of web pages, website tracking, website development and web page development. The e-commerce software supports five core tasks: catalogue management, product configuration, shopping cart facilities, e-commerce transaction processing, and web traffic data, together with a high-speed connection to networks and the Internet. The Internet is the collection of all computers that can communicate using the Internet Protocol suite, with the computers and networks registered with the Internet Network Information Centre (InterNIC) (Weijia and Wanlei, 2005). The Internet allows communication between millions of connected computers worldwide; information is transmitted to client PCs, whose users request services, in response to those requests. The Internet is a large-scale client/server system: the client PCs within homes and businesses are connected to the Internet via local internet service providers (ISPs), which in turn are linked to larger ISPs with connections to the major national and international infrastructures (Barry et al., 2010). The e-tourism system has the basic components of a product catalog, shopping cart, checkout, payment gateway (payment processing network), customer account, internet merchant account and business account. Customers can browse the catalog of tourism products and add items to the shopping cart. The checkout component accepts credit card details and sends them to the payment gateway for authorization, with adequate security provided by SSL technology. Funds are reserved against the customer's account and later transferred to the merchant account, and then to the business account of the park. Figure 3 presents the conceptual diagram of the e-commerce model for the park.
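As an illustration of the flow just described (catalog, cart, checkout, authorization by the payment gateway, settlement to the merchant account and then to the business account), the following Python sketch models the steps. All class, function and product names are assumptions made for illustration, not an existing payment API, and a real site would rely on a PCI-compliant gateway over SSL/TLS rather than handling raw card details.

from dataclasses import dataclass, field

@dataclass
class CartItem:
    name: str
    price: float

@dataclass
class ShoppingCart:
    items: list = field(default_factory=list)

    def add(self, item):
        self.items.append(item)

    def total(self):
        return sum(item.price for item in self.items)

class PaymentGateway:
    """Stands in for the payment processing network described in the text."""

    def authorize(self, card_token, amount):
        # Funds are reserved against the customer's account at this step.
        return amount > 0 and card_token.startswith("tok_")

    def settle(self, amount, merchant_account, business_account):
        merchant_account["balance"] += amount   # captured to the internet merchant account
        merchant_account["balance"] -= amount   # then swept out of the merchant account
        business_account["balance"] += amount   # into the park's business account

def checkout(cart, card_token, merchant_account, business_account):
    gateway = PaymentGateway()
    if not gateway.authorize(card_token, cart.total()):
        return False
    gateway.settle(cart.total(), merchant_account, business_account)
    return True

cart = ShoppingCart()
cart.add(CartItem("Agbaku cave guided walk", 25.0))
cart.add(CartItem("Ikere Gorge boat trip", 40.0))
print(checkout(cart, "tok_demo", {"balance": 0.0}, {"balance": 0.0}))  # prints True

In a deployment, the authorize and settle calls would be replaced by the chosen payment gateway's own interface; the sketch only mirrors the sequence of steps named in the text.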
One way to pass logic from a web server to a browser is to write a set of macro-like instructions called a script in a scripting language (e.g., JavaScript). A script might be used to animate an image in a window, highlight an icon, or play an audio file when the mouse pointer moves over a spot on the client screen. Scripts are also used to validate the completeness and accuracy of the data input to a browser-based form. To add more interactivity to a web page, applets, small programs executed from within another program such as a browser, can be downloaded to a client. Figure 4 presents the e-commerce software used to browse for items to purchase from the e-tourism marketplace. Here, detailed specifications of available products are displayed on a page where customers can add their choices to the cart. An intending customer needs to register with the site for user authentication and authorization. The next presentation covers the basic features of the e-tourism system.
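Because the text notes that scripts validate the completeness and accuracy of form input and that customers must register before purchasing, the short Python sketch below shows the kind of server-side checks that would mirror such client-side validation. The field names and rules are assumptions, not the system's actual schema.

import re

def validate_registration(form):
    """Return a list of validation errors for a customer registration form."""
    errors = []
    if not form.get("name", "").strip():
        errors.append("name is required")
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", form.get("email", "")):
        errors.append("email address is invalid")
    if len(form.get("password", "")) < 8:
        errors.append("password must be at least 8 characters")
    return errors

print(validate_registration({"name": "Ada", "email": "ada@example.com", "password": "longpass1"}))  # prints []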
The Electronic Commerce Order Fulfillment Process:
The ecommerce ordering fulfillment process starts when the order is received and there after verified, the seven other activities that take place which can be done simultaneously or sequentially including assurance of customer payment, check of in-stock availability, shipment arrangement, insurance, replenishment, in home production, contractor user, contact with customers and returns as exchange of item (Turban and Volonino, 2010). Firewall is necessary to mitigate against associated social and economic risk of e-tourism (Mark, 2009). Invoice and Shopping Cart: After a customer have chosen all he/she wants to purchase, then an invoive is processed to specify in detailed term what needs to be paid and for what items. Booking can also be included in this module. Electronic payment subsystem: Payments are an integral part of business, whether in the traditional way or online. The most common methods as discussed in Turban and Volonino (2010) for electronic payment system in E-commerce includes the Electric credit card, Electronic bill payments (Online banking, Biller direct, Bill consolidator), E-Wallets (digital wallets), Virtual credit cards, Payment using finger prints. Details discussion in Turban and Volonino (2010). The system employed the VISA and the Mastercard card that are recently available in most of the Nigerian Banks. In Mark (2009) there are several network security devices that can be used to protect the network from attacks. These include firewalls, proxy servers, honey pots, network intrusion detection systems, host and network intrusion prevention systems, protocol analyzers, internet content filter and integrated network system hardware. The next session presents the page content of the application. This includes Watersheds and Drainage Patterns frame, Culture act/cultural events frame, Art and Crafts, Calabash Carving, Leather works and Iron Smelting Layout of the e-commerce for Tourism Watersheds and Drainage Patterns frame. There are three watersheds in Old Oyo national Park, that of River Ogun and its numerous tributaries, that of River Tessi and its tributaries and that of River Iwa and its tributaries. Ogun River flows southwards to the Atlantic Ocean. Several tributaries notably Oopo, Iwawa, Oowe and Owu flow southwestwards and southeastwards to join it before its exit from the park. The Tessi River flows northwards to the River Niger. Three main tributaries including River Soro join it before its exist from the park. The Iwa River flows northeastwards to the River Niger (See Fig 5). Surface Water Frame. The construction of a dam at Ikere Gorge on the Ogun River is about 4km south of the park holds a very large body of water reaching up to 10km or more upstream of Rivers Owu, Ogun and Oowe. Otherwise, all the rivers and streams in the park are seasonal and cease to flow during the dry season. However, the major rivers break into pools some quite large, but the Ogun River maintains a very low discharge rate during this period. The Old Oyo National Park can be categorized into five ecotourism features as presented in Table 1. While Table 1 presents the list of Natural and Cultural Heritage Resources outside the Park with their GPS readings.
Culture acts / cultural Events Frame
Eight cultural festivals including Egungun, Yam festival, Sango, Ogun, Oro were identified to be celebrated at different part of year. In addition to these, Cultural acts practiced in these communities include, tattooing, tribal mark on faces , folklore among the children, dancing , acrobatic display , magicians and drumming in many occasions, proverbial talk among the elders , celebrities associated with group appearance or attire, Traditional mode of dressing, ewi/oration , greetings, Ayo-tita, taboo and Yam Festival in Tede (Plate 11).
Plate 12: Yam festival Cultural festival in Tede
Plate 13: Sculptural work Art and Crafts Frame Some of the notable Art and Craft in the neighbouring communities include Wood carving or sculptural work( Plate 13), Cloth weaving, leather work, Calabash carving, bid and cowries making.
Sculptural work:
The names of their products that could serve as souvenirs to the visiting tourists include symbols of Idols such as Sango (Plate 13), Oya, Ifa; Opon ifa (plate 14), Oroke ifa, decorating wooden frame, Twins wooden image and Wooden pen. Other products include Opon-Ifa, Opon ayo (Plate 15), Staff or walking stick.
Calabash carving Frame
Calabash carving is practiced in Oyo town than in any other parts of the communities. This has given credence to Oyo town as the home of calabash. Other town where calabash carving is practiced is Igbeti however the type of calabash carved were different from those in Oyo town. Hausa types of calabash (Plate 17a and 17b) were being carved in Igbeti while Yoruba type of calabash (Plate 18) were being carved in Oyo.This is a clear indication of ethnic diversity that occur across these communities with large concentration of Hausa-Fulani in the Northern part than the Southern part where majority are Yorubas . Some of the materials used for production in Oyo includes Ahan (small axe), Afinnan (Small knife), Ikoko (Small blade), Iregba (Knife), Ipa igba (small knife), Ita igba (small chisel).
Leather works Frame
Leather work is practiced in Oyo and Saki. However this practice was peculiar to Oyo town. List of names, type and leather work is presented in Table 3. Iron smelting is practiced in all towns and villages surrounding the Park, although the activity is peculiar to Saki where it was learnt that business originated from Ghana. It is significant to see the name SAKI boldly written on the Products. Different types of products were identified ranging from cooking utensils of various sizes to other domestic items (Plate 19a and 19b)..
CONCLUSION
The contemporary information society has made tourism a highly information-intensive industry, and ICT has a substantial impact on tourism business. The role of ICT in the tourism industry cannot be underestimated; it is a crucial driving force in the current information-driven society. It has provided new tools and enabled new distribution channels, thus creating a new business environment. ICT tools have facilitated business transactions in the industry by networking with trading partners, distributing products and services, and providing information to consumers across the globe. Consumers, for their part, also use online channels to obtain information and plan their trips and travel. Old Oyo National Park (OONP) is a unique protected area containing natural and cultural heritage resources; it could be referred to as a Mixed Heritage Site according to UNESCO. The international treaty called the Convention concerning the Protection of the World Cultural and Natural Heritage, adopted by UNESCO in 1972, describes mixed heritage sites as properties with both outstanding natural and cultural values. OONP is endowed with abundant natural resources of immeasurable socio-economic, cultural and ecological value. These values may be harnessed through e-commerce. The prototype e-tourism system built around Old Oyo National Park has the basic features of a product catalog, shopping cart, checkout, payment gateway (payment processing network), customer account, internet merchant account and business account. Customers can browse the catalog of tourism products and add items to the shopping cart. The checkout component accepts credit card details and sends them to the payment gateway for authorization, with adequate security provided by SSL technology. Funds are reserved against the customer's account and later transferred to the merchant account, and then to the business account of the park. | 2019-05-30T23:46:49.445Z | 2013-01-23T00:00:00.000 | {
"year": 2013,
"sha1": "c4440dbb79eb53ef8fa7af76884c22b541d94bfe",
"oa_license": "CCBY",
"oa_url": "https://cirworld.com/index.php/ijmit/article/download/1363/1329",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "ed7e6c03112359142f14fc964ffc26c779b94e5c",
"s2fieldsofstudy": [
"Environmental Science",
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
233191950 | pes2o/s2orc | v3-fos-license | Active screening of patients with diabetes mellitus for pulmonary tuberculosis in a tertiary care hospital in Sri Lanka
The End TB Strategy of the WHO suggests active screening of high-risk populations for tuberculosis (TB) to improve case detection. The present study generates evidence on the effectiveness of screening patients with diabetes mellitus (DM) for pulmonary TB (PTB). A study was conducted among 4548 systematically recruited patients over 45 years attending the DM clinic at the National Hospital of Sri Lanka. The study units followed an algorithm specifying TB symptom and risk factor screening for all, followed by investigations and clinical assessments for those indicated. Bacteriologically confirmed or clinically diagnosed PTB cases were presented as proportions with 95% CIs. Mean (SD) age was 62·5 (29·1) years. Among patients who completed all indicated steps of the algorithm, 3500 (76·9%) were investigated and 127 (2·8%) underwent clinical assessment. The proportion of bacteriologically confirmed PTB patients was 0·1% (n = 6, 95% CI = 0·0–0·3%). None were detected clinically. Analysis revealed the PTB detection rate among males aged ≥60 years with HbA1c ≥ 8 to be 0·4% (n = 2, 95% CI = 0·0–1·4%). The study concludes that active screening for PTB among all DM patients in clinic settings in Sri Lanka is not an effective measure to enhance TB case finding. However, the sub-category of diabetic males over 60 years of age with uncontrolled diabetes is recommended as an option to consider for active screening for PTB.
Introduction
Tuberculosis (TB) remains the world's deadliest infectious disease. In 2018, TB killed 1.5 million people worldwide [1]. The World Health Organization's (WHO) End TB Strategy aims to reach End TB targets by 2035, while Sustainable Development Goal target 3.3 aims at ending the TB epidemic by 2030. Achieving these targets requires a global annual decline in the incidence of TB of 4-5%, although the current decline is 2% [2]. Implementing systematic screening among selected high-risk groups is one recommended intervention to accelerate the decline [3].
Sri Lanka committed to WHO's End TB Strategy in 2014 [4]. The estimated burden of TB in Sri Lanka was 64 per 100,000 population in 2018, but the case notification rate for all forms of TB was 40·9 per 100,000 population [5]. This gap between the estimated and the reported TB incidence stands at over 4000 cases [6]. Active screening of high-risk populations, namely prisoners and people living with Human Immunodeficiency Virus (HIV)/Acquired Immune Deficiency Syndrome (AIDS), has been made national policy based on local studies which revealed a TB incidence of 1·7% among prisoners [7] and 9·7% among people living with HIV/AIDS [8].
Research evidence indicate that a patient with diabetes mellitus (DM) is at a three-fold risk of developing active TB compared to a non-diabetic person [9,10]. This increased prevalence of TB in diabetes may be explained by multiple pathophysiological mechanisms. Phagocytes and lymphocytes are the most important effector cells for containment of TB. Diabetes is known to affect chemotaxis, phagocytosis activation and antigen presentation by phagocytes in response to mycobacterium tuberculosis [11]. Impaired chemotaxis of monocytes is evident in patients with diabetes which is not reversed with insulin treatment [11]. There is less activation of alveolar macrophages and decreased production of hydrogen peroxide in tuberculosis patients with diabetes. Furthermore, T cell growth function, proliferation, interferon gamma production is adversely affected in diabetes. Interferon gamma potentiate the nitric oxide dependent intracellular killing activity of macrophages which is important in reducing the bacterial burden of tuberculosis [12]. Higher risk for DM patients to develop TB is discussed in studies conducted among hospital populations [13,14], as well as among general population cohort studies [15,16]. Diabetes mellitus is also found to be associated with an increased risk of development, mortality, relapse, recurrence and reactivation of TB [17,18]. Further, it is associated with a 9-fold risk of treatment failure [19]. Recent evidence indicates that DM is a risk factor for multidrug resistant TB and for delayed sputum smear and culture conversion time as well [20,21].
Based on these evidence, international health agencies suggested active screening of DM patients for TB as an active case finding strategy [22]. At present, countries like China are actively screening DM patients for TB [23], while countries such as India [24] and Nigeria [25] are planning to introduce active screening based on the evidence of the pilot projects.
At present, Sri Lanka is recognized as a low-burden country for TB [26]. Nevertheless, the gap between the estimated and reported caseloads of TB has been stagnant around 4000 for the past decade [27]. Active screening of patients with DM for Pulmonary TB (PTB) is one among various interventions suggested to improve local case detection to close the gap [6]. The rising burden of DM in Sri Lanka, as evident by the national prevalence of 7.4% in 2015 [28] and of 14.7% in a survey conducted in a suburban district in 2018 [29] is a supporting factor to consider this at-risk population in the country for active screening for TB.
However, the effectiveness of this strategy seems to depend on many variables, including the prevalence of TB in the community. The authors of a systematic review which analyzed bidirectional screening for TB and DM for TB concluded that the yield of active screening of DM patients for TB vary. Hence researchers and experts highlight the need for context specific evidence [30]. Therefore, the objective of the present study therefore, was to generate local programmatic evidence on the proportion of TB among diabetics attending a public DM clinic in an urban setting, to guide the national policy decision on adopting active screening of DM patients for PTB.
Study design, setting, participants and the study size
The study was a hospital-based cross-sectional study at the diabetes clinic of the National Hospital of Sri Lanka (NHSL), the largest state hospital in the country. The NHSL diabetic clinic caters for approximately 3600 patients per month from Colombo, the most urbanized and most densely populated district.
A registered patient at the DM clinic who was above 45 years of age was considered a study unit. Selection of age 45 was based on the age pattern of PTB patients, as PTB is not common among those below 45 years in Sri Lanka. The age pattern of DM clinic attendees also indicated 45 years to be the lower margin of the age distribution of clinic attendees. Hence, 45 years was taken as the lower age limit to increase the efficiency of the design in detecting PTB patients among DM clinic attendees. There was no upper age limit. Pregnant women and patients with difficulties in mobility (as they needed to be transported to a nearby hospital for further investigations) or comprehension were excluded from the study. The size of the sample of study units to be included was calculated as the number required to estimate the proportion of diabetic patients expected to have PTB, using the Lwanga and Lemeshow formula [31] for cross-sectional studies. The expected proportion of PTB among diabetic patients was taken as 642 cases per 100,000 patients (0.64%), as reported in an Indian study [23]. The precision of the estimate was taken as 0.015%. The final sample size calculated was 4400. Eligible patients attending the clinic from August 15 to December 14 in 2019 were consecutively recruited to the study after obtaining informed written consent, while data collection continued until March 2020.
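A sketch of the Lwanga and Lemeshow single-proportion sample-size formula, n = Z^2 * p * (1 - p) / d^2, is given below in Python using the expected proportion quoted above. The precision d shown is only an illustrative assumption, so the printed figure will not necessarily equal the study's reported sample size of 4400.

import math

def sample_size_for_proportion(p, d, z=1.96):
    """Minimum sample size to estimate a proportion p with absolute precision d."""
    return math.ceil(z ** 2 * p * (1 - p) / d ** 2)

# Expected proportion of PTB among DM patients taken as 642 per 100,000 (0.64%).
print(sample_size_for_proportion(p=0.0064, d=0.0023))  # prints 4618 for this illustrative d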
Study variables and data source
The algorithm (Fig 1) used to detect PTB among the attendees of the DM clinic was a stepwise process directing study units into different care pathways according to pathophysiology-based risk factors for TB among DM patients. Considering the pathophysiology and complex interactions of TB and DM, we designed the algorithm so that any study unit with a specified minimum combination of risk factors would be directed to investigations to exclude PTB. In addition, the study units were asked about past history of TB, TB symptoms and risk factors, and self-reported information was accepted as opposed to requiring documentary evidence. In the absence of a robust medical record system with data linkages across health institutions to retrieve patients' past medical history, the algorithm was purposely designed so that all who were likely to have the risk factors would be directed to the next step of undergoing investigations to exclude PTB.
This initial screening selected patients who complained of cough for more than one week, or who reported at least one of the checked risk factors or symptoms, to undergo chest X-ray (CXRay), Xpert MTB/RIF testing (a type of molecular test for TB), culture and clinical assessment based on objective criteria. The algorithm led the study units into three pathways.
Pathway A. Those who had been diagnosed with PTB prior to the study and already on treatment for PTB at the time of recruitment. They were not subjected to further investigations but were included as a study unit.
Pathway B. Those who had productive cough for more than one week on symptom screening were directed to digital CXRay and to provide an on-the-spot sputum sample for the Xpert MTB/RIF cartridge molecular assay to detect PTB. Xpert MTB/RIF positives were taken as bacteriologically confirmed PTB and were referred for routine care for PTB. Among others, those with CXRay with defined features and those with at least one other self-reported symptom of TB and/or one risk factor were subjected to clinical evaluation and further investigations by a Respiratory Physicians (RP) to rule out clinical PTB.
Pathway C. Patients who had no productive cough at the initial screening but had at least one other self-reported symptom of TB (productive cough for more than one week directed patients to pathway B) and/or one risk factor were subjected to CXRay, which is considered a good screening tool for TB [32]. Those who had a positive CXRay underwent clinical assessment and further investigations by an RP to rule out clinical PTB.
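The pathway assignment just described can be summarized as a small routing function; this is a minimal sketch in Python, and the field names of the screening record are assumptions, not the study's data dictionary.

def assign_pathway(record):
    """Route a screened DM clinic attendee to pathway A, B or C, or to no further work-up."""
    if record.get("already_on_tb_treatment"):
        return "A"  # known PTB on treatment; no further investigations
    if record.get("productive_cough_over_one_week"):
        return "B"  # digital CXRay plus on-the-spot Xpert MTB/RIF
    if record.get("other_tb_symptom") or record.get("tb_risk_factor"):
        return "C"  # digital CXRay first; RP assessment only if positive
    return "no further investigation"

print(assign_pathway({"productive_cough_over_one_week": True}))  # prints B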
The TB symptom and risk factor screening were designed as an electronic interviewerbased questionnaire and was administered by the treating Medical officers in the DM clinic. Participants of Pathway A were inquired into the continuation of treatment and were not subjected to further investigations. Participants of pathways B and C were offered a transport service and were accompanied by a RA to a nearby private hospital for investigations namely digital CXRay and on-the-spot sputum sample for the Xpert MTB/RIF assay. Though the investigation facilities were available in the study setting (NHSL), a private hospital was chosen considering the delays that may occur in getting the investigations and the additional cost to the state institution. Study investigators did random visits to the private hospital to supervise the quality of process and procedures.
Xpert MTB/RIF assay was performed in the Central Chest Clinic-Colombo and in the NHSL microbiology laboratory by the Microbiologists, while cultures were performed at the National Tuberculosis Reference Laboratory at Welisara by consultant microbiologists. Digital CXRay were evaluated by a team of two radiologists independently and blindly. The reporting format and a scoring system was developed by a group of radiologists and respiratory physicians to classify a CXRay as showing evidence of a presence of defined features that may reflect past or active PTB. In the study design we planned to resolve any discrepancies in the reporting between the two radiologists, through discussions to generate a consensus between the two consultant radiologists but there were no discrepancies. DM patients with positive CXRay (with pre-defined features), and those with at least one other self-reported symptom of TB and/or one risk factor of the patients directed to Pathway B and C, were subjected to clinical assessment and further investigations by a RP to rule out clinical PTB. Other relevant investigations and sputum culture of required patients were performed at the National Tuberculosis Reference Laboratory.
Those who reported a positive Xpert MTB/RIF test were considered as bacteriologically confirmed PTB and were referred for routine care and treatment of PTB.
Outcomes and statistical analysis
Those who were already on treatment for PTB and those detected by the present study through a positive Xpert MTB/RIF test or a positive culture were classified as "bacteriologically confirmed PTB", while those with no bacteriological evidence but suggestive clinical features were classified as "clinically diagnosed PTB". Detection rates of PTB in the total sample as well as in sub-categories were calculated and presented as percentages with their respective 95% confidence intervals.
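A sketch of the detection-rate calculation is given below in Python, using a Wilson score interval for the 95% confidence limits. The paper does not state which interval method was used, so the exact limits may differ slightly from those reported.

import math

def wilson_interval(cases, n, z=1.96):
    """Wilson score 95% confidence interval for a proportion."""
    p = cases / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half_width = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half_width, centre + half_width

low, high = wilson_interval(6, 4548)  # PTB detected by active screening
print(f"{6 / 4548:.2%} (95% CI {low:.2%} to {high:.2%})")  # roughly 0.13% (0.06% to 0.29%)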
Ethical approval
Ethical approval was obtained from the Ethics Committee, Post Graduate Institute of Medicine, University of Colombo (Registration number ERC/PGIM/2019/134). Informed written consent was obtained from all individual participants included in the study.
Results
The study recruited 5159 participants, of whom 4548 (88·1%) completed all the relevant steps in the algorithm. Participants were lost along the algorithm at the points of not presenting for the interview by Medical Officers (n = 170), not attending for CXRay and/or sputum collection (n = 416), and not presenting for RP assessment (n = 12).
Basic characteristics of the study units
Mean (SD) age of the sample was 62·5 (29·1) years. The majority (n = 3415; 75·1%) of study participants were between 50 and 69 years of age, while two thirds (n = 3102; 68·2%) were female. More than half of the study sample (n = 2478; 54·5%) had their last recorded fasting blood sugar under the controlled level of less than 130 mg/dl (Table 1). At the NHSL DM clinic, HbA1c is done as a routine clinic investigation. HbA1c reports older than 2 months were not considered and were recorded as too old. Reports were available for 4114 (90.4%). However, the last recorded HbA1c levels indicated good glycaemic control (< 8) only among 31·9% (n = 1451). Comparison of the proportions of males and females with good glycaemic control (HbA1c < 8) (males 41·2%; females 69·6%) and very poor glycaemic control (HbA1c ≥ 11) (males 10·0%; females 20·9%) indicated that females had worse DM control compared to males.
Results of screening for symptoms of TB and risk factors
The symptom screening tool asked study units about the presence of productive cough of at least one week's duration. Cough of any duration was present among 431 (9·5%) of the 4548 study participants, with 315 (6·9%) complaining of cough of more than one week's duration. Of all 4548 study units, approximately one fourth (n = 1123, 24·7%) complained of at least one symptom which could be related to PTB. The common symptoms were on-and-off difficulty in breathing (n = 514), loss of appetite (n = 456), and night sweats (n = 393).
Most females (42·8%) were in the overweight category with a BMI of 25·0-29·0, while most males (46·9%) had a recommended BMI of 18·5-24·9. Among the other risk factors inquired about, a past history of TB was reported by 141 (3.1%), while a history of close contact with a TB patient within the past two years was reported by 106 (2.3%).
Results by pathways of the algorithm
Of the 4548 study participants, 13 (0·3%) had been diagnosed prior to the study and were already on treatment for PTB at the time of recruitment.
As shown in Table 2, Pathway B was for those who had productive cough for more than one week (n = 315). Of the 315 eligible for MTB/RIF testing and digital CXRay, 292 (92·7%) underwent MTB/RIF testing and 289 (91·7%) underwent digital CXRay.
Pathway C was for those who had no productive cough but had at least one other symptom and/or one risk factor (n = 3124). Of the 3124, 3023 (96·8%) were subjected to digital CXRay.
All in all, of the 4548 study units, 3500 (76·9%) were subjected to further investigations. Among the study units, the number who underwent Xpert MTB/RIF at any point in either pathway was 317 (6·9%), and the corresponding number who underwent CXRay was 3312 (72·8%). Of the 3312 CXRays, only 74 (2·2%) were reported as having defined features by the consultant radiologist according to the laid-down criteria.
The number of study units eligible for RP assessment from both pathways B and C was 128, of which 12 defaulted. The indication for the CRP referral is shown in Table 2. As part of the RP clinical evaluation, Xpert MTB/RIF was performed on 25 study units while culture was performed on 30 study units ( Table 2).
Detection of PTB cases among the study population. Of all 4548 study participants, six (6) patients were detected to have PTB as a result of active screening by the present study, giving a proportion of PTB among diabetes clinic attendees of 0.001 (6/4548). As indicated above, the proportion of study units who had been diagnosed prior to the study and were on treatment for PTB at the time of recruitment was 0.003 (13/4548). All of them were included in the category of bacteriologically confirmed PTB. The proportions of PTB patients among selected sub-categories of the study sample were also analyzed (Table 3).
The male:female ratio was 2:1 among the PTB patients detected by the present study. Four (66.7%) of the patients were in their fifties. All six patients (100.0%) had poor glycaemic control, indicated by HbA1c ≥ 8. Five of them (83.3%) had a body mass index within the normal range, while the other patient (16.7%) was overweight (Table 4). Further analysis of the data revealed a percentage of PTB patients of 0.3% (95% CI = 0.10%-0.70%) among males; among females it was 0.06% (95% CI = 0.00%-0.20%). The same proportion among the sub-categories of ≥ 60 years of age and HbA1c ≥ 8 was 0.002 each. Males ≥ 60 years of age with HbA1c ≥ 8 reported the highest percentage of 0.4% (95% CI = 0.05%-1.40%). Table 4 illustrates the socio-demographic and illness-related characteristics of the patients detected through active screening. Four out of six patients were males, and all of them were from economically disadvantaged families. The glycaemic control, as depicted by the HbA1c values, was poor among all of them.
Interpretation
Although active screening of DM patients for PTB is proposed as a measure to close the gap between estimated and reported PTB cases [22], the effectiveness of this strategy depends on many factors. The prevalence of PTB among DM patients is one such important factor. The number of diabetics needed to be screened to find one extra case of PTB is directly related to the local TB prevalence, and the yield of screening increases with the prevalence of TB in the locality [30]. However, even countries with a high burden of TB report different results for the number of TB patients detected actively through screening of DM patients, which can be attributed to programmatic issues related to implementing the screening programme. China and the Marshall Islands are countries where both TB and DM are highly prevalent and which have reported high rates of detection of TB through active screening among DM patients. In China, one study reported a TB prevalence of 342.7 per 100,000 persons with DM [33], while another study reported the TB prevalence among DM patients as 102 per 100,000 [34]. These are higher rates when compared with the TB prevalence among the local general population of 42.8 per 100,000 persons [34]. Similarly, a study from the Republic of the Marshall Islands reported the detection of 11 new TB cases after actively screening 353 DM patients, a rate of 3116 per 100,000 DM patients [35], which is much higher than the TB prevalence of 483 per 100,000 among the general population.
On the other hand, India, also a country with high caseloads of TB and DM, reports detecting lower rates of TB through active screening of DM patients. One study revealed only 18 patients with TB through active screening of a group of 11,691 DM patients [24], an incidence rate of 153.9 per 100,000 DM patients, whereas the incidence rate of TB among the general population in India is 199 per 100,000 [36]. Similarly, another Indian study reported not being able to detect a single case of TB through active screening of 630 DM patients, whose median age was 60 years and whose median HbA1c level was 8.7% [37]. It is interesting to note that all these studies used similar screening methods, with initial symptom checklists followed by sputum examination by Xpert MTB/RIF and acid-fast bacilli (AFB) testing and CXRay to diagnose TB patients. Sri Lanka is a country with a low prevalence of PTB, estimated by WHO as 0.06% (95% CI = 0.05%-0.08%) [5]. Sri Lanka records a DM prevalence of 7.3% [38], which is comparable to other Asian countries [39]. The present study showed the prevalence of screened PTB among the DM population to be 1.7 times higher than the PTB prevalence in the general population. When considering the subgroup of diabetic males > 60 years of age with HbA1c > 8, the prevalence of screened PTB was seven times higher than the PTB prevalence in the general population. At present, two specific populations are recommended for active screening of PTB in Sri Lanka: prison inmates and people living with HIV/AIDS. The proportion of PTB patients detected through active screening of prison inmates is 1.6% (95% CI = 1.4%-2.1%) [7] and of people living with HIV/AIDS is 9.7% (95% CI = 6.9%-12.9%) [8]. Accordingly, the screened PTB prevalence was 26 and 161 times higher than the PTB prevalence in the general population among prison inmates and among people living with HIV/AIDS, respectively. In the present study, the highest prevalence of screened PTB (0.4%) was among the sub-category of diabetic males with poor glycaemic control. Male sex [27,40] and poor glycaemic control [41] are well-documented factors associated with the risk of developing TB among DM patients. The authors of a review article [10] recommended active screening among uncontrolled diabetics and diabetic children with recent exposure to a TB patient, rather than mass screening of all DM patients. Similarly, an Australian nation-wide cohort study [16] concluded that DM alone does not warrant active screening of patients for TB.
Strengths and limitations of the study
The present study used an algorithm designed to direct study units into different care pathways based on pathophysiologically explainable risk factors for TB among DM patients. Considering the pathophysiology and complex interactions of TB and DM, the algorithm was designed so that any study unit with even a minimal set of risk factors would be directed to investigations to exclude PTB. For instance, after filtering out the patients who were currently on treatment for TB, the study units were inquired about having a productive cough for a duration of one week or more, rather than the two weeks typical for TB; thereafter, study units with even a single pathophysiology-based risk factor for TB were directed into the corresponding care pathways. In addition, DM patients above 45 years were recruited, as TB is more prevalent among those over 45 years in Sri Lanka [27]. The age pattern of DM clinic attendees also indicated 45 years to be the lower margin of the ages of clinic attendees. Hence, 45 years was taken as the lower age limit to increase the efficiency of the design in detecting PTB patients among DM clinic attendees.
The main limitation of the study is that it was conducted in only one DM clinic, which caters for a group of patients belonging to the middle and lower socio-economic groups in the country. However, considering the fact that PTB is known to be common among such socio-economic groups in Sri Lanka [42], the estimate of the proportion of PTB among DM patients is not likely to be an underestimate. In addition, the majority of the study sample were females, whereas PTB is commoner among males in Sri Lanka [27]. Nevertheless, the majority of attendees of any public DM clinic in the country would comprise females [43,44]. Hence the estimates of this pragmatic study could safely be assumed to be realistic if active screening were to be conducted in real settings. DM patients also having chronic NCDs such as hypertension and ischaemic heart disease may attend general medical clinics at the NHSL rather than the DM clinic. Unfortunately, the proportion of such patients cannot be calculated, as government hospital clinics in Sri Lanka do not possess an information system that allows estimation of this parameter. This can be considered a limitation of this study.
Conclusions
Active screening for PTB among all DM patients at clinic settings in Sri Lanka, a country with a low burden of TB, was found to be an ineffective measure to enhance TB case finding, given the very low prevalence of PTB among DM clinic attendees. However, the sub-category of diabetic males over 60 years of age with uncontrolled diabetes is an option to consider for active screening for TB. This requires further studies capturing different local settings, such as DM clinics at peripheral hospitals, to arrive at a conclusion on the effectiveness of actively screening DM patients to close the gap between estimated and reported TB numbers.
Supporting information S1 Dataset. (XLS) S1 Annex. Questionnaire to assess the proportion of patients with pulmonary tuberculosis among patients attending the diabetes clinic at National Hospital of Sri Lanka. (DOCX) | 2021-04-10T06:16:43.821Z | 2021-04-08T00:00:00.000 | {
"year": 2021,
"sha1": "42ade5e07e765480e949ba039bee3dc0e2c1d701",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0249787&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3fdd63b2808385e80366aa48a37afbd00b1814e2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
250667188 | pes2o/s2orc | v3-fos-license | Investigation of Specificity of Mechanical Properties of Hard Materials on Nanoscale with Use of SPM- Nanohardness Tester
Specificities of nanoscale deformation of hard brittle materials with hardness exceeding 10 GPa are investigated by means of the scanning probe microscope - nanohardness tester "NanoScan". It is found that pile-up forms when the sample surface is scratched with a diamond indenter. The height of this pile-up depends on the hardness and elastic modulus of the material. Defining the contact area without taking the pile-up height into account leads to an overestimation of hardness values. At scratching of the silicon carbide surface, a transition from plastic flow to fracture is observed. The results obtained allow the fracture toughness K_IC of silicon carbide to be estimated.
Introduction
Mechanical properties are of great importance for the characterization of hard and superhard materials such as minerals, ceramics, and glasses. Hard and superhard materials are substances with covalent bonding. The large lattice resistance to the movement of dislocations in these materials is the reason for their brittleness at room temperature [1]. In practice, measurement of the microhardness of brittle hard materials by the standard methods of indentation and sclerometry (scratching at a constant indenter load) is accompanied by the formation of a system of cracks on the surface or near it. However, reducing the scale of indentation and scratching to the submicron level makes it possible to observe essentially plastic flow without the formation of cracks [2].
According to the quantitative analysis of this effect by Lawn and Fuller [3], the transition from plastic flow to fracture occurs when the indentation size increases up to a size a given by
a ≅ (2/3) (K_IC / H)² tg²θ, (1)
where H is hardness, K_IC is the fracture toughness of the material, and θ is the indenter half angle. Previously, such a transition was observed at scratching of the surface of glasses [2] and silicon [4], where the scratch width was of the order of a few microns.
For measurements of mechanical characteristics on the submicron and nanoscale, nanoindentation is usually used. The Oliver-Pharr method is standard for determining H and E from a "load-penetration depth" curve. However, in the case of pile-up formation this method may give an underestimated value of the contact area, which leads to an overestimation of the H and E values.
An alternative method of measuring mechanical characteristics is nanoindentation and nanoscratching by means of an atomic force microscope (AFM) [5]. Indentation with an AFM has an advantage over conventional nanoindentation, because direct observation of the residual indent and groove is possible. The purpose of the present work is to investigate the mechanisms and laws of pile-up formation, as well as the influence of pile-up on the measurement of mechanical characteristics of hard and superhard materials with a scanning probe microscope (SPM). Of special interest is also the opportunity to observe the transition from plastic flow to fracture of traditionally brittle hard materials at nanoindentation and nanoscratching.
Experiments
The NanoScan (NS) measuring system, based upon the principles of SPM, is used in the present study. The NS, developed at TISNCM, is an effective tool for studying the mechanical properties of hard and superhard materials on the submicron and nanoscale [6]. The NS differs from other similar devices in the high stiffness of its probe (~10^4 N/m), a piezoceramic resonator in the form of a bimorph cantilever beam. The tip used for surface scanning is simultaneously the indenter for scratching and indentation. Diamond trihedral pyramids close to the Berkovich pyramid, synthesized at TISNCM, are used as tips. Operation of the probe in a resonant mode allows the tip-surface contact to be controlled through changes in the amplitude and frequency of the probe oscillations. Such a mode allows not only information about the topography of the scanned surface to be obtained, but also the elastic modulus to be estimated at various points of the surface during scanning (i.e., a map of the elastic modulus to be built). The construction of the probe makes it possible to apply an indenter load of up to 100 mN, which enables the hardness of hard and superhard materials to be measured by the indentation and sclerometry methods. In the NS measurement system an original method of elastic modulus measurement by means of approach curves is applied [7]. This method allows the elastic modulus to be measured on a scale of less than 100 nm for a wide range of objects. The method is based on the dependence of the oscillation frequency of the probe, which is in contact with the surface, on the penetration of the tip into the surface under loading. Application of the NS is most effective for materials with hardness of 1-100 GPa and elastic modulus of 10-1000 GPa.
The technique of hardness measurement by means of the NS consists in the following: in SPM mode the surface of the sample is investigated and a site with minimal inclination and roughness is selected. On the chosen site the same tip makes a scratch or indent at the designated indenter load. Then the same site is scanned in SPM mode again. The image obtained is used to determine the character of the deformations and the width of the scratch. The microhardness values of various materials obtained by the sclerometry and indentation methods are close in the case when the tip plastically deforms the material at scratching. This corresponds to the case of "indenter edge forward" scratching. The deformation then has the character of plastic extrusion from the groove, similar to the extrusion at the Vickers test. However, the plastic part of the total deformation at scratching is larger than at indentation [8]. Thus hardness H is defined as
H = k P / b², (2)
where k is a factor of the indenter form, P is the load on the indenter, and b is the width of the residual scratch [3]. The shape of the indenter is a very important parameter for submicrometre hardness tests, but in practice it is difficult to make indenters having a repeatable geometry, so an individual indenter calibration on reference materials with known hardness is required. For this purpose, hardness is measured on the reference sample at different loads and, accordingly, different values of the scratch width. Then, if the scratch width on the tested sample b_X is close to that on the standard sample b_S, the hardness H_X of the tested sample is, according to formula (2), defined as
H_X = H_S · P_X / P_S, (3)
where P_X and P_S are the loads on the indenter at scratching of the tested sample and of the standard with known hardness H_S, respectively. The reference sample should be homogeneous in volume and as isotropic as possible, with a smooth surface. It is necessary to measure the hardness of the reference sample with standard certified devices beforehand. As mentioned earlier, the Vickers hardness test is the closest to the method described. Thus, conversion from micro- to nanohardness is possible.
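As a minimal numerical sketch of the calibration procedure in relation (3), the following code computes the hardness of a tested sample from a reference scratch; the loads, widths, and the 5% width-tolerance check are illustrative assumptions and do not reproduce measurements from this work.

```python
def scratch_hardness(h_ref: float, p_ref: float, p_test: float,
                     b_ref: float, b_test: float, tol: float = 0.05) -> float:
    """Hardness of a tested sample from a reference scratch of known hardness.

    Assumes the scratch widths are close, as required by relation (3); otherwise
    the full relation H = k*P/b**2 with an indenter-specific factor k is needed.
    """
    if abs(b_test - b_ref) / b_ref > tol:
        raise ValueError("scratch widths differ too much for relation (3)")
    return h_ref * p_test / p_ref

# Illustrative values only: a reference sample of H ~ 9.5 GPa and a harder test sample.
print(scratch_hardness(h_ref=9.5, p_ref=2.0e-3, p_test=6.0e-3,
                       b_ref=0.50e-6, b_test=0.52e-6))  # about 28.5 GPa
```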
In addition, the hardness of the investigated samples was controlled by means of a standard microhardness tester and also by a commercial "NanoHardness Tester" (dynamic nanoindentation method). Monocrystals of hard and superhard materials were chosen as samples in this study. The sample surfaces were prepared by careful mechanical polishing. Nanoscratching of the samples was carried out under the "edge forward" scheme and was not accompanied by the formation of radial cracks; formation of pile-up on both sides of the scratch was observed. Distances S1, S2 and S3 (Figure 1) were used as the width of the scratch and indent in different works [4,9,10]. Therefore it was important to find out which of these distances should be considered in the definition of hardness by the sclerometry method on the nanoscale. Glass was chosen as the test sample. The results of the measurements showed that the hardness measured using distance S1 is equivalent to microhardness, while the choice of distances S2 and S3 leads to underestimated or overestimated values.
Results and discussion
It is known that there is an elastic recovery of the scratch depth after removal of the load [11], while the distance between the tops of the pile-up (S1) does not vary. Therefore, if the scratch width is estimated from distance S2, an error in the determination of hardness arises owing to the varying degree of elastic recovery in different materials. The error in determining H when the scratch width is defined as S3 arises owing to the difference in the pile-up configuration for various substances. With an increase in load and, consequently, in scratch width up to 1.5 µm, the error in measuring the width increases, connected with outcrops of dislocations in the pile-up area (Figure 2).
In work [5] a significant difference between the hardness values obtained by the Oliver-Pharr procedure and from direct measurement of the indent size by means of AFM was observed for fused silica and silicon. The conclusion was drawn that Berkovich and Vickers indenters are blunt and therefore not suitable for hardness testing by the nanoindentation method because of the significant elastic recovery of the residual indent. Thus special tips with an included angle of approximately 60° are necessary for nanohardness measurements, especially when the penetration depth does not exceed 50 nm; for the Berkovich indenter this angle is about 140°. In the present work, Berkovich indenters with included angles from 120° up to 160° were used for nanoscratching. However, no distinction in the hardness values obtained for indenters with different angles was observed. It is known that the sclerometry method implies a larger plastic deformation in comparison with the indentation method [8], so the impression is easier to form by scratching than by indentation. Thus, an advantage of the sclerometry method is the possibility of using standard Berkovich pyramids, and also blunter indenters, for nanohardness measurements. This is especially important for hardness tests on thin films and coatings with a thickness not exceeding some tens of nanometers.
Hardness values were determined from no fewer than ten scratches made in various crystallographic directions. The results of the hardness measurements for the investigated samples are listed in Table 1. The hardness measured by means of the NS coincides with the microhardness, whereas the dynamic indentation method gives overestimated values (H (dynamic nanoind.)). Therefore, microindentation of hard and superhard materials can be substituted by nanosclerometry, which is equivalent in results but less destructive. Moreover, for revealing crystal anisotropy the sclerometry method appears more effective. Hardness measurement by the sclerometry method is influenced by the relation between the direction of the hardness track and the underlying crystal geometry; anisotropy of hardness manifests itself only in a change of the scratch width. Indentation with Berkovich and Vickers pyramids appears less effective because, in the presence of anisotropy, the print of a pyramid takes a distorted form, which complicates the measurement of its diagonals and, hence, the determination of the hardness value.
Jadret et al. [10] investigated the scratching process of elastic-plastic materials in a wide range of physical-mechanical properties, from metals to polymers; the width of the residual scratch was in the range from 1 to 30 µm. It was revealed that the height of the pile-up depends on the ratio of the elastic and plastic parts in the total deformation, and that this dependence is general for all materials investigated. Bucaille et al. [11] carried out a numerical study of the behaviour of elastic-plastic materials during a scratch test. Simulations were performed with three-dimensional finite element modelling, and the results showed a good correlation with the experimental study [10]. According to [11], the ratio of the pile-up height h_a to the full scratch depth h (Figure 1) depends on the rheological factor X = (E/σ_0)·cot θ, where E is Young's modulus, σ_0 is the yield stress, and θ is the indenter half angle. For brittle materials σ_0 ≈ H/2 [12]. When the interaction of the indenter and the surface is elastic-plastic, the h_a/h value is given by a function of this rheological factor (relations (4) and (5)) for scratching with a Berkovich indenter by the "edge forward" scheme.
The dependence of h_a/h for the materials investigated in our study is presented in Figure 3. A diamond indenter with an included angle of about 120° was used for the scratching. The result of the calculation under (5) is represented by a continuous line. The good agreement of the data of the present work with (4) for all samples investigated, except for silicon, confirms plastic flow in these materials; thus, formation of a system of cracks does not occur in the whole range of the applied load. Similar research was carried out for silicon carbide, where the included angle of the indenter used was 140°. The height of the pile-up around the scratch approximately corresponded to the size calculated under (4). Silicon, however, is an exception, because no pile-up is formed at scratching. At the same time, asperities arise at the bottom of the scratch groove, which are absent at the bottom of scratches on the surfaces of the other samples. The mode I fracture toughness K_IC determined at indentation of silicon on the submicron scale is about 0.6 MPa·m^1/2 [13]. According to (1), the transition from plastic deformation to fracture in silicon should occur when the scratch width exceeds ~300 nm. Therefore, the observed asperities may be considered as median cracks arising at scratching by the pyramidal indenter. The presence of a system of median cracks testifies that the critical pressure for them in silicon is lower than for radial cracks. Besides, at penetration of the indenter into the silicon surface, a phase transition induced by the contact pressure and accompanied by a change of the density of the material [13] is possible. Such a phase transition can also influence the nanoscratching of the silicon surface.
The study showed that significant pile-up is formed at scratching of ZrO2, sapphire, ruby and silicon carbide, for which the hardness measured by the dynamic nanoindentation method with the "NanoHardness Tester" exceeds the values obtained by the nanosclerometry and microindentation methods. According to [10], pile-up should also be formed at nanoindentation of the same samples. In that case the height of the pile-up should be included in the penetration depth of the indenter. However, the penetration depth in the Oliver-Pharr procedure is counted from the level of the initial surface. Therefore the projected contact area appears underestimated, which leads to an overestimated value of the calculated hardness. Thus, the difference in hardness values can be explained by the error in the definition of the contact area in dynamic nanoindentation.
At scratching of the 15R-SiC monocrystal surface, a transition from plastic flow to fracture with the formation of radial cracks was observed when the scratch width increased to 800 nm, which corresponded to a load of 8 mN (Figure 4). Radial and median cracks appeared at scratching both under the "indenter edge forward" scheme and under the "indenter face forward" scheme. However, scratching the sample "face forward" is much more destructive than scratching "edge forward" at the same load. At indentation of SiC, formation of radial cracks was not observed.
The results obtained allowed the fracture toughness K_IC of silicon carbide to be estimated. As scratching by the "edge forward" method is similar to the impression in the Vickers hardness test, we used a model for indentation by a Vickers pyramid [12]. In this model K_IC is expressed through s, half of the scratch width, L, the length of a radial crack, and a constant C = 2 for brittle materials [12]. The estimation according to this model gives a K_IC value close to 1 MPa·m^1/2, while the literature value is 2.8 MPa·m^1/2. The underestimated K_IC value obtained in the present work is probably connected with the features of the studied sample, and also with the use of a model for indentation extended to the nanoscale. Substituting the H and E values obtained in the present work into formula (1) gives a value of a of about 500 nm, which corresponds to the scratch width at which median cracks appear. As radial cracks arise when the scratch width increases to 800 nm, it is possible to conclude that in the given SiC sample the critical load for median cracks is lower than for radial cracks.
Conclusions
It was shown that nanoscratching is an effective method for investigating the plastic deformation of brittle hard and superhard materials on the nanoscale. Defining the contact width of a residual scratch as the distance between the tops of the pile-up on the sides of the scratch groove allows errors in hardness measurement connected with elastic recovery, and also with differences in pile-up configuration, to be avoided. Nanohardness measured by sclerometry does not depend on the included angle of the diamond indenter in the angle range of 120-160°. The height of the pile-up formed at scratching depends on the rheological factor of the material. Observation of the transition from plastic flow to fracture with increasing indenter load gives the possibility to study the nucleation of nanocracks of various types. Moreover, it is possible to determine fracture characteristics on the submicron and nanoscale. | 2022-06-28T02:52:58.108Z | 2007-01-01T00:00:00.000 | {
"year": 2007,
"sha1": "6014dd029dd8e3366773cbc81ddf8caa743d29c1",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/61/1/145",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "6014dd029dd8e3366773cbc81ddf8caa743d29c1",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
247253561 | pes2o/s2orc | v3-fos-license | Optical tracking in team sports
Sports analysis has gained paramount importance for coaches, scouts, and fans. Recently, computer vision researchers have taken on the challenge of collecting the necessary data by proposing several methods of automatic player and ball tracking. Building on the gathered tracking data, data miners are able to perform quantitative analysis on the performance of players and teams. With this survey, our goal is to provide a basic understanding for quantitative data analysts about the process of creating the input data and the characteristics thereof. Thus, we summarize the recent methods of optical tracking by providing a comprehensive taxonomy of conventional and deep learning methods, separately. Moreover, we discuss the preprocessing steps of tracking, the most common challenges in this domain, and the application of tracking data to sports teams. Finally, we compare the methods by their cost and limitations, and conclude the work by highlighting potential future research directions.
choose their suitable method depending on the task at hand. Furthermore, understanding all these methods requires deep knowledge of computer vision for quantitative analysts in sports, which is not realistic. Therefore, in this paper, we have the following goals: to provide a robust classification of methods for the two tasks of detection and tracking and to give insights about the applied computer vision techniques of extracting trajectories to the quantitative analysts in sports.
Several papers have attempted to present the myriad of state-of-the-art object tracking algorithms. A broad description of object tracking methods was given in Alper et al. [2006], and a more recent one in Reddy et al. [2015]. Moreover, Dhenuka et al. [2018] presented a survey on Multiple Object Tracking (MOT) methods, while a survey on solving occlusion problems was published in Lee et al. [2014]. The first survey on the application of deep learning models in MOT is presented in Ciaparrone et al. [2019]. All these surveys cover the description of tracking methods for generic objects, such as humans or vehicles. It was in Manafifard et al. [2017a] that the authors summarized the state-of-the-art player tracking methods focusing on soccer videos. However, these surveys show the following shortcomings: most of them are not dedicated to team sports and survey all kinds of object tracking algorithms, while the sport-dedicated survey, Manafifard et al. [2017a], is too technical, suitable only for computer vision analysts, and dedicated only to tracking.
This survey contributes to the state-of-the-art player and ball tracking methods as follows. First, the methods in detection and tracking tasks are classified separately. Second, this paper is not only listing the methods but also gives an insight about the computer vision techniques to the quantitative analysts in sports, who need the extracted trajectories for their quantitative models. Third, the application of deep learning in team sports is surveyed for the first time in the literature. Fourth, we provide a cost analysis of the methods according to their computational and infrastructure requirements. This paper is organized as follows. In Section 2 we explain our paper collection process and the camera setup requirements of the published works. We list the methods for the player and ball detection in Section 3, and the player and ball tracking in Section 4. We evaluate the categorized techniques in terms of their applied theoretical methods and analyze their cost in Section 5, and finally, we conclude the work in Section 6.
Eligibility and data collection
This survey is conducted to help quantitative sports analysts choose the best method to create their own tracking data from sports videos. For this task, the eligible papers are collected from Science Direct, Google Scholar, Scopus databases, and ACM, IEEE, Springer digital libraries using the following keywords for filtering papers and minimizing bias: "Sports analytics", "soccer", "player tracking", "ball tracking", "player detection", "ball detection", "deep learning for tracking", "fixed camera", "moving camera", "broadcast sports video". In the first round of collection, 125 papers have been identified and we carefully inspected their contributions in terms of 1) detection or tracking, 2) camera setup, and 3) deep learning-based or traditional methodologies. In order to make the best structure of this survey, we excluded the papers in which tracking was not the main focus. An example is a method called DeepQB in American football proposed by Burke [2019]. This paper proposes a deep learning approach applied to player tracking data to evaluate quarterback decisions, which is clearly not a direct contribution in player tracking methods. As a result of filtering those papers and focusing on player or ball detection and tracking, 50 papers were eligible for this survey. Furthermore, we also classified eligible papers according to their camera setup as follows.
One of the most important criteria for the evaluation of the methods in this work is the required camera setup. Depending on the camera setup, the frame extraction methods are different. Several studies in sports video analytics are limited to a single fixed camera. In these methods, the preprocessing steps are simpler and faster, as they do not require time and location synchronization. However, as a single camera needs to cover the whole playfield, the frames are mostly blurry and difficult to use for detection Needham and Boyle [2001], Rodriguez-Canosa et al. [2012], Sabirin et al. [2015], Arbues et al. [2019]. An alternative setting to improve resolution and accuracy is to use multiple fixed cameras. In such videos occlusion problems can be handled easily, as a player or ball occluded in one frame can be recognized in a frame captured by another camera from a different angle Ren et al. [2008, 2009], Wu [2008], Yazdi and Bouwmans [2018]. Another option is to use multiple moving cameras, which makes the video processing more complex but provides more flexibility in the analysis. These types of video require significant synchronization effort, but they ultimately produce longer trajectories, as the cameras try to follow the ball controllers Xu et al. [2004], Agelet Ruiz [2010], Mondal [2014], Alavi [2017]. In this paper, we classify each of the cited papers according to their required video inputs in terms of the cameras being fixed or moving, and of their cardinality in the arena.
Player and ball detection
Tracking data, i.e., the exact location of the players and the ball on the field at each moment of the match, is the most important data for a quantitative model developer. Player and ball detection methods are computer vision techniques that allow the analyst to identify and locate players and the ball in a frame of a sports video. Detection methods provide the input to tracking, which would be a simple task if all players and the ball were totally visible in each frame and there were no occlusion. However, in real-world videos, most frames are blurry and continuous tracking fails due to e.g., occlusion, poor light, or posture changes. Therefore, the detection task should be combined with an appropriate tracking method to accurately track the players and the ball (See Figure 1).
In this section, we focus on detection methods that aim to find the bounding box of the players and the ball, and to localize the different detection features inside each bounding box. Bounding boxes are imaginary boxes around players and the ball (see Figure 1) that are used to separate each player and ball from other objects in a video frame. We classify detection methods into the categories of traditional and deep learning-based methods. As Figure 2 shows, while in the traditional methods the features of the input objects need to be described and extracted by the analyzer and depend on the detection algorithms, a deep learning method performs this process automatically through the layers of a neural network. Therefore, data quality, computational power, domain expertise, training time, and required accuracy specify the selection of the suitable choice of method to apply. We briefly describe each group of methods separately, and give a summary of published research papers, along with their important attributes, in Table 1.
Traditional methods for detection
In the traditional methods of detection, the features of players, ball, and playfield must be precisely described and extracted by the analyzer. In this section, we classify the methods according to their description of the features, and their extraction types.
Histogram of Oriented Gradients
Histogram of Oriented Gradients (HOG) is a feature descriptor and is essentially used to detect multiple objects in an image by building histograms of pixel gradients through different parts of the image. HOG considers these oriented gradients as features. An example of calculating a histogram of gradients is illustrated in Figure 3. As the first step, the frame is divided into 8 × 8 cells. For each cell, the gradient magnitude (arrows' length) and gradient direction (arrows' direction) are identified. Consequently, the histogram containing 9 bins corresponding to angles 0, 20, 40, . . . , 160 is calculated. This feature vector can be used to classify objects into different classes, e.g., player, background, and ball. This method is used by Mackowiak et al. [2010] and Cheshire et al. [2015].
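A minimal sketch of extracting such a HOG feature vector from a single frame is given below, using scikit-image as one possible implementation; the file name and the choice of 2 × 2 cell blocks are assumptions, while the 8 × 8 cells and 9 orientation bins follow the example above.

```python
from skimage import io, color
from skimage.feature import hog

# Placeholder path to a single video frame; any RGB image works.
frame = color.rgb2gray(io.imread("frame_0001.png"))

# 9 orientation bins over 8x8-pixel cells, as in the example above.
features, hog_image = hog(
    frame,
    orientations=9,
    pixels_per_cell=(8, 8),
    cells_per_block=(2, 2),
    block_norm="L2-Hys",
    visualize=True,
)
print(features.shape)  # flattened HOG feature vector, input to a player/ball classifier
```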
In these methods, the court lines can be detected with Hough transform, another feature extraction technique that searches for the presence of straight lines in an image. This algorithm fits a set of line segments to a set of image pixels.
Background modelling
Background modeling is another method for detecting players and the ball, and is a complex task as the background in sports videos frequently changes due to camera movement, shadows of players, etc. Most of the methods in the background modeling domain consider image pixel values as the features of the input objects. In the domain of player and ball detection, the following two methods have been proposed for background modeling: the Gaussian Mixture Model (GMM) and pixel energy evaluation.
Gaussian Mixture Model (GMM): In this approach, playfield detection is performed first by taking the peak values of the RGB histograms through the frames, under the assumption that the playfield is the largest area in the frames. Then each of these extracted background pixels is modelled by k Gaussian distributions; thus, the probability of a pixel having value X_t can be calculated as
P(X_t) = Σ_{i=1}^{k} ω_i · η(X_t; μ_i, Σ_i),
where ω_i is the weight of the i-th component (all summing to 1), and η(X_t; μ_i, Σ_i) is a normal distribution density function. Based on these probabilities and by setting arbitrary thresholds on the pixel values, the background pixels can be subtracted and the players or the ball will be detected. This algorithm cannot recognize players in shadows.
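The sketch below illustrates mixture-of-Gaussians background subtraction with OpenCV's stock MOG2 subtractor; this is close in spirit to, but not identical with, the playfield-histogram variant described above, and the video path and parameter values are placeholders.

```python
import cv2

# OpenCV's MOG2 subtractor models each pixel with a mixture of Gaussians,
# similar in spirit (though not identical) to the approach described above.
cap = cv2.VideoCapture("match.mp4")          # placeholder video path
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=True)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)           # 255 = moving foreground (players/ball)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3)))
    # Connected components of `mask` are candidate player/ball blobs.
cap.release()
```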
Pixel energy evaluation: Another background model is proposed by Mazzeo et al. [2008]. In this method, the energy information of each point is analyzed in a small window: first, the information, i.e, mean and standard deviation, of the pixels at each frame is calculated. Then, by subtracting the information of the first image of the window and each subsequent image, the energy information of each point can be identified. Consequently, the slower energy points (static ones) represent the background, and higher energy points (moving ones) represent the players or the ball.
Edge detection
Edge detection is a method for detecting the boundaries of objects within frames as the features. This method works by detecting discontinuities in brightness. The researchers who choose this method for player and ball detection mostly utilize the following two operators: the Canny edge detector and Sobel filtering. Figure 4 demonstrates the edge detection methods on a sample frame of a player.
Canny edge detection: is a popular method in OpenCV for binary edge detection. (Figure 4(b)). Direkoglu et al. [2018] proposed using the Canny edge detection method for extracting image data and features. However, there might be missing or disconnected edges, and it does not provide shape information of the players and the ball. Thus, given a set of binary edges, they solve a particular heat equation to generate a shape information image ( Figure 4(c, d)). In mathematics, the heat equation is a partial differential equation that demonstrates the evolution of a quantity like heat (here heat is considered as binary edges) over time. The solution of this equation is filling the inside object shape. This information image removes the appearance variation of the object, e.g., color or texture, while preserving the information of the shape. The result is the unique shape information for each player, which can be used for identification. This method works only for videos recorded with fixed cameras.
Sobel filtering: In the methods by Naushad Ali et al. [2012] and Rao and Pati [2015], the Sobel gradient algorithm is used to detect horizontal and vertical edges (Figure 4(e, f)). The gradient is a vector with components (Δx, Δy), and its direction is calculated as tan⁻¹(Δy/Δx). Due to the similar color of the ball and the court lines, if the Sobel gradient algorithm is applied for background elimination instead of color segmentation, overlapping of the ball and court lines will not be a problem. However, general overlapping problems, e.g., player occlusion, cannot be handled with this method.
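A short sketch of both operators with OpenCV is given below; the input crop and the Canny thresholds are placeholders, and real systems tune these per video.

```python
import cv2

frame = cv2.imread("player_crop.png", cv2.IMREAD_GRAYSCALE)   # placeholder crop

# Binary edges (Canny); the two thresholds are arbitrary and data dependent.
edges = cv2.Canny(frame, threshold1=50, threshold2=150)

# Horizontal and vertical gradients (Sobel), as in Figure 4(e, f).
grad_x = cv2.Sobel(frame, cv2.CV_64F, 1, 0, ksize=3)
grad_y = cv2.Sobel(frame, cv2.CV_64F, 0, 1, ksize=3)
magnitude = cv2.magnitude(grad_x, grad_y)
direction = cv2.phase(grad_x, grad_y, angleInDegrees=True)     # tan^-1(dy/dx)
```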
Supervised learning
In many proposed methods a robust classifier is trained to distinguish positive samples, i.e., players and/or ball, and negative samples, i.e., other objects or parts of the playfield. Any classification method, such as Support Vector Machine or Adaboost algorithms, can be trained for accurate detection of the players. Some examples of positive and negative sample frames are given in Figure 5. Figure 4: Edge detection methods: (a) original frame, (b) binary edges with Canny method, (c) shape information image Direkoglu et al. [2018], (d) colored shape information image Direkoglu et al. [2018], (e) horizontal Sobel operator, (f) vertical Sobel operator Support Vector Machine: Several related works state that the advantages of SVM compared to other classifiers include better prediction, unique optimal solution, fewer parameters, and lower complexity. In the method of Zhu et al. [2006], the playfield is subtracted with a GMM. The results of background subtraction are thousands of objects, which SVM can help to classify into player and not player objects. However, in this method, the training dataset is manually labelled, which is time-consuming. In order to solve this problem, Chengjun [2018] proposed fuzzy decision making for automatic labelling of the training dataset.
Adaboost algorithm: Adaboost, short for Adaptive Boosting, is used to make a strong classifier by a linear combination of many weak classifiers to improve detection accuracy. The main idea is to set the weights of the weak classifiers and to train on the data sample in each iteration until the ensemble can accurately classify the unknown objects. Markoski et al. [2015] used this algorithm for basketball players' face and body part recognition; however, they concluded that Adaboost is not accurate enough for object detection in sports events. Furthermore, Lehuger et al. [2007] showed that deep learning methods outperform the Adaboost algorithm for player detection.
Deep learning methods for detection
In the task of player detection, researchers usually use deep learning to recognize and localize jersey numbers. Most of the works in this area use a Convolutional Neural Network (CNN) which is a deep learning model. The general architecture of CNN for digit recognition is illustrated in Figure 6. As the first step, players' bounding boxes should be detected. Then digits inside each bounding box should be accurately localized. These localized digits will be the input of CNN. Several convolution layers in CNN will assign importance to various features of the digits. Consequently, the neurons in the last layer will classify the digits from 0 to 9 classes. In this area, different works propose the following methods for improving the performance of detection: 1) how to localize digits inside each frame, 2) how to recognize multiple digits, 3) how to automatically label the training dataset, i.e, which benchmark dataset to use.
The first CNN-based approach for automatically recognizing jersey numbers from soccer videos was proposed by Gerke and Schäfer [2015]. However, this method cannot recognize numbers in case of perspective distortion of the camera. To solve this problem, Li et al. [2018] used a Spatial Transformer Network (STN) to localize jersey numbers more precisely. The STN helps to crop and normalize the appropriate region of the numbers and improves the performance of classification. Another digit localization technique is the Region Proposal Network (RPN), a convolutional network that has a classifier and a regressor and is trained end-to-end to generate high-quality region proposals for digits. An RPN is used by Liu and Bhanu [2019] for classification and bounding-box regression over the background, person, and digit classes. While these methods can be more accurate than some traditional methods for player detection, and they eliminate the need for manual feature description and extraction, they are also more expensive due to higher computation and training time. Most of these methods require special versions of GPUs to be applied. Moreover, training and testing CNNs might be more time-consuming than running traditional methods.
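As an illustration of the digit-classification stage in Figure 6, the sketch below defines a small CNN in PyTorch that maps a localized digit crop to the classes 0-9; the layer sizes, 32 × 32 input resolution, and random input batch are assumptions, not the architecture of any of the cited works.

```python
import torch
import torch.nn as nn

class DigitCNN(nn.Module):
    """Small CNN classifying a cropped, localized digit patch into classes 0-9."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# A batch of 4 hypothetical 32x32 RGB digit crops produced by a localizer (e.g., an STN or RPN).
logits = DigitCNN()(torch.randn(4, 3, 32, 32))
print(logits.shape)  # torch.Size([4, 10]), one score per digit class
```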
Player tracking
Detection methods calculate the location of each player and the ball at each frame of the videos. There are always some frames for which the detection fails due to the blurriness of the frame, poor light conditions, occlusions, etc. In these cases, the detection methods cannot provide the location of the same player and ball in consecutive frames to construct continuous trajectories. Therefore, a player tracking method is needed to associate the partial trajectories, and to provide long tracking information of each of the players and the ball (see Figure 1). Player tracking involves the design of a tracker that can robustly match each observation to the trajectory of a specific player. This tracker can be designed for a single object or for multiple objects. The biggest challenge in tracking is the overlapping of players, namely the occlusion. Several studies suggested solutions for making a unique, continuous trajectory for each player by solving the occlusion problem. Those methods mostly follow filtering and data association. However, each method follows a different description for interest points (features) for filtering, and data association depends on the custom definition of probabilistic distributions. In this section, we survey the tracking methods classified by whether they are based on traditional or deep learning models.
Traditional methods for tracking
Same as the previously mentioned traditional detection models, the traditional tracking algorithms also require manual extraction and description of the player and ball features. The main categories of tracking methods in the literature of sports analytics are the following: point tracking, contour tracking, silhouette tracking, graph-based tracking, and data association methods.
Point tracking
The methods using point tracking mostly consider some points in the shape of the player and ball as the features, and choose the right algorithm (e.g., Point Distribution Model, Kalman filter, Particle filter) to associate those points through consecutive frames (see Figure 7).
Point Distribution Model:
In these methods, the idea is to describe the statistical models of the shape of players and ball, called Point Distribution Model (PDM). This method is used by several studies such as Mathes and Piater [2006], Hayet et al. [2005], Li and Flierl [2012]. The shape is interpreted as the geometric information of the player, which is the residue once location and scaling are removed. As the first step, they extract the vector of features using 2 methods: Harris detector, or Scale Invariant Feature Transform (SIFT). Harris detector is the corner detection operator to extract corners and infer features of an image. Example results of Harris detector are shown as some points in Figure 7. SIFT is a feature detector algorithm to describe local features in images. These extracted features are detectable even under modifications in scale, noise, and illumination. Then, by learning the spatial relationships between these points, they construct the PDM to concatenate all feature vectors, i.e., interest points, of players ( Figure 8). We provide a review and comparison of point tracking methods in Table 2.
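A minimal sketch of extracting such interest points with OpenCV is shown below; the crop file name and detector parameters are placeholders, and SIFT availability depends on the OpenCV build (version 4.4+ or the contrib package).

```python
import cv2

patch = cv2.imread("player_crop.png", cv2.IMREAD_GRAYSCALE)   # placeholder crop

# Harris-style corners: interest points on the player's textured regions.
corners = cv2.goodFeaturesToTrack(patch, maxCorners=50, qualityLevel=0.01,
                                  minDistance=5, useHarrisDetector=True, k=0.04)

# SIFT keypoints and descriptors, robust to scale, noise, and illumination changes.
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(patch, None)

n_corners = 0 if corners is None else len(corners)
print(n_corners, len(keypoints))
```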
Particle filter: All particle filter tracking systems aim to estimate the state of a system (x_t), given a set of noisy observations (z_{1:t}). Thus the goal is to estimate P(x_t|z_{1:t}). If we consider this problem as a Markov process, a closed-form solution exists if the system is assumed to be linear and each conditional probability distribution is modeled as a Gaussian. However, these assumptions often cannot be made, as they decrease the accuracy of prediction. Particle filtering helps to eliminate the necessity of such extra assumptions. This method approximates the probability distribution with a weighted set of N samples:
P(x_t|z_{1:t}) ≈ Σ_{i=1}^{N} ω_i δ(x_t − x_i), (2)
where ω_i is the weight of the sample x_i. Now the questions are how to assign the weights and how to sample the particles. Several studies suggested different methods for answering these questions.
In the methods by Kataoka and Aoki [2011] and Manafifard et al. [2017b], the particles are players' positions. Linear uniform motion is used to model the movement of the particles, and the Bhattacharyya coefficient is applied for assigning a weight, i.e., a likelihood, to each particle. In statistics, the Bhattacharyya coefficient (BC) is a measure of the amount of overlap between two statistical samples (p, q) over the same domain x, and is calculated as BC(p, q) = Σ_x √(p(x) q(x)). In the works by Petsas and Kaimakis [2016] and Yang and Li [2017], each particle is estimated by the updated location of the player, knowing the last location plus noise: x_k = x_{k−1} + v_k, where the noise v_k is assumed to be i.i.d. following a zero-mean Gaussian distribution. Moreover, in Yang and Li [2017], particles are created based on the color and edge features of players, and the weight of each particle is computed from the similarity between the particles and the targets. Dearden et al. [2006] introduced Sample Importance Resampling to show that the shape of a player can be represented by a set of particles, e.g., edge, center of mass, and color pixels; those points can also represent a probabilistic distribution of the state of the player (Figure 9). Another method is proposed by de Pádua et al. [2015], in which players are detected by an adaptive background subtraction method based on a mixture of Gaussians, and each detected player is automatically tracked by a separate particle filter and the weighted average of its particles. We show the above-mentioned methods for particle filtering in Table 3.
Figure 9: Particle filtering from Dearden et al. [2006]: (a) set of 500 particles for P(x_t, y_t|y_t) of a player; (b) posterior probability distribution function given the current state of a particle P(y_t|x_t = x), where darker points represent higher probability.
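The sketch below shows one bootstrap-filter step of the general scheme in equation (2) for a single player in 2-D; the motion and measurement noise levels are arbitrary, and the Gaussian likelihood of a detection stands in for the colour, edge, or Bhattacharyya-based likelihoods used in the works above.

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter_step(particles, weights, measurement, motion_std=2.0, meas_std=5.0):
    """One bootstrap-filter update of 2-D player positions.

    particles: (N, 2) positions, weights: (N,), measurement: noisy detection (x, y).
    Prediction follows x_k = x_{k-1} + v_k with zero-mean Gaussian noise.
    """
    # Predict: propagate particles with random-walk motion noise.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: weight each particle by the likelihood of the current detection.
    dist2 = np.sum((particles - measurement) ** 2, axis=1)
    weights = np.exp(-0.5 * dist2 / meas_std**2)
    weights /= weights.sum()
    # Resample proportionally to the weights (systematic resampling also works).
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

particles = rng.uniform(0, 100, size=(500, 2))      # 500 particles on a 100x100 image patch
weights = np.full(500, 1.0 / 500)
particles, weights = particle_filter_step(particles, weights, measurement=np.array([40.0, 60.0]))
print(particles.mean(axis=0))                        # state estimate of the player position
```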
Kalman filter: The Kalman filter (KF) method is mostly used in systems with a state-space format. In state-space models, we have a set of states evolving over time. However, the observations of these states are noisy and we are sometimes unable to observe the states directly. Thus, state-space models help to infer information about the states, given the observations, as new information arrives. In player and ball tracking, the observations of two inputs, i.e., time and noisy position measurements, continuously update the tracker. The role of the KF is to estimate x_t, given the initial estimate of x_0 and a time series of measurements (observations) z_1, z_2, ..., z_t. The KF process defines the evolution of the state from time t − 1 to t as
x_t = F x_{t−1} + B u_{t−1} + ω_{t−1}, (3)
where F is the transition matrix for state vector x_{t−1}, B is the control-input matrix for control vector u_{t−1}, and ω_{t−1} is the noise following a zero-mean Gaussian distribution. A typical KF process is shown in Figure 10. As we can see, the Kalman filter and the particle filter both recursively update an estimate of the state, given a set of noisy observations. The Kalman filter performs this task by linear projections (3), while the particle filter does so by estimating the probability distribution (2).
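A minimal constant-velocity Kalman filter in the spirit of equation (3) is sketched below; the frame rate, noise covariances, and the three example detections are illustrative placeholders rather than values from any cited tracker.

```python
import numpy as np

# Constant-velocity model for one player: state x = [px, py, vx, vy].
dt = 0.04                                    # frame interval for a 25 fps video
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)    # transition matrix of equation (3), no control input
H = np.array([[1., 0., 0., 0.],
              [0., 1., 0., 0.]])             # only the noisy position is observed
Q = np.eye(4) * 0.01                         # process noise covariance
R = np.eye(2) * 4.0                          # measurement noise covariance

def kf_step(x, P, z):
    """One predict/update cycle given a detection z = [px, py]."""
    x = F @ x                                # predict
    P = F @ P @ F.T + Q
    y = z - H @ x                            # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ y                            # update
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.array([10., 20., 0., 0.]), np.eye(4)
for z in [np.array([10.5, 20.4]), np.array([11.1, 20.9]), np.array([11.8, 21.3])]:
    x, P = kf_step(x, P, z)
print(x[:2])                                 # smoothed position estimate for the player
```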
The following studies use the Kalman filter for player and ball tracking: Makandar and Mulimani [2018], Kim and Kim [2009]. We summarize the KF methods in Table 4.
Contour tracking
Contour tracking for dynamic sports videos provides basic data, such as the orientation and position of the players, and is used when we have deforming objects, i.e., players and the ball, over consecutive frames. Figure 11 shows some examples of such contours. Many methods have been proposed to track these contours. In a simple approach, the centroid of these contours plus the bounding box of the player is obtained, and the player can be traced Hanzra and Rossi [2013], Beetz et al. [2007]. Researchers in this area have proposed several methods for assigning a suitable contour to the players and the ball. Patil et al. [2018] find a player's contours as curves joining all the continuous points (along the boundary) having the same color or intensity, so they can track these contours and decide whether the player is in an offside position. Lefèvre et al. [2000, 2002] and Lin [2018] suggest snake or active contour tracking, which does not include any position prediction. In such methods, the algorithm fits open or closed splines (i.e., special functions defined piecewise by polynomials) to the lines or edges of the players. An active contour can be represented as a curve [x_t, y_t], t ∈ [0, 1], segmenting players from the rest of the image, which can be closed or not. This curve is then iteratively deformed and converged to the target contour (Figure 12) to minimize an energy function and to fit the curve to the lines or edges of the players. The energy function is represented by the physical properties of the contour, i.e., the shape of the contour, plus the gradient and intensity of the pixels in the contour. A review of the contour representations of the above-mentioned tracking methods is given in Table 5.
Silhouette tracking
When the information provided by contours and simple geometric shapes is not enough for the tracking algorithm, extracting the silhouette of the players and of the ball can provide extra information on the appearance of the object in consecutive frames. Unlike contours, the silhouette of a player is not a curved shape. Thus, it does not require deformation and convergence to the target shape of players and the ball. Instead, this method proposes some aspect ratios to describe the invariant shape. An example of this shape extraction for a specific player is illustrated in Figure 13. In such cases, shape analysis can help the tracking process as follows.
Figure 13: Silhouette tracking
Shape matching: In the literature, the shape of an object is defined by its local features that are not determined or altered by additive contextual effects, e.g., location, scale and rotation. This method is mostly used for ball tracking. The problem in this area is that the shape of the ball varies significantly in each frame and often does not look like a circle at all (Figure 14). Different studies suggest aspect ratios, i.e., shape descriptors, to identify the near-circular ball images. Chakraborty and Meher [2013] suggest using the degree of compaction C_d, which is the ratio of the square of the perimeter of the given shape to its area: C_d = P²/(4πA). Therefore, if C_d > 50%, the shape can be filtered as a ball. Another shape descriptor is eccentricity, proposed by Naidoo and Tapamo [2006]; it is defined as the ratio of the longest diameter to the shortest diameter of a shape. The form factor indicates how circular an object is, and if the result is between [0.2, 0.65] they consider it a ball. Besides these shape descriptors, Huang et al. [2007] proposed using skeletons to separate a shape's topological properties from its geometry. To extract the skeleton for every foreground blob, they use the Euclidean distance transform. Table 6 shows a review of shape analysis in player and ball tracking methods.
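The sketch below computes the degree of compaction and the diameter-ratio style of eccentricity for candidate blobs with OpenCV; the mask file name is a placeholder, and the acceptance thresholds discussed above would then be applied to the printed values.

```python
import cv2
import numpy as np

mask = cv2.imread("ball_blob_mask.png", cv2.IMREAD_GRAYSCALE)    # placeholder binary blob mask
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for c in contours:
    area = cv2.contourArea(c)
    perimeter = cv2.arcLength(c, True)
    if area == 0:
        continue
    # Degree of compaction C_d = P^2 / (4*pi*A); equals 1 for a perfect circle.
    compaction = perimeter**2 / (4 * np.pi * area)
    # Eccentricity as the ratio of the longest to the shortest axis of a fitted ellipse
    # (cv2.fitEllipse needs at least 5 contour points).
    if len(c) >= 5:
        (_, _), (axis1, axis2), _ = cv2.fitEllipse(c)
        eccentricity = max(axis1, axis2) / max(min(axis1, axis2), 1e-6)
    else:
        eccentricity = float("inf")
    print(compaction, eccentricity)
```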
Graph-based tracking
Some works explore graph-based multiple-hypothesis tracking to perform player tracking. In these cases, a graph is constructed that contains all the possible trajectories of players, modelling their positions along with their transitions between frames. The correct trajectory is found with the help of, e.g., a similarity measure, linear programming, or multi-commodity network flow, or the problem is modeled as a minimum edge cover problem. An example of graph tracking in consecutive frames is shown in Figure 15. The method of Figueroa et al. [2004] builds the graph in such a way that nodes represent blobs and edges represent the distance between these blobs. Tracking of each player is then performed by searching for the shortest path in the graph. However, occlusion is difficult to handle with this method. The authors of Pallavi et al. [2008] used dynamic programming to find the optimal trajectory of each player in the graph. The method proposed by Xing et al. [2011] builds an undirected graph to model the occlusion relationships between different players. In Chen et al. [2017], the method constructs a layered graph for the detected players, which includes all probable trajectories. Each layer corresponds to a frame and each node represents a player. Two nodes of adjacent layers are linked by an edge if their distance is less than a pre-defined threshold. Finally, the authors used the Viterbi algorithm in dynamic programming to extract the shortest path of the graph. Ball tracking with graphs was proposed in Maksai et al. [2015], where a ball graph is built to formulate a Mixed Integer Programming model, and each node is associated with a state, i.e., the location of the ball at a time instance. Table 7 shows a review of node and edge representations, along with the tracking methods defined on the graph.
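A toy sketch of such a layered frame-by-frame graph, built with networkx and resolved by a shortest-path search, is given below; the detections, the distance threshold, and the use of plain Dijkstra instead of the Viterbi or integer-programming formulations of the cited works are simplifying assumptions.

```python
import networkx as nx

# detections[t] = list of (x, y) player detections in frame t (placeholder values).
detections = [[(10, 20), (50, 60)], [(12, 22), (49, 58)], [(15, 25), (47, 55)]]
MAX_DIST = 10.0

G = nx.DiGraph()
for t, frame_dets in enumerate(detections):
    for i, (x, y) in enumerate(frame_dets):
        G.add_node((t, i), pos=(x, y))

# Connect detections in adjacent frames whose distance is below the threshold;
# the edge weight is the displacement, so the shortest path is the smoothest track.
for t in range(len(detections) - 1):
    for i, (x1, y1) in enumerate(detections[t]):
        for j, (x2, y2) in enumerate(detections[t + 1]):
            d = ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
            if d < MAX_DIST:
                G.add_edge((t, i), (t + 1, j), weight=d)

track = nx.shortest_path(G, source=(0, 0), target=(2, 0), weight="weight")
print(track)   # one player's trajectory through the layered graph
```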
Data association methods
Simulation-based approaches, including Monte Carlo methods and joint probabilistic data association, are usually used for solving multitarget tracking problems, as these methods perform well for nonlinear and non-Gaussian data models.
Markov Chain Monte Carlo data association (MCMC): Septier et al. [2011] compared several MCMC methods: (1) the sequential importance resampling algorithm, (2) resample-move, and (3) the MCMC-based particle method. The differences between these methods stem from how they sample from the posterior using previous samples. Simulations show that the MCMC-based particle approach exhibits better tracking performance and thus represents an interesting alternative to sequential Monte Carlo methods. The authors of Liu et al. [2009] designed a Metropolis-Hastings sampler for MCMC, which increased the efficiency of the method.
Joint probabilistic data association (JPDA): The JPDA method can be used when the mapping from tracks to observations is not clear, and we do not know which observations are valid and which are just noise. In these cases, JPDA implements a probabilistic assignment. Abbott and Williams [2009] used JPDA to assign the probability of association between each observation and each track.
Deep learning-based tracking
Despite the effectiveness of traditional methods, they fail in many real-world scenarios, e.g., under occlusion or when processing videos from several viewpoints. Deep learning models, on the other hand, benefit from the learning ability of neural networks on large and complex datasets, and they eliminate the need for manual feature extraction by a human expert. Therefore, deep learning-based trackers have recently been getting much attention in computer vision. These trackers are categorized into online and offline methods: online trackers are trained from scratch at test time and do not take advantage of already annotated videos for improving performance, while offline trackers are trained on offline data.
Several recent studies have attempted to assess the performance of deep learning methods in sports analytics. The core idea of all these methods is to use a CNN; however, each study proposes a different network structure and training method to increase performance. In this section, we summarize the state-of-the-art networks and their application in sports analytics; Table 8 gives an overview. After the classification of the players and the ball, the metric called Intersection over Union (IOU) is used to track them. IOU is the ratio of the intersection to the union of the ground-truth bounding box from the previous frame (BB_A) and the predicted bounding box in the current frame (BB_B): IOU = (BB_A ∩ BB_B)/(BB_A ∪ BB_B), where ∩ and ∪ are the intersection and union in terms of the number of pixels. Thus, if the intersection is non-zero between consecutive frames, the player or ball can be traced.
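A minimal sketch of the IOU-based association step described above is given below; the box format (x1, y1, x2, y2), the greedy matching strategy, and the zero-overlap match threshold are illustrative assumptions rather than the exact procedure of any cited work.

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def associate(prev_boxes, curr_boxes, thr=0.0):
    """Greedy frame-to-frame association: each previous box keeps the identity of the
    current box with the highest IOU, provided the overlap exceeds `thr`."""
    matches = {}
    for i, pb in enumerate(prev_boxes):
        scores = [iou(pb, cb) for cb in curr_boxes]
        if scores and max(scores) > thr:
            matches[i] = scores.index(max(scores))
    return matches                  # {index in previous frame: index in current frame}

# Usage:
prev_frame = [(10, 10, 50, 90)]                 # player box at frame t-1
curr_frame = [(12, 12, 52, 92), (200, 40, 240, 120)]
print(associate(prev_frame, curr_frame))        # {0: 0}
```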
Cascade-CNN: This is a deep learning architecture consisting of multiple CNNs. The network is trained on labeled image patches and classifies the detected objects into two classes, player and non-player. Football and basketball player tracking using this method is suggested by Lu et al. [2017]. The pipeline illustrated in Figure 17 shows the classification process and a dilation strategy for accurate player tracking with the help of the IOU metric.

SiamCNN: This network consists of two sister networks with the same structure, parameters, and weights. The structure resembles VGG-M, except that the size of each layer is adjusted. The inputs of SiamCNN are three-channel (RGB) image patches from the frames, and the output is the Euclidean distance between the features of the inputs. Long [2019] used this network to extract players' characteristics along their trajectories and then compared the similarities between search areas and a target template, so that players can be tracked. The structure of this network is given in Figure 19.

Figure 19: SiamCNN network structure for player tracking
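To illustrate the twin-network idea, the following PyTorch sketch passes two image patches through a single shared trunk and returns the Euclidean distance between their embeddings; the toy convolutional trunk, patch size, and embedding dimension are illustrative and do not reproduce the VGG-M-like architecture of the cited work.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiamCNNSketch(nn.Module):
    """Two inputs pass through the *same* convolutional trunk (shared weights);
    the output is the Euclidean distance between their embeddings."""

    def __init__(self, embedding_dim=128):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global pooling -> (N, 32, 1, 1)
            nn.Flatten(),
            nn.Linear(32, embedding_dim),
        )

    def forward(self, patch_a, patch_b):
        emb_a = self.trunk(patch_a)           # the same weights process both branches
        emb_b = self.trunk(patch_b)
        return F.pairwise_distance(emb_a, emb_b)

# Usage: compare a target template against a candidate patch from the search area.
model = SiamCNNSketch()
template = torch.randn(1, 3, 64, 64)          # RGB patch of the tracked player
candidate = torch.randn(1, 3, 64, 64)         # RGB patch cut from the search area
distance = model(template, candidate)         # small distance -> likely the same player
```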
Evaluation and model selection
If a clean set of tracking information is not provided to a sports analyzer who is developing a quantitative model, his/her core task is to choose the most suitable method for tracking players and the ball and to construct the required dataset for further analysis. In the detection and tracking domains, model selection, i.e., DL-based or traditional, heavily depends on the task at hand. Selection is difficult if one merely reviews the reported performance metrics of the methods, as tracking performance depends on the specific task and the quality of the videos. However, there are some concrete criteria in this domain that can help the analyst rapidly choose the desired tracking method. Figure 20 compares the number of publications in the detection and tracking domains categorized by team sport. Note that 74% of methods are applied to football videos, whereas deep learning methods (i.e., CNN, VGG, Cascade, Siam, YOLO) cover only 20% of all publications. In this section, we review the benefits and drawbacks of each method and compare them in terms of their estimated costs.
Deep learning-based vs. traditional methods
In general, traditional methods are domain-specific: the analyzer must explicitly describe and select the features (e.g., edges, color, points) of the ball, the football or basketball player, the background, etc. The performance of traditional models therefore depends on the analyzer's expertise and on how accurately the features are defined. DL methods, on the other hand, demonstrate superior flexibility and automation in detection and tracking tasks, as they can be trained offline on a huge dataset and then automatically extract features of any object type. In this case, the need for manual feature extraction is eliminated, and consequently DL requires less expertise from the analyzer. From another point of view, DL models are more of a black box for detection tasks. In contrast, traditional methods give the analyzer more visibility and interpretability regarding how the developed algorithm will perform in different situations, such as different sports, lighting conditions, cameras, and video quality. Traditional models can thus offer a better opportunity to improve tracker accuracy, since the system components are visible. Also, in case of failure, system debugging is more straightforward in traditional models than in DL-based ones.
In addition to the pros and cons listed in this survey for each method, a few criteria can help sports analysts choose their desired method. Table 9 lists the criteria that can guide analysts in choosing between DL-based and traditional detection and tracking methods.
Cost analysis
The cost of a method is one of the most important characteristics in model selection for researchers and analysts: they are looking for a method with maximum accuracy at a reasonable cost. Here we give an insight into the cost of the state-of-the-art methods, both for infrastructure and computation, and classify them into three categories: high, medium, and low. The classification is based on the following considerations. On the computational side, deep learning methods, which require GPUs, are more expensive than traditional methods that run on CPUs only. From an infrastructure perspective, different methods require different camera settings to record the sports video: methods that require a set of moving or stationary cameras to be installed in the arena are more expensive than methods that can trace players and the ball on broadcast video. Table 10 shows the cost approximation of all methods along with their most significant limitations.
Conclusion and future research directions
As the large number of papers cited in this survey shows, computer vision researchers are intensively investigating robust methods of optical tracking in sports. In this survey, we have categorized the literature according to the applied methods and the video type they build on. Moreover, we elaborated on the detection phase as a necessary preprocessing step for tracking with both conventional and deep learning methods. We believe that this survey can significantly help quantitative analysts in sports to choose the most accurate yet cost-effective tracking method suitable for their analysis. Furthermore, combinations of traditional and deep learning methods are rarely seen in the literature. Traditional models are time-consuming and require domain expertise due to manual feature extraction tasks, while deep learning models are quite expensive to run in terms of computing resources. As possible future work, research may aim to combine these methods to increase the performance of tracking systems, along with robust quantitative evaluation of the games. Another avenue for future work might be to minimize the computational costs of tracking systems with the aid of sophisticated data processing methods. We hope that this survey gives sports analytics researchers the insight needed to recognize the gaps in state-of-the-art methods and to come up with novel solutions for tracking and quantitative analysis. | 2022-03-08T14:09:43.120Z | 2022-03-01T00:00:00.000 | {
"year": 2022,
"sha1": "0461d422894df339435b32a7d1476e77ac87fe98",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "03560b22ace66c7047081c67e4482a6cb2c67262",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
86752428 | pes2o/s2orc | v3-fos-license | Descriptions of Two New Species of the Platylomia spinosa Species Group (Hemiptera: Cicadidae)
Two new species, Platylomia constanti Lee, sp. nov. and Platylomia maxima Lee, sp. nov., of the Platylomia spinosa species group are described from Luzon, Philippines. These new species are distinguished from their congeners by the combination of the following characters: larger body, long operculum, wide abdomen, presence of infuscation only at bases of apical cells 2 and 3 on the fore wing, blackish ground color of the dorsal and ventral parts of the body, and the different shape of the uncal lobe. The 2 new species are different from each other not only in collection altitude and season but also in morphology such as the shape of the dorsal beak, shape of the male operculum, shapes of some markings on the body, and the length of the basal vein of the apical cell 1.
The P. spinosa species group is characterized by the head wider than the mesonotum, the male opercula widely separated from each other, the male abdomen more than twice as long as the maximum width of the mesonotum, the sternite VII narrowed posteriorly, and the uncus with a distinct ridge from one uncal lobe to the other, encircling the area where the uncal lobes meet medially (Beuk 1999). This paper presents the descriptions of 2 new species of the Platylomia spinosa species group from Luzon, Philippines, which were found in the collections of the Institut royal des Sciences naturelles de Belgique, Brussels, Belgium (IRSN). Morphological measurements were made with a Mitutoyo vernier caliper. Morphological terminology follows that of Moulds (2005).
Diagnosis. This new species is closely allied to Platylomia meyeri from northern Sulawesi in having a long operculum, a wide abdomen, and the infuscation only at the bases of apical cells 2 and 3 on the fore wing, but it is distinguished by the blackish coloration of the dorsal and ventral parts of the body and by the differently shaped uncal lobe. This species is also allied to Platylomia nigra in having a long operculum, a black ground color of the dorsal and ventral parts of the abdomen, and a similarly shaped uncal lobe, but it is distinguished by the much larger body, the wide abdomen, and the lack of a series of tiny spots on the subapical margin of the fore wing.
Description of Male (Fig. 1A, C). Ratio of body length to head width about 3.63. Head greenish ochraceous to ochraceous with the following black to fuscous markings: a median large spot enclosing ocelli, of which anterior end reaching frontoclypeal suture and anterolateral angles extending to posterior margins of supra-antennal plates; and a pair of irregular-shaped large spots on both sides of the median spot, which are slightly smaller than the median spot, laterally reaching compound eyes. Distance between lateral ocelli and compound eyes about as wide as twice distance between two lateral ocelli. Postclypeus much swollen. Antennae fuscous. Ventral part of head ochraceous with black to fuscous markings. Postclypeus with a large median spot on posterior 2/3 of postclypeus and fasciae along ridges between transverse grooves. Anteclypeus with a large irregular but symmetrical marking. Rostrum brown to ochraceous with fuscous apical part; not reaching posterior margin of hind coxae. Lorum mostly fuscous except ochraceous inner margin. Gena with a broad transverse fascia between antenna and compound eye.
Pronotum greenish ochraceous to ochraceous. Inner area of pronotum with following black to fuscous markings: a pair of central longitudinal fasciae, extending from anterior margin of pronotum to pronotal collar and dilated both anteriorly and posteriorly, which are discontinued before posterior end; a pair of short, oblique branches from middle of the central longitudinal fasciae along paramedian fissures; a pair of obliquely longitudinal fasciae between median parts of paramedian fissures and posterior ends of lateral fissures, which are connected to the short branches; a pair of fasciae along lateral fissures; a pair of curved fasciae along lateral margin of inner area; and other small irregular spots. Pronotal collar without distinct markings but narrowly margined with fuscous except anterolateral part. Anterolateral pronotal collar slightly developed and dentate.
Mesonotum black with following ochraceous markings: a pair of delicate longitudinal fasciae along inner margins of submedian sigilla; a pair of longitudinal fasciae on lateral sides of parapsidal sutures, which distinctly extending to posterior margin of mesonotum, divided into two branches; a pair of fasciae along lateral margins of mesonotum; a pair of small spots on anterior margins of submedian sigilla; and a pair of small spots on anterior submargins of lateral sigilla. Anterior margins of parapsidal sutures and posterior medial margin of mesonotum densely covered with white pollinosity. Cruciform elevation greenish ochraceous with black to fuscous anterior subapical parts and posterior margin. Ventral part of thorax greenish ochraceous.
Fore leg mostly black to fuscous with an ochraceous marking on coxa, trochanter, femur, and pretarsal claw. Fore femur with a small subapical spine as well as primary and secondary spines. Mid and hind coxae ochraceous with a large fuscous paramedian spot. Mid and hind trochanters mostly black. Mid and hind femora ochraceous to brown with a fascia along ventral side of femur. Fascia on mid femur discontinued in middle. Mid tibia and tarsus black. Hind tibia and tarsus mostly ochraceous with irregular fuscous markings. Mid and hind pretarsal claw fuscous apically.
Wings hyaline, fore wing with an infuscation on radial and radiomedial crossveins. Venation dark brown in both fore wing and hind wing. Basal cell slightly tinged with ochraceous and partly with light jade green. Basal membrane light jade green. Hind wing jugum whitish but partly light jade green at base. Basal vein of apical cell 1 longer than 1/3 of longitudinal vein of apical cell 1.
Operculum (Fig. 2C) ochraceous, partly but irregularly tinged with green, with very small fuscous areas on anterior margin and lateral base; long, passing posterior margin of sternite V; slightly concave at middle of both inner and lateral margins, with round apex. Lateral margin of operculum weakly sinuate at base. Two opercula widely apart from each other, of which gap about as wide as operculum.
Abdomen obconical, considerably longer than distance from head to cruciform elevation; black with transversely arranged ochraceous irregular markings on tergites 2-8; irregularly covered with white pollinosity on tergites 2-8; densely covered with silvery hairs. Posterior margin of tergite 3 wider than anterior margin of mesonotum. Timbal cover fuscous; quarter round. Timbal concealed with timbal cover in dorsal view. Ventral part of abdomen fuscous except ochraceous sternite VIII.
Male genitalia (Fig. 2A, B). Pygofer nearly spherical in ventral view. Upper lobes of pygofer absent. Dorsal beak long and slender with acute tip. Uncal lobe broad with medial margin nearly straight, distal margin nearly straight and slightly oblique toward inner side, and lateral margin slightly convex. Aedeagus thin, protruding from venter of uncus. Basal lobe of pygofer narrow in ventral view, rounded in lateral view.
Platylomia maxima Lee, sp. nov. (Figs. 1B, D, 3)

Type Material. Holotype: male (Fig. 1B, D).

Etymology. The specific name, maxima, means 'largest', in reference to the fact that this species has a larger body size than any of the known species of the Platylomia spinosa species group.

Measurements of Types (in mm, 1 male). Length of body: 61.4; length of fore wing: 65.9; width of fore wing: 20.5; length of head: 6.7; width of head including eyes: 16.7; width of pronotum: 18.5; width of mesonotum: 15.8; wing span: 147.0.
Diagnosis. This species is closely allied to P. constanti but is distinguished by the triangular dorsal beak (long and slender in P. constanti), the different shape of the male operculum, the shapes of some markings on the body, and the very short basal vein of apical cell 1 (Fig. 1B). This species occurs in a low-altitude area (collected at 50 m, not as high as 1690 m as in P. constanti) in the wet season (collected in early Sep, not in mid Apr (dry season) as in P. constanti).
Description of Male (Fig. 1B, D). Ratio of body length to head width about 3.68. Head ochraceous with the following black to fuscous markings: a median large spot enclosing ocelli, of which anterior end reaching frontoclypeal suture and anterolateral angles extending to posterior margins of supra-antennal plates; and a pair of irregular-shaped large spots on both sides of the median spot, which are slightly smaller than the median spot, laterally reaching compound eyes. Distance between lateral ocelli and compound eyes about as wide as twice distance between 2 lateral ocelli. Postclypeus much swollen. Antennae fuscous. Ventral part of head ochraceous with black to fuscous markings. Postclypeus with a large median spot on posterior 2/3 of postclypeus and fasciae along ridges between transverse grooves. Anteclypeus with a large irregular but symmetrical marking. Rostrum fuscous except about anterior 1/4 brown to ochraceous medially; not reaching posterior margin of hind coxae. Lorum mostly fuscous except ochraceous inner margin. Gena with a broad transverse fascia between antenna and compound eye. Pronotum ochraceous except lateral corner tinged with green. Inner area of pronotum with following black to fuscous markings: a pair of central longitudinal fasciae, extending from anterior margin of pronotum to pronotal collar and dilated both anteriorly and posteriorly; a pair of short, oblique branches from middle of the central longitudinal fasciae along paramedian fissures; a pair of obliquely longitudinal fasciae between median parts of paramedian fissures and posterior ends of lateral fissures, which are connected to the short branches; a pair of fasciae along lateral fissures; a pair of curved fasciae along lateral margin of inner area; and other small irregular spots. Pronotal collar without distinct markings but narrowly margined with fuscous except lateral part. Anterolateral pronotal collar slightly developed and dentate.
Mesonotum black with following ochraceous markings: a pair of delicate longitudinal fasciae along inner margins of submedian sigilla; a pair of longitudinal fasciae on lateral sides of parapsidal sutures, which indistinctly extending to posterior margin of mesonotum, divided into two branches; a pair of fasciae along lateral margins of mesonotum; a pair of small spots on anterior margins of submedian sigilla; and a pair of small spots on anterior submargins of lateral sigilla. Posterior medial margin of mesonotum sparsely covered with white pollinosity. Cruciform elevation ochraceous with black to fuscous anterior subapical parts and posterior margin. Ventral part of thorax ochraceous.
Fore leg mostly black to fuscous with an ochraceous marking on coxa, trochanter, femur, and pretarsal claw. Fore femur with a small subapical spine as well as primary and secondary spines. Mid and hind coxae ochraceous with a large fuscous paramedian spot. Mid and hind trochanters mostly black. Mid and hind femora ochraceous to brown with a fascia along ventral side of femur. Mid tibia and tarsus black. Hind tibia and tarsus mostly ochraceous with irregular fuscous markings. Mid and hind pretarsal claw fuscous apically.
Wings hyaline, fore wing with an infuscation on radial and radiomedial crossveins. Venation dark brown in both fore wing and hind wing. Basal cell slightly tinged with ochraceous and partly with light jade green. Basal membrane smoky gray. Hind wing jugum whitish but partly light jade green and partly smoky gray. Basal vein of apical cell 1 very short, about 1/4 of longitudinal vein of apical cell 1.
Operculum (Fig. 3C) ochraceous, partly but irregularly tinged with green, with very small fuscous area on anterior margin and lateral base and narrowly margined with fuscous on about anterior 1/3 of lateral margin; long, more slender than in P. constanti, passing posterior margin of sternite V; slightly concave at middle of both inner and lateral margins, with roundish apex, more narrowly rounded than in P. constanti. Lateral margin of operculum weakly sinuate at base. Two opercula widely apart from each other, of which gap about as wide as operculum.
Abdomen obconical, considerably longer than distance from head to cruciform elevation; black with transversely arranged ochraceous irregular markings on tergites 2-6; irregularly covered with white pollinosity on tergites 3-5; very sparsely covered with silvery hairs. Posterior margin of tergite 3 wider than anterior margin of mesonotum. Timbal cover fuscous; quarter round. Timbal concealed with timbal cover in dorsal view. Ventral part of abdomen fuscous except ochraceous posterior margin of sternite VII and mostly ochraceous sternite VIII. Sternite VIII with a delicate median longitudinal fuscous fascia and margined with fuscous.
Male genitalia (Fig. 3A, B). Pygofer nearly spherical in ventral view. Upper lobes of pygofer absent. Dorsal beak short, triangular, nearly shaped as a regular triangle. Uncal lobe broad with medial margin nearly straight, distal margin nearly straight and slightly oblique toward inner side, and lateral margin convex. Aedeagus thin, protruding from venter of uncus. Basal lobe of pygofer narrow in ventral view, rounded in lateral view. | 2019-03-28T13:41:55.583Z | 2009-07-01T00:00:00.000 | {
"year": 2009,
"sha1": "c33b06d076ddae44228f8693f5d0ac8fe205995e",
"oa_license": "CCBYNC",
"oa_url": "https://bioone.org/journals/Florida-Entomologist/volume-92/issue-2/024.092.0219/Descriptions-of-Two-New-Species-of-the-iPlatylomia-spinosa-i/10.1653/024.092.0219.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "5138c2ef981e8fbb0a514aa6a6763fe82e76a41c",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
232114965 | pes2o/s2orc | v3-fos-license | Early versus deferred anti-SARS-CoV-2 convalescent plasma in patients admitted for COVID-19: A randomized phase II clinical trial
Background Convalescent plasma (CP), despite limited evidence on its efficacy, is being widely used as a compassionate therapy for hospitalized patients with COVID-19. We aimed to evaluate the efficacy and safety of early CP therapy in COVID-19 progression. Methods and findings The study was an open-label, single-center randomized clinical trial performed in an academic medical center in Santiago, Chile, from May 10, 2020, to July 18, 2020, with final follow-up until August 17, 2020. The trial included patients hospitalized within the first 7 days of COVID-19 symptom onset, presenting risk factors for illness progression and not on mechanical ventilation. The intervention consisted of immediate CP (early plasma group) versus no CP unless developing prespecified criteria of deterioration (deferred plasma group). Additional standard treatment was allowed in both arms. The primary outcome was a composite of mechanical ventilation, hospitalization for >14 days, or death. The key secondary outcomes included time to respiratory failure, days of mechanical ventilation, hospital length of stay, mortality at 30 days, and SARS-CoV-2 real-time PCR clearance rate. Of 58 randomized patients (mean age, 65.8 years; 50% male), 57 (98.3%) completed the trial. A total of 13 (43.3%) participants from the deferred group received plasma based on clinical aggravation. We failed to find benefit in the primary outcome (32.1% versus 33.3%, odds ratio [OR] 0.95, 95% CI 0.32–2.84, p > 0.999) in the early versus deferred CP group. The in-hospital mortality rate was 17.9% versus 6.7% (OR 3.04, 95% CI 0.54–17.17 p = 0.246), mechanical ventilation 17.9% versus 6.7% (OR 3.04, 95% CI 0.54–17.17, p = 0.246), and prolonged hospitalization 21.4% versus 30.0% (OR 0.64, 95% CI, 0.19–2.10, p = 0.554) in the early versus deferred CP group, respectively. The viral clearance rate on day 3 (26% versus 8%, p = 0.204) and day 7 (38% versus 19%, p = 0.374) did not differ between groups. Two patients experienced serious adverse events within 6 hours after plasma transfusion. The main limitation of this study is the lack of statistical power to detect a smaller but clinically relevant therapeutic effect of CP, as well as not having confirmed neutralizing antibodies in donor before plasma infusion. Conclusions In the present study, we failed to find evidence of benefit in mortality, length of hospitalization, or mechanical ventilation requirement by immediate addition of CP therapy in the early stages of COVID-19 compared to its use only in case of patient deterioration. Trial registration NCT04375098.
Author summary
Why was this study done?
• The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic has become a matter of worldwide concern, and except for corticosteroids, no other validated treatment against SARS-CoV-2 has been found so far.
• Plasma from convalescent patients containing antibodies against the virus is being widely used as a treatment alternative against this virus, but few randomized clinical trials have been carried out to show any clinical benefit for patients with COVID-19.
What did the researchers do and find?
• We conducted a randomized clinical trial. Fifty-eight hospitalized patients in the early stages of COVID-19 (�7 days of symptoms) and with a high risk of progression into respiratory failure were recruited.
Introduction
The SARS-CoV-2 pandemic resulted in over 24 million infections and 833,000 deaths by August 29, 2020 [1]. During the early months of the pandemic, case series and cohorts from China and the United States analyzed demographic and outcome data for hundreds of inpatients admitted for COVID-19. These showed an intensive care unit (ICU) admission rate between 5% and 26%, and overall mortality from 1.4% to 28.3% [2,3]. Older age, male sex, and preexisting hypertension and/or diabetes rapidly stood out among risk factors correlating with case fatality rate, in the first large case series of sequentially hospitalized patients with confirmed COVID-19 in the US [4]. The scientific community is desperate to find effective treatments and immunization against SARS-CoV-2, and so far, dexamethasone is the only drug that has shown a survival benefit, among those patients who are receiving either invasive mechanical ventilation or oxygen alone at randomization [5]. The antiviral remdesivir has shown a shorter time to recovery in adults hospitalized with COVID-19 and with evidence of lower respiratory tract infection, but its effect on overall mortality remains controversial [6]. An additional promising therapeutic alternative is immune plasma from convalescent patients [7]. This strategy has been used with some success in other viral diseases with significant lethality such as hantavirus, influenza, SARS-CoV, and MERS-CoV infections [8][9][10][11]. The use of convalescent plasma for COVID-19 was reported early in this pandemic. The initial case series studies suggested faster clinical recovery, viral clearance, and radiological improvement, although the lack of a control group limited the accurate interpretation of these results [12][13][14]. Subsequently, a preliminary report of a matched controlled study showed that convalescent plasma improved survival for non-intubated patients [15]. However, the first 2 randomized controlled trials showed no clear clinical benefit, and, furthermore, 1 of these trials was stopped early due to concerns based on finding high preexisting SARS-CoV-2 neutralizing antibody (NAb) titers in patients before transfusion [16,17].
Considering that COVID-19 likely involves at least 2 phases-an early phase in which viral replication is a component of tissue injury and a later phase in which a dysregulated and pro-inflammatory immune response leads to the damage-the most useful therapeutic window for convalescent plasma administration is currently unknown [18]. Indeed, the lack of efficacy in previous studies has been attributed to a late timing of plasma administration in the disease's course. This hypothesis is consistent with the recent finding of lower mortality for patients receiving convalescent plasma within the first 3 days after COVID-19 diagnosis in a large uncontrolled study [19].
The objective of this study was to assess the efficacy and safety of convalescent plasma therapy in reducing disease progression, complications, and death in patients in the early phase of COVID-19.
Methods
This study consisted of a randomized, controlled, open-label phase II trial done in a single Chilean academic medical center in Santiago, Chile. Patients were randomized from May 10, 2020, to July 18, 2020, with follow-up until August 17, 2020.
Inclusion criterion number 2 initially considered only PCR-confirmed SARS-CoV-2 infections. Based on the 24- to 48-hour delays for PCR results at the peak of the pandemic, this criterion was modified after trial initiation to allow the inclusion of patients with pending PCR test results. All the patients enrolled with pending PCR results (n = 2) subsequently had confirmed real-time PCR SARS-CoV-2 infection.
Convalescent plasma donation protocol
Plasma was obtained from volunteer participants who had recovered from SARS-CoV-2 infection, having been asymptomatic for at least 28 days, with a negative SARS-CoV-2 real-time PCR both in nasopharyngeal swab and in plasma, and anti-SARS-CoV-2 (S1) IgG titer � 1:400. Donors were males, females who had never been pregnant, or females who had been tested for anti-HLA antibodies. Most of the donors (91%) had a history of symptomatic COVID-19, of which 5% had been hospitalized, but none with severe disease. Plasma collection occurred between 33 and 73 days after symptoms resolved (mean of 44 days). Donor plasma was tested for standard infectious diseases before administration, and extracted plasma was immediately frozen at −20˚C according to standard national safety measures [21].
Given that, for the Euroimmun SARS-CoV-2 IgG ELISA, a positive result (defined as a ratio of sample optical density [OD] to calibrator OD ≥ 1.1) is determined, as per the provider, at a basal dilution of 1:100, we decided to further semi-quantify the IgG in donor plasma with an additional four-fold dilution, and we established the 1:400 cutoff as the requirement for our plasma donors (again considering an OD ratio ≥ 1.1 as a positive result at that dilution).
Randomization and intervention
Eligible patients were randomly assigned via computer-generated numbering by a block randomization sequence into 2 groups: early or deferred plasma transfusion. Randomization was done by an independent researcher, and the sequence was concealed from study investigators.
The early plasma group received the first plasma unit at enrollment. The deferred plasma group received convalescent plasma only if a prespecified worsening respiratory function criterion was met during hospitalization (PaO2/FiO2 < 200) or if the patient still required hospitalization for symptomatic COVID-19 >7 days after enrollment.
Transfusions consisted of a total of 400 ml of ABO compatible convalescent plasma, infused as two 200-ml units, each separated by 24 hours. In both groups, cointerventions, including antibiotics, antivirals, heparin thromboprophylaxis, and immunomodulators, were allowed based on the hospital protocols.
Outcomes
The primary outcome was a composite of mechanical ventilation, hospitalization > 14 days, or in-hospital death.
Secondary outcomes included the following: days of mechanical ventilation, days of high-flow nasal cannula (HFNC) use, days of oxygen requirement, time to development of respiratory failure (PaO2/FiO2 < 200), severity of multiple organ dysfunction (by Sequential Organ Failure Assessment [SOFA] score) at days 3 and 7, days in the ICU or intermediate care unit, hospital length of stay, and mortality at 30 days. The kinetics of inflammatory biomarkers, including total lymphocyte count, C-reactive protein (CRP), procalcitonin, LDH, D-dimer, ferritin, IL-6, pro-B-type natriuretic peptide (pro-BNP), and troponin T, were determined on days 0, 3, and 7, and SARS-CoV-2 real-time PCR in nasopharyngeal swab on days 3 and 7.
Radiological outcomes included the comparison of infiltrate progression on chest CT scans at enrollment and day 5, based on COVID-19 pneumonia severity scores [22][23][24][25]. For the combined analysis with portable chest X-rays, a blinded thoracic radiologist expert categorized images as "progression" versus "stable or improved." Also, preplanned analyses of NAb titers and anti-SARS-CoV-2 IgG titers were conducted in participants from the early plasma group at baseline and in the subset of participants from the deferred plasma group who had not yet received plasma on days 0, 3, and 7.
Analysis of the primary outcome and clinical secondary outcomes was performed by intention to treat (ITT). Laboratory and radiology secondary outcomes were analyzed by modified ITT, excluding a patient who withdrew consent before any intervention. Safety outcomes were evaluated in all participants.
Anti-SARS-CoV-2 IgG ELISA
For specific IgG enzyme-linked immunosorbent assays (ELISA), we used the CE-marked commercial Euroimmun kit (Lübeck, Germany, # EI 2606-9601 G), which uses the S1 domain of the SARS-CoV-2 spike protein as antigen. Fresh or thawed serum samples were first diluted at 1:100, immunoreactivity was measured as the OD at 450 nm, and results were expressed according to the manufacturer, with a positive result defined as an OD ratio (patient/calibrator) ≥ 1.1. Additionally, 2-fold serial dilutions were done up to 1:6,400, and the endpoint dilution for each sample was determined as the final dilution at which the OD ratio (patient/calibrator) was ≥ 1.1. Seroconversion was defined as being seronegative at baseline and seropositive after 3 or 7 days, or as a 4-fold increase in endpoint dilution titer from baseline.
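For illustration, the endpoint-dilution and seroconversion rules described above can be expressed as a short script; the dilution series and OD ratios below are invented example values, not trial data.

```python
DILUTIONS = [100, 200, 400, 800, 1600, 3200, 6400]   # 2-fold series starting at 1:100
CUTOFF = 1.1                                          # OD ratio (sample/calibrator) for positivity

def endpoint_titer(od_ratios):
    """Reciprocal of the last dilution whose OD ratio is still >= 1.1 (0 if seronegative).
    `od_ratios` maps reciprocal dilution -> OD ratio."""
    positive = [d for d in DILUTIONS if od_ratios.get(d, 0.0) >= CUTOFF]
    return max(positive) if positive else 0

def seroconverted(baseline_titer, followup_titer):
    """Seroconversion: negative at baseline and positive later, or a >= 4-fold rise in titer."""
    if baseline_titer == 0:
        return followup_titer > 0
    return followup_titer >= 4 * baseline_titer

# Illustrative example (values are not trial data):
day0 = {100: 0.6}                                     # below cutoff at 1:100 -> seronegative
day3 = {100: 3.2, 200: 2.1, 400: 1.3, 800: 0.9}       # endpoint titer 1:400
print(endpoint_titer(day0), endpoint_titer(day3))     # 0 400
print(seroconverted(endpoint_titer(day0), endpoint_titer(day3)))  # True
```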
NAb titer assay
Anti-SARS-CoV-2 NAbs were measured in serum samples using an HIV-1 backbone expressing firefly luciferase as a reporter gene and pseudotyped with the SARS-CoV-2 spike glycoprotein [26,27]. Samples with a neutralizing activity of at least 50% at a 1:160 dilution were considered positive and were used to perform titration curves and 50% inhibitory dose (ID50) neutralization titer calculations [28]. The ID50 was determined using a 4-parameter nonlinear regression fit to the percent neutralization, calculated from the difference in average relative light units (RLUs) between test wells and pseudotyped virus controls. In order to perform the ID50 calculations, the lack-of-fit test had to have a p-value > 0.1.
The top values were constrained to 100, and the bottom values were set to 0.
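As an illustration of this calculation, the following sketch fits a logistic neutralization curve with SciPy; with the top fixed at 100 and the bottom at 0, the 4-parameter fit reduces to two free parameters (ID50 and slope). The titration values are invented for illustration, and the exact software used by the authors is not stated.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic_neutralization(log_dilution, log_id50, slope):
    """Percent neutralization vs. log10 reciprocal dilution, with top fixed at 100 and bottom at 0."""
    return 100.0 / (1.0 + 10.0 ** (slope * (log_dilution - log_id50)))

# Illustrative serum titration (reciprocal dilutions and % neutralization from RLU differences).
dilutions = np.array([160, 320, 640, 1280, 2560, 5120], dtype=float)
neutralization = np.array([92, 85, 70, 48, 25, 10], dtype=float)

params, _ = curve_fit(
    logistic_neutralization,
    np.log10(dilutions),
    neutralization,
    p0=[np.log10(1000), 1.0],
)
id50 = 10 ** params[0]
print(f"ID50 ≈ 1:{id50:.0f}")   # reciprocal dilution giving 50% neutralization
```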
Statistical analysis
Sample size was calculated a priori, with a power of 80% and a significance level of 5%, for an expected 54.8% of patients in the control group and 20% in the intervention group experiencing the composite primary outcome (absolute risk reduction of 35%), based on a previous report of convalescent plasma administration in the early stage of A(H1N1) influenza [29]. The final calculated sample size was 29 individuals per group (total n = 58).
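For reference, this sample-size arithmetic can be reproduced with the classic normal-approximation formula for comparing two proportions; the result appears consistent with the reported 29 patients per group, although the exact formula or software used by the authors is not stated.

```python
from math import sqrt
from scipy.stats import norm

def n_per_group(p_control, p_treatment, alpha=0.05, power=0.80):
    """Classic normal-approximation sample size per arm for comparing two proportions
    (two-sided test, no continuity correction)."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    p_bar = (p_control + p_treatment) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p_control * (1 - p_control) + p_treatment * (1 - p_treatment))) ** 2
    return num / (p_control - p_treatment) ** 2

print(round(n_per_group(0.548, 0.20), 1))   # ≈ 29.1 -> 29 per group, 58 in total
```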
The primary and secondary binary outcomes were assessed with Fisher's exact test, and odds ratios (ORs) are presented together with 95% CIs and p-values. Results of all main analyses are presented as crude analyses. In addition, we adjusted for age and SOFA score at enrollment, as fixed (individual-level) effects, using logistic regression. Numerical secondary outcomes were examined using generalized linear models with a log link function and gamma family. For those variables with a high number of zeros, we used a zero-inflated negative binomial model because it showed better goodness of fit than other zero-inflated models according to the Akaike information criterion. Treatment effect estimates, crude and adjusted by age and SOFA, are presented as exponentiated coefficients, i.e., ORs and incidence rate ratios (IRRs), respectively, with their corresponding 95% CIs. In those cases where asymptotic assumptions did not hold, crude estimates were analyzed with Fisher's exact test for categorical variables and the Wilcoxon rank-sum test for continuous variables. To test differences between Kaplan-Meier estimates in survival analysis, we used the log-rank test.
For paired CT scan score analysis, we used Wilcoxon matched-pairs signed-rank test.
For the primary endpoint, statistical significance was defined using a 2-sided significance level of α = 0.05. The statistical analysis of secondary endpoints should be considered exploratory only. The statistical analysis was performed by an investigator who was blind to the study group allocation. Analyses were done with R version 3.6.3, and figures with GraphPad Prism version 8.4.3 software.
CONSORT guidelines for reporting randomized controlled trials were followed [30]. The trial protocol (S1 Text) and CONSORT checklist (S1 Table) are included for reference.
Ethics
This study was approved by the institutional review board of the Pontificia Universidad Católica de Chile. Written informed consent was solicited from all patients or their legal representatives.
Study population
Of the 245 patients diagnosed with COVID-19 and evaluated for eligibility, a total of 58 patients were enrolled, and 57 (98.3%) completed the trial (1 patient withdrew consent). All patients were included in the ITT analysis (Fig 1). The mean age was 65.8 years (range: 27-92), and 50% were women. The median interval between symptom onset and randomization was 6 days (IQR 4-7). All patients had SARS-CoV-2 infection confirmed by real-time PCR in nasopharyngeal swab. Baseline characteristics of participants are described in Table 1.
All participants (n = 28) from the early plasma group received a first plasma unit on the day of enrollment, and 24 (86%) received a second unit 24 hours later. Reasons for not receiving the second unit were death (n = 2) or a serious adverse event (SAE) after the first plasma unit administration (n = 2).
A total of 13 participants (43.3%) from the deferred plasma group received plasma, at a median time of 3 days from enrollment (IQR 1-5), based on respiratory failure development (n = 12) or persistent symptomatic COVID-19 beyond 7 days after enrollment (n = 1).
Primary outcome
There was no significant difference between the early and deferred plasma groups in the composite primary outcome: 32.1% (9/28) in the early plasma group versus 33.3% (10/30) in the deferred plasma group (OR 0.95, 95% CI 0.32-2.84, p > 0.999).
Secondary outcomes
A total of 46.4% of early plasma group participants progressed to severe respiratory failure (PaO2/FiO2 < 200) compared to 40% of patients from the deferred plasma group (OR 1.30, 95% CI 0.48-3.56), at a median time of 2.0 and 2.5 days from enrollment, respectively. No significant differences were noted in any of the other clinical secondary outcomes (Table 2). In the adjusted models, the total number of days on mechanical ventilation was higher in the early plasma than in the deferred plasma group (IRR 4.78, 95% CI 2.20-10.40). Time to death and time to severe respiratory failure did not differ between study groups (Fig 2). No significant differences were found for CRP, IL-6, ferritin, LDH, D-dimer, pro-BNP, troponin T, procalcitonin, and lymphocyte count levels on days 3 and 7 between study groups (S2 Table). Similarly, the rate of SARS-CoV-2-negative real-time PCR in nasopharyngeal swabs did not differ between the early and deferred plasma groups on day 3 (26% versus 8%, p = 0.204) or on day 7 (38% versus 19%, p = 0.374) (Fig 3A). As a post hoc analysis, we determined the changes in SARS-CoV-2 PCR cycle thresholds for the early plasma group and the subset of patients from the deferred plasma group who did not receive plasma, and we also did not find significant differences (Fig 3B).
The progression in the COVID-19 pneumonia (chest CT) severity scores from baseline to day 5 was higher in the deferred than in the early plasma group (S1 Fig). However, when the analysis also included the patients who had a chest X-ray instead of CT on the same scheduled days, the proportion of participants with progression in lung infiltrates did not differ between groups (OR 1.3, 95% CI 0.41-3.89) (S3 Table).
Immune response subgroup analysis
From a total of 232 potential plasma donors with baseline positive SARS-CoV-2 IgG detection, 129 candidates (55.6%) achieved the further positive cutoff at the 1:400 dilution. The median SARS-CoV-2 IgG OD ratio at the standard basal dilution (1:100) for all donors (n = 41) whose plasma was administered to the patients in this clinical trial was 5.73 (IQR 4.73-6.51). Additionally, in 18 of the 41 (44%) plasma donors, the virus neutralizing capacity was measured, and the median titer of NAb ID50 was 449 (range: 147-5,610). The baseline SARS-CoV-2 IgG ratio in all donors (n = 28) whose plasma was given to the early plasma group patients, versus IgG ratio in all donors (n = 13) whose plasma was administered to the deferred plasma group patients, was not different (median IgG OD ratio-at standard basal 1:100 dilution-of 5.77 and 5.73, respectively, p = 0.808). SARS-CoV-2 IgG levels were determined in patients who received early plasma and in the subset of patients from the deferred plasma group who had not yet received plasma, at baseline, day 3, and day 7. No significant differences were observed in SARS-CoV-2 IgG seropositive rate at any of the 3 timepoints (Fig 4A). Regarding IgG titers at enrollment, 7/26 (27%) of patients who subsequently received plasma had a positive SARS-CoV-2 IgG assay, with a median IgG titer of 400 (range: 100-800), compared to 5/20 (25%) of those patients who did not receive plasma, with a median IgG titer of 400 (range: 100-3,200), a non-significant difference in titers (p = 0.548). On day 3, 19/26 (73%) of patients who received plasma had a positive SARS-CoV-2 IgG assay, with a median IgG titer of 400 (range: 100-3,200), compared to 10/20 (50%) of those who had not yet received plasma, with a median IgG titer of 400 (range: 100-3,200), a non-significant difference in titers (p = 0.962). Also, no significant differences were observed in IgG seroconversion rates between those who received plasma and those who received no plasma at day 3 (69% versus 40%, p = 0.073) or at day 7 (87% versus 83%, p = 1.00) (Fig 4B).
Safety
Among the 41 patients receiving plasma in this study, there were 4 possibly related adverse events (3 cases of fever, 1 rash) and 3 SAEs (7.3%). Two patients developed severe respiratory deterioration within 6 hours after plasma infusion and were categorized as having possible transfusion-associated acute lung injury (TRALI) type II [32]. One of these patients additionally developed severe thrombocytopenia within 48 hours after plasma transfusion, with megakaryocytic hyperplasia in the bone marrow analysis. Platelet antibody testing in the recipient was negative, as well as in the donor plasma, ruling out passive alloimmune thrombocytopenia. Platelet count remained low in the following weeks, despite platelet transfusions, steroids, and immunoglobulin therapy, with the patient requiring splenectomy, rituximab, and eltrombopag before slow stabilization. This event was diagnosed as a complication possibly related to COVID-19.
Discussion
This randomized clinical trial of symptomatic COVID-19 patients admitted early failed to find significant differences in the composite primary outcome of death, mechanical ventilation, or prolonged hospitalization between administering immediate convalescent plasma and administering plasma only in case of clinical worsening.
The rate of SARS-CoV-2 PCR clearance in nasopharyngeal swabs did not differ between study arms either, suggesting that the provision of convalescent plasma in this study did not provide enough antiviral activity in patients with COVID-19 at this stage. In accordance with this finding, transfused patients did not present a significant rise of SARS-CoV-2 IgG levels on days 3 and 7 compared to the natural increase in IgG titers in non-infused patients, which could explain a possible lack of effect. Furthermore, almost 30% of patients were still not seropositive at 72 hours after the infusion, which suggests that the volume of infused plasma or its antibody concentration may have been insufficient.

Fig 4. (A) SARS-CoV-2 IgG seropositivity on days 0, 3, and 7 in patients who received convalescent plasma versus deferred-group patients who did not receive plasma; (B) IgG seroconversion rates (negative at 1:100 at baseline becoming positive, or a 4-fold increase in endpoint dilution titer); (C) neutralizing antibody (NAb) ID50 titers at day 0; (D) NAb titers by days since COVID-19 symptom onset.
We actively selected patients at high risk of developing complications (based on the CALL score), and indeed over 40% of our participants developed severe respiratory failure. The failure to find clinical benefit from convalescent plasma therapy in these patients may be explained by several reasons. First, humoral immunity may not play a major role in the subset of patients who have already initiated a highly pro-inflammatory response and in whom inflammation and coagulopathy may be more important than viral replication in disease progression [33]. We do not know whether preselection of plasma units with a very high concentration of NAbs or a larger volume of plasma could have succeeded in blunting this dysregulated inflammatory response. Additionally, an early adaptive immune response might be necessary to drive more effective infection control. Indeed, different cellular and humoral responses are generated in mild versus severe COVID-19 cases, and it has been reported that a specific cellular response can be detected early in the course of non-severe COVID-19 [34,35]. Second, the possible lack of efficacy may relate to administration of plasma too late in the course of the disease, when a dysregulated immune response predominates and is independent of the virus cell entry blockade achieved by immunoglobulins [7,18]. Previous randomized trials of convalescent plasma for COVID-19 included patients who had a longer time gap between symptom onset and transfusion as well as more severe disease at enrollment [16,17]. Despite setting a strict inclusion criterion of ≤7 days of symptoms, in our study over 96% of participants already had established pneumonia on CT scan at enrollment. Hence, it is possible that some patients had a more rapid or aggressive course or, particularly for older adults, that true COVID-19 symptom onset went unnoticed until several days into the disease course. Nonetheless, that the study population reflected early-stage COVID-19 was well supported by the fact that at enrollment over 74% of our participants did not have detectable SARS-CoV-2 IgG, and about 60% did not have significant NAb capacity. Third, given the design of our study, plasma administration in the deferred plasma group may have prevented the primary outcome from developing. However, the probability of and time to progression to respiratory failure did not differ between the study groups. Since respiratory failure was one of the prespecified criteria for plasma administration in the deferred plasma group, this secondary outcome allowed us to compare early plasma versus no plasma, further supporting a possible lack of efficacy.
Plasma transfusion is not exempted from adverse events such as allergic reactions, infection transmission, and-very rarely-volume overload or TRALI [36]. In spite of the fact that the majority of clinical trials of convalescent plasma in COVID-19 were still ongoing, convalescent plasma received emergency use authorization by the US Food and Drug Administration for the treatment of hospitalized patients with COVID-19 [37]. Reassuringly, in a recent report of 20,000 hospitalized patients receiving convalescent plasma for COVID-19, the incidence of related SAEs in the first 4 hours after infusion was <0.5% [38]. Nonetheless, in the present study, 2 participants developed acute respiratory failure after transfusion. Given that the patients were, according to the known evolution of COVID-19, in the peak of their inflammatory phase, it was challenging to determine if the respiratory failure corresponded to a TRALI [39].
Our study presents some limitations. First, NAbs were not determined in donor plasma before the patients' transfusion, and we could not select the plasma units with the highest neutralizing activity. Additionally, there is a critical knowledge gap regarding the dose of convalescent plasma needed to effectively increase the pool of antibodies required to neutralize the virus in the blood and in other compartments, and in the present study the non-significant change in antibody titers suggests that the convalescent plasma dose may have been insufficient. Second, the study was not powered to detect a risk reduction smaller than 35% in the primary endpoint, and therefore we cannot exclude that convalescent plasma may show smaller but clinically relevant effects in a future larger clinical trial. Third, as this was an open-label study, cointerventions such as steroid use may have unintendedly influenced outcomes [5]. Such management was not standardized, although alternative drug therapies were equally distributed in both study arms.
Regarding applicability, we found it difficult to find patients admitted to hospital in the early stages of COVID-19. Other large case series have reported that the median time from symptom onset to hospital admission was 7 days in the US and 6 days in Madrid, Spain [40,41]. Thus, a considerable proportion of patients will inevitably have passed the 7-day symptom window when admitted. This implies that new strategies such as outpatient plasma administration in newly diagnosed SARS-CoV-2 infections in selected patients at higher risk of COVID-19 complications-such as older individuals with comorbidities-should be explored. However, before proceeding with further large clinical trials, convalescent plasma dosage (volume and antibody titer levels) critically requires optimization; this could be studied safely in healthy volunteers or in post-exposure prophylaxis studies.
In conclusion, the present clinical trial of convalescent plasma administered in patients hospitalized in the early stage of COVID-19, compared to giving plasma only at clinical deterioration, failed to demonstrate improvement in clinical outcomes. Newer research strategies are needed to find the optimal use and timing of convalescent plasma in COVID-19. | 2021-03-05T06:22:59.166Z | 2021-03-01T00:00:00.000 | {
"year": 2021,
"sha1": "ec933d38cc48a55ac72b69ceeef9cfe2cf374689",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosmedicine/article/file?id=10.1371/journal.pmed.1003415&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fed07baab2f65b46b299a85b65d81f2bff065ccf",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
5957869 | pes2o/s2orc | v3-fos-license | Detection of miRNA in Cell Cultures by Using Microchip Electrophoresis with a Fluorescence-Labeled Riboprobe
The analysis of a microRNA (miRNA), miR-222, isolated from the PC12 cell line was performed by use of the ribonuclease (RNase) protection assay, a cyanine 5 (Cy5)-labeled miR-222 riboprobe, and a Hitachi SV1210 microchip electrophoresis system, which can be used to evaluate the integrity of total RNA. The fluorescence intensity corresponding to the protected RNA fragment increased in a dose-dependent manner with respect to the complementary-strand RNA. Detection of miRNA by microchip electrophoresis with the fluorescence-labeled riboprobe was more sensitive than the conventional method and could be obtained in 180 s. A clear increase in miR-222 expression induced by nerve growth factor in PC12 cells could be observed. These results clearly indicate the potential of microchip electrophoresis for the analysis of miRNA using the RNase protection assay with a fluorescence-labeled riboprobe.
Introduction
miRNAs are single-stranded RNA molecules of about 21-23 nucleotides in length that are both ancient and highly conserved [1,2]. They serve as regulators of gene expression by modulating the translation and/or stability of messenger RNA targets and are crucial for the cellular changes that are necessary for cell development, tumorigenesis, apoptosis, and metabolism [3][4][5][6][7]. Several studies have implicated the aberrant expression of miRNAs in numerous diseases, including some kinds of cancers and heart disease [8]. Thus, the analysis of miRNA expression has become increasingly important not only in cell biology but also in clinical medicine.
Hybridization-based approaches, such as primer-extension, Northern blotting, miRNA microarray profiling, in situ hybridization, reverse-transcription PCR, and ribonuclease (RNase) protection are frequently used for the identification and quantification of known miRNAs [2]. The RNase protection assay is employed for the quantitative analysis of particular RNA expression in a heterogeneous RNA sample extracted from cells, and it can identify RNA molecules of known sequence even at a low concentration in a sample [9]. This method involves hybridization of test RNAs to complementary RNA probes (riboprobes), followed by digestion of the nonhybridized sequences with an RNase that specifically cleaves only single-stranded RNA but does not have activity against double-stranded RNA, i.e., the hybridized RNA fragment. For analysis of hybridized RNA fragments, polyacrylamide gel or agarose gel electrophoresis is performed, followed by transfer to a membrane and measurement of the signal intensity of the labeled RNA [10,11]. Just like the RNase protection assay, the conventional methods are manual and time-consuming. Therefore, a more convenient, sensitive, and accurate method for the analysis of miRNA expression is desirable.
Microchip electrophoresis has received considerable interest in analytical chemistry due to its intrinsic characteristics of high speed, high throughput, low consumption of samples and reagents, miniaturization, and automation [12]. We recently reported that the RNase protection assay and cyanine (Cy5)-labeled riboprobe for the analysis of a particular mRNA in the total RNA extracted from cells can be performed by using a commercial instrument, the Hitachi SV1210 microchip electrophoresis system, which is used to evaluate the integrity of total RNA [9].
In the present study, we evaluated the ability of the Hitachi SV1210 to analyze miRNA expression in PC12 cells assessed by the RNase protection assay. The potential of microchip electrophoresis for the analysis of miRNA expression was shown, offering highly sensitive detection and ease of operation in a short time.
Reagents
Dulbecco's modified Eagle's medium (DMEM), fetal calf serum (FCS), horse serum (HS), penicillin, and streptomycin were obtained from Life Technology, Co., (Carlsbad, CA, USA). Nerve growth factor (NGF) was obtained from Sigma-Aldrich Japan (Tokyo, Japan). SYBR Green II (concentration not given) was obtained from Molecular Probes, Inc. (Eugene, OR, USA). Dyna Marker RNA High and Low II (Biodynamics Laboratory Inc., Tokyo, Japan) were employed as a molecular weight marker for agarose gel electrophoresis and polyacrylamide gel electrophoresis, respectively.
PC12 Cell Cultures and RNA Preparation
The cell strain PC12 was obtained from the Japanese Collection of Research Bioresource Cell Bank. The cells were maintained in DMEM supplemented with 10% (v/v) FBS, 5% (v/v) HS, 100 unit/mL penicillin, and 100 μg/mL streptomycin as final concentrations in an atmosphere of 5% CO 2 and 100% relative humidity at 37 °C. Cells were cultured according to the modified method described by Terasawa et al. [6]. For the isolation of RNA from NGF-activated PC12 cells, the cells were seeded onto culture dishes in a low concentration of serum (1% HS) for 24 h prior to treatment with 100 ng/mL NGF, and then differentiation was induced 48 h thereafter. For the isolation of RNA from non-activated PC 12 cells, the cells were seeded onto culture dishes containing medium with a low concentration of serum and no NGF.
Total RNA was extracted and purified from PC 12 cells with a Clontech Micro-Scale Total RNA Separator kit (Clontech Laboratories, Inc., Palo Alto, CA, USA). The concentration of purified total RNA was determined with a NanoDrop ND-100 apparatus (NanoDrop Technologies, Inc., Wilmington, DE, USA). For the evaluation of the integrity of the RNA, samples were separated by 1.0% agarose gel electrophoresis containing 13% formaldehyde, followed by SYBR Green II staining.
Synthesis of Riboprobe
The several types of the 21-bps synthetic oligonucleotides of miR-222 and 22-bps synthetic oligonucleotides of U 6 were purchased from Sigma-Aldrich Japan (Tokyo, Japan). The base sequences of each oligonucleotide and labeled types are shown in Table 1.
RNase Protection Assay
Twenty-microgram amounts of total RNA isolated from PC12 cell cultures or 0.039-40 ng of synthesized sense riboprobes, each with 10, 20, or 40 ng of Cy5-labeled riboprobe for microchip electrophoresis or 10, 20, or 40 ng of Cy3-labeled riboprobes for polyacrylamide gel electrophoresis, were used for the hybridization by use of an RPA III kit (Ambion Inc., Austin, TX, USA). The mixture was denatured at 98 °C for 3 min and subsequently hybridized at 80 °C for 1 h. After the hybridization, the single-stranded RNAs (non-protected RNA) were digested with RNase A/T1 (1/1,000 dilution) at 37 °C for 30 min. Thereafter the protected RNAs were purified and dissolved in 10 µL of ddw or 10 µL of sample buffer.
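For orientation, the nanogram quantities used in this protocol correspond to picomole-scale amounts of the short riboprobes; the sketch below converts mass to molar amount using an approximate average mass of ~330 Da per ribonucleotide (a generic approximation, not a value given in this paper):

```python
# Approximate conversion of riboprobe mass to molar amount
AVG_NT_MASS = 330.0          # g/mol per ribonucleotide, generic approximation

def rna_pmol(mass_ng: float, length_nt: int) -> float:
    mw = length_nt * AVG_NT_MASS            # approximate molecular weight, g/mol
    return mass_ng * 1e-9 / mw * 1e12       # picomoles

for mass in (0.039, 10.0, 40.0):
    print(f"{mass:>6} ng of a 21-nt riboprobe ~ {rna_pmol(mass, 21):.3f} pmol")
```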
For the conventional RNase protection assay, the mixture of 2.0 µL of 6× formaldehyde gel-loading buffer and protected RNA sample with Cy3-miR-222 or Cy3-U6 riboprobes in 10 µL of ddw was denatured by heating at 65 °C for 15 min and standing on ice for 5 min. Samples were run on a denaturing polyacrylamide gel (12% polyacrylamide, 8 M urea) in 1× TAE buffer; and electrophoresis was performed at 250 V for 40 min, followed by staining with SYBR Green II. Thereafter the gels were scanned by using a BioRad Phoros FX system (BioRad Laboratories, Inc., CA, USA) for the quantitative analysis of fluorescence intensities.
For the RNase protection assay using microchip electrophoresis, the protected RNA samples with Cy5-miR-222 riboprobe or Cy5-U6 riboprobes in 10 µL of sample buffer were denatured as described above, and then subjected to microchip electrophoresis on the Hitachi SV1210 (Hitachi-Technologies Co., Ltd. Tokyo, Japan).
Microchip Preparation and Separation
A disposable i-chip 3 (Hitachi Chemical Co., Ltd., Tokyo, Japan), fabricated from polymethylmethacrylate (PMMA) and comprising an interconnected network of fluid reservoirs and microchannels, was used for all the experiments (Figure 1(A)). Three samples could be analyzed on one of these chips at one time. For analysis of total RNA, i-chips were prepared according to the manufacturer's instructions supplied with the i-RNA 12 kit (Hitachi Chemical Co., Ltd.), which included a gel containing an RNA-staining fluorescent dye. For the RNase protection assay or for analysis with the Cy5-labeled riboprobe, the i-RNA 12 kit was employed without the gel containing the RNA-staining dye because Cy5 was used for the detection of the protected RNA fragments. The loading gel was infused from the analysis reservoir (AR) well into the microchannels on the i-chip 3 by using a syringe, and the sample waste (SW) well and the buffer reservoir (BR) well were filled with 10 µL of gel by use of a pipette. RNA samples dissolved in 10 µL of sample buffer were added to the sample reservoir (SR). Samples were loaded by electrokinetic injection, which was achieved by applying 300 V for 60 s to the SW well while grounding the other wells. The separation procedure was performed by applying a fixed 130 V to both the SR and SW wells, while the BR well was kept grounded. Simultaneously, 750 V was applied to the AR well to produce a suitable electric field in the separation channel. The detection point of the semiconductor laser charge-coupled device (CCD) detector (excitation at 635 nm, fluorescence measured at 660 nm) was located 30 mm from the channel cross point. Each sample could be analyzed in parallel within 4 min.
Instrumentation
Experiments were performed by using a Hitachi SV1210 microchip electrophoresis instrument (Hitachi High-Technologies Co., Ltd.) equipped with a semiconductor laser CCD detector. The instrument consists of a bench-top device (chip reader) that is connected to a personal computer. The SV1210 software (version 1.6.1) includes data collection, presentation, and interpretation functions. The data are displayed as both simulated gel images and electropherograms. Electropherograms of the total RNA, i.e., 18S and 28S ribosomal RNA fragments, are displayed in Figure 1(C).
Analysis of Cy5-miR-222 and Total RNA Isolated from a PC12 Cell Culture
As shown in Figure 1(B), the integrity of the Cy5-miR-222 riboprobe (lane 2, arrow) and that of the total RNA isolated from a PC12 cell culture (lane 4) were confirmed by denaturing polyacrylamide gel (12% polyacrylamide, 8 M urea) electrophoresis and 1.0% agarose gel electrophoresis (gel containing 13% formaldehyde), respectively, followed by SYBR Green II staining. By microchip electrophoresis, the integrity of the Cy5-miR-222 riboprobe was confirmed by using gel without the original dye for RNA staining, and the integrity of total RNA isolated from culture of PC12 was also confirmed by using the original gel containing the fluorescent dye for RNA staining (Figure 1(C)). Although a microgram quantity of RNA is needed for the examination of the integrity of RNAs by the conventional electrophoresis, only a nanogram amount of RNA is required in the microchip electrophoresis. Especially, a distinct single peak corresponding to the Cy5-miR-222 riboprobe was observed with just a 100 femtogram amount of RNA. Remarkably sensitive detection of the Cy5-labeled riboprobe was obtained, as the peak for the riboprobe was much higher than the peaks obtained for total RNA analysis with the original gel containing the RNA-staining dye. The semiconductor laser CCD detector of the Hitachi SV1210 (excitation at 635 nm and measurement of fluorescence at 670 nm) is used for the detection of RNA and was suitable for the detection of Cy5. Furthermore, the riboprobe was directly labeled with Cy5, so the background level was very low. These must contribute to the highly sensitive detection of the Cy5-labeled riboprobe without the use of the original gel containing RNA-staining dye. As shown in Figure 1(C), the single peak corresponding to the Cy5-labeled riboprobe was observed with a migration time of 79 s on the microchip electrophoretogram. To evaluate the reproducibility of the migration time in the different channels, we examined the total RNA in the original gel containing RNA-staining dye and Cy5-labeled riboprobe in the gel without the RNA-staining dye ( Table 2). The relative standard deviations (RSD) for 5 different channels as to the migration time of 18S, 28S, and Cy5-labeled riboprobe were 0.36, 0.29, and 0.24%, respectively. These results indicate the reproducibility of the electrophoresis even in different channels with Cy5 labeling, and show that Cy5 was suitable for RNA labeling for the analysis by this microchip electrophoresis method.
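Two of the figures of merit quoted in this paragraph, the apparent migration velocity implied by the 30 mm detection distance and the 79 s migration time, and the channel-to-channel relative standard deviation (RSD) of the migration time, can be reproduced with a few lines; the five migration-time values used below are illustrative placeholders, not the actual Table 2 data:

```python
import statistics

# Apparent migration velocity of the Cy5-labeled riboprobe
detector_distance_mm = 30.0
migration_time_s = 79.0
print(f"apparent velocity ~ {detector_distance_mm / migration_time_s:.2f} mm/s")

# Relative standard deviation (RSD, %) of migration times across channels
def rsd_percent(values):
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Placeholder migration times (s) for 5 different channels, for illustration only
times = [78.9, 79.1, 79.0, 79.2, 78.8]
print(f"RSD of migration time: {rsd_percent(times):.2f} %")
```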
miRNA Analysis Using Synthesized Riboprobes
miRNA analysis using 10 ng of Cy3-miR-222 riboprobe and 10 ng of the sense riboprobe for the conventional method using polyacrylamide gel was performed (Figure 2(A)). A single band corresponding to the Cy3-miR-222 riboprobe was observed (Figure 2(A), lane 1), and this band disappeared by RNase treatment (Figure 2(A), lane 2). By the hybridization of the Cy3-miR-222 riboprobe with the miR-222 sense riboprobe, single bands corresponding to the hybridized RNA fragment (double-stranded RNA) were observed with or without RNase treatment (Figure 2(A), lane 3 and 4). The band disappeared completely in the presence of 100 ng of non-labeled miR-222 as a competitor (Figure 2(A), lane 5). Cy3 was employed for antisense riboprobe labeling for use in the conventional gel electrophoresis, because we employed the BioRad Phoros FX system for gel scanning (excitation at 532 nm and measurement of fluorescence at 588 nm). miRNA analysis using 10 ng of the Cy5-miR-222 riboprobe and 10 ng of miR-222 sense riboprobe was then performed by using microchip electrophoresis (Figure 2(B)). A single peak corresponding to the Cy5-miR-222 riboprobe was observed (trace 1), and this peak completely disappeared by RNase treatment (trace 2). By the hybridization of the Cy5-miR-222 riboprobe with miR-222 sense riboprobe, single peaks corresponding to the hybridized RNA fragment and with similar fluorescent intensity were observed in the presence (trace 3) or absence (trace 4) of RNase. These results clearly indicate that protection of the double-stranded RNA from RNase digestion occurred when microchip electrophoresis using the Cy5-miR-222 riboprobe was performed, as in the case of the conventional RNase protection assay using the Cy3-miR-222 riboprobe for the analysis of miRNA. The peak corresponding to the Cy5-miR-222 riboprobe was observed at 79 s (trace 1), and single peaks corresponding to the hybridized RNA were observed at 81 s (trace 3 to 5). This difference in the migration time must be due to the slower migration of the double-stranded RNA fragment than that of the single-stranded one. In the presence of 100 ng of a competitor, the fluorescence intensity of the peak corresponding to the hybridized RNA was apparently lower than that in the absence of the competitor (trace 5). Although signal intensity corresponding to the hybridized RNA was hardly detectable by using the same amount of competitor for polyacrylamide gel electrophoresis (Figure 2(A), lane 5), significant fluorescent intensity could be detected by the microchip electrophoresis. This difference may have been due to the more highly sensitive detection of Cy5-labeled RNA by microchip electrophoresis. The dose response of the test RNA (sense riboprobe) was examined by conventional miRNA analysis using polyacrylamide gel electrophoresis with the Cy3-miR-222 riboprobe (Figure 2(C)) and by microchip electrophoresis with the Cy5-miR-222 riboprobe (Figure 2(D)). The fluorescence intensity by each analysis increased in a sense riboprobe dose-dependent manner. Although a very weak band was observed in the hybridization with 5.0 ng of sense riboprobe in the conventional miRNA analysis (Figure 2(C), lane 4), an apparently single peak corresponding to the hybridized RNA was observed when 0.078 ng of the sense riboprobe was used for microchip electrophoresis (Figure 2(D), trace 10). These results indicate that miRNA analysis with the RNA protection assay using Cy5-antisense riboprobe and microchip electrophoresis afforded a highly sensitive detection.
In the present study, we employed Cy3 for riboprobe labeling in the conventional RNase protection assay for miRNA analysis. Digoxygenin (DIG) labeling of riboprobes for the RNase protection assay using the conventional method is frequently employed [13]. In an earlier study, we found the detection limit of RNase protection assay using the DIG-labeled riboprobe to be 5.0 ng [9]. The detection limit using fluorescence labeling in this study thus compared favorably with that using DIG labeling.
Increase in miR-222 Expression in PC12 Cells in Response to NGF Treatment
Terasawa et al. reported that expression of miR-222 is induced by NGF stimulation of PC12 cells, an established model of neuronal growth and differentiation, and that this expression contributes to NGF-dependent cell survival in the PC12 cell line [6]. By the conventional miRNA analysis, an obvious increase in miR-222 expression was observed after NGF treatment for 24 h (Figure 3(A)). On the other hand, U6 expression, as a control, was not affected by the NGF treatment. By miRNA analysis using microchip electrophoresis, a similar increase in the expression of miR-222 was observed (Figure 3(B)). The expression of U6 was also not changed by NGF treatment (Figure 3(C)), as in the case of the conventional method.
Conclusions/Outlook
We have shown the potential of microchip electrophoresis for rapid and highly sensitive analysis of miRNA expression in cells by using a Cy5-labeled antisense riboprobe in the RNase protection assay. In previous work, expression of a 248 bp mRNA was analyzed by using microchip electrophoresis with a Cy5-labeled 248 bp antisense RNA probe [9]. In the present study, analysis of miRNA expression could be performed by microchip electrophoresis using a riboprobe of suitable length. Analysis of miRNA expression is one of the most basic and frequently used procedures in cell biology and molecular biology. This application, i.e., the analysis of miRNA expression by use of the RNase protection assay and microchip electrophoresis, will be useful for obtaining much information for a better understanding of cell development and functions. | 2014-10-01T00:00:00.000Z | 2012-06-07T00:00:00.000 | {
"year": 2012,
"sha1": "b79e9852e444c8a979bdf29659b3cfa94c2f64fb",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/12/6/7576/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b79e9852e444c8a979bdf29659b3cfa94c2f64fb",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Biology",
"Computer Science",
"Medicine"
]
} |
160683 | pes2o/s2orc | v3-fos-license | Nano-structured magnetic metamaterial with enhanced nonlinear properties
Nano-structuring can significantly modify the properties of materials. We demonstrate that size-dependent modification of the spin-wave spectra in magnetic nano-particles can affect not only linear, but also nonlinear magnetic response. The discretization of the spectrum removes the frequency degeneracy between the main excitation mode of a nano-particle and the higher spin-wave modes, having the lowest magnetic damping, and reduces the strength of multi-magnon relaxation processes. This reduction of magnon-magnon relaxation for the main excitation mode leads to a dramatic increase of its lifetime and amplitude, resulting in the intensification of all the nonlinear processes involving this mode. We demonstrate this experimentally on a two-dimensional array of permalloy nano-dots for the example of parametric generation of a sub-harmonic of an external microwave signal. The characteristic lifetime of this sub-harmonic is increased by two orders of magnitude compared to the case of a continuous magnetic film, where magnon-magnon relaxation limits the lifetime.
When the lateral sizes of a magnetic particle decrease below a micron, its properties are modified due to geometrical confinement and size effects 1,2 . In particular, the ground-state magnetization distribution can be either spatially uniform or vortex-like, depending on the particle aspect ratio 3 . The frequency spectrum of spin-wave excitations in a magnetic particle can also be drastically changed. The modification of the spin wave excitation spectra due to the boundary conditions imposed by the edges of magnetic nano-particles leads to a spectral quantization and elimination of excitations that have half-wavelengths larger than the particle size. The quantization of the excitation spectrum of small magnetic particles was observed experimentally using different techniques 4 . The discrete values of the spin wave eigenfrequencies are mainly determined by the magnetostatic interaction and depend on the particle lateral sizes and magnetization static configuration (the ground state) 5,6 . The discretization of the spin-wave spectrum related to the reduction of the particle sizes can reduce and even remove the frequency degeneracy between the main excitation mode (spatially quasi-uniform ferromagnetic resonance (FMR) mode) and the spin-wave modes with higher values of the in-plane wave vector 7 and, therefore, can substantially reduce the strength of various multi-magnon relaxation processes related to this degeneracy. The critical issue is the removal of degeneracy of the FMR mode with spin wave modes having the wavenumber of the order of 10⁴ cm⁻¹ and the lowest magnetic damping (lower than the damping of the FMR mode 6 ). The suppression of the magnon-magnon relaxation for the pumped ferromagnetic resonance mode leads to a dramatic increase of its life-time, amplitude, and, consequently, to an increase of the intensity of all the nonlinear processes involving this mode [6][7][8] .
In this work we demonstrate that nano-structuring of a magnetic material leads to a drastic increase of the lifetime of the main ferromagnetic resonance mode parametrically excited by an external microwave pumping signal. The effect is demonstrated experimentally in a two-dimensional array of permalloy nano-dots subjected to the action of a spatially uniform microwave pumping field whose frequency is twice the frequency of the ferromagnetic resonance mode in an individual magnetic nano-dot. Figure 1 (a, b, c) demonstrates the qualitative modification of the spin wave spectrum of a finite-size magnetic element (in particular, of a cylindrical magnetic dot of thickness L and radius R 5,9 ) when the element size is reduced. Figure 1 (d) provides an example of the numerical calculation of the spin wave eigenfrequencies for a Permalloy dot of L = 10 nm and R = 100 nm performed in 5 . Spectral modifications similar to the ones illustrated in Fig. 1 (a,b,c) will take place for magnetic elements of any shape made from both ferromagnetic metals and dielectrics. The frequency of the ferromagnetic resonance (FMR) ω₀, corresponding to the spatially uniform precession of magnetization with wave number k = 0, can for most magnetic elements be approximately evaluated using the model of an equivalent ellipsoid, for which the effective demagnetization factors N_x, N_y, N_z are determined by the aspect ratio of the particle (R/L in the case of a cylindrical dot) 10 .
In the following we will consider a thin cylindrical magnetic dot of radius R and thickness L ≪ R. For a thin dot (L ≪ R) in the xz plane, magnetized to saturation along the z axis by the bias magnetic field H₀, the FMR frequency is 6 ω₀ = γ√[H₀(H₀ + 4πM₀)], where γ is the gyromagnetic ratio, and M₀ is the saturation magnetization. The approximation of a disk-shaped particle by an ellipsoid is quantitatively correct only in the limit L ≪ R. In a real situation the internal, static magnetic field in the disk becomes non-uniform and the spatial distribution of the FMR mode becomes quasi-uniform, which leads to a slight increase of its frequency ω₀ with decreasing R 11,12 . A decreasing dot radius has a strong effect on the spatially non-uniform, higher spin-wave modes having wave numbers k > 0. Due to the influence of the boundary conditions at the dot lateral edges the long-wavelength part of the spin wave spectrum is depleted, since the dot can only support modes having half-wavelengths λ/2 ≤ R. Thus, the spin waves with wave number k = 2π/λ ≤ π/R will be eliminated from the dot spectrum (see Fig. 1 b,c). Also, due to the confinement of the dot size along the three Cartesian coordinates, the spectrum becomes discrete. The exact calculation of discrete spin wave eigenfrequencies of a magnetic dot has been performed 5,9 . An example of such a discrete spectrum for an in-plane magnetized (H₀ = 300 Oe) permalloy (Py) dot calculated in 5 is presented in Fig. 1d for R = 100 nm and L = 10 nm (denoted 100×10), showing the positions of all the eigen-modes within ±500 MHz of ω₀. It can be seen in Fig. 1d that the frequency degeneracy between the FMR mode and higher-k spin wave modes can be eliminated, and that there are only five modes present.
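As a rough numerical illustration of the two effects described above, the sketch below estimates the smallest allowed wavenumber π/R and the thin-film FMR frequency for representative dot radii. It assumes the standard thin-film form of the FMR frequency quoted above and typical permalloy parameters (4πM₀ ≈ 10 kG, γ/2π ≈ 2.8 MHz/Oe), which are generic assumptions rather than values stated in this paper:

```python
import math

gamma_over_2pi = 2.8e6   # Hz/Oe, typical gyromagnetic ratio for permalloy (assumed)
four_pi_M0 = 10.0e3      # Oe, approximate 4*pi*M0 for permalloy (assumed)
H0 = 300.0               # Oe, bias field used for the spectrum in Fig. 1d

# Thin-film (equivalent-ellipsoid, L << R) estimate of the FMR frequency
f0 = gamma_over_2pi * math.sqrt(H0 * (H0 + four_pi_M0))
print(f"estimated FMR frequency: {f0 / 1e9:.2f} GHz")

# Smallest wavenumber supported by a dot of radius R: k_min ~ pi / R
for R_nm in (100, 1000):
    k_min = math.pi / (R_nm * 1e-7)   # nm converted to cm
    print(f"R = {R_nm} nm -> k_min ~ {k_min:.2e} cm^-1")
# Both values exceed ~1e4 cm^-1, the wavenumber of the low-damping modes
# that are degenerate with the FMR mode in a continuous film.
```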
Each of these features (the depletion of the long-wave part of the spectrum, the spectrum discretization, and the lifting of the frequency degeneracy) in the dots can modify the nonlinear dynamic response on a nano-structured magnetic material compared to that of a bulk material. In natural bulk magnetic materials the amplitude of the FMR mode is usually limited by four-wave (2 nd order) parametric nonlinear processes involving the higher spin-wave modes degenerate in frequency with the FMR mode 6,8 . Nano-structuring of the magnetic material and the related modification of the spin wave spectrum can remove the frequency degeneracy between the FMR mode and the higher spin wave modes, and can lead to a substantial increase in the lifetime and amplitude of the main FMR mode.
To prove this experimentally, we studied the process of parametric generation of a sub-harmonic of an external microwave signal with the frequency ω_p (such that ω₀ ≈ ω_p/2) in an artificial magnetic meta-material formed by a planar array of non-interacting (inter-dot distance d ≫ L, patterned area S ≈ 1 mm²) cylindrical Py nano-dots (see Fig. 2). The parametric excitation of spin waves and oscillations in the array occurs via the method of parallel field pumping 6,8 , where the magnetic field h_p of the external microwave pumping signal h_p = h cos(ω_p t) is applied parallel to the direction of the bias magnetic field H₀. When the value of the pumping field amplitude h exceeds a mode-dependent threshold value h_th^(k) 6 , the amplitude of the spin wave mode having the sub-harmonic frequency ω_k = ω_p/2 starts to increase exponentially. The rate of this exponential growth is proportional to the super-criticality ζ = h/h_th^(k) 8,13 . Thus, the mode with the minimum threshold h_th^min has the largest growth rate and, therefore, the largest amplitude. This dominant spin wave mode, through four-wave (2nd order) nonlinear interaction processes, starts to suppress all the other pumped modes and ultimately is the only one to survive 8 . All the other modes, for which h_th^(k) > h_th^min, decay exponentially until they are completely suppressed 14 . In bulk magnets and continuous films 6,15 the minimum parametric threshold corresponds to spin waves having k ≈ 10⁴ cm⁻¹. It is these waves that suppress all the other spin waves and oscillations, including the quasi-uniform FMR mode, which has h_th^(0) > h_th^min. It is clear from Fig. 1b that when the radius of a dot is reduced to R ≈ 1 μm, the waves with k ≈ 10⁴ cm⁻¹ are eliminated from the spectrum and the FMR mode becomes the dominant mode having the lowest threshold of parametric excitation. This statement is supported by the results of a recent experiment performed by means of Brillouin light scattering spectroscopy 11 : the quasi-uniform FMR mode, indeed, has a threshold of parametric excitation that is 3-10 dB lower than the threshold of excitation of higher-k spin-wave modes (see Fig. 2 in Ref. 11 ). In such a case only the dominant FMR mode will survive in the parametric excitation process, and this mode will suppress all other modes via the four-wave processes of nonlinear spin-wave interaction.
Previous experiments on parametric sub-harmonic generation in magnetic films 15 have shown that the amplitude of the FMR mode of frequency ω₀ = ω_p/2, which initially grew under the action of parametric pumping, decayed exponentially as soon as the amplitudes of low-threshold spin waves with k ≈ 10⁴ cm⁻¹ growing from the thermal level became sufficiently large 15 . As a result of this nonlinear suppression of the FMR mode, the electromagnetic radiation at the sub-harmonic frequency caused by the FMR is observed only during a short time interval (≈200 ns) after the pumping is switched on. The dominant spin waves with k ≈ 10⁴ cm⁻¹ excited by parametric pumping at the same frequency ω_k = ω_p/2 do not contribute to the sub-harmonic electromagnetic radiation due to the large wave number mismatch between these waves and the electromagnetic waves having the same sub-harmonic frequency. Thus, in a nano-structured magnetic material (e.g., in an array of magnetic dots having R ≤ 1 μm), where the low-threshold spin waves with k ≈ 10⁴ cm⁻¹ are eliminated from the spectrum, one can expect a significant increase of the time interval in which the FMR mode creates electromagnetic radiation at the sub-harmonic frequency.
Results
To prove these ideas experimentally we developed the set-up shown in Fig. 3 for the investigation of parametrically induced sub-harmonic generation in both continuous and patterned films. The sample (1) (either a 2D array of nano-dots or a continuous film) on the dielectric substrate (4) is placed inside an open dielectric resonator (2) made of ceramic with dielectric constant ε ≈ 80. The external microwave pumping field has the frequency ω_p/2π = 9.4 GHz. The microwave magnetic field h_p created in the dielectric resonator was oriented along the plane of the sample and was parallel to the in-plane bias magnetic field H₀ (the geometry of "parallel" parametric pumping 6,8 ). The short-circuited antenna (3) made of 50-μm diameter Cu wire was used to supply to the experimental sample (1) a short, synchronizing external signal of power P_in and to receive an output signal P_out of electromagnetic radiation from the sample. Both signals are at 4.7 GHz (half of the pumping frequency) and were separated using a Y-circulator (see Fig. 3). The external synchronizing signal P_in guaranteed the same initial phase for the FMR sub-harmonic oscillations parametrically excited by pumping in all the magnetic dots, thus creating the constructive interference of all oscillations, resulting in a coherent macroscopic output electromagnetic signal of power P_out. The samples were 2D arrays of cylindrical non-interacting Py dots (see Fig. 2) having the same radius R = 1000 nm, the same distance d = 1000 nm between the dot lateral edges, and two different thicknesses: L₁ = 100 nm (dot array #1) and L₂ = 12 nm (dot array #2), formed on a non-conductive GaAs substrate of thickness 0.5 mm (see Methods). As a control we used an unpatterned, continuous Py film of thickness 100 nm on the same GaAs substrate.
The samples were subjected at t = 0 to the simultaneous action of a long (t_p = 9 μs) and powerful (power P_p ≈ 1-100 W) pulse of the microwave parallel pumping field, and a short (t_in = 30 ns) and relatively weak (P_in ≈ 10 mW) pulse of the synchronizing microwave signal (see details in Methods). The P_out signal was received by the antenna (3). As expected, the output signal at the antenna (3) appeared only when the pumping power exceeded the threshold P_p^th of parametric excitation of the FMR mode, which in both dot arrays was around P_p^th ≈ 20 W. The output power P_out increased with increasing pumping power from the threshold value to the maximum available value of P_p = 100 W. The maximum value of P_out was obtained by tuning the bias magnetic field H₀ to achieve the resonance condition of the FMR mode ω₀ with the pumping sub-harmonic ω_p/2: ω₀ = ω_p/2.
The experimentally measured time dependences of the power of microwave radiation P_out(t) at the sub-harmonic frequency ω₀ = ω_p/2 for all three samples are presented in Fig. 4. It is seen from Fig. 4 that when the parametric pumping field is switched on at t = 0, the output power P_out, proportional to the intensity of the radiated sub-harmonic microwave signal, starts to increase exponentially. The theory of parametric excitation 8,13 predicts an exponential increase of the output power with increasing time, described by the expression P_out ∝ exp(2Γ₀√(ζ² − 1) t) (Γ₀ is the relaxation frequency of the FMR mode), until this power reaches a maximum level determined by the four-magnon phase mechanism of power limitation 8 . In the experiment (Fig. 4) this level is reached after t ≈ 200-300 ns. Also, we see that the output power in a continuous film grows faster than in both of the dot arrays investigated. This is due to the higher threshold of parametric excitation of the spin waves in the dot arrays, caused by the increase of the relaxation frequency in the dot arrays due to patterning. The temporal evolution of the output power in a continuous film and in arrays of nano-dots is very different. As explained above, in a continuous film the influence of the low-threshold spin waves with k ≈ 10⁴ cm⁻¹ leads to a rapid exponential decrease of the output power P_out after the time interval t ≈ 200 ns. In contrast, in the case of 2D arrays of magnetic dots of thickness L = 100 nm the decrease of the output power with time is approximately two orders of magnitude slower than in the case of a continuous magnetic film (see Fig. 4). Also from Fig. 4, we see that with a decrease of the dot thickness to L = 12 nm, the decrease of the output power with time becomes even slower. These are the main experimental results obtained in our paper. Similar results - a slow decrease of the power with time - were obtained on dot arrays with sizes R×L of 900×50, 1000×40, and 1000×12 (all sizes in nm).
We attribute the dramatic difference in the temporal evolution of the parametrically excited, sub-harmonic radiation in the patterned dot arrays vs. continuous films to the elimination from the dot spectrum of spin waves degenerate in frequency with the FMR mode and having k ≤ 10⁴ cm⁻¹. The threshold of parametric excitation of the quasi-uniform FMR mode then becomes lower than that of the other modes. In such a case the theory of parametric excitation 8 predicts that in the stationary regime the amplitude of the parametrically excited FMR mode, and, therefore, the output power P_out, should remain constant during the action of the pumping pulse.
In the experiment shown in Fig. 4 we see that for the dot array 1000×12 (sample #2) the output power is, indeed, nearly constant over the whole temporal interval of the microwave pumping action, while in the array 1000×100 (sample #1) the output power slowly decreases with increasing time, characterized by a time constant of the order of τ ≈ 10 μs, which is 2-3 orders of magnitude longer than typical times of the magnon-magnon relaxation 6,16 . We believe that the slow decrease of the power P_out results from the heating of the dots caused by the absorption of the pumped microwave power. In such a case, the temperature change would be proportional to the dot thickness, and the corresponding time constant would be of the order of τ ≈ 10 μs 17-19 (see Methods), in agreement with the experimental data shown in Fig. 4.
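As a quick consistency check of this heating interpretation, the sketch below evaluates the heating time constant τ_T = cL/β that follows from the heat-balance model described in the Methods section, using the material parameters quoted there; the formula and numbers are taken from that model, not independently measured here:

```python
# Heating time constant of a thin dot on a substrate: tau_T = c * L / beta
# (heat-balance model from the Methods section; c and beta values quoted there)
c = 4e6      # J/(K*m^3), volumetric heat capacity of permalloy
beta = 4e4   # W/(K*m^2), dot-substrate heat exchange coefficient

for L_nm in (100, 12):
    tau = c * (L_nm * 1e-9) / beta          # seconds
    print(f"L = {L_nm:3d} nm -> tau_T = {tau * 1e6:.1f} us")
# L = 100 nm gives ~10 us, matching the observed slow power decay of sample #1;
# for L = 12 nm both the time constant and the temperature rise are proportionally
# smaller, consistent with the nearly constant output power of sample #2.
```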
Discussion
In this article we have studied the influence of the size of a magnetic particle on its non-linear dynamic properties. In bulk samples and continuous magnetic films the amplitude of the FMR mode is limited at a rather low level by the four-wave magnon-magnon interaction processes involving spin wave modes having large magnitudes of the wave number and frequencies that are close to the frequency of the FMR mode. We demonstrated above that in sufficiently small magnetic particles it should be possible to completely eliminate the frequency degeneracy of the FMR mode with short-wave spin wave modes and, therefore, to substantially increase the possible amplitude and life-time of the FMR mode excited in a magnetic particle. As a result, in a small magnetic particle the efficiency of all the nonlinear processes, such as frequency multiplication, rectification, parametric amplification and/or generation, parametric wave front reversal, and Brillouin inelastic light scattering, should be substantially increased.
The above results allow us to conclude that, for both 2D arrays of magnetic nano-dots studied in our experiments, the nano-structuring of the magnetic material led to the exclusion of the majority of the four-wave processes of magnon-magnon relaxation that limit the amplitude of the quasi-uniform FMR mode in a continuous magnetic film and in bulk magnetic samples. This exclusion resulted in a substantial enhancement of the nonlinear properties of the magnetic dot array at the frequency of the main FMR mode. In particular, this nano-structuring resulted in a drastic increase of the characteristic time of parametrically induced sub-harmonic radiation from the array, by two orders of magnitude in comparison with the case of a continuous magnetic film of similar thickness. The slow decrease of the output power seen in Fig. 4 is caused by the microwave heating of the magnetic dots and can be substantially reduced by reducing the dot thickness (compare the curves for the dot arrays of thickness L = 100 nm and L = 12 nm shown in Fig. 4).
In summary, we have proven experimentally that nanostructuring of a magnetic material can substantially enhance the nonlinear dynamic properties of the material. Thus, using the nanostructuring, it is possible to develop novel artificial metamaterial with nonlinear microwave properties that are superior to that of magnetic films and traditional bulk magnetic materials. These novel patterned meta-materials can be useful for applications in reciprocal (filters, oscillators), non-reciprocal (isolators, circulators) and nonlinear (detectors, frequency multipliers) microwave signal processing devices operating at high levels of microwave power.
Methods
Microfabrication. Permalloy (Fe₂₀Ni₈₀) disks were defined on an undoped ⟨100⟩ GaAs wafer by means of photolithography and electron-beam evaporation techniques. The process starts with spin coating of positive tone S1813 photoresist (Shipley Co) at 3,000 rpm for 60 sec, followed by soft baking on a hot plate at 115 °C for 90 sec. After exposure to 365-nm light, the sample was developed using a 1:5 mix of Microposit 351 (Microresist Technology GmbH) and de-ionized water. Then, the electron beam evaporation of Py was performed at room temperature at a base pressure of 1×10⁻⁸ Torr, with a deposition rate of 0.2 Å/sec. The Py layer was topped with 2 nm of Ti in situ to prevent oxidation of the samples. Finally, an ultrasound-assisted lift-off in acetone completes the process.
Measurements. The dielectric resonator together with the experimental sample (2D array of Py dots or a continuous Py film) was placed at the maximum of the magnetic field h_p of the microwave pumping (h_p ∥ H₀) inside a hollow metallic waveguide (wavelength λ = 3 cm) carrying the H₁₀ mode (see Figs. 2,3). The plane of the sample was parallel to the wider wall of the waveguide. The impedance matching between the waveguide and the dielectric resonator, and the fine-tuning of the resonator frequency, was done by means of a piston short-circuiting the waveguide. The resonator was tuned to the frequency ω_p of the microwave pumping by minimizing the reflection from the resonator. The bias magnetic field H₀ was chosen to make the FMR frequency in the sample equal to ω_p/2. The input synchronizing signal of power P_in, duration 30 ns, and carrier frequency ω_p/2 was supplied to the wire antenna (see "3" in Fig. 3) through the circulator and a coaxial cable. This signal was only weakly absorbed by the experimental sample (absorption ≈ 1%) and was nearly totally reflected, forming an image of the input synchronizing pulse on the oscilloscope. Simultaneously with the input signal, the microwave pumping pulse of carrier frequency ω_p and duration t_p ≈ 9 μs was supplied to the resonator. When the pumping power P_p was lower than the threshold P_th of parametric generation of a pumping sub-harmonic, the only signal on the oscilloscope was the input synchronizing pulse. When the pumping power exceeded the threshold, P_p > P_th, an additional delayed signal, caused by the parametric radiation of the pumping sub-harmonic ω_p/2 from the sample, appeared on the oscilloscope. In the following experiments this additional signal was obtained by subtraction of the input synchronizing pulse from the total output signal.
Microwave absorption and heat exchange in a Py dot formed on a non-conductive substrate. In the case of a thin cylindrical magnetic dot (with radius R that is substantially larger than the thickness L, L ≪ R) formed on a solid, non-conductive substrate, the equation describing the dot heating due to the absorption of the external microwave pumping field can be derived from the first law of thermodynamics (or conservation of energy), which for the constant volume V of the dot can be written as the equation of a heat balance, ΔU = ΔQ₊ − ΔQ₋ (1), where the change of the internal energy ΔU = cVΔT of the dot is equal to the difference between the amount of heat ΔQ₊ absorbed from the microwave pumping and the amount of heat ΔQ₋ radiated into the substrate, c is the volume heat capacity of the dot material, and ΔT is the change in the dot temperature. Taking a time derivative of the equation of the heat balance (1), the following equation describes the temporal evolution of the dot temperature: cV d(ΔT)/dt = P_abs − P_rad (2). Here the power P_abs absorbed by the dot is determined by the equation P_abs = (1/2) χ″ ω_p h_p² V (3), where χ″ is the imaginary part of the dimensionless dot magnetic susceptibility, ω_p is the angular frequency of microwave pumping, and h_p is the amplitude of the pumping microwave magnetic field, which can be considered constant along the dot thickness L if this thickness is much smaller than the skin depth at the pumping frequency δ(ω_p).
The power P_rad radiated by the dot is proportional to the area of the dot base A = V/L and, in accordance with Newton's law 17 , to the change of the dot temperature ΔT caused by the microwave heating: P_rad = βAΔT (4), where β is the coefficient of heat exchange (heat transfer coefficient) between the dot and the substrate, measured in W/(K·m²). Solving equations (2-4), the following expression is obtained for the temperature change of the magnetic dot: ΔT(t) = [P_abs/(βA)][1 − exp(−t/τ_T)] (5), where the characteristic time τ_T of the dot heating is given by τ_T = cL/β (6). It is clear from the solution (5) and (6) that the ΔT induced by the microwave heating of the dot, and the characteristic time τ_T of this heating, are proportional to the dot thickness L. For the typical value of the specific heat of Permalloy c ≈ 4·10⁶ J/(K·m³) and the value of the heat exchange coefficient between the dot and the substrate of β ≈ 4·10⁴ W/(K·m²) 19 we get the characteristic time τ_T ≈ 10 μs for a dot of thickness L = 100 nm, which agrees reasonably well with the characteristic time of the power decrease in the dots of this thickness shown in Fig. 4. Since the change of the dot temperature (5) due to the microwave heating is proportional to the dot thickness L, the heating-related decrease of the power of sub-harmonic radiation is much less pronounced for the dots of smaller (L = 12 nm) thickness (see upper curve in Fig. 4). | 2018-02-07T18:15:28.699Z | 2012-06-28T00:00:00.000 | {
"year": 2012,
"sha1": "3218129afa4f16f3942fb11ebd79f6497453636b",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.nature.com/articles/srep00478.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3218129afa4f16f3942fb11ebd79f6497453636b",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
225017930 | pes2o/s2orc | v3-fos-license | Evaluation of DC-DC converter using renewable energy sources
Received Dec 19, 2019; Revised Feb 4, 2020; Accepted Jul 29, 2020. This work analyses and evaluates the performance of renewable energy source-based converters using intelligent techniques. The objective of the research is to maintain the reliability of the converters such that they decrease the switching losses and high duty cycle and recycle the leakage energy. To accomplish a high output voltage gain, converters are designed with different intelligent methods. Owing to heavy demand, the cost of fossil fuel has gone up, so the need of the hour is to identify and develop renewable energy sources along with new technologies for energy saving in renewable energy systems to fight the issues plaguing the environment.
INTRODUCTION
Owing to features such as a zero carbon footprint, enormous energy generation potential, and its renewable and sustainable nature, wind energy augurs well to be the most promising source of sustainable energy with immense future scope. Technological enhancement has seen multi-megawatt power being produced by a single wind turbine system, up from a mere few tens of kilowatts produced in the 1980s. Factors such as air density and wind velocity play a vital role in harnessing energy from wind, and numerous intelligent-technique-based wind energy systems have been presented in the literature [1]. Owing to the stochastic nature of wind speed, the deterministic reliability prediction methods that are conventionally used in industrial applications are not a viable option for wind turbine applications. These algorithms, despite being robust and intelligent as assumed by the authors in [3], are used in applications that allow only a minimum variation in wind speed and also require the constant air density assumption made by the authors in [2].
The model has various inherent advantages: it reduces simulation time, eliminates the negative aspects associated with pulse width modulation switching, and also reduces thermal stress by employing the controller tuning methodology. The paper presents an overall perspective of the SEPIC-based wind energy conversion system and gives an insight into the framework for evaluation of the converters.
Distributed wind generation units like the DFIG make use of the positives of the converters and of the various modes of control employed at the point of common coupling for supporting the grid with additional reactive power. The DG interconnection with the distribution network follows IEEE 1547 standards; based on them, DGs need to have a constant PF close to unity at the PCC, and to achieve it, multiple switched capacitor bank support is required [4]. Although active voltage regulation is prohibited by these standards to a certain extent, voltage regulation can be performed if there is a consensus between DG owners and the utility providers [5]. The size of the DG will decide whether to operate it in power factor control mode, voltage control mode, or voltage regulation mode [6]. Of late, power factor control mode is used for controlling the smaller units, while voltage control mode suits the larger DG units, provided the power factor is maintained at unity at the PCC in terms of reducing the THD value [7]; in the power factor control mode the unit is modeled as a PQ bus with negative current injections [8]. Figure 1 shows the block diagram of the wind energy system used. The generator is very difficult to connect with the grid directly; the converter is an interface between these two sections [9]. The DFIG, being a variable speed generator, possesses a distinct advantage over fixed speed generators with regard to its variable speed capability. Reduced power conversion paves the way for reduced losses, improved efficiency, and enhanced power factor, and in turn provides reactive power to the grid [10]. The necessity of a gearbox to increase speed, coupled with the wind turbine-generator combination, is the basic drawback in the case of the DFIG, while the PMSG, in order to achieve direct drive control, is machined with a sufficient number of poles in accordance with the requirement [11].
PROPOSED SYSTEM
In the proposed system, converter performance has been enhanced by means of reducing power consumption and by reduced switching and power losses [12]. Here the total harmonic distortion (THD) values were reduced to the prescribed limit by inserting an LC filter while obtaining the required output. The values for direct AC and converted AC have been compared and their THD values calculated [13]. In order to achieve a lower voltage level with the DC-DC converter, the synchronous converter stands out as a perfect option owing to the lower conduction loss in the diode [14]. The soft switching technique is employed in the high-side MOSFET so that the switching losses are eliminated. In addition, the resonant secondary circuit is designed to avoid switching losses [15]. This method, as suggested above, accomplishes an efficient conversion by employing an advanced design technique, and in this regard the converter performance has been analyzed [16].
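As a rough illustration of how an output LC filter of the kind mentioned above attenuates switching harmonics, the sketch below computes the filter cutoff frequency from the standard relation f_c = 1/(2π√(LC)); the component values are placeholders chosen for illustration, not values taken from this paper:

```python
import math

# Hypothetical filter components (not from the paper), chosen for illustration only
L_filter = 2.5e-3   # H
C_filter = 10e-6    # F

f_c = 1.0 / (2.0 * math.pi * math.sqrt(L_filter * C_filter))
print(f"LC filter cutoff frequency: {f_c:.0f} Hz")

# A second-order low-pass filter attenuates a harmonic at frequency f by roughly
# (f_c / f)^2 well above cutoff, which is why it suppresses high-order switching
# harmonics (and hence THD) while passing the 50/60 Hz fundamental.
f_harmonic = 5e3    # Hz, example switching-frequency harmonic
print(f"approximate attenuation at {f_harmonic/1e3:.0f} kHz: {(f_c / f_harmonic) ** 2:.3f}")
```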
MODES OF OPERATION 4.1. Mode I: DC-AC converted input
A power converter, generally employed for the conversion of electrical energy and being an electrical or electromechanical device, can be constructed as a simple transformer to convert the voltage of AC power, or it can be made into a complex system [17]. The term also refers to a class of electrical machine used to convert alternating current of one frequency into alternating current of another frequency [18]. The DC supply is converted into AC by means of inversion and then fed to the converter [19]. Owing to its voltage step-up/step-down capability and its non-inverted output (as distinguished from the conventional buck-boost converter), a SEPIC converter is used for the wind energy system [20]. The output voltage has been obtained and its THD value determined. Figure 2 displays the simulated DC-AC converted input [21].
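For reference, the ideal continuous-conduction-mode conversion ratio of a SEPIC stage is V_out = V_in · D/(1 − D), where D is the duty cycle; this textbook relation (not a result of this paper) is sketched below to show how the output can be stepped above or below the input without inverting its polarity:

```python
# Ideal SEPIC conversion ratio in continuous conduction mode (textbook relation):
#   V_out = V_in * D / (1 - D)
# D < 0.5 steps the voltage down, D > 0.5 steps it up, and the output is non-inverted.
def sepic_vout(v_in: float, duty: float) -> float:
    if not 0.0 < duty < 1.0:
        raise ValueError("duty cycle must lie strictly between 0 and 1")
    return v_in * duty / (1.0 - duty)

v_in = 48.0  # V, illustrative input voltage (not from the paper)
for duty in (0.3, 0.5, 0.7):
    print(f"D = {duty:.1f} -> V_out = {sepic_vout(v_in, duty):.1f} V")
```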
Mode II: Direct AC input
Electrical drives are used for direct-drive renewable energy systems, but the important process of high-frequency switching results in negative effects [22]. Here a rectifier is often included to convert the AC power from the mains, with galvanic isolation between the input and output modules of the converter [23]. The AC voltage is directly fed to the converter section [24]. Here also the output voltage has been obtained along with the relevant THD values. Figure 3 shows the simulation diagram of the direct AC input [25]. In this mode, the supply from wind energy is directly fed to the converter to reduce the losses when applying the direct AC input [26]. Figure 5 shows the input voltage waveform when giving the inverted DC supply. Ripple content is eliminated by providing a filter, and the converted AC is applied to the switching circuit. Figure 6 shows the output voltage waveform for the same case, which is in the form of a sinusoidal signal; the harmonic content is eliminated by means of a filter on the output side, and the THD values are considerably reduced with the filter, as mentioned in Table 1. By evaluating the converter performance, the THD value can be brought within the permissible limit, and the switching losses and heating effect can also be reduced in this configuration. Figure 7 shows the input voltage waveform when applying the direct DC supply. Figure 8 and Figure 9 show the input current and output voltage waveforms when applying the direct DC supply; here an LC filter is used to reduce the harmonics on the output side. Owing to the filter, the THD values are reduced to the required level. Figure 10 and Figure 11 show the input voltage and input current waveforms for direct wind, and Figure 12 shows the output voltage waveform when applying the direct supply from wind energy. Table 3 shows the comparison of voltage and THD values for the above three modes of operation. The THD value is almost the same for the direct AC input and the wind input.
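Since the comparison above rests on THD, a minimal sketch of how THD can be estimated from a sampled output-voltage waveform via the FFT is given below; the synthetic test signal and sampling parameters are illustrative assumptions, not data from this work:

```python
import numpy as np

def thd_percent(samples: np.ndarray, fs: float, f0: float, n_harmonics: int = 20) -> float:
    """Estimate THD (%) of a periodic signal sampled at fs with fundamental f0."""
    spectrum = np.abs(np.fft.rfft(samples)) / len(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    def mag_at(f):  # magnitude of the bin closest to frequency f
        return spectrum[np.argmin(np.abs(freqs - f))]
    fund = mag_at(f0)
    harmonics = [mag_at(k * f0) for k in range(2, n_harmonics + 1)]
    return 100.0 * np.sqrt(sum(h ** 2 for h in harmonics)) / fund

# Illustrative 50 Hz waveform with some 3rd and 5th harmonic content
fs, f0 = 20_000.0, 50.0
t = np.arange(0, 0.2, 1.0 / fs)
v = (np.sin(2 * np.pi * f0 * t)
     + 0.05 * np.sin(2 * np.pi * 3 * f0 * t)
     + 0.03 * np.sin(2 * np.pi * 5 * f0 * t))
print(f"THD = {thd_percent(v, fs, f0):.2f} %")  # ~5.8 % for this synthetic signal
```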
CONCLUSION
The converter has the ability to work from a supply voltage that is greater or less than the regulated output voltage. The converter design includes minimal active elements, a simple controller, and waveforms that provide low-noise operation. Harmonics in power systems result in increased heating of equipment and conductors, leading to misfiring of variable speed drives and torque variations in motors. Total harmonic distortion is a complex and often confusing concept to understand. From the above analysis it can be concluded that the converter performance has been enhanced and the total harmonic distortion has been reduced to a considerable extent. By reducing the THD value, the overall operation of the converter with the wind energy input has been improved considerably, and this paper also shows that a lower range of THD in power systems paves the way for a high power factor, low peak currents, and higher efficiency. Simulation results show that the output voltage is controlled as per the requirements and changes by controlling the duty cycle. | 2020-10-19T18:04:10.819Z | 2020-12-01T00:00:00.000 | {
"year": 2020,
"sha1": "e06d7926a598da42c0ea44263c44c24d5bf39aa1",
"oa_license": "CCBYSA",
"oa_url": "http://ijpeds.iaescore.com/index.php/IJPEDS/article/download/20591/13273",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "191d3a172868b4ed3c44ffcb1f7242a326f77c2e",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
231668252 | pes2o/s2orc | v3-fos-license | Asthma-prone areas modeling using a machine learning model
Nowadays, owing to population growth, increasing environmental pollution, and lifestyle changes, the number of asthmatics has significantly increased. Therefore, the purpose of our study was to determine the asthma-prone areas in Tehran, Iran considering environmental, spatial factors. Initially, we built a spatial database using 872 locations of children with asthma and 13 environmental factors affecting the disease—distance to parks and streets, rainfall, temperature, humidity, pressure, wind speed, particulate matter (PM 10 and PM 2.5), ozone (O3), sulfur dioxide (SO2), carbon monoxide (CO), and nitrogen dioxide (NO2). Subsequently, utilizing this spatial database, a random forest (RF) machine learning model, and a geographic information system, we prepared a map of asthma-prone areas. For modeling and validation, we deployed 70% and 30%, respectively, of the locations of children with asthma. The results of spatial autocorrelation and RF model showed that the criteria of distance to parks and streets as well as PM 2.5 and PM 10 had the greatest impact on asthma occurrence in the study area. Spatial autocorrelation analyses indicated that the distribution of asthma cases was not random. According to receiver operating characteristic results, the RF model had good accuracy (the area under the curve was 0.987 and 0.921, respectively, for training and testing data).
Today, with the growth of societies, diseases are increasing in terms of diversity and the number of people involved. One of the diseases that has become extremely common is asthma 1 . The immune system of people with asthma reacts more than usual to seemingly harmless substances in the habitat. The number of patients with asthma is increased by 5% every year 2 . Over the past 50 years, asthma has increased dramatically among children in modern and developed countries because of the contamination of the environment with stimulants 3 . According to the latest report of the World Health Organization (WHO), the number of asthmatics in the world is 300 million, which is estimated to increase to 400 million by 2025 4 . In 2015, about 82% of deaths in Iran were due to chronic noncommunicable diseases, of which 4% were related to respiratory diseases. The prevalence of asthma in Tehran province is higher than that in other provinces of Iran and is 12.6% in the age group 6-7 years old and 16.6% in the age group 13-14 years old. The higher prevalence of asthma in Tehran, compared to the general statistics of Iran, is due to various factors involved in asthma, including high air pollution in Tehran province 5 . Owing to the large number of asthma patients, if they were untreated and uncontrolled, it could lead to a serious problem for public health 5 .
There are several factors involved in the development and exacerbation of this disease, which vary depending on the type of geography, environmental conditions, and lifestyle of individuals. Identifying allergens and preventing exposure to allergens is the best way to prevent allergies 6 . Since an important part of these factors is related to the human environment, the discovery of environmental factors affecting the prevalence of asthma can play a significant role in reducing its effects. Therefore, by collecting appropriate information about the living environment of individuals, the role of various environmental factors in the occurrence and exacerbation of this disease can be measured 7 .
The technology of geographic information system (GIS) is particularly useful in assessing the relationship between disease occurrence and environmental quality. GIS can be applied to process health data, analyze geographical distribution, and prepare a disease prediction map, surveillance, and epidemic management. Locationbased analysis can be effective in the conduct of epidemiological study of asthma risk factors (exposure), the identification of areas prone to asthma, and the prevention and management of the disease 8 . So far, many studies have implemented GIS to analyze asthma spatially. Hashimoto et al. investigated the effect of climate on emergency patients with asthma attacks in Tokyo, Japan 9 . According to their results, high pressure, humidity, and temperature have a significant positive correlation with asthma. Zanolin et al. found in several Italian cities that the prevalence of asthma and its symptoms increased with decreasing latitude and increasing average annual 16 . For this purpose, the spatial analysis of hot spots, least squares method, and weighted geographical regression were used to map high-risk areas for asthma. Prediction is the process of estimating unknown situations. Forecasting provides an estimation of future events and can turn past experiences into predicting future events 17 . Given that we face a large number of effective criteria and disease data in predicting areas prone to asthma, big data are discussed. A good tool for big data analysis is machine learning. The main purpose of machine learning is to better understand the data and discover the relationships between dependent and independent variables and ultimately estimate a value 18 . Although machine learning models require more observational data to learn, they are faster and more efficient than traditional methods and have fewer limitations by some assumptions 19 . Machine learning models often perform better in various areas of environmental research in terms of accuracy, speed, and computational cost 20 . Random forest (RF) is one of the machine learning models that has been considered in environmental modeling in recent years owing to its simplicity, robustness, and capacity to deal with complex data 21 . According to the authors' knowledge, although the RF model has not been implemented to assess areas susceptible to asthma, its good performance has been proved in other environmental fields, such as groundwater potential 33 , groundwater hardness 22 , flood risk 23 , and PM 10 risk 19 . Therefore, the purpose of this study was to map the areas prone to asthma using the RF model and environmental factors in Tehran, Iran. The innovation of the present study is the application of RF machine learning modeling in combination with GIS to determine asthma-prone areas by considering environmental factors affecting asthma.
Methodology
This research was conducted in five steps. In the first step, a spatial database was created using the location of children with asthma and 13 environmental factors affecting asthma. In the second step, using the frequency ratio (FR) model, the spatial relationship between asthmatics and environmental factors was determined. In the third step, the spatial autocorrelation of asthma incidence was examined. In the fourth step, the RF model was deployed to determine the asthma-prone areas. In the last step, modeling was evaluated using the receiver operating characteristic (ROC) curve and sensitivity analysis.
Study area.
Tehran is the capital city of Iran and has a population of 8,693,706; in terms of population, it ranks first in West Asia and 24th globally. Tehran has an area of 730 km² and is located between longitudes 51°17′ E and 51°33′ E and latitudes 35°36′ N and 35°44′ N. Its altitude ranges from 900 to 1800 m above sea level, decreasing from north to south. Air pollution is one of the most important environmental problems in Tehran; it derives from geographical factors such as the enclosing effect of the surrounding mountains, from vehicles such as cars and motorcycles, from domestic fuel burning, and from industrial emissions. The location of Tehran is shown in Fig. 1.
Spatial database.
In the first step, independent and dependent datasets were used to create a spatial database. The dependent data comprised the locations of asthmatic children in Tehran in 2019, obtained from the hospital information system of one of the largest centers providing medical services for respiratory diseases (872 cases). We used 70% of the asthma location data (611 cases) for modeling and 30% (261 cases) for evaluation (see Fig. 1). Environmental factors affecting asthma were identified according to WHO reports and previous research. These factors include air pollution parameters (O 3 , CO, NO 2 , SO 2 , PM 10 , and PM 2.5 ), meteorological parameters (rainfall, temperature, humidity, pressure, and wind speed), distance to streets, and distance to parks. Air pollution data were prepared using 23 pollution measuring stations of the Tehran Air Pollution Control Company, taking the annual averages of these parameters over the period 2009-2019. Maps of the meteorological parameters were prepared from the annual averages recorded at 12 meteorological stations in Tehran province from 2009 to 2019. The Kriging interpolation method was applied in the ArcGIS 10.3 environment to map the air pollution and meteorological parameters. The distance-to-street and distance-to-park criteria were derived from the land use map of Tehran. The environmental factors affecting asthma are shown in Fig. 2.
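The study interpolated station measurements with kriging in ArcGIS; purely as an open-source illustration of the same step, the sketch below interpolates station values onto a regular grid with ordinary kriging in Python. The pykrige package, the spherical variogram, and the synthetic station data are assumptions for the example, not part of the original workflow.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

# Hypothetical station data: x/y coordinates (km) and annual-mean PM2.5 values.
rng = np.random.default_rng(0)
station_x = rng.uniform(0, 30, 23)          # 23 stations, as in the paper
station_y = rng.uniform(0, 20, 23)
station_pm25 = 25 + 0.4 * station_x + rng.normal(0, 2, 23)

# Ordinary kriging with a spherical variogram model (assumed choice).
ok = OrdinaryKriging(station_x, station_y, station_pm25,
                     variogram_model="spherical", verbose=False)

# Regular grid covering the study area; 'grid' mode returns the interpolated
# surface and its kriging variance.
grid_x = np.arange(0.0, 30.0, 1.0)
grid_y = np.arange(0.0, 20.0, 1.0)
pm25_surface, pm25_variance = ok.execute("grid", grid_x, grid_y)

print(pm25_surface.shape)  # (len(grid_y), len(grid_x))
```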
Spatial autocorrelation analysis.
In environmental studies, the data are often not independent, and their dependence stems from their locations in the study space 24 . The main assumption of most common statistical methods is data independence. Owing to the correlation and spatial effects between these types of data, this assumption does not actually hold and the observations are interdependent, so conventional statistical methods are not suitable for studying them 25 . Hence, geostatistical methods are the suitable option. To model events such as diseases, we first need to examine the spatial autocorrelation between their occurrences and determine which spatial pattern (random, dispersed, or clustered) the event follows in the region 26 . In spatial autocorrelation there are two approaches: the spatial structure and the structural function. In the spatial-structure approach, the spatial pattern of the data is studied; here, we utilized the Moran's I and Getis-Ord indexes for this purpose. The structural-function approach addresses the spatial dependence of the data; it uses the semivariance to measure the spatial dependence between two observations as a function of the distance between them. The semivariogram is a graph of how the semivariance changes as the distance between observations changes 27 .
Moran's I index.
This index is one of the tools used to study spatial autocorrelation between spatial data. For a dataset, Moran's I lies between −1 and +1. If the Moran's I value is higher than zero, the spatial autocorrelation is positive; if it is lower than zero, it is negative; and if it is close to zero, no spatial autocorrelation exists 28 . The Moran's I index is calculated using Eq. (1):

$$I = \frac{N}{\sum_{i}\sum_{j} w_{ij}} \cdot \frac{\sum_{i}\sum_{j} w_{ij}\left(x_i - \bar{x}\right)\left(x_j - \bar{x}\right)}{\sum_{i}\left(x_i - \bar{x}\right)^2} \quad (1)$$

where x_i and x_j are the numbers of asthma cases in polygons i and j, respectively, x̄ is the average number of asthma cases, N is the number of polygons (spatial units), and w_ij is the spatial weight between polygons i and j. The local Moran's I index investigates the relation between each point and its neighbors, for which four cases (high-high, high-low, low-high, and low-low) might occur 28 .
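As an illustration of Eq. (1), the short sketch below computes global Moran's I for a toy set of polygon counts with a row-standardised binary contiguity matrix; the counts and neighbour structure are invented purely for demonstration.

```python
import numpy as np

def morans_i(x, w):
    """Global Moran's I for values x and spatial weight matrix w."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    z = x - x.mean()                      # deviations from the mean
    num = np.sum(w * np.outer(z, z))      # sum_i sum_j w_ij * z_i * z_j
    den = np.sum(z ** 2)
    return (n / w.sum()) * (num / den)

# Toy example: 4 polygons with asthma counts and a simple neighbour structure.
counts = [12, 10, 3, 2]
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
w = w / w.sum(axis=1, keepdims=True)      # row-standardise the weights

print(round(morans_i(counts, w), 3))      # positive value -> clustering
```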
Getis-Ord Gi* index.
This index is used to examine the clustering of very large or very small values of an event, identifying hot spots (high-risk areas) and cold spots (low-risk areas). Positive Z-score values indicate hot spots and negative Z-score values indicate cold spots 29 . The Getis-Ord Gi* statistic is calculated from Eq. (2); the standardised (Z-score) form commonly reported is

$$G_i^* = \frac{\sum_{j} w_{ij} x_j - \bar{x}\sum_{j} w_{ij}}{S\sqrt{\dfrac{N\sum_{j} w_{ij}^2 - \left(\sum_{j} w_{ij}\right)^2}{N-1}}}, \qquad S = \sqrt{\frac{\sum_{j} x_j^2}{N} - \bar{x}^2} \quad (2)$$

where x_i and x_j are the numbers of asthma cases in polygons i and j, respectively, x̄ and S are their mean and standard deviation, N is the number of polygons, and w_ij is the spatial weight between polygons i and j.
Semivariogram. The semivariogram is used to detect the spatial coherence of a variable. Spatial coherence means that adjacent samples are interdependent up to a certain distance, and it is assumed that this dependence between samples can be represented by a mathematical model called the semivariogram 30 . The semivariogram is calculated using Eq. (3):

$$\gamma(h) = \frac{1}{2N(h)} \sum_{i=1}^{N(h)} \left[ z\left(x_i\right) - z\left(x_i + h\right) \right]^2 \quad (3)$$

where γ(h) is the semivariance at lag distance h, z(x_i) is the observed value at location x_i, and N(h) is the number of pairs of observations separated by h. The semivariogram has three parameters, range, sill, and nugget, which are defined as follows 31 . Range (or radius of influence): the distance at which the variogram reaches a constant value and approaches a horizontal line. Sill: the constant value that the variogram reaches at the range; it equals the total variance of the samples used to compute the variogram. Nugget: the value of the variogram at the origin, i.e., for h = 0; ideally, its value should be zero.
To determine the strength of the spatial correlation, the spatial dependence (SD) index defined in Eq. (4) is used:

$$SD = \frac{C_0}{C_0 + C} \times 100 \quad (4)$$

where C_0 is the nugget and C_0 + C is the sill. The SD index is examined in three cases: if it is less than 25%, the spatial correlation is strong; between 25 and 75%, moderate; and more than 75%, weak 32 .
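To make Eqs. (3) and (4) concrete, the sketch below computes an empirical semivariogram from synthetic point data, reads off crude nugget and sill values, and derives the SD index; the data, the lag binning, and the way the nugget and sill are picked are all illustrative simplifications.

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
coords = rng.uniform(0, 10, size=(200, 2))                # synthetic sample locations
values = np.sin(coords[:, 0]) + rng.normal(0, 0.3, 200)   # spatially structured values

# Pairwise distances and squared value differences (the terms inside Eq. 3).
dists = pdist(coords)
sqdiff = pdist(values[:, None], metric="sqeuclidean")

# Bin pairs by lag distance and average: gamma(h) = mean(squared diff) / 2 per bin.
bins = np.linspace(0, dists.max() / 2, 12)
gammas = []
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (dists >= lo) & (dists < hi)
    if mask.any():
        gammas.append(sqdiff[mask].mean() / 2)

# Crude nugget (gamma near h = 0) and sill (plateau value), then SD index (Eq. 4).
nugget = gammas[0]
sill = np.mean(gammas[-3:])
sd_index = 100 * nugget / sill
print(f"nugget={nugget:.3f}, sill={sill:.3f}, SD={sd_index:.1f}%")
```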
The FR model. In the FR model, the set of training points is introduced as the dependent variable, whereas the parameters affecting asthma are introduced as independent variables 33 . This model calculates the probability of the occurrence of asthma in each class of every criterion. The effect of each class of each independent variable is determined using Eq. (5) 34 :

$$FR_i = \frac{F_i}{P_i} \quad (5)$$

where FR_i is the frequency ratio (the effect) of class i of a given parameter, F_i is the percentage of training points located in class i, and P_i is the percentage of the pixels of class i in the entire study area.
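A minimal illustration of Eq. (5): given a classified raster for one criterion and the class of each training point, the frequency ratio per class is simply the share of training points divided by the share of pixels. The class labels and counts below are invented for demonstration.

```python
import numpy as np

# Hypothetical classified criterion raster (e.g. distance-to-street classes 0..4)
# and the class of each asthma training point.
raster_classes = np.random.default_rng(2).integers(0, 5, size=100_000)
point_classes = np.random.default_rng(3).integers(0, 5, size=611)

classes = np.arange(5)
pixel_share = np.array([(raster_classes == c).mean() for c in classes])   # P_i
point_share = np.array([(point_classes == c).mean() for c in classes])    # F_i

fr = point_share / pixel_share                                            # Eq. (5)
for c, w in zip(classes, fr):
    print(f"class {c}: FR = {w:.2f}")   # FR > 1 -> class favours asthma occurrence
```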
The RF model. The RF model was proposed by Breiman as an ensemble learning method for regression and classification problems based on growing decision trees 35 . An RF is a collection of unpruned trees obtained with a recursive partitioning algorithm 36 . An RF is constructed from a set of trees grown on N independent observations: several bootstrap samples of the data are drawn, and a random subset of the input variables is considered in the construction of each tree. With the bootstrap method, N samples are drawn with replacement from the primary observational dataset; about one third of the data is not selected during this sampling and is set aside as the out-of-bag sample. After all the trees have been constructed, the test data are passed down every tree, each tree produces an output for the input vector, and the final prediction is obtained by averaging these outputs 33 .
Validation. Here, to evaluate the modeling of asthma-prone areas, the ROC index and the area under the curve (AUC), the root mean square error (RMSE) and mean absolute error (MAE), and a sensitivity analysis were used.
ROC curve. The ROC curve plots the sensitivity (true-positive rate) on the y-axis against 1 − specificity (false-positive rate) on the x-axis. These quantities are calculated through Eqs. (6) and (7), which are obtained from the confusion matrix by varying the classification threshold between zero and one 37 :

$$\text{Sensitivity} = \frac{TP}{TP + FN} \quad (6)$$

$$\text{Specificity} = \frac{TN}{TN + FP} \quad (7)$$

where TP denotes the pixels that are correctly assigned to the desired category, TN the pixels that are correctly identified as not belonging to the category, FP the pixels that are incorrectly assigned to the category, and FN the pixels that are incorrectly excluded from it 33 .
The area below the ROC curve is called AUC. Its value varies between 0.5 and 1; the closer it is to one, the higher the modeling efficiency is 34 .
RMSE and MAE indexes. The prediction error, as a quantitative measure, is the difference between the observed and estimated values and is used to assess the accuracy of the model. Here, the RMSE and MAE indices in Eqs. (8) and (9) were used to evaluate the modeling accuracy 33 :

$$RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2} \quad (8)$$

$$MAE = \frac{1}{n}\sum_{i=1}^{n}\left| y_i - \hat{y}_i \right| \quad (9)$$

where y_i and ŷ_i are the observed and predicted values and n is the number of observations.
Sensitivity analysis. Sensitivity analysis shows how changes in the modeling inputs affect the modeling output: by eliminating each of the effective criteria in turn, the necessity of its presence is determined 38 . Sensitivity analysis is conducted using Eq. (10):

$$RD = \frac{100 \times \left(AUC_{all} - AUC_i\right)}{AUC_{all}} \quad (10)$$

where RD is the relative decrease index, AUC_all is the AUC value of the training data when all parameters are used, and AUC_i is the AUC value for the training data when parameter i is omitted 39 .
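The sketch below illustrates the leave-one-variable-out procedure behind Eq. (10) with a random forest on synthetic data: the model is refit with each predictor removed and the relative decrease in training AUC is reported. The data, predictor names, and hyperparameters are placeholders, not values from the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n = 1000
X = rng.normal(size=(n, 4))                       # e.g. distance_park, distance_street, PM2.5, PM10
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, n) > 0).astype(int)
names = ["distance_park", "distance_street", "pm25", "pm10"]

def train_auc(cols):
    """Fit an RF on the given columns and return the training AUC."""
    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:, cols], y)
    return roc_auc_score(y, rf.predict_proba(X[:, cols])[:, 1])

auc_all = train_auc(list(range(X.shape[1])))
for i, name in enumerate(names):
    cols = [j for j in range(X.shape[1]) if j != i]
    rd = 100 * (auc_all - train_auc(cols)) / auc_all      # Eq. (10)
    print(f"{name}: RD = {rd:.2f}%")
```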
Results
Spatial autocorrelation result. The results for the Moran's I and Getis-Ord Gi* indexes are presented in Table 1. According to them, the distribution of asthma in the study area was clustered. The p-values are small, showing that the autocorrelation tests are statistically significant: the null hypothesis of spatial randomness is rejected, and the distribution of the disease is not random. Spatial clusters identified using the Moran's I and Getis-Ord Gi* indexes are shown in Figs. 3 and 4, respectively. The high-high and hot spot areas indicate areas where disease clusters are present, whereas the low-low and cold spot areas indicate areas where disease-free clusters are present. The results of the semivariogram analysis are shown in Fig. 5 and Table 2. The nugget is highest for SO 2 , CO, O 3 , and wind speed, and lowest for distance to street, distance to park, PM 10 , PM 2.5 , and rainfall. The range is highest for wind speed, CO, and SO 2 , and lowest for distance to street, humidity, and temperature. The sill is highest for SO 2 , pressure, and O 3 , and lowest for rainfall, wind speed, and PM 2.5 . The SD index is lowest for distance to street, distance to park, PM 2.5 , PM 10 , and rainfall, and highest for SO 2 , CO, and wind speed.
Result of FR model.
Figure 6 demonstrates the spatial relationship between asthma and the environmental criteria affecting it. For distance to street, the highest FR weight is found in the 100-200 m class, and at shorter distances there is a positive correlation between proximity to streets and the occurrence of asthma. For the PM 10 criterion, the highest FR is found in the class greater than 93.24, and the probability of asthma increases as PM 10 increases. The results for PM 2.5 imply that asthma is more likely to occur in the middle classes of this criterion. For CO, the FR value, and thus the probability of asthma, increases as this criterion increases. For O 3 , the FR value is higher at lower values of this criterion, which therefore seems to have a negative correlation with the occurrence of asthma in the study area. For SO 2 , the FR value, as well as the probability of asthma, increases as the values of this criterion increase. For NO 2 , asthma is more likely to occur in the middle classes of this criterion. For the pressure criterion, the highest FR value is found in the class 1009.69-1010.17, and increasing pressure is directly related to the occurrence of asthma. The FR values for the wind speed criterion imply that asthma is more likely to occur at lower wind speeds; however, its effect on the incidence of asthma is not apparent. For the humidity criterion, the highest FR value is found in the class 40.48-41.59. For the temperature criterion, the middle classes have higher FR values. For the rainfall criterion, the highest FR is found in the class 303.98-338.15. The results for distance to parks suggest that the probability of asthma increases as the distance to parks increases, so this parameter has a positive correlation with asthma.
Result of RF model. To model the asthma-prone areas with the RF model, the weights obtained from the FR model for each criterion, together with the locations of asthmatic patients, were used. Implementing the RF model requires, besides the places where asthma occurred, places where asthma did not occur; for this purpose, non-asthma locations (value 0) were randomly generated in the same number as the asthma locations (value 1), and together they were considered the target data. The spatial database, comprising the weights obtained from the FR model for each of the 13 environmental criteria as well as the locations of occurrence and non-occurrence of asthma, formed the input of the RF model. From the data, 70% (604 asthma locations) were used as training data and 30% (268 asthma locations) as test data. The RF model was implemented in the Waikato Environment for Knowledge Analysis (Weka) software. The fit of the training and test data to the target data is shown in Fig. 7, and the performance of the RF model is presented in Table 3. The importance of each criterion for modeling asthma-prone areas, obtained from the RF model, is shown in Fig. 8. According to the results, distance to park, distance to street, PM 2.5, and PM 10 are the most important criteria in modeling asthma-prone areas, whereas pressure, wind speed, and CO are the least important.
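The paper fits the RF in Weka; as a rough open-source analogue, the sketch below trains a random forest classifier on presence/pseudo-absence points, evaluates AUC on a held-out split, and extracts variable importances. All data, feature names, and settings here are illustrative assumptions, not values taken from the study.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n = 1744  # hypothetical: 872 presence + 872 pseudo-absence points, echoing the paper's design
features = ["distance_park", "distance_street", "pm25", "pm10", "no2", "so2"]
X = pd.DataFrame(rng.normal(size=(n, len(features))), columns=features)  # stand-ins for FR weights

# Synthetic presence/absence labels with a built-in signal (purely illustrative).
signal = -1.2 * X["distance_park"] - 0.8 * X["distance_street"] + 0.6 * X["pm25"]
y = (signal + rng.normal(0, 1, n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)

print("train AUC:", round(roc_auc_score(y_tr, rf.predict_proba(X_tr)[:, 1]), 3))
print("test  AUC:", round(roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]), 3))

# Variable importances (analogue of Fig. 8); the fitted model would then be
# applied to every grid cell of the study area to map susceptibility classes.
print(pd.Series(rf.feature_importances_, index=features).sort_values(ascending=False))
```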
After fitting the RF model to the training data, the fitted model was applied to the entire study area. For this purpose, the output results were transferred to the ArcGIS 10.3 software and the final map of asthma-prone areas in Tehran was prepared. Using the natural breaks classification method, the map was divided into five classes ranging from very low risk to very high risk (see Fig. 9). According to the results, the central and southeastern regions of Tehran are at higher risk than the other regions.
Validation of final map.
To evaluate the modeling results, 30% of the locations of asthmatic patients were used. To validate the final map, asthma locations (value 1, 268 locations) and randomly generated non-asthma locations (value 0, 268 locations) were used. On this basis, the AUC values of the RF model in mapping asthma-prone areas are 0.987 for the training data and 0.921 for the testing data.
The results of sensitivity analysis using the RD index are shown in Table 4 and Fig. 10. According to them, the criteria of distance to park and distance to street are most important in modeling. These two criteria increase the modeling accuracy by 2.83% and 2.26%, respectively. The rainfall criterion is least important in modeling, thereby having no effect on the accuracy of modeling.
Discussion
The results of spatial autocorrelation indexes in the study area indicated that the distribution of asthma was not random and the occurrence of the disease was affected by environmental conditions. According to the results of semivariogram between the criteria affecting asthma, the criteria of distance to park, distance to street, PM 2.5, PM 10, and rainfall had the highest spatial dependence, while SO 2 , CO, and O 3 criteria had the least spatial dependence. According to the results of the range parameter, the criteria of distance to street, temperature, and humidity had the highest spatial variability, while the criteria of wind speed, CO, and SO 2 had the least spatial variability. The results of autocorrelation showed that all the criteria affecting asthma had a strong spatial correlation with asthma; among them, the criteria of distance to park, distance to street, PM 2.5, and PM 10 had a stronger spatial correlation.
According to the results of the FR model, asthma was more likely to occur at shorter distances to streets. Based on the results of the FR model, the spatial correlation, and the RF model, the criterion of distance to street had a great impact on the occurrence of asthma in the study area. This is due to the traffic in the streets and the proximity of industrial centers to the streets 40 . The spatial relationship between the PM 10 criterion and the probability of asthma attacks showed that the latter increased as the former increased. As PM 2.5 increased, the FR value and the likelihood of asthma attacks increased as well. Based on the results of the FR, spatial autocorrelation, and RF models, among the air pollution criteria PM 2.5 and PM 10 had the strongest spatial relationship with the probability of asthma attacks in the study area. PM 2.5 and PM 10 generally result from fossil fuel combustion (oil, gas, and coal), vehicle traffic, metal smelting and processing, and power plants. PM 2.5 particles stay longer in the air and penetrate deeper into the lungs 41 . The results for the CO criterion showed that, as it increased, the FR value and the probability of asthma attacks increased. Transportation and the movement of vehicles produce and emit more than 70% of carbon monoxide. This gas interferes with the transport of oxygen in the blood, leading to impaired cellular respiration 42 . The O 3 criterion had an inverse relationship with the FR value, i.e., asthma attacks were more likely to occur at lower values of this parameter. Ozone is generated at an altitude of about 30 km above the ground and reaches the lower layers of the atmosphere because of severe climatic changes 43 . It seems that, in the absence of severe climatic changes in the study area, this criterion could not play any role in modeling asthma. As the SO 2 values increased, the rate of asthma attacks increased. Sulfur dioxide has a higher solubility in water than other pollutants, and therefore has a high tendency to be absorbed in the respiratory tract when inhaled 44 .
According to the results of the NO 2 criterion, in its middle classes, the probability of asthma attacks was higher. As air pressure increased, the likelihood of asthma attacks in the study area increased. Changes in air pressure result in storms and climate change and can indirectly affect air pollutants and asthma. The results of wind speed criterion showed that it was inversely related to the FR value and incidence of asthma attacks; therefore, it was not effective in modeling asthma in the study area. Strong winds are able to disperse pollutants and increase dust; in the study area, however, this criterion did not have much effect on modeling areas prone to asthma because of the low wind speed. Humidity had an indirect effect on the occurrence of asthma attacks; by increasing this criterion, secondary pollutants such as sulfate and nitrate increased. The results of humidity criterion showed that the probability of asthma attacks in the study area was higher at a humidity of 40%. Rainfall criterion had an inverse relationship with the occurrence of asthma attacks; the concentration of pollutants and thus the associated chemical reactions in the atmosphere decreased as the rainfall increased. The results of rainfall showed that asthma was more likely to occur in the middle classes of this criterion (303-340 mm). The spatial relationship between the temperature and the occurrence of asthma showed that asthma was more likely to occur at 15 ℃.
In general, as the temperature rises, photochemical reactions and ozone concentrations increase. The results for the distance to park showed that this criterion had a strong spatial relationship with the occurrence of asthma in the study area. Proximity to city parks has health benefits associated with physical activity, social cohesion, and stress reduction 6 . The results showed that the probability of asthma attacks in the study area increased as the distance to park increased. The results also showed that the RF model had good accuracy in modeling asthma in the study area. One advantage of the RF model is that averaging the outputs of several decision trees, each built on a random subset of the features, helps to prevent overfitting. Feature scaling was not needed for the RF model, because accuracy remained at a good level even without scaling the data. Even in the absence of a large amount of data, the RF model can be highly accurate 33 .
The most basic principle of fighting diseases is to change people's lifestyles. In this regard, GIS could deliver health warnings to people at risk. By identifying where the disease is spreading, people become more aware of their surroundings and better understand safety issues. Furthermore, the identification of disease centers could reduce health costs and expenses.
Conclusions
The purpose of this study was to map the areas prone to asthma in Tehran, Iran, using an RF model. The results of the research are as follows: 1. The results of spatial autocorrelation showed that the criteria of distance to park, distance to street, PM 2.5, and PM 10 had a strong spatial correlation with asthma. 2. Based on the FR model results, the occurrence of asthma in the study area was higher for a distance to street of 100-200 m, a PM 10 greater than 93.24, and a distance to park between 300 and 400 m. 3. Based on the results of the RF model, the criteria of distance to park, distance to street, PM 2.5, and PM 10 had the greatest impact on the modeling of asthma-prone areas. 4. The results showed a good accuracy of the RF model in modeling areas prone to asthma (AUC equal to 0.987 and 0.921 for training and testing, respectively). 5. Deploying disease risk maps using GIS could help prevent, manage, and control diseases. | 2021-01-22T14:28:43.031Z | 2021-01-21T00:00:00.000 | {
"year": 2021,
"sha1": "afd09f7cce5d44147e07eb1927cbfddf7946321d",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-021-81147-1.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d127f892e72e339aa404548a039c44d6c4a44c6a",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
4978930 | pes2o/s2orc | v3-fos-license | Socio-demographic determinants of women’s satisfaction with prenatal and delivery care services in Italy
Abstract Objective The aim of this study was to examine the extent to which socio-demographic variables affect women’s satisfaction regarding antenatal and perinatal care. Design To take into account the role of the context in shaping women’s satisfaction, we used multilevel models, with women at the lower level, and the health districts of residence, or the hospitals in which the delivery took place, at the higher level. Setting Tuscany (Italy) Participants The study is based on a representative survey focused on the satisfaction and experience of 4598 new mothers who gave birth in one of the 25 hospitals in Tuscany (Italy) in 2012. Main Outcome Measures Women’s overall satisfaction in the prenatal period and their overall satisfaction during hospitalization for delivery. Results Regarding pregnancy, women’s satisfaction increased with age, and was generally higher among foreign women coming from non-Western countries and among highly educated women. Regarding delivery, age proved insignificant, whereas citizenship and education maintained the same association with satisfaction. Contrary to our expectations, the number of previous pregnancies turned out to be insignificant. Conclusions Our findings suggest that the quality of maternity services was perceived differently in different socio-demographic groups: women’s expectations affected satisfaction, but in different ways, in various socio-demographic groups, both during pregnancy and at delivery. Keeping these socio-demographic factors into account in the analysis of satisfaction may help organisations to identify areas where pregnancy and delivery services can be better targeted and where increasing awareness among professionals in their everyday practice is most needed.
Introduction
Before this century, prenatal care for a woman and her unborn child was not subject to rigorous scientific evaluation in most high-income countries. After a phase of increasing medicalization, the World Health Organization defined a model of prenatal care with a set of guidelines and recommendations for decision-makers and health care providers, urging them to promote patients' empowerment [1].
Despite this recommendation, few large-scale surveys have focused on pregnant women's needs and their assessment of the care provided as the starting point of a patient-centred approach in providing health care in high-income countries. In fact, typically women's assessment has been investigated with other means: ethnographic research, qualitative interviews or small descriptive studies [2][3][4][5]. The national survey on maternity care regularly conducted in the UK as well the 2008 Australian survey are rare exceptions [6][7][8]; whereas population-level surveys on perceived quality regarding maternal and newborn health are widespread in low-income countries [9][10][11][12][13].
Patients' satisfaction and experience are important measures of the quality of health services, and in the last few years they have been routinely used together with clinical indicators in both high-income and low-income countries for continuous quality improvement [14,15]. Patient satisfaction is a complex and multidimensional measure, affected by a number of clinical and technical factors, but also by expectations and personal characteristics [16][17][18][19][20]. With regard to women's satisfaction with maternity services, previous studies have shown its importance not only for the health and well-being of both mothers and children, but also for service providers and decision-makers [21][22][23][24][25][26][27].
The factors that have been shown to matter are, for instance, respect for the patient and her dignity, emotional support by the staff, contact with friends and family, information and guidelines, physical comfort, trust in treatment providers, autonomy and participation in decision-making; and confidentiality [28][29][30][31][32][33][34]. Of course, expectations play a major role in this sphere: when services meet women's expectations of care, women are usually satisfied and tend to report a higher quality of care [31]. Education is one of the most important intervening variables here [35,36]: it shapes expectations, and thus influences satisfaction: women with low education frequently report feeling alone, ignored or harassed [31].
Our research investigates the role of socio-demographic factors on women's satisfaction regarding the maternity services using data from an ad-hoc, representative survey conducted in Tuscany in 2012-13 [37]. We aim to illustrate how socio-demographic characteristics interact with expectations in influencing women's perception of quality. The survey that we used in this study is unique in Italy. We believe that it could also be used as a template for other countries because it overcomes some of the limitations that are frequently noted in the literature: the sample is large and representative, and the perceived quality of the various stages of the process (antenatal period and childbirth) is investigated separately [38]. In addition, women's satisfaction (during pregnancy and at delivery, separately [38]) can be analysed in relation to variables whose role is still controversial in the literature: for instance, maternal age, educational level, number of previous pregnancies and country of origin [6,28,30,[38][39][40][41][42][43][44], taking into account the area in which each woman lived during her pregnancy or where the delivery took place [44,45].
The women in our sample were generally satisfied with the services received in the prenatal period and during delivery, as is often the case in the evaluation of maternity wards [44]. However, some differences emerged: our paper tries to explain this variability in the light of women's expectations through the lens of their sociodemographic characteristics. In short, our research questions are: What is the relationship between women's socio-demographic characteristics (age, education, citizenship and previous pregnancies) and their satisfaction with the service they received? Which expectations matter most in 'explaining' satisfaction levels?
Analysing satisfaction measures by socio-demographic subgroups and their interaction with expectations may provide an insight for policy makers and practitioners into the areas where services need to be better targeted, and increase awareness of the sociocultural context of pregnant women and new mothers in clinical practice.
Data source
The present study was based on a representative survey conducted between October 2012 and March 2013 by the 'Management and Health Laboratory (MeS)' of the 'Scuola Superiore Sant'Anna' of Pisa and commissioned by the administrative government of Tuscany within the Performance Evaluation System of the Tuscan healthcare system [46]. All of the 4598 new mothers who participated in the survey (37.2% of those who had been contacted) were considered for the analysis of women's satisfaction at delivery, whereas for the analysis of satisfaction during the prenatal period, we excluded the 131 respondents who did not live in Tuscany (see [37] for additional details on the survey). The 1981 missing values, spread over 19 variables (about 100 cases per variable on average), were imputed using multivariate imputation by chained equations (MICE) [47]. Finally, we verified ex post that the sample of respondents was not significantly different from that of non-respondents, on the basis of the information available in the (random) sample list. Adopting the potential-outcome framework for causal inference [48], we formalised the statistical issues involved in estimating the effect of participating or not participating in the survey on women's satisfaction. Our sensitivity analyses showed that respondents did not appear to be selected in any way. This also holds for the subgroup of foreign women (see Supplementary material online for details).
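The authors used MICE for the missing covariates; purely as an illustration of the chained-equations idea (not the authors' code or software), the sketch below applies scikit-learn's IterativeImputer, a single-imputation analogue of MICE, to a toy data frame with missing values.

```python
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401  (activates the estimator)
from sklearn.impute import IterativeImputer

# Toy survey-like data with missing values (all values are invented).
df = pd.DataFrame({
    "age": [24, 31, np.nan, 38, 29, np.nan],
    "education_years": [8, 13, 11, np.nan, 16, 10],
    "n_ultrasounds": [3, np.nan, 4, 5, 2, 3],
})

# Each incomplete variable is modelled in turn from the others, cycling until
# convergence - the same chained-equations idea that underlies MICE.
imputer = IterativeImputer(max_iter=10, random_state=0)
df_imputed = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)
print(df_imputed.round(1))
```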
Response variables
In the survey, women were asked to rate their overall satisfaction with the assistance received in two different phases: in the prenatal period and during their hospitalization for delivery [49]. In both cases, women's assessment was expressed with a five-category Likert-type scale (Excellent, Good, Fair, Poor and Very poor).
Explanatory variables
Our key explanatory variables were of different types. Some were socio-demographic: age, education, citizenship and previous pregnancies. Other variables, identified by the specialists as potentially relevant [50,51], related to the women's experience and to the clinical conditions of each phase. In the analysis of the satisfaction regarding their experience of pregnancy, we considered the number of ultrasounds ('low' if below 3) and the occurrence of a pathological pregnancy, and we considered whether a preparation course for birth had been attended, the birth centre visited, and the patient duly informed about her 'path' from pregnancy to childbirth.
As for delivery, we included the type of delivery, whether it was preterm, whether it was outside or inside the health district of residence of the woman, whether inconsistent information was supplied by the personnel about breastfeeding, whether pain control was appropriate, whether the woman had felt alone during labour or delivery (the survey questionnaire did not specify whether this was caused by her partner, by lack of assistance or both), whether there had been skin-to-skin mother-to-child contact immediately after delivery, whether the woman had been with her newborn during hospitalization and whether she trusted the doctors, nurses and/or midwives. (As the questionnaire is administered shortly after birth, it seems logical to assume that women referred to the medical staff they had met on this occasion, although their general feeling towards the category probably also influenced their answers).
In both phases, we also considered the type of interview: postal questionnaire, Computer Assisted Web Interview (CAWI) or Computer Assisted Telephone Interview (CATI) ( Table 1).
Finally, in order to (partly) capture the variability among health districts or hospitals, which is another relevant factor [52,53], we included a few contextual variables in our analyses, namely ad-hoc indicators derived from the Performance Evaluation of the Tuscan healthcare system for the years 2012-13. With regard to pregnancy, we included the access rate by childbearing-age women to professional counselling in the health district and the percentage of prenatal screening in the health district; for delivery, we included the percentage of breastfeeding within 2 h from delivery in the hospital.
Analytical strategy
We estimated two separate models: one for pregnancy and one for delivery. Multilevel proportional odds models were chosen in both cases, taking into account the ordinal nature of the items, the hierarchical structure of the phenomenon, and the unbalanced number of interviews per hospital or health district (see online Supplementary material for the choice and appropriateness of the model). Women (N = 4467 in the model for pregnancy and N = 4598 in the model for childbirth) constituted the first, or lower, level of the model, and the 34 health districts (for the pregnancy evaluation) or the 25 hospitals (for the delivery evaluation) the second, or higher, level (Table 1).
This nested (multilevel) procedure enabled us to take into account the role of the health district or hospital in shaping subjective characteristics such as women's satisfaction [45]. To better appreciate the effect of first- and second-level covariates, in the estimation process we introduced them in blocks (see Models 1-3 both in Table 2, for pregnancy, and in Table 4, for the delivery phase), keeping correlation under control. Finally, we added an interaction term between women's education and the antenatal course for birth in the analysis for pregnancy, and between women's education and the evaluation of pain control in the model for delivery, to account for the unbalanced use of this service across different social classes, because non-Italian women and less-educated women, for instance, typically show lower rates of attendance [37,54]. Other potential interactions of socio-demographic covariates with experience items, which were tested in both analyses but proved insignificant, are not presented here.
The response variable was the satisfaction with services and assistance during pregnancy and, in the other model, during delivery (both with C = 5 categories). The underlying model is described by the following equation:

$$\text{logit}\left[P\left(y_{ij} \le c\right)\right] = \alpha_c + \mathbf{X}_{ij}'\boldsymbol{\beta} + \mathbf{Z}_{j}'\boldsymbol{\delta} + u_j, \qquad c = 1, \dots, C-1$$

where P(y_ij ≤ c) is the cumulative probability up to the cth category for woman i in cluster j (i.e. health district or hospital), α_c is the specific threshold for the cth cumulative probability, X_ij is the vector of first-level covariates (some interaction terms included) and Z_j the vector of second-level covariates. Finally, u_j is the random effect for cluster j, which is assumed to be Normally distributed [55]. The data were analysed using STATA/IC 13.1.
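A single-level analogue of this proportional-odds specification can be sketched in Python with statsmodels' OrderedModel, as shown below. This simplified version omits the cluster-level random intercept u_j that the authors fitted in Stata, and the data-generating step is invented purely for illustration.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.integers(18, 45, n),
    "high_education": rng.integers(0, 2, n),
})
# Ordered five-category satisfaction outcome (toy data-generating process).
latent = 0.03 * df["age"] + 0.5 * df["high_education"] + rng.logistic(size=n)
df["satisfaction"] = pd.cut(latent, bins=5, labels=False)

# Proportional-odds (cumulative logit) model; the thresholds play the role of alpha_c.
mod = OrderedModel(df["satisfaction"], df[["age", "high_education"]], distr="logit")
res = mod.fit(method="bfgs", disp=False)
print(res.summary())
```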
Results
Assessing satisfaction during pregnancy
Table 2 shows the model results for women's satisfaction with the services and the assistance received during pregnancy. Women's satisfaction increased with age, but not linearly. While women coming from non-Western countries were usually more satisfied than Italian women, the opposite was true for women coming from Western countries (but not significantly so in Models 2 and 3). Women's satisfaction was higher for highly educated women, while the number of previous pregnancies apparently played no role.
Among women's experience and clinical covariates, only those concerning the presentation of the birth path and the antenatal course were significant, even if moderated by education (i.e. highly educated women attended antenatal classes more often and were more satisfied by the course than their less educated counterparts; see Table 3). Women who attended the course and found it useful were generally more satisfied with prenatal services; if, instead, they had not liked the course, they presumably considered it a waste of time, and were even markedly less satisfied than those who had not participated at all.
Among the second-level covariates, both indicators-reflecting the diffusion and the proactivity of prenatal services throughout the districts-proved non-significant. Taking second-level random effects into account, the differences in the predicted, conditional probabilities across local authority districts were not large, because satisfaction was high in all the health districts. Instead, the predicted probabilities varied significantly across the different values of the socio-demographic covariates. This would seem to imply that personal traits influenced women's satisfaction more than the health district of residence (results available upon request).
Assessing satisfaction during delivery
Table 4 reports the results for women's satisfaction with the services and the assistance at delivery. In this case, age was not associated with higher satisfaction, whereas citizenship and education proved significant, as before: foreign, non-Western women as well as highly educated women were the most satisfied. The number of former pregnancies proved, once again, not significant.
Women's experience and clinical covariates proved almost always significant. Having a Caesarean section, for instance, was negatively associated with satisfaction, compared with a vaginal delivery. Lack of or inconsistent information about breastfeeding as well as insufficient pain control, the feeling of loneliness during labour or at delivery, and the privation of skin-to-skin contact after delivery were all factors that lowered women's satisfaction. At the same time, confidence in doctors, nurses and midwives turned out to be important variables for a higher level of satisfaction. Women's experience and health during hospitalization and delivery appeared more relevant for their satisfaction than was the case during pregnancy, but education played an important mediating role. For example, better-educated women were less satisfied if they had not had appropriate pain control: in short, highly educated women appeared to be a more demanding group. They tended to show appreciation if their expectations were fulfilled, but expressed criticism in the opposite case.
Looking at the hospital level variables, the percentage of women who breastfed no later than 2 h after delivery in the hospital was not significant. Taking into account second-level random effects, a bigger variability emerged at the hospital level in this analysis than in the case of pregnancy (variance = 0.09 for delivery against 0.02 for pregnancy- Table 2). Thus, the predicted probabilities for the satisfaction varied more among hospitals than among health districts (results available upon request).
Discussion
In our study, we addressed women's satisfaction during pregnancy and at delivery, focusing on the association between women's satisfaction and some of their socio-demographic characteristics: educational attainment, age, citizenship and the number of previous pregnancies. According to previous studies on this topic, the link between women's satisfaction and their socio-demographic characteristics was not always straightforward [23,38,40,41]: we tried to explain this controversy through the intermediate role played by women's expectations.
Our results confirm the importance of socio-demographic factors in explaining women's satisfaction, both for the prenatal period and during hospitalization for delivery. Relatively older women were, all in all, more satisfied than others with the care received during pregnancy, but not at delivery, as found in other studies [39,44]. This appears to be due to the special attention that the Tuscany region devotes to pregnant women aged 35 and older, who, for example, receive prenatal exams for free: as for age, women's satisfaction during pregnancy is driven by actual differences in the care received. Apart from this, however, age is scarcely related to satisfaction, if at all, and the same holds for the number of previous pregnancies [44,56]. A possible explanation is that patient education with regard to pregnancy and childbirth-which is supposed to be higher for multiparous women-may control expectations, which in turn have a lower
influence on perceived quality. Instead, both citizenship and education are significant in both phases. Women from non-Western countries are more satisfied than Italians, even if they benefited less from antenatal services. Women from low-income countries presumably have lower expectations because of their previous experience of healthcare in their home country, and therefore they appreciate what they are offered [57] and tend to report higher levels of satisfaction [58]. Satisfaction is higher for the most educated women in both models (pregnancy and delivery), but women's satisfaction among the highly educated very much depends on the fulfilment of their expectations, as the interaction terms show, which is in line with what is normally found in the specialised literature [36,42]. Compared to the influence of individual socio-demographic characteristics, the role of the context (i.e. the health district or the hospital) in explaining women's satisfaction is more limited, at least in Tuscany, but still significant. Two main methodological points emerge from our analysis. First, the various phases of the process (prenatal and delivery) must be analysed separately, because results may differ, also in the association between satisfaction and the socio-demographic characteristics of the woman. Second, the importance of the context must be emphasised, be it the district where the woman lived or the hospital where the delivery took place. In both cases, this contextual level needs to be modelled properly, to avoid the risk of bias in the estimation of what determines women's satisfaction. In terms of policy implications, the patients' evaluation of care is fundamental, especially when developing targeted policies to enhance patient-centred care [59]. Indeed, our results show differences in satisfaction and experience across diverse patient socio-demographic characteristics and thus confirm the need for a proactive approach aligning the organization and delivery of healthcare services with the culture, needs and expectations of the diverse segments of the population. Therefore, healthcare organisations should develop policies and procedures to engage professionals and improve practices that address the needs of the different types of patients.
Our findings suggest that the socio-demographic component should not be underestimated: both citizenship and education should be considered by health authorities and decision-makers because they affect the perception of the quality of maternity services. In addition, while it is generally accepted that the patients' care experience is likely to influence their satisfaction, we also found that the relationship between experience and satisfaction is mediated by socio-demographic characteristics. In practical terms, this means that services need to be more precisely targeted to a woman's particular characteristics. For example, the different population groups identified by the study may require different access policies (e.g. different service hours) to increase participation in prenatal classes, especially for mothers with low and medium education, given that patient education with regard to pregnancy and childbirth may improve women's experience and their overall satisfaction [35,60]. Another example is the relationship between pain management and education: scientific knowledge alone may not suffice, and healthcare professionals should also consider the patients' values, needs and preferences (e.g. highly educated women's greater desire for epidural anaesthesia), in order to ensure that respectful and responsive care is delivered to each segment of the population [61].
This study also has some limitations. A few potentially relevant questions were not asked in the survey, such as those on the newborns' and their mothers' health, on the family and the partner, and on the length of stay in Italy for foreign women. The lack of these elements may have reduced our ability to explain the observed differences in satisfaction, both between individuals and between hospitals or health districts.
However, this study provides fresh insights into an understudied topic, and contributes to a better understanding of the association between women's socio-demographic characteristics and their satisfaction in relation to maternity and counselling services. This is particularly important in view of the increased need for empirical evidence to formulate policies in which care is provided in a way that better fits women's different needs and values.
Supplementary material
Supplementary material is available at International Journal for Quality in Health Care online. | 2018-04-27T04:56:13.220Z | 2018-04-17T00:00:00.000 | {
"year": 2018,
"sha1": "02997c4f3dd0b48bd3c9d07afe9ad2a6bcba5ec7",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/intqhc/article-pdf/30/8/594/27259826/mzy078.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "02997c4f3dd0b48bd3c9d07afe9ad2a6bcba5ec7",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
210890181 | pes2o/s2orc | v3-fos-license | Modelling the number of antenatal care visits in Bangladesh to determine the risk factors for reduced antenatal care attendance
The existence of excess zeros in the distribution of antenatal care (ANC) visits in Bangladesh raises the research question of whether there are two separate generating processes in taking ANC and the frequency of ANC. Thus the main objective of this study is to identify a proper count regression model for the number of ANC visits by pregnant women in Bangladesh covering the issues of overdispersion, zero-inflation, and intra-cluster correlation with an additional objective of determining risk factors for ANC use and its frequency. The data have been extracted from the nationally representative 2014 Bangladesh Demographic and Health Survey, where 22% of the total 4493 women did not take any ANC during pregnancy. Since these zero ANC visits can be either structural or sampling zeros, two-part zero-inflated and hurdle regression models are investigated along with the standard one-part count regression models. Correlation among response values has been accounted for by incorporating cluster-specific random effects in the models. The hurdle negative binomial regression model with cluster-specific random intercepts in both the zero and the count part is found to be the best model according to various diagnostic tools including likelihood ratio and uniformity tests. The results show that women who have poor education, live in poor households, have less access to mass media, or belong to the Sylhet and Chittagong regions are less likely to use ANC and also have fewer ANC visits. Additionally, women who live in rural areas, depend on family members’ decisions to take health care, and have unintended pregnancies had fewer ANC visits. The findings recommend taking both cluster-specific random effects and overdispersion and zero-inflation into account in modelling the ANC data of Bangladesh. Moreover, safe motherhood programmes still need to pay particular attention to disadvantaged and vulnerable subgroups of women.
Introduction
Following the third Sustainable Development Goal of reducing the global maternal mortality ratio (MMR) to 70 per 100,000 live births by 2030 [1,2], the Bangladesh government set targets to reduce the MMR to 143 and 105 per 100,000 live births by 2015 and 2022, respectively. Though Bangladesh has significantly improved the MMR, progress has stagnated at 196 deaths per 100,000 live births, according to the Bangladesh Maternal Mortality and Health Care Surveys conducted in 2010 and 2016 [3]. Access to maternal health services for all women during pregnancy and childbirth is crucial to further reduce pregnancy-related morbidity and mortality. According to the WHO [4], antenatal care (ANC) should include at least four visits to medically trained personnel to avoid complications and ensure a safe delivery. Despite relatively high rates of ANC utilization and improved access to health facilities in Bangladesh, the proportion of pregnant women with at least four ANC visits remains low (31%) and the proportion of women without any ANC visit remains high (22%) [5].
Many studies have assessed the determinants of prenatal care attendance in Bangladesh, particularly focussing on the WHO guideline of taking at least four ANC visits [6][7][8][9], but only a few studies have assessed the determinants of the number of ANC visits in general [10][11][12][13]. It has been shown that there may be two separate processes that generate decisions regarding the use of prenatal care and the frequency of its use [14,15]. Without distinguishing these generating processes, Poisson regression (PR) and negative binomial regression (NBR) models have been widely used to model the number of ANC visits in Bangladesh [12,13,16]. But these models may provide inconsistent regression coefficients, as overdispersion and excess zeros remain unaccounted for [17]. In such situations, zero-inflated and hurdle regression (ZIR and HR) models can be applied [18]. These so-called two-part models have been used in a limited number of studies [10,11]. In addition to the problems of overdispersion and zero-inflation in the ANC data, correlation among measurements (a common phenomenon in longitudinal, repeated-survey, and clustered data), known as intra-cluster correlation (ICC), needs to be considered [19]. The problems associated with ICC can be partially solved by incorporating cluster-specific random effects in the standard PR, NBR, HR, and ZIR models [20,21]. Guliani, Sepehri and Serieux [22] employed a two-part HR model incorporating such cluster effects and explored the determinants of ANC use and of the frequency of ANC visits using ANC data from 32 developing countries.
To the best of our knowledge, no study has simultaneously considered all the issues discussed above in modelling the number of ANC visits of Bangladeshi women. Thus, the objectives of this study are two-fold: (i) to develop a proper count regression model for the number of ANC visits in Bangladesh covering the issues of overdispersion, zero-inflation, and ICC; and (ii) to determine the risk factors for no ANC use as well as the frequency of ANC visits.
Data description
In this study, data are extracted from the nationally representative 2014 Bangladesh Demographic and Health Survey (BDHS), in which the country was stratified into 20 sampling strata according to the urban and rural enumeration areas of 7 divisions [5]. A two-stage stratified sampling design was implemented to collect the data: in the first stage, 600 clusters (393 from rural and 207 from urban areas) were drawn with probability proportional to the enumeration area size, and in the second stage, 30 households per cluster were selected with an equal-probability systematic procedure. The 2014 BDHS covers 17,863 ever-married women aged 15-49 years from 17,300 households. The information on ANC visits was collected from 4493 ever-married women who gave birth in the three years preceding the survey. Among women with two or more live births within the given period, information was recorded only for the last birth. Mothers were asked a number of questions about ANC visits and the health care received during those visits. The number of ANC visits (a non-negative integer) is the target response variable for which this study aims to identify a proper count regression model. A number of explanatory variables at the individual (woman), household, community, and regional levels have been considered based on recent studies on ANC utilization [6,9,13]. The bivariate relationship of each explanatory variable with the number of ANC visits was first examined by developing a simple PR model for that variable. Individual-level explanatory variables in this study include the education status of the women and their husbands, women's access to mass media, women's decision-making power over their own health care, and women's desire for the pregnancy, along with household wealth status, place of residence, and regional setting.
Statistical models
Let y_ij denote the number of ANC visits of the i-th woman living in the j-th cluster, and the vector X_ij the corresponding values of the considered explanatory variables. Assuming independence of the women's ANC visits, the PR and NBR models are defined by

$$\log\left(\mu_{ij}\right) = \beta_0 + \mathbf{X}_{ij}'\boldsymbol{\beta},$$

where μ_ij is the expected number of ANC visits as a function of the explanatory variables, β_0 the overall intercept, and β the vector of regression coefficients. The difference between the PR and the NBR model lies in the assumed distribution of y_ij. In PR, the response variable is assumed to follow a Poisson distribution with E(y_ij) = μ_ij = var(y_ij), while in NBR it is assumed to follow a negative binomial distribution with E(y_ij) = μ_ij and var(y_ij) = μ_ij + μ_ij²/θ, where θ is the shape parameter which controls the dispersion. When θ → ∞, the NBR model converges to a PR model without overdispersion. Thus, the PR model is a limiting case of the NBR model as the dispersion term (μ_ij²/θ) approaches zero. The PR and NBR models assume that the observations, conditioned on the predictors, are independent and identically distributed [23]. However, these assumptions may be violated in clustered data, and ignoring possible correlation in the data could lead to biased estimates and misinterpretation of the results [24]. An acceptable way of accommodating this non-independence of observations is to use mixed-effects models, also known as multilevel models. A multilevel modelling strategy accommodates the clustered or hierarchical nature of the BDHS data and corrects the standard errors of the estimated coefficients for ICC. A simple mixed-effects PR/NBR model is obtained by incorporating cluster-specific random effects in the standard PR/NBR model:

$$\log\left(\mu_{ij}\right) = \beta_0 + b_{0j} + \mathbf{X}_{ij}'\boldsymbol{\beta},$$

where b_0j are the random intercepts at cluster level and are assumed to follow a normal distribution with constant variance. The mixed-effects PR and NBR models are referred to hereafter as the MPR and MNBR models, respectively.
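As a toy illustration of the PR-NBR contrast (not the authors' code), the sketch below fits both models to simulated overdispersed counts with statsmodels; the NBR fit additionally estimates the dispersion parameter.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
x = rng.integers(0, 2, n)                          # e.g. secondary-or-higher education (0/1)
mu = np.exp(0.8 + 0.5 * x)
# Overdispersed counts with theta = 2 (mean mu, variance mu + mu^2 / 2).
y = rng.negative_binomial(n=2.0, p=2.0 / (2.0 + mu))

X = sm.add_constant(x)

poisson_fit = sm.Poisson(y, X).fit(disp=False)
negbin_fit = sm.NegativeBinomial(y, X).fit(disp=False)  # NB2: var = mu + alpha * mu^2

print(poisson_fit.params)                 # similar point estimates for the regression part...
print(negbin_fit.params)                  # ...but NB also returns alpha (= 1/theta)
print(poisson_fit.bic, negbin_fit.bic)    # NB has the better BIC when overdispersion is present
```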
The zero-inflated and hurdle extensions of the PR and NBR models are the most prominent and effective models not only to handle excess zeros in count data but also to accommodate overdispersion resulting from the variance being greater than the mean [18]. Both ZIR and HR models have a mixture of two generating processes. In the ZIR model, the first process generates only zero counts (structural or genuine zeros) with probability φ_ij, while the second process generates non-negative counts (which could result in zeros, called sampling zeros) from either a Poisson or a negative binomial distribution with probability (1−φ_ij). Like the ZIR model, the HR models also assume that the first process generates only structural or genuine zeros, while the second process generates truncated positive counts from a zero-truncated Poisson or negative binomial distribution [25]. In relation to ANC visits, structural zeros occur if a pregnant woman would never visit, and sampling zeros occur if she could visit but has no reason to do so within the specified time frame. In this study, women reported the ANC visits for their last birth over the three years preceding the survey. Accordingly, the reported zeros could be structural, but they could also be sampling zeros due to incorrect recall or misspecification of the time frame. In the ZIR model, the distribution of the number of ANC visits is modeled as P(y_ij = 0) = φ_ij + (1−φ_ij) f_ij(0) and P(y_ij = k) = (1−φ_ij) f_ij(k) for k > 0, where f_ij(·) is either a Poisson or a negative binomial distribution.
The basic difference between the two models is that the ZIR model uses two distributions for the zero counts, while the HR model uses one distribution for the zero counts. The first, so-called zero part of the process can be modelled as a binary or logit model. According to the modelling distribution (Poisson or negative binomial) of the second, so-called count part of the process, the ZIR and HR models are referred to as either ZIPR/HPR or ZINBR/HNBR. In general, separate explanatory variables can be used in the two parts of the ZIR and HR models. To explain these models mathematically, let X_ij and Z_ij be vectors of known explanatory variables used in the count- and zero-part models respectively. Then, the zero- and count-part models under the simple ZIR and HR models can be expressed, respectively, as logit(φ_ij) = γ_0 + Z_ij′γ and log(μ_ij) = β_0 + X_ij′β, where γ_0 is the overall intercept, and γ is the vector of regression coefficients of the binary process (binary logistic model). The HR model separates the structural zeros from the non-zero responses by modelling non-zero counts with a truncated Poisson/negative binomial distribution. Consequently, the effects of covariates on φ_ij in the HR model (on the log-odds of a structural zero) and their effects on φ_ij in the ZIR model (on the log-odds of a structural and sampling zero) are not equivalent [26,27]. The mixture of two parts in a ZIR/HR model allows separate answers to two questions: (i) which factors influence whether a pregnant woman will attend ANC or not, and (ii) which factors predict the number of times she will take ANC. Moreover, explanatory variables may have different impacts in the two processes.
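As a concrete illustration of the two-part structure, the sketch below writes out the hurdle negative binomial log-likelihood in Python. It is not the authors' implementation (they fitted these models in R); the function and variable names are hypothetical, and cluster-specific random effects are omitted for brevity.

```python
# Illustrative sketch (not the authors' R code): the log-likelihood of a
# hurdle negative binomial model, with a logistic "zero part" and a
# zero-truncated NB "count part", evaluated at given parameter values.
import numpy as np
from scipy import stats
from scipy.special import expit

def hurdle_nb_loglik(y, X, Z, beta, gamma, theta):
    """y: counts; X, Z: design matrices of the count and zero parts."""
    phi = expit(Z @ gamma)               # P(zero) from the logistic zero part
    mu = np.exp(X @ beta)                # mean of the untruncated NB count part

    is_zero = (y == 0)
    ll = np.sum(np.log(phi[is_zero]))    # zeros come only from the zero part

    # positives: "crossed the hurdle" term plus a zero-truncated NB pmf
    yp, mup = y[~is_zero], mu[~is_zero]
    p = theta / (theta + mup)
    log_pmf = stats.nbinom.logpmf(yp, theta, p)
    log_p0 = stats.nbinom.logpmf(0, theta, p)
    ll += np.sum(np.log1p(-phi[~is_zero]) + log_pmf - np.log1p(-np.exp(log_p0)))
    return ll
```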
The mixed-effects ZIR and HR models can be expressed by adding a cluster-specific random component b_0j to β_0 in the count part and another cluster-specific random component c_0j to γ_0 in the zero part, i.e. log(μ_ij) = β_0 + b_0j + X_ij′β and logit(φ_ij) = γ_0 + c_0j + Z_ij′γ. When cluster-specific random effects are considered only in the count part (so c_0j = 0), the mixed-effects ZIR and HR models are denoted by MZIPR/MZINBR and MHPR/MHNBR respectively, depending on the assumed distribution (Poisson or negative binomial) of the count part. Models that also include a random effect in the zero part (considered as an extra random effect in the mixed-effects ZIR/HR model) are denoted hereafter with the suffix ERE, for example MZINBR.ERE for the MZINBR model. The two model processes, or their log-likelihoods, are assumed functionally independent, so the joint likelihood can be maximized by maximizing each part separately [26]. A maximum likelihood method approximating the integrals over the random effects with an adaptive Gaussian quadrature rule [28] was used to fit the mixed-effects ZIR and HR models. Several R packages were used to analyse different versions of the PR and NBR models. The recently developed "GLMMadaptive" package of Rizopoulos [29] was employed to fit the mixed-effects ZIR and HR models. The significance of the dispersion parameter, zero-inflation, and goodness-of-fit of the model (H0: the fitted model suits the data well), further referred to as the uniformity test, were assessed using the residual diagnostics for hierarchical (multi-level/mixed) regression models available in the DHARMa (Diagnostics for Hierarchical Regression Models) package of Hartig [30].
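The quadrature idea behind such mixed-effects fits can be sketched in its simplest, non-adaptive Gauss–Hermite form. The example below is only an illustration with hypothetical data for a single cluster and a Poisson count part; GLMMadaptive itself uses an adaptive version of this rule, and this is not the authors' code.

```python
# Illustrative sketch: approximating one cluster's marginal likelihood for a
# mixed-effects Poisson model by (non-adaptive) Gauss-Hermite quadrature over
# the random intercept b ~ N(0, sigma^2).
import numpy as np
from scipy import stats

def cluster_marginal_lik(y, X, beta, sigma, n_nodes=15):
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    lik = 0.0
    for x_k, w_k in zip(nodes, weights):
        b = np.sqrt(2.0) * sigma * x_k             # change of variables
        mu = np.exp(X @ beta + b)                  # conditional Poisson mean
        lik += w_k * np.prod(stats.poisson.pmf(y, mu))  # product over women in cluster
    return lik / np.sqrt(np.pi)

# Hypothetical data for one cluster of 5 women:
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(5), rng.normal(size=5)])
y = np.array([0, 1, 2, 4, 3])
print(cluster_marginal_lik(y, X, beta=np.array([0.8, 0.2]), sigma=0.5))
```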
Since the considered models are based on different assumptions, their direct comparison is complicated. A step-by-step comparison procedure is followed, giving priority to the uniformity test and the significance of the cluster-specific random effects to select the final model. The basic steps are as follows (an illustrative sketch of the generic tests involved is given after Step 3):
Step 1: PR and NBR models with a cluster-specific random intercept (MPR and MNBR) are examined first, assessing whether overdispersion and zero-inflation issues are covered by the fitted models.
Step 2: If zero-inflation remains, ZIR and HR models with (MZIPR, MZINBR, MHPR, MHNBR) or without (ZIPR, ZINBR, HPR, HNBR) cluster-specific random intercept are estimated and compared. Mixed-effects ZIR and HR models are developed considering cluster-specific random intercept only in the count part (say, MZINBR) as well as in both the count and the zero parts (say, MZINBR.ERE). Nested and non-nested models are compared using likelihood ratio (LR) and Vuong tests [31] respectively.
Step 3: The final model is selected using the DHARMa's uniformity test, assessing which model fits better for the data. Since mixed-effects ZIR and HR models are not directly comparable, the uniformity test is used to examine whether MZINBR or MHNBR (with or without extra cluster-specific random effects) suits the studied data better. Significance of the cluster variance component in the zero and count parts is assessed using the LR test.
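The two generic checks used throughout Steps 1-3 can be sketched as follows. This is not the authors' code: `fit_loglik` values, `simulate_from_model`, and `y_obs` are hypothetical stand-ins for a fitted model's log-likelihood, a simulator for new responses from that fitted model, and the observed counts, and the uniformity check is only a simplified analogue of DHARMa's simulation-based residual diagnostics.

```python
# Illustrative sketch: a likelihood-ratio test for nested models and a
# simulation-based uniformity check in the spirit of DHARMa's residuals.
import numpy as np
from scipy import stats

def lr_test(loglik_restricted, loglik_full, df_diff):
    lr = 2.0 * (loglik_full - loglik_restricted)
    return lr, stats.chi2.sf(lr, df_diff)          # p-value of H0: restricted model suffices

def uniformity_check(y_obs, simulate_from_model, n_sim=250, seed=0):
    rng = np.random.default_rng(seed)
    sims = np.stack([simulate_from_model(rng) for _ in range(n_sim)])  # (n_sim, n_obs)
    # randomised quantile residual: position of each observation among its simulations
    u = (sims < y_obs).mean(axis=0) + rng.uniform(size=y_obs.size) * (sims == y_obs).mean(axis=0)
    return stats.kstest(u, "uniform")               # H0: residuals uniform, i.e. the model fits
```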
Results
The distribution of the number of ANC visits shown in Fig 1 is positively skewed with low mean (2.75) and median (2.0) number of ANC visits. About 22% of the pregnant women did not take any ANC visits and only 31% took ANC at least 4 times during their pregnancy period. Table 1 shows mean and median numbers of ANC visits according to different background characteristics of the women. Both the mean and the median frequency significantly vary with all these characteristics. Women from the Khulna division had a higher mean (3.42) and median (3) number of ANC visits and those from the Sylhet division (2.02 and 1 respectively) had the lowest. As expected, urban women have a higher mean and median number of ANC visits than the rural women. Women's education and exposure to mass media (TV), their husbands' education and the household wealth status showed a significant positive association with the mean and median number of ANC visits. Women who wanted their pregnancy had a higher mean and median number of ANC visits than those with an unwanted pregnancy. Women's decision-making power on their own healthcare issues showed significant association with the mean and median number of ANC visits.
Model selection
Policy makers, stakeholders, and donors explore risk factors for reduced prenatal care use or a lower frequency of visits to design strategies to improve maternal health care. A proper count regression model for the number of ANC visits incorporating multiple factors helps to identify those core risk factors. In this study, one-part regression models (such as PR and NBR) and two-part regression models (such as ZIR or HR) with and without consideration of ICC are compared to examine whether there are indeed two generating processes in the number of ANC visits, as well as to determine the risk factors of the processes. A fixed set of explanatory variables was used in all models of ANC visits for comparison purposes. The comparison of the standard PR and NBR models with their mixed-effects models MPR and MNBR shown in Table 2 indicates that the PR and MPR models fail to capture the overdispersion, while both NBR and MNBR do account for overdispersion, but all four models are unable to account for the issue of zero-inflation. Among these models, the NBR model is preferred by the DHARMa uniformity test (p-value = 0.066). However, the AIC, log-likelihood, and LR test indicate that the inclusion of random intercepts is required for the studied ANC data. Thus, the ZIR and HR models without and with random intercepts were developed. The results of Vuong tests for non-nested models shown in Table 3 indicate that either ZINBR or HNBR can be considered to be the better model to account for excess zeros. Table 3 also reflects that the overdispersion is captured better by the NBR-based models than the PR-based models. The results of the LR tests for nested models shown in Table 4 indicate that cluster-specific random effects should be considered in the NBR-based models. Random effects are also found important for both the count- and the zero-part models in both cases of the ZINBR (MZINBR and MZINBR.ERE) and the HNBR (MHNBR and MHNBR.ERE) models.
Since there are four possible candidates to be the best model for the ANC data, the DHARMa uniformity test was performed to find the most suitable model among these. Table 5 shows that the ZINBR model with random intercepts in the count part (MZINBR) confirms uniformity (p-value = 0.283) with the observed count data, but the LR test in Table 4 shows that this model still requires random intercepts in the zero part (MZINBR.ERE) (p-value < 0.001). However, the MZINBR.ERE failed the uniformity test (p-value = 0.012). On the other hand, HNBR with random intercepts in the count part (MHNBR) and HNBR with random intercepts at both the count and the zero parts (MHNBR.ERE) passed the uniformity test (p-values = 0.549 and 0.118, respectively). Thus, MHNBR.ERE is considered to be the best model among the possible candidate models for the ANC data of Bangladesh. Also, the MHNBR and MHNBR.ERE models provide lower cluster-specific variance components (as well as lower ICC) than the MZINBR and MZINBR.ERE models. Note that the same set of explanatory variables is maintained in all the one-part and two-part models for comparison purposes. Informal diagnoses of the cluster-specific residuals through Q-Q and distribution plots of the standardized residuals shown in Fig 2 confirm that the cluster-specific residuals obtained from both the count and the zero parts are normally distributed with constant variance.
Risk factors
According to the selected HR model with random effects in both the count and the zero parts (the MHNBR.ERE model), division, place of residence, household wealth, women's media exposure, women's and their partner's education status, women's decision-making power on their own healthcare issues, and desire for pregnancy have highly significant effects on either zero prenatal care use or the frequency of prenatal care use (Fig 3 and Table 6). The count-part model shows the effects of the considered factors on the frequency of ANC visits represented as incidence rate ratio (IRR), while the zero-part model shows the effects of the considered factors on the women's decision to take no ANC represented as odds ratio (OR). The estimated IRR and OR with their 95% CI are in red and blue respectively in Fig 3. Since both parts have cluster-specific random effects, the estimated parameters represent the effects of individual-, household-, regional-, and community-level characteristics on ANC attendance and the frequency of ANC visits after controlling for the unobserved community level factors. It is noted that regression coefficients of other models are not presented here, only their summary statistics were reported for comparison purposes.
The results of the finally selected count-part model are shown in Fig 3 and Table 6. The estimated variance components in the count part (σ²_c = 0.069) and the zero part (σ²_z = 0.626) indicate significant community-level variation in the number of ANC visits, due to between-cluster heterogeneity.
Discussion
The aim of this study was to identify an appropriate count regression model for the number of ANC visits among pregnant women in Bangladesh utilizing recent nationally representative survey data. Since a substantial proportion of women did not take any prenatal care and the women are clustered according to the survey design, the performance of the standard Poisson and negative-binomial regression models has been compared with their zero-inflated and hurdle models, with and without consideration of ICC, in the model selection process.
The study has followed a systematic procedure to select the most appropriate count regression model for the frequency of ANC visits by examining a variety of criteria, particularly the existence of zero-inflation and community effects in the responses. It is found that the zero ANC visits are generated from two different processes and hence either zero-inflated or hurdle regression model should be used to model the frequency of ANC visits in Bangladesh. Since the household surveys in Bangladesh use a complex cluster sampling design, regression models should also incorporate correlation (unless the considered explanatory variables in the model could explain the cluster-level variability), to prevent biased estimates with unfortunate undercoverage due to lower standard errors [32]. In this study, the incorporation of the ICC along with the help of uniformity tests facilitated the selection of the mixed-effect hurdle model as the appropriate model for the considered data.
Fig 3. Estimated incidence rate ratio (IRR) of having ANC visits (red dot and confidence line) and odds ratio (OR) of not attending any ANC visit (blue dot and confidence line) with 95% confidence interval (CI) from the hurdle negative binomial regression with random intercept at both count-and zero-part (MHNBR. ERE) models.
https://doi.org/10.1371/journal.pone.0228215.g003 Based on the selected hurdle regression model, women living in the Khulna and Rangpur divisions had a significantly lower probability of attending no antenatal care, compared to those living in the Sylhet and Chittagong divisions, while women from the Khulna and Rangpur divisions also had significantly higher frequency of ANC visits compared to women from the Sylhet and Chittagong divisions. These findings are highly supported by the findings obtained in a very similar study on ANC visits by Rahman et al. [33]. The findings may suggest that large-scale maternal and neonatal health programs worked properly in the economically poor Khulna and Rangpur regions compared to the economically rich Chittagong and Sylhet regions [34]. Another explanation could be worse access to maternal health services for the women who live in the remote hill-tract areas of the Chittagong division [35] and in the haor areas (a wetland ecosystem in the north-eastern part of Bangladesh) of the Sylhet division. Guliani, Sepehri and Serieux [22] showed that women living in urban settings are more likely to attend prenatal care and have a higher frequency of visits compared to their counterparts, based on ANC data of 32 developing countries including Bangladesh. The results from the present study also indicate that women residing in urban areas have a higher frequency of ANC visits than those in rural areas. However, in our multivariate analysis, place of residence did not have a statistically significant influence on the woman's attitude to use ANC during pregnancy. The difference between frequency and use by urban-rural settings may arise mainly from the attitude of married adolescent women living in rural areas, who are less likely to use skilled maternal health services than those residing in urban areas [36][37][38]. The higher IRR for urban areas supports the idea that availability of health care centres has increased the access to maternal health services for urban women compared to the rural women. Moreover, women living in urban areas are relatively more educated, are more aware of health, and have more decision-making power on their own healthcare issues compared to women living in rural areas.
Table 6 column headings: Factors; Category; Count-part (Number of ANC visits); Zero-part (No ANC attendance).
A positive association between ANC utilization and household wealth status has been found in many studies on ANC use [22,33,39]. This positive association does not vary by urban-rural setting [40]. The estimated OR and IRR in this study indicate that, with increasing household wealth status, the probability of taking no ANC decreased and the frequency of ANC visits increased. A possible explanation could be that women who belong to well-off families usually have proper education, access to mass media, and an ability to spend more money on frequent ANC visits compared to women from poorer families.
The findings of this study showed that women who have access to mass media at least once a week are less likely to keep away from ANC visits and have more ANC visits. Some studies on ANC utilization support this finding, particularly for women living in rural [39] and slum areas of big cities like Rajshahi [41] and Dhaka [42]. Mass media broadcast different sorts of health-related programs and news that make women aware of their well-being and the wellbeing of their unborn baby.
The likelihood of prenatal care attendance and the frequency of ANC use are both positively associated with the level of women's education and the influence of education is more pronounced for seeking prenatal care than the number of ANC visits [22]. The current study also found that the level of education had a stronger impact than other factors on both the use and frequency of ANC visits. Educated women took more ANC since they have more knowledge of the benefits of frequent ANC visits such as a reduction of pregnancy complications, ensuring safe delivery, and supporting healthy life of the babies. Moreover, they are more knowledgeable about how to find health care.
The findings of this study show that the partner's education also contributes to deciding whether a woman will take ANC, rather than the frequency of ANC visits. The probability of avoiding ANC significantly decreased with an increase of the partner's education status. Rahman, Islam and Islam [41] also found that the husband's education has a significant influence on taking prenatal care. The findings suggest that educated partners may be more concerned with their pregnant wives and the associated pregnancy complications.
The desire for pregnancy has a significant influence on the number of ANC visits in this study, rather than on the decision to seek ANC. Rahman et al. [33] also found that women are more likely to seek care for pregnancy complications when they intended to have the pregnancy. Conversely, when women are unwilling and unhappy about an untimely pregnancy, they may be more likely to hide it and less likely to take frequent ANC visits. Hiding behaviour is common among women who live in a more conservative rural environment.
Women's empowerment in health care decision-making is also found to be significantly associated with the number of ANC visits rather than with seeking ANC. Women who can take decisions by themselves take ANC visits more frequently than their counterparts do. A possible reason behind this finding could be that educated women living in urban areas (who usually take decisions by themselves) are more conscious of their own and their unborn babies' health compared to illiterate women who depend on others' decisions to seek prenatal care. Hossain and Hoque [11] also found a significant positive influence of women's empowerment (measured by education, freedom of choice/movement, household decision-making power, and economic activities) on the decision and intensity of utilization of antenatal care in Bangladesh.
Conclusion
The selected hurdle regression model confirms that two processes generate the number of ANC visits in Bangladesh: one process generates zero ANC visits and the other generates the frequency of ANC visits. The significance of the cluster-specific variance component at both the zero and the count part of the hurdle regression model indicates that the community (cluster) has a significant effect on the variation of both the women's decision on prenatal care use and the frequency of ANC visits, although most of the variation originated from women-, household-, and regional-level factors. The findings of the study thus show the necessity of considering community effects (ICC) along with overdispersion and zero-inflation in modelling the ANC data of Bangladeshi women, and hence also in identifying risk factors for not attending any ANC as well as for the frequency of ANC visits. Though only random-intercept models have been investigated in this study, further investigations can be performed to assess the relevance of random slopes in the model. Also, clustering at higher administrative units (such as district and sub-district) can be investigated using three- or four-level models [43]. Moreover, we found that hurdle and zero-inflated type models should be selected carefully, since assumptions about one type of (structural) zeros are difficult to verify with real-world data. It is better to select the model structure statistically (i.e., whether the fitted model can explain all the zeros) rather than based on assumed types of zeros.
The findings of this study might help policy makers to find out which socio-economic and demographic groups should be given priority to encourage women to attend ANC and to have more ANC visits to medically trained personnel during their pregnancy period. This study also suggests that besides improving women's academic education and household wealth, women should be motivated to change their attitude to seek medical care during their pregnancy. The significant cluster-level variation in the developed model also indicates that the goal of reducing maternal death could be achieved if heterogeneity in the prenatal care use and its frequency could be reduced at the community level. | 2020-01-26T14:04:46.091Z | 2020-01-24T00:00:00.000 | {
"year": 2020,
"sha1": "46dec9279973860dd6deee558c519c75806f936f",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0228215&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "377417a6f9d0a0520ee29d2c40265fc4d511ef61",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
134160070 | pes2o/s2orc | v3-fos-license | Engineering Advantages of Vegetation on Slope Stabilization
There are various conventional methods used to improve slope stability and control surface erosion. They all have merits and demerits, but the use of vegetation has many advantages: roots do not corrode, and they are self-repairing, regenerating and environmentally friendly. This discipline has gained global recognition for a long time and has been addressed as a new entity, "Ecological Engineering", which is defined as the design of sustainable ecosystems that integrate human society with its natural environment for the benefit of both. This paper considers the potential engineering influences of vegetation and how they can be characterized on site within a geotechnical framework for stability assessment. To gain more understanding of soil-root interaction and its effects on slope stabilization, the mechanical and hydrological effects of vegetation are combined and their overall effect on slope stabilization and slope stability analyses is evaluated. The results obtained for Vetiver Grass and the Lime Tree are considered. Overall, the results show a considerable improvement in the stability of a finite slope when vegetation is applied, depending on its location on the slope. The results also indicate that Vetiver Grass can cause a significant improvement in slope stability compared to the Lime Tree, even when it is located at the crest of the slope, due to its root geometry and lower weight.
INTRODUCTION
The influence of vegetation on a slope involves hydrological and mechanical factors, which can be beneficial or adverse to slope stability [1].
The hydrological and mechanical parameters reflecting the effect of vegetation in stability analysis include additional effective cohesion provided by the roots; surcharge on the slice due to the vegetation; a restraining force provided by roots present on the slip surface; wind force; possible changes in soil strength due to moisture removal; and changes in pore water pressure. These have been further explained within a geotechnical framework. Root characteristics are a determining factor when evaluating the effect of vegetation on slope stability, as are the functions that the plant may perform in a bioengineering system (support, anchor, drain, reinforce), depending upon the type of bioengineering and the nature of the site. The importance of vegetation in slope stabilization and surface erosion control is enormous.
Slopes
Slopes may be man-made, such as cuts and embankments for highways and rail-roads, embankments for the containment of water, graded land for industrial and other developments, canals and other water conduits, and temporary excavations. Slopes may also be natural, such as hillsides and stream banks. At all locations where the ground surface is not level, there are forces that tend to cause movements of the soil from higher to lower points. The most significant of these is the component of gravity, which acts in the direction of the probable motion. Also, though less well recognized, is the force of seeping water. These several forces produce shear stresses throughout the soil mass, and a movement will occur unless the shearing resistance on every possible failure surface throughout the mass is sufficiently larger than the shearing stress.
Cause and mechanism of slope failure
The causes of major slope failures are insufficient control of surface water and the presence of local weaknesses, discontinuities and sheet jointing. Adoption of deficient geological or hydrogeological models of slope design is the most important factor in major failures of engineered slopes. Another problem associated with large slides is adverse groundwater conditions undetected during the design and construction stages [2].
Minor slope failures are caused by surface water; the mechanism mainly involves concentrated surface runoff leading to erosion and water ingress during intense rain, inadequate maintenance (generally taking the form of blocked or cracked drainage channels), and inadequate attention to proper detailing. Another cause of minor slope failure is local weakness in the ground mass; most minor failures in soil cuts and rock cuts are associated with the presence of locally weak geological material and adverse groundwater conditions such as the build-up of a local transient perched water table.
Common failure mechanisms of fill slopes are flow slides due to inadequate compaction, washout, and sliding; those for soil cut slopes are washout and sliding [3].
The second mechanism of failure is liquefaction which is the sudden collapse of metastable soil structure within a loose soil mass in a slope when it is subjected to a high degree of saturation under sustained shear stresses, resulting in a significant reduction of soil shear strength and leading to a flow slide type of failure which is a special case of sliding failure.
The third mechanism is washout, which is the detachment of part of the soil mass induced by the scouring action of running surface water.
Studies of root reinforcement have identified a series of empirical and physically based relationships between root development and soil strength. Even a low root density can provide a substantial increase in shear strength, and the magnitude of the additional apparent cohesion varies with the distribution of the roots within the soil and with the tensile strength of the individual roots [4,5].
Root reinforcement is a function of root strength, interface friction between root and soil, and the distribution of roots within the soil; root-reinforced soil is better able to resist continued deformation without loss of residual strength than soil alone [5]. The magnitude of the mechanical reinforcing effect of vegetation is a function of the following root properties: density, tensile strength, tensile modulus, length/diameter ratio, surface roughness, alignment (straightness and angularity), and orientation to the direction of principal strains [1].
Root area ratio
The ability of a tree to reinforce soil will depend not only on the depth to which its root systems extend but also on the total cross-sectional area of its roots at the given depth [6,7,8,9].
Root tensile strength
Nilaweera and Nutalaya [10] pointed out that the pullout resistance of a tree is generally controlled by its root strength and morphological characteristics and the pull-out resistance of the tree increased with root length distribution and the depth of root penetration.
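The root area ratio and root tensile strength are often combined into a first-order estimate of the additional apparent cohesion using the widely cited Wu/Waldron perpendicular root model. The short sketch below is illustrative only and is not taken from this paper; the input values are hypothetical.

```python
# Illustrative sketch: first-order estimate of the additional apparent
# cohesion from roots, using the commonly cited Wu/Waldron approximation
# delta_c ~= 1.2 * Tr * (Ar / A). Input values below are hypothetical.
def root_cohesion(tensile_strength_kpa, root_area_ratio, k=1.2):
    """Additional apparent cohesion (kPa) from roots crossing the shear plane."""
    return k * tensile_strength_kpa * root_area_ratio

# e.g. roots with 15 MPa tensile strength and a 0.1% root area ratio:
print(root_cohesion(15_000.0, 0.001))  # ~18 kPa of extra apparent cohesion
```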
Anchorage, arching and buttressing
The taproot and the sinker roots of many tree species penetrate into the deeper soil layers and anchor them against down-slope movement.
Surcharging
Surcharge is the effect of the additional weight on a slope resulting from the presence of vegetation. Surcharge could have adverse effects, although it can be beneficial depending on the slope geometry, the distribution of vegetation cover and the properties of the soil. Wind loading is particularly relevant when considering the stability of individual trees, but is of lesser significance for general slope stability, where the wind forces involved represent a much smaller proportion of the potential disturbing forces, and trees within a cluster (stand) are sheltered to some extent by those at the edge.
Hydrological effect of vegetation
1.4.1. Rainfall interception
Vegetation intercepts a proportion of the incoming rainfall, part of which is stored on the leaves and stems of the plants and is returned to the atmosphere by evaporation. Thus, interception decreases the rate and volume of rainfall reaching the ground surface.
Surface water runoff
Owing to the combination of surface roughness, infiltration and interception, surface water runoff from vegetated areas is much less than that from bare soil.
Infiltration
Vegetation increases the permeability and infiltration of the upper soil layers due to root channels, pipes or holes left where roots have decayed, and increased surface roughness.
Evaporation and transpiration
The hydrological effects involve the removal of soil water by evapotranspiration through vegetation, which leads to an increase in soil suction or a reduction in pore water pressure, and hence an increase in shear strength [11]. Therefore, vegetation affects slope stability hydrologically by extracting soil moisture through transpiration. Apart from increasing the strength of the soil by reducing its moisture content, transpiration by plants reduces the weight of the soil mass.
Materials and methods
The study was carried out within the United Kingdom and its environs, as it is a continuation of the work of Rees and Ali [11,12], in a temperate climate with plentiful rainfall all year round. The plants used for the research are limited to the mature lime tree (Tilia) and Vetiver Grass; their transpiration rate, weight and root geometry are used. The mechanical properties of a boulder clay soil are considered. The factor of safety of vegetated finite slopes is analysed using the SLIP4EX computer program.
The equations used in the SLIP4EX spreadsheet are derived from the basic limit equilibrium stability equation [13]. By resolving forces to determine N′, the full stability equation based on effective forces is obtained [14].
The simple mathematical form of the Greenwood stability equations with the Factor of Safety simply expressed by a summation of restoring and disturbing moments or forces makes the inclusion of additional forces due to ground reinforcement, anchors or vegetation effects relatively straightforward.
It is not straightforward to add these additional forces in the Bishop and other "sophisticated" published solutions, where the global factor of safety is applied to the shear strength parameters for each slice of the analysis, resulting in some unrealistic force scenarios for the slices where anchor and reinforcement loads are applied [15]. The general Equation 2 is adapted for the inclusion of the vegetation effects, reinforcement and hydrological changes (Figure 2) [16,17,18]. In SLIP4EX, changes in the groundwater table due to vegetation are included; however, the changes in groundwater table employed in that work were taken directly from piezometer readings, and no numerical simulation of this process was involved. The study was based on the effective stress approach and as such is valid only for saturated soils. In this study, the stability of an unsaturated soil slope is considered in relation to the soil suction created by the plant water-uptake process. These changes primarily affect the matric suction component only. Hence, matric suction is considered in this study by adding equation [4] into SLIP4EX [19].
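The way such additional terms enter a slice-by-slice factor-of-safety calculation can be sketched as follows. This is neither SLIP4EX nor Greenwood's exact Equation 2; it uses the simplest Fellenius-type moment-equilibrium form, with hypothetical slice data, and adds the vegetation terms only as extra effective cohesion and surcharge.

```python
# Illustrative sketch, not SLIP4EX or Greenwood's Equation 2: a simple
# Fellenius-type factor of safety for a slip circle divided into slices,
# showing where enhanced root cohesion and vegetation surcharge enter.
# All slice values below are hypothetical.
import math

def factor_of_safety(slices, c_soil, phi_deg, c_roots=0.0, w_veg=0.0):
    """slices: dicts with weight W (kN/m), base length l (m),
    base inclination alpha (deg) and pore pressure u (kPa)."""
    tan_phi = math.tan(math.radians(phi_deg))
    restoring, disturbing = 0.0, 0.0
    for s in slices:
        a = math.radians(s["alpha"])
        W = s["W"] + w_veg                   # vegetation surcharge per slice
        c = c_soil + c_roots                 # enhanced effective cohesion
        restoring += c * s["l"] + (W * math.cos(a) - s["u"] * s["l"]) * tan_phi
        disturbing += W * math.sin(a)
    return restoring / disturbing

slices = [dict(W=50, l=2.0, alpha=10, u=5),
          dict(W=80, l=2.2, alpha=25, u=8),
          dict(W=60, l=2.4, alpha=40, u=4)]
print(factor_of_safety(slices, c_soil=5.0, phi_deg=25))
print(factor_of_safety(slices, c_soil=5.0, phi_deg=25, c_roots=3.0, w_veg=2.0))
```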
As shown in Table 1, the use of the lime tree on the slope has shown an increase in the Factor of Safety (FOS), including at the middle of the slope. However, the improvement is smaller when the lime tree was used at the crest, with a difference of 2.75%. On the other hand, Vetiver Grass at the toe, middle and crest of the slope has indicated increased percentages, especially at the middle of the slope, with an increase of 3%. | 2019-04-27T13:12:59.040Z | 2018-11-30T00:00:00.000 | {
"year": 2018,
"sha1": "1c5d6dd9faf4855335c0a956548f95a68d858888",
"oa_license": "CCBY",
"oa_url": "https://www.ijtsrd.com/papers/ijtsrd19139.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "d03b53aaa3fa1f2b0b35a872f2ebc5a7601f3820",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
268918860 | pes2o/s2orc | v3-fos-license | Intranasal Administration of GRP78 Protein (HSPA5) Confers Neuroprotection in a Lactacystin-Induced Rat Model of Parkinson’s Disease
The accumulation of misfolded and aggregated α-synuclein can trigger endoplasmic reticulum (ER) stress and the unfolded protein response (UPR), leading to apoptotic cell death in patients with Parkinson’s disease (PD). As the major ER chaperone, glucose-regulated protein 78 (GRP78/BiP/HSPA5) plays a key role in UPR regulation. GRP78 overexpression can modulate the UPR, block apoptosis, and promote the survival of nigral dopamine neurons in a rat model of α-synuclein pathology. Here, we explore the therapeutic potential of intranasal exogenous GRP78 for preventing or slowing PD-like neurodegeneration in a lactacystin-induced rat model. We show that intranasally-administered GRP78 rapidly enters the substantia nigra pars compacta (SNpc) and other afflicted brain regions. It is then internalized by neurons and microglia, preventing the development of the neurodegenerative process in the nigrostriatal system. Lactacystin-induced disturbances, such as the abnormal accumulation of phosphorylated pS129-α-synuclein and activation of the pro-apoptotic GRP78/PERK/eIF2α/CHOP/caspase-3,9 signaling pathway of the UPR, are substantially reversed upon GRP78 administration. Moreover, exogenous GRP78 inhibits both microglia activation and the production of proinflammatory cytokines, tumor necrosis factor-α (TNF-α) and interleukin-6 (IL-6), via the nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB) signaling pathway in model animals. The neuroprotective and anti-inflammatory potential of exogenous GRP78 may inform the development of effective therapeutic agents for PD and other synucleinopathies.
Introduction
Parkinson's disease (PD) is an age-related chronic neurodegenerative disorder ranking second in frequency after Alzheimer's disease [1].About 16 million people worldwide suffer from PD, and it is estimated that the number of PD patients will rise by 1.5-2-fold within the next 20-30 years due to the increase in centenarians [2].The etiology of PD is largely unknown, with more than 90% of PD cases being sporadic [3].Older age in combination with genetic profile and/or exposure to environmental pollution (herbicides, pesticides, infectious agents, etc.) are considered causative factors of sporadic PD onset and progression [4,5].PD diagnosis relies on clinically significant symptoms, such as resting tremors, bradykinesia, muscular rigidity, and loss of balance.These symptoms are indicative of motor dysfunctions, which are associated with the degeneration of 50-60% of dopaminergic (DA) neurons in the substantia nigra pars compacta (SNpc), resulting in a reduction of dopamine in the striatum [6,7].Such a delayed diagnosis of PD, when most specific neurons are already lost, explains the low effectiveness of existing PD therapies, primarily aimed at relieving symptoms.However, neuronal death also occurs in extranigral brain regions responsible for non-motor symptoms that may manifest in the pre-symptomatic (preclinical) PD stage, 20-30 years prior to the first motor symptoms [8].A wide range of non-motor symptoms in PD includes sleep disorders, olfactory disturbances, enteral dysfunction, etc. [6].Therefore, progress in treating PD is linked to the advancement of early diagnosis technologies and pathogenetically significant therapy, aiming to prevent or attenuate neurodegeneration in its early stage [9].
A pathological hallmark of PD is the overexpression and/or abnormal accumulation and misfolding of α-synuclein (α-syn) followed by the formation of Lewy bodies and Lewy neurites [10,11].The Lewy bodies consist of up to 90% of α-syn phosphorylated at serine-129 (Ser129), and this post-translational modification appears to be associated with the formation and/or toxicity of aggregated proteins [12,13].The intraneuronal accumulation of aberrant α-syn forms (misfolded and phosphorylated at Ser129) results from the dysfunction of both the ubiquitin-proteasome system (UPS) and the autophagylysosomal pathway [14][15][16].The weakening of conformational control mechanisms ensured by heat shock proteins also contributes to protein aggregation [17][18][19].The discovery of UPS functional insufficiency in familial and sporadic PD has promoted the development of novel in vivo PD models based on proteasome inhibitors, such as lactacystin (LC) [20,21].LC is a metabolite of ubiquitous soil and water bacteria, e.g., Streptomyces lactacystinaeus.Given the lipophilic properties of LC and its ability to enter the human body with food, water, or dust and accumulate over time, exposure to this proteasome inhibitor can underlie some cases of PD [22].When injected directly into the SNpc in rats, LC replicates the key neuropathological features of PD in the nigrostriatal and extranigral systems stage by stage, with the effects varying in a dose-dependent manner [21].This sets LC apart from other neurotoxins used to model PD, making it one of the most promising substances for testing therapeutic strategies that can slow down neurodegeneration at various stages of PD development.In this study, we established an LC-induced model of early PD.This stage is particularly crucial for effective treatment since the majority of DA neurons are still preserved.
A growing body of evidence from animal models of PD and postmortem studies in PD patients suggests that the accumulation of α-syn oligomers can trigger neuronal death via the apoptosis pathway coupled with microglial activation, which contributes to the pathogenesis of progressive PD [10,[23][24][25][26][27].In addition to the cytosol of neurons, α-syn pathologically aggregates and accumulates in the endoplasmic reticulum (ER) lumen.This, in turn, induces ER stress, initiating an adaptive response through the activation of the unfolded protein response (UPR) [26,[28][29][30][31][32].The UPR includes three signaling pathways initiated by the PKR-like ER kinase (PERK), inositol-requiring transmembrane kinase/endoribonuclease 1α (IRE1α), or the activating transcription factor 6 (ATF6) [33].A member of the 70 kDa heat-shock protein (HSP) chaperone family, the ER-associated 78 kDa glucose-regulated protein, also known as the immunoglobulin heavy chain-binding protein (GRP78/BiP), is a key regulator of the UPR signaling pathways.Under normal conditions, GRP78 binds IRE1α, PERK, and ATF6, maintaining their inactive state.When misfolded proteins accumulate within the ER, GRP78 binds them to prevent further misfolding and, thus, dissociates from the three ER stress receptors mediating the UPR.Biologically, the UPR aims to restore the ER function and protect cells against the toxic build-up of un-/misfolded proteins.
However, if ER stress is prolonged or exceeds the adaptive capacity of the cell, it can lead to the activation of apoptosis signaling and cell death.The PERK protein plays a crucial role in the regulation of ER-stress-induced apoptosis due to its involvement in many branching pathways [31,34].Another important player is the C/EBP homologous protein (CHOP), a transcription factor and a downstream regulator of the PERK pathway.pPERK and its phosphorylated downstream target eukaryotic initiation factor 2α (eIF2a), both markers of UPR activation, are detected in neuromelanin-containing DA SNpc neurons in PD cases but not in age-matched controls [28].The induction of the UPR has been documented in various in vitro and in vivo models of PD [26,30,35].Overall, these findings indicate the involvement of the ER-stress PERK/CHOP pathway in neurodegeneration in PD.Therefore, modulating ER stress and inhibiting the PERK/CHOP-dependent proapoptotic UPR pathway could be a prospective therapeutic approach.
GRP78 chaperone is a specific modulator of ER stress, ensuring conformational control of nascent membrane-bound or secretory proteins.Unlike the cytosolic HSP members, GRP78 contains a signal sequence that targets it in the ER.In vivo and in vitro studies have shown that within cells containing α-syn aggregates, α-syn binds to the ER-stress sensor GRP78 [26,30].This indicates that α-syn is a molecular target for GRP78.Chaperone GRP78 is a multifunctional protein that assists in a wide range of folding and refolding processes, the proteasomal endoplasmic-reticulum-associated protein degradation (ERAD) of misfolded proteins, maintaining calcium homeostasis, and regulating the UPR signaling [32,36].Moreover, extracellular and exogenous GRP78 proteins demonstrate long-term anti-inflammatory and immunomodulatory properties in inflammatory diseases [37].The literature data indicate that the mobilization of the GRP78-based chaperone mechanism serves as the first "line of defense" against the fatal consequences of α-syn toxicity and prolonged ER stress.The overexpression of GRP78 can reduce the death of DA neurons in the SNpc and loss of striatal dopamine by halting ER stress and apoptosis in an α-syn model of PD in rats [30,38].
These findings suggest that GRP78 can be a potential therapeutic target for the treatment of PD.Notably, GRP78 protein levels decrease in the SNpc with aging and in sporadic PD patients, which reflects the weakening of the protein conformational control and increased vulnerability of DA neurons to ER stress [38,39].GRP78 is required for the survival of nigral neurons, and its lower level is suggested to be a predisposing factor for the onset and progression of PD and synucleinopathies in humans [38].One of the ways to increase GRP78 in brain neurons is the intranasal delivery of the recombinant GRP78 protein.The intranasal method of GRP78 administration is informed by the in vitro data on the ability of exogenous GRP78 to penetrate living cells, translocate to the ER, and directly affect proteostasis and cell physiology [40].Our preliminary experiments have shown that intranasal GRP78 administration helps mitigate the process of neurodegeneration in the nigrostrial system, suggesting the bioavailability of exogenous GRP78 [41].This study aimed to develop a new neuroprotective approach to PD therapy through the intranasal administration of recombinant GRP78.We hypothesized that elevating GRP78 levels in the SNpc would prevent the abnormal accumulation and formation of pathological α-syn, modulate the UPR, block apoptosis, and inhibit microglial activation.Consequently, this would promote the survival of nigral neurons in the proteasome inhibitor-induced rat model of PD.
GRP78 Treatment Prevents Neuronal Loss in the Substantia Nigra Pars Compacta in a Lactacystin-Induced Rat Model of Parkinson's Disease
To assess the therapeutic potential of exogenous GRP78, we used an LC-induced model of PD in rats, reproducing key pathological signs of PD [20,21].At a dose of 0.4 µg, LC was injected into each side of the SNpc twice, with a 7-day interval (n = 8, group LC).Recombinant human GRP78 was administered intranasally (n = 8) at a dose of 1.6 µg/8 µL to each nostril 4 h and 28 h after each LC microinjection, as well as 7 days after the last microinjection (group LC+GRP78, see Figure 1 for details).Control rats (n = 8) were treated similarly but received an equivalent volume of the vehicle instead of LC and GRP78 (group vehicle).GRP78 was also introduced to naïve animals (n = 3, group GRP78).Twenty-one days after the first LC or vehicle microinjection, behavior tests were performed and then the animals were sacrificed for further immunohistochemical and biochemical analyses.In preliminary experiments [41], using motor behavior tests (sunflower seed test, Suok test, inverted horizontal grid test), we showed that double injections of 0.4 µg LC in the SNpc lead to no motor dysfunction.To assess the number of surviving DA neurons and their axons, brain sections were stained with antibodies against tyrosine hydroxylase (TH), the marker of DA neurons.
Next, we stained SNpc brain sections with antibodies against GRP78 and evaluat the optical density of GRP78 in SNpc neurons in animals treated with Alexa-labe GRP78 and control animals receiving its solvent (PBS).The analysis showed a clear tre towards increased protein levels GRP78 in neurons of the SNpc; the content of GRP increased 1.4 times (p = 0.06) 3 h after its administration compared with the control (Figu The morphological analysis of nigrostriatal sections revealed the loss of 27% of DA neurons in the SNpc and 19.4% of their axons in the dorsal striatum after LC administration compared to the vehicle (Figure S1).This suggests the development of neurodegeneration imitating the pre-symptomatic (preclinical) stage of PD, as motor dysfunction symptoms do not manifest until at least 50-60% of the DA neurons in the SNpc are lost [9].Treatment with GRP78 significantly prevented the loss of DA neurons in the SNpc (Figure S1a,c) and their axons in the dorsal striatum (Figure S1b,d).On the other hand, GRP78 did not affect the number of TH-positive neurons in the SNpc in rats untreated with LC, indicating that GRP78 administration is not responsible for neurodegeneration.Thus, our data demonstrate that exogenous GRP78 mitigates the neurodegenerative process in the nigrostriatal system in the LC-induced rat model of PD, and chaperone therapy has a neuroprotective effect.
Exogenous GRP78 Can Penetrate Brain Structures and Be Internalized by Neurons and Microgliocytes in a Lactacystin Rat Model of Parkinson's Disease
The neuroprotective potential of exogenous GRP78 shown in the LC model of PD suggests its bioavailability to the brain upon intranasal administration.To experimentally prove that exogenous GRP78 can penetrate the brain and be internalized by cells when administered intranasally, we analyzed the localization of the fluorescently labeled protein in brain structures pathogenetically significant for PD.GRP78, labeled by Alexa-555, was administered intranasally to rats after the microinjection of LC and after the injection of the LC vehicle (phosphate buffer saline, PBS) into the SNpc.Using antibodies against TH, we found that labeled GRP78 (red signal, Figure 2d-j) penetrates the brain and localizes in the cytosol, but not in the nuclei of DA neurons of the SNpc 3 h after its intranasal administration, both after LC injections (Figure 2a,d,g,i) or after LC vehicle injection (Figure S2).The merged signal (yellow) illustrates the co-localization of labeled GRP78 and TH in cell bodies in the SNpc (Figure 2g,j).A red fluorescent signal was absent following the intranasal administration of unlabeled GRP78 to a rat receiving the LC microinjection into the SNpc (Figure S3).
Next, we stained SNpc brain sections with antibodies against GRP78 and evaluated the optical density of GRP78 in SNpc neurons in animals treated with Alexa-labeled GRP78 and control animals receiving its solvent (PBS). The analysis showed a clear trend towards increased protein levels of GRP78 in neurons of the SNpc; the content of GRP78 increased 1.4 times (p = 0.06) 3 h after its administration compared with the control (Figure S4). This further confirms that GRP78 penetrates the brain and starts to accumulate in neurons 3 h after its intranasal administration.
Since proteins can undergo proteolysis in the brain, we stained the SNpc sections of rats that received Alexa-555-labeled GRP78 with antibodies against GRP78. We showed that GRP78, recognized by specific antibodies (green signal, Figure 2b), co-localized (the merged signal, yellow) with exogenous labeled GRP78 (Figure 2h,k). Importantly, exogenous GRP78 also migrated to other brain regions affected by PD in humans [8,42], such as the ventral tegmental area and locus coeruleus. There, GRP78 was able to cross the plasma membranes of neurons and localize in their cytosol, as illustrated in Figure S5. Thus, we can conclude that the reduced loss of DA neurons in the SNpc is linked to the increase in exogenous GRP78 protein content after its intranasal administration and to its neuroprotective properties in the LC-induced PD model.
As shown previously, exogenous GRP78 is rapidly internalized by monocytes in the peripheral blood, directly impacting various phenotypical and metabolic functions of myeloid cells [37,43]. We assumed that microglial brain cells could internalize exogenous GRP78, mediating its immunomodulatory effect. To test our assumption, we used antibodies against the microglial surface marker Iba-1 (ionized calcium-binding adaptor molecule) (Figure 2c). As seen in Figure 2f, GRP78 (red signal) was efficiently internalized by the microgliocytes of the SNpc (Figure 2i,l).
Thus, the protective effect of exogenous GRP78 on DA neurons of the SNpc appears to be associated with its ability to penetrate neurons and microglia and directly influence proteostasis and cell physiology during the development of PD-like pathology.
Exogenous GRP78 Prevents Abnormal Accumulation of Phosphorylated pS129-α-syn in Nigral Tissue in the Lactacystin Model of Parkinson's Disease
In order to find out whether the protective effect of exogenous GRP78 on DA-ergic neurons is associated with a decrease in the signs of α-syn pathology, we analyzed the total content of the water-soluble α-syn protein and its phosphorylated form pS129 using a Western blot analysis with antibodies against α-syn and pS129-α-syn.We tested nigral tissue samples of rats, treated or untreated with GRP78, 21 days after the first LC microinjection (see the experimental scheme for details, Figure 1).
Our results showed that the total concentration of water-soluble monomeric α-syn and its pS129 form in the SNpc of rats in the LC-induced PD model is 1.3 and 1.4 times higher, respectively, compared to the vehicle control (Figure 3a,b,d).As Figure 3c,d illustrate, the pS129/total soluble α-syn ratio increased with LC treatment.Therefore, pS129-α-syn predominates in the water-soluble monomeric α-syn fraction.At the same time point, the neurodegeneration of DA neurons in the SNpc and their axons in the dorsal striatum was observed after LC microinjections (Figure S1), which may be a consequence of the pS129-α-syn toxicity.Treatment with GRP78 prevented the LC-induced accumulation of pS129 α-syn (Figure 3b-d), while levels of total water-soluble α-syn remained elevated (Figure 3a,d).This effect coincided with a better survival of TH-positive neurons in the nigrostriatal system (Figure S1).GRP78 did not change the amount of water-soluble α-syn protein and pS129 α-syn in LC-untreated control rats.Thus, our data demonstrate that the treatment of LC-animals with exogenous GRP78 can reduce the content of potentially cytotoxic pS129 α-syn form.To determine whether exogenous GRP78 counteracts the activation of the PERK/CHOP pro-apoptotic pathway of the UPR, we measured the GRP78 level and phosphorylation of eIF2α as ER stress indicators in nigral tissue using Western blot analyses.Furthermore, we assessed the levels of a pro-apoptotic transcription factor CHOP and well-known effectors of neuronal apoptosis-cleaved forms of caspase-3 and caspase-9-which play a crucial role in cell degeneration through the canonical mitochondrial apoptosis pathway.
Using specific antibodies against GRP78 and against the total and phosphorylated (Ser51) forms of eIF2α, we found that the GRP78 protein level increased by 66 ± 13.2% (p ≤ 0.001) in the SNpc on day 21 after the first LC injection compared to the vehicle control (Figure 4a,d). eIF2α phosphorylation (Ser51) also increased in the LC-treated animals, which suggests that there is ER stress in the SNpc (Figure 4b,d). Next, we investigated whether the upregulation of pSer51-eIF2α coincided with elevated levels of pro-apoptotic factors in the SNpc, such as CHOP and cleaved caspase-9 and caspase-3. We found that the CHOP protein was upregulated in LC-injected animals compared to the vehicle control (Figure 4c,d). We also observed a 23 ± 7% increase in cleaved caspase-9 protein levels (p ≤ 0.05) and a 24 ± 4.8% increase in cleaved caspase-3 levels (p ≤ 0.01) after LC administration (Figure 4e-g), indicating the activation of the pro-apoptotic PERK-CHOP branch of the UPR and the development of neuronal apoptosis induced by ER stress.
In contrast, treatment with GRP78 downregulated ER stress mediators and the level of pro-apoptotic proteins in LC-injected animals. The Western blot assessment showed no increase in the levels of GRP78 or pSer51-eIF2α in the SNpc (Figure 4b,d). Moreover, GRP78 prevented the upregulation of the pro-apoptotic factor CHOP, as evidenced by a decrease in CHOP protein levels compared to control values (Figure 4c,d). We also found that the levels of cleaved caspase-9 and cleaved caspase-3 returned to normal in the SNpc after GRP78 administration (Figure 4e-g). There was no change in the GRP78 content in the group of GRP78-treated animals. This is due to the fact that the content of GRP78 was measured 7 days after the last administration of exogenous GRP78. Thus, we suggest that by this time point, the administered exogenous GRP78 protein had degraded. Consequently, in our experiments in the LC+GRP78 group, an elevation of GRP78 content was not observed for two reasons: (i) exogenous GRP78 had degraded by this time; (ii) treatment with GRP78 prevented the development of ER stress; therefore, the expression of endogenous GRP78 did not occur.
At the same time, the Western blot analysis demonstrated no significant changes in GRP78, p-eIF2α, CHOP, activated caspase-3, or caspase-9 in the SNpc of control (LC-untreated) rats. This indicates that GRP78 itself does not induce ER stress or apoptosis in healthy animals.
In summary, our results showed that the intranasal administration of GRP78 prevented the activation of the GRP78/eIF2/CHOP signaling pathway, caspase-9, and caspase-3.This inhibition effectively mitigated the ER stress response and reduced apoptosis in the SNpc in the LC model of the preclinical stage of PD in rats.
Exogenous GRP78 Inhibits Microglia Activation and the Production of Proinflammatory Cytokines TNF-α and IL-6 via the NF-κB Signaling Pathway in the Lactacystin Model of Parkinson's Disease
We then investigated whether exogenous GRP78 has anti-inflammatory properties.As an increased number of activated microgliocytes is a marker of neuroinflammation [44], we first assessed the status of microglia in the SNpc of LC-treated rats.For this purpose, we implemented immunohistochemistry using antibodies against the microglial marker Iba-1 to quantify the number of Iba-1-immunopositive cells.We showed that LC caused a 38% (p = 0.002) increase in Iba-1-positive cells in the SNpc on day 21 after the first injection compared to the vehicle control (Figure 5a,b).During the visual analysis under a light microscope, we observed LC-induced morphological changes, such as larger soma sizes and less ramified processes (Figure 5a, lower panel).This indicates an increase in the number of microglial cells adopting an activated phenotype.
Next, we investigated whether the activation of microglia is associated with the release of pro-inflammatory cytokines TNF-α and IL-6, which participate in the pathogenesis of PD [44].The immunoblot analysis demonstrated that the levels of TNF-α and IL-6 in the SNpc increased by ~2 times in LC-injected rats compared to the control (Figure 6).Taken together, these data indicate the development of the inflammatory process in the SNpc coupled with the death of DA neurons in the LC-induced rat model of the preclinical stage of PD.
In contrast, treatment with GRP78 decreased reactive microgliosis, as indicated by a 20% (p < 0.05) decrease in Iba-1-positive cells (Figure 5), and inhibited the production of pro-inflammatory cytokines TNF-α and IL-6 (Figure 6) in the SNpc of model animals. The administration of GRP78 alone affected neither the number of Iba-1-positive cells nor TNF-α and IL-6 levels in the SNpc. These results show that GRP78 can protect neurons against the excessive activation of microglia in the LC-induced rat model of PD.
To establish the mechanism enabling the GRP78-mediated inhibition of microglial activation, we explored the activity of the NF-κB-dependent p65/RelA signaling pathway. This pathway facilitates the induction of proinflammatory cytokines [45], and NF-κB dysregulation has been found in patients with PD and in the substantia nigra of MPTP-treated mice [46]. Post-mortem studies showed an increase in p65 nuclear translocation in melanized neurons of the substantia nigra, which is supportive of NF-κB activation in PD. We assessed the expression patterns of p65 and phosphorylated p65 (p-p65) after LC treatment with or without GRP78 using Western blot analysis. The level of p-p65 was found to increase in the LC group only, suggesting the activation of NF-κB during PD development. However, GRP78 inhibited the increase in p-p65 expression in the SNpc in the LC model (Figure 7), while no significant changes were found in LC-untreated rats. Hence, the decrease in p65 phosphorylation can be an essential factor in the inhibition of activated microglia by exogenous GRP78.
Taken together, our data demonstrate that GRP78 can protect neurons from the excessive activation of microglia via NF-κB signaling pathways in the LC-induced rat model of PD.
Discussion
With the population aging rapidly, the global prevalence of PD is rising, which significantly contributes to the increase in healthcare costs.Developing preventive PD therapy has proven challenging due to the limited bioavailability of neuroprotective drugs, partly because of the blood-brain barrier.One of the new approaches is the intranasal route of administration, delivering the drug from the nasal cavity directly to the brain via the olfactory and trigeminal nerves.It allows neurotherapeutic agents, including both small and large molecules, to bypass the blood-brain barrier [47,48].Intranasal administration has shown therapeutic effects in animal and human studies of different pathologies [48].
In this study, we evaluated the neuroprotective potential of the intranasally administered recombinant human protein GRP78 in a rat PD model.The intranasal route was chosen considering that GRP78 protein can leave the ER, traverse the cell membrane, and enter the extracellular space [49], cerebrospinal fluid, and peripheral blood under normal and pathological conditions [37,[50][51][52].Moreover, when administered intravenously, exogenous GRP78 or its synthetic analog IRL201805 can be rapidly internalized by monocytes in the peripheral blood and directly impact various phenotypical and metabolic functions of myeloid cells [37].It is noteworthy that the ability to enter the mammalian brain and neurons has been observed for another member of the same chaperone family, HSP70 (HSPA1).After intranasal administration, human recombinant HSP70 demonstrates therapeutic effects in animal models of PD and Alzheimer's disease [53,54].This highlights the potential of using the intranasal delivery of chaperones to the brain for neuroprotection.
Neuroprotective interventions are most effective at the early (preclinical) stage of the pathological process. Therefore, we utilized a previously developed LC-induced model [21,55] that reproduces the main pathogenetic signs of the preclinical stage of PD in rats. These include the degeneration of 27% of DA neurons in the SNpc, a level characteristic of the preclinical PD stage [8] (Figure S1); the development of α-syn pathology (Figure 3); and signs of chronic neuroinflammation (Figures 5 and 6). At the molecular level, the model is characterized by the activation of the pro-apoptotic GRP78/PERK/eIF2α/CHOP UPR pathway, caspases-9 and -3 (Figure 4), and the NF-κB-dependent p65 inflammatory signaling pathway (Figure 7). Yet, no motor dysfunction is detected.
At the first stage of our research, we demonstrated that intranasally administered GRP78 penetrates the mammalian brain and is internalized by DA neurons in the SNpc and other brain regions that can be affected by PD in humans (Figure S2).In addition, we have shown that exogenous GRP78 penetrates the brain under normal conditions (Figure S2) and accumulates in the neurons of the SNpc 3 h after administration (Figure S4), but 7 days after administration, GRP78 degrades, since its concentration in the SNpc tissue does not change in comparison to control animals (Figure S4).The internalization of GRP78 is assumed to occur through nonspecific or receptor-mediated endocytosis [37].However, it is unclear what specific receptors and/or docking proteins facilitate endocytosis.
Next, we showed that GRP78 treatment mitigated the process of neurodegeneration in the rat model that mimics the preclinical stage of PD.It is evidenced by an increase in the number of TH-positive neurons in the SNpc and TH-positive axons in the dorsal striatum (Figure S1).Furthermore, intranasal treatment with GRP78 in control animals, without LC, was characterized by neither neurodegeneration in the nigrostriatal system nor behavioral deficit.This indicates the absence of cytotoxic properties of GRP78.Similar neuroprotective effects of elevating GRP78 via its overexpression in the SNpc have been shown in α-syn pathology models in rats [30,38], the 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) PD model in mice [56], and the rotenone PD model in rats [57].Taken together, our data, along with the existing literature, indicate the therapeutic significance of elevating GRP78 levels in the brain during the development of PD-like pathology.
The abnormal accumulation of α-syn, and especially its phosphorylated and oligomeric forms, in the ER lumen and cytosol of the SNpc DA neurons is known to play a critical role in neuronal death in PD, although the underlying mechanism is poorly understood [10,58,59].We first focused on investigating whether the neuroprotective effect of exogenous GRP78 is related to its ability to prevent α-syn pathology and the induction of the pro-apoptotic ER stress branch.We assessed levels of total and phosphorylated α-syn accumulated in the SNpc in the LC-induced PD model (Figure 3a,b) and demonstrated that pS129-α-syn predominated in the water-soluble monomeric α-syn fraction (Figure 3c).Hence, the enhanced S129-phosphorylation of α-syn may play a key role in the death of DA neurons in the SNpc.This assumption is supported by evidence showing that blocking S129-phosphorylation results in fewer α-syn aggregates and reduces neuronal cell death induced by the mitochondrial toxin rotenone [59].In addition, the increased phosphorylation of α-syn can contribute to its transformation into oligomers or even aggregates, thereby affecting the cytotoxicity of α-syn and promoting neuronal death [12,[59][60][61].It is assumed that extensive S129 phosphorylation during PD-like pathology is most likely caused by an increased influx of extracellular Ca 2+ due to mitochondrial impairment [62] and the increased expression and activity of polo-like kinase 2 (PLK2, also known as serum-inducible kinase or SNK) [63,64].
Here, we showed that treatment with GRP78 reduced the content of the pS129 α-syn form (Figure 3b-d) in the LC model of PD.This effect coincided with an increase in the survival of DA neurons in the SNpc.We suggest that, at least in part, it was mediated by the direct interaction of GRP78 with both phosphorylated and non-phosphorylated forms of α-syn [26,29,30].Such interaction could prevent excessive S129-phosphorylation and inhibit the multistep aggregation pathway of α-syn, reducing related toxicity.
The accumulation of aberrant α-syn forms is a central element for the induction of the UPR that can trigger apoptotic cell death in PD [31].Since the exogenous GRP78 protein prevented the development of α-syn pathology in our preclinical PD model in rats, we then tested the hypothesis that this effect can lead to the inhibition of the ER stress response and a reduction of apoptosis in the SNpc.Indeed, the accumulation of monomeric α-syn in the SNpc correlated with the activation of the pro-apoptotic PERK-dependent pathway of the UPR in LC-treated rats (Figure 4a-d).This was evidenced by an increase in the level of the sensor protein and UPR activator GRP78, as well as the activation of eIf2α and upregulation of the ATF4-dependent pro-apoptotic factor CHOP (Figure 4a-d).At the same time, CHOP upregulation resulted in the activation of caspase-9 and caspase-3 (Figure 4e-g), promoting cell degeneration through the canonical mitochondrial apoptosis pathway.Overall, our data indicate that the prolonged hyperactivation of the PERK/CHOP pathway of the UPR promotes ER stress-dependent apoptosis in the SNpc in the animal model of preclinical PD.These findings correlate with studies on postmortem tissues from PD patients [28,33] and animal models of PD [35,[65][66][67] that demonstrate the activation of the pro-apoptotic PERK/CHOP pathway in nigral tissue.Treatment with GRP78 downregulated ER stress mediators of the PERK-dependent pathway of the UPR (Figure 4a-d) and prevented the activation of pro-apoptotic caspases-9 and -3 (Figure 4e-g), which contributed to the survival of DA neurons in the SNpc in LC-injected animals.These results support the data on the neuroprotective effect of GRP78 overexpression, which is associated with the downregulation of the pro-apoptotic factor CHOP and a reduction in apoptosis in the SNpc in the rat model of α-syn pathology [30].
Neuroinflammation manifests in microglia activation and lymphocyte infiltration.It can be provoked by the release of misfolded α-syn from damaged and dead neurons, leading to the development and progression of PD [23].Activated microglia is a chronic source of pro-inflammatory cytokines, reactive oxygen species (ROS), and nitric oxide (NO), all of which can induce neuronal death [68].Large numbers of activated microglia and elevated levels of TNF-alpha receptor R1 in the SNpc, along with activated caspase-1 and caspase-3, have been observed in PD [69][70][71].Furthermore, in vivo imaging has confirmed that widespread microglial activation is associated with the pathological process in idiopathic PD [45].In our LC-induced model of the preclinical PD, the number of Iba-1 positive cells of amoeboid-like phenotype (Figure 5) was correlated with an increase in pro-inflammatory cytokines TNF-α and IL-6 (Figure 6) in the SNpc.This indicates the development of reactive microgliosis and neuroinflammation, potentially contributing to the death of DA neurons.We showed that intranasally delivered GRP78 was efficiently internalized by the microgliocytes of the SNpc (Figure 2c,f,i,l) and directly affected cell physiology.Its protective action manifested in a decreased number of Iba1-positive cells and lower levels of TNF-α and IL-6; these data indicate reduced microglial activation and neuroinflammation in the SNpc.However, it is not yet clear whether the anti-inflammatory effect of GRP78 is associated with the phenotypic shift of pro-inflammatory M1 microglia to anti-inflammatory M2 microglia, which may promote neuroprotection.Notably, following the systemic administration of GRP78 or its analog IRL201805 in an animal model of rheumatoid arthritis, these proteins are rapidly internalized by monocytes, which even-tually leads to an increased secretion of IL-10 and the suppression of TNF-α and IL-1β release [37].These anti-inflammatory properties of exogenous GRP78 help regulate and resolve chronic inflammation.
Toll-like receptors (TLRs) can serve as essential immune receptors in PD, triggering neuroinflammation [72,73].TLRs can recognize a wide variety of damage-associated molecular patterns, including misfolded α-syn, released by damaged and dead neurons.Upon the recognition of these molecules, TLRs trigger a signaling cascade that activates NF-κB factors.NF-κB factors play a crucial role in the regulation of inflammation and apoptosis, and are involved in the pathogenesis of PD [46].To find out whether the anti-inflammatory effect of exogenous GRP78 depends on its ability to modulate the NF-κB signaling pathway, we assessed expression patterns of p65 and phosphorylated-p65.The NF-κB-dependent p65 signaling pathway was shown to be activated in the SNpc in the LC-induced model of preclinical PD.This may be a signal regulating the molecular activation of microglia at an early stage of the disease.However, treatment with GRP78 inhibited the nigral activation of NF-κB (Figure 6).Taken together, the results demonstrate that exogenous GRP78 exerts potent anti-inflammatory effects.It can protect neurons against the excessive activation of microglia by targeting NF-κB signaling pathways during the development of LC-induced PD-like pathology.
Overall, our data support the therapeutic relevance of delivering GRP78 intranasally to the brain to prevent and/or slow down PD-like neurodegeneration.We determined that the neuroprotective potential of exogenous GRP78 is linked to its ability to (i) prevent the manifestation of α-syn pathology, (ii) block ER stress-dependent apoptosis, and (iii) mitigate the excessive activation of microglia by targeting NF-κB signaling pathways in the LCinduced rat model of preclinical PD.
Animals
The study was carried out in 6-month-old male Wistar rats, weighing 280-310 g. The animals were housed in individual cages under standard environmental conditions (12:12 h light-dark cycle; ambient temperature 23 ± 2 °C; food and water available ad libitum). The experiments were conducted under the requirements of the EU Directive 2010/63/EU on the treatment of laboratory animals and those of the Sechenov Institute of Evolutionary Physiology and Biochemistry of the Russian Academy of Sciences (Protocol No. 1-17/2022, 27 January 2022). The rats were placed in groups randomly.
Implantation of Guiding Cannulas
Before implantation surgery, animals were anesthetized with intramuscular injection of Zoletil-100 (50 mg/kg; tiletamine hydrochloride and zolazepam; Virbac, Carros, France) and then placed into a stereotaxic device (Narishige, Tokyo, Japan).Two stainless-steel guide cannulas (internal diameter 0.3 mm) were implanted into the SNpc for bilateral drug injections.The coordinates were as follows: 5.0 mm caudal to the bregma, 2.0 mm lateral to the midline, and 7.5 mm deep from the skull surface [74].The guide cannulas were secured with Akrodent dental cement (Stoma, Kharkiv, Ukraine).Then, animals were returned to their home cages, and experiments began no earlier than 7 days post-surgery.
Modeling Parkinson's Disease in Wistar Rats
Cannulated animals were used to model PD and evaluate the protective potential of intranasally administered GRP78.To create the PD model, we used a specific, irreversible proteasome inhibitor lactacystin (LC; Enzo Life Sciences, Farmingdale, NY, USA).Phosphate-buffered saline (PBS) was filtered through a sterilized syringe filter (30 mm, PVDF, 0.22 µm; JET BIOFIL ® , Seoul, Republic of Korea).LC was diluted in sterile PBS to the final concentration of 0.4 µg/µL and injected immediately after dilution through cannulas to rats (n = 6-8, group LC).Two sequential bilateral microinjections of LC into the SNpc were performed at a dose of 0.4 µg/1 µL with a weekly interval.For microinjections, we used a needle with an external diameter of 0.2 mm attached to a 1 µL Hamilton syringe (Hamilton, Reno, NV, USA) via a short length of polyethylene tubing.LC was injected at a flow rate of 0.1 µL/ min.Control rats were treated similarly but received an equivalent volume of vehicle (PBS) instead of LC.
GRP78 Treatment
Recombinant human heat shock protein GRP78 (Sigma, Livonia, MI, USA) was diluted in sterile PBS (pH 7.4) and administered intranasally (to each nostril) to rats (n = 6-8, group LC+GRP78) at a dose of 1.6 µg/8 µL, 4 h and 28 h after each microinjection of LC.Additional administration of GRP78 was performed 7 days after the last LC microinjection.The control group of animals (n = 6-8) received an equivalent volume of the vehicle (PBS) instead of LC and GRP78.GRP78 was also administered to intact animals, untreated with LC (n = 3, group GRP78) (see experimental design in Figure 1).Intranasal injections were performed using a 10 µL micropipette (JET BIOFIL ® , Seoul, Republic of Korea) at a flow rate of 3 µL/min.Animals were given one-minute intervals to regain normal respiratory function.All the effects were evaluated 21 days later.
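For orientation, the dosing arithmetic implied by the volumes and flow rates above can be laid out as a small sketch (Python; the variable names and the per-administration reading of the 8 µL volume are our assumptions, not part of the protocol):

# Dosing arithmetic sketch (values taken from this section; names are ours).
LC_DOSE_UG = 0.4          # lactacystin per microinjection, ug
LC_CONC_UG_PER_UL = 0.4   # working concentration, ug/uL
LC_FLOW_UL_PER_MIN = 0.1  # microinjection flow rate, uL/min
lc_volume_ul = LC_DOSE_UG / LC_CONC_UG_PER_UL          # 1 uL per SNpc injection
lc_duration_min = lc_volume_ul / LC_FLOW_UL_PER_MIN    # 10 min per microinjection

GRP78_DOSE_UG = 1.6       # GRP78 per administration, ug
GRP78_VOLUME_UL = 8.0     # delivered volume, uL
IN_FLOW_UL_PER_MIN = 3.0  # intranasal delivery rate, uL/min
grp78_conc_ug_per_ul = GRP78_DOSE_UG / GRP78_VOLUME_UL          # 0.2 ug/uL working solution
intranasal_duration_min = GRP78_VOLUME_UL / IN_FLOW_UL_PER_MIN  # roughly 2.7 min

print(lc_duration_min, grp78_conc_ug_per_ul, intranasal_duration_min)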
GRP78 Labeling and Confocal Microscopy
GRP78 protein was conjugated with a fluorescent Alexa-555 dye (Invitrogen, Waltham, MA, USA) according to the manufacturer's protocol. Briefly, 50 µL of 10 mg/mL Alexa-555 solution in dimethyl sulfoxide was slowly added to 5 mg of GRP78 in 500 µL of 0.1 M sodium bicarbonate, pH 8.3, and vortexed for 2 min. The mixture was incubated for 1 h at 4 °C with continuous stirring. The reaction was stopped by adding 50 µL of freshly prepared 1.5 M hydroxylamine, pH 8.5. The conjugate was separated from non-reacted labels through triple dialysis in PBS at 4 °C.
GRP78 protein labeled with Alexa-555 was administered intranasally to rats (n = 4) at a dose of 1.6 µg/8 µL, 4 h after a microinjection of LC or after PBS injection (n = 4) into the SNpc. Animals treated with unlabeled GRP78 after LC injection into the SNpc (n = 4) were used as controls. Three hours later, the rats were anesthetized with Zoletil-100 (50 mg/kg, i.m.) and rapidly transcardially perfused with 0.1 M PBS (pH 7.4) and 4% paraformaldehyde in 0.1 M PBS. After that, the animals were decapitated, and their brains were isolated and placed in the same fixative overnight at 4 °C. Following 48 h incubation in 30% sucrose/PBS at 4 °C for cryoprotection, the brains were frozen in cold isopentane (−42 °C) and stored at −80 °C for further use. Serial frontal brain sections were prepared using a Leica CM-1520 cryostat ("Leica Biosystems", Nussloch, Germany). Sections (10 and 20 µm) of the SNpc, the ventral tegmental area (VTA), and the locus coeruleus were prepared according to the brain atlas [74]. Eight to twelve alternate series of sections were mounted on SuperFrost Plus Adhesion Microscope Slides ("Gerhard Menzel GmbH", Braunschweig, Germany) and stored at −22 °C.
For confocal microscopy, brain sections were dried at 23 °C overnight, repeatedly washed in PBS or PBS with 0.1% Tween-20 (PBST), and pre-incubated in 4% blocking solution (2% bovine serum albumin and 2% normal goat serum diluted in PBST) for 1 h at 23 °C. Next, the sections were incubated with primary antibodies against tyrosine hydroxylase (TH; 1:900; rabbit, ab117112, Abcam, Cambridge, UK), GRP78 (1:300; rabbit, ab21685, Abcam, Cambridge, UK), or Iba-1 (1:500; rabbit, Novus Biologicals, Centennial, CO, USA) for 24 h. After washing with PBS, the sections were incubated for 2 h at room temperature with secondary anti-rabbit IgG antibodies labeled with DyLight-488 (1:350; 35552, Thermo Scientific, Waltham, MA, USA). Following several PBS washes, the slides were coverslipped with Mowiol (Sigma, Burlington, MA, USA). Unlabeled sections were used to measure autofluorescence. Images were obtained on a DMI6000 confocal microscope with a Leica TCS SP5 laser scanning confocal setup (Leica Microsystems, Wetzlar, Germany) using a ×63 oil immersion objective. The resulting images were analyzed using the Leica LAS AF version 4.0 software package. To avoid cross-interference between fluorochromes, images for Alexa-555 and DyLight-488 were acquired using the sequential image recording method.
Immunohistochemical Studies
Twenty-one days after the first LC microinjection, rats were anesthetized with Zoletil-100 and decapitated. One half of each brain was used for immunohistochemical assays; the second half was used for further biochemical analysis. For immunohistochemical assays, brains were isolated and placed in 4% paraformaldehyde in 0.1 M PBS overnight at 4 °C. Following 48 h incubation in 30% sucrose/PBS at 4 °C, the brains were frozen in cold isopentane (−42 °C) and stored at −80 °C for further use. Serial frontal brain sections were prepared using a Leica CM-1520 cryostat ("Leica Biosystems", Nussloch, Germany). Sections (10 µm) of the SNpc were prepared according to the brain atlas [74]. Ten to twelve alternate series of sections were mounted on SuperFrost Plus Adhesion Microscope Slides ("Gerhard Menzel GmbH", Braunschweig, Germany) and stored at −22 °C.
Images of the stained sections of the SNpc were obtained using a Zeiss Axio Imager A1 microscope (Carl Zeiss, Jena, Germany) with a built-in camera and Axio-Vision 4.8 software. Quantitative analysis was performed using 10-12 sections from each animal at the same level of the studied zones, separated by approximately 70 µm. The number of cells was counted in a standard area of tissue captured by the light microscope camera (×20 lens, 697 × 523 µm, for Iba-1 staining; ×10 lens, 1389 × 1040 µm, for GRP78 staining). The number of Iba-1-positive cell bodies was counted manually and expressed as the average number of positively stained microglial cells per SNpc section. The optical density reflecting the content of a GRP78-immunopositive substance was calculated as the difference between the intensity of intensely colored neurons containing an immunoreactive substance and the intensity of background coloring (not containing an immunoreactive substance) on the same section. The results were presented in relative units of optical density.
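As a rough illustration of the quantification just described, the per-section measures could be computed along the following lines (a sketch only; the image arrays and threshold-based masks are placeholders and do not reproduce the exact Axio-Vision/PhotoM workflow):

import numpy as np

def relative_optical_density(section_img, cell_mask, background_mask):
    # Mean intensity of immunopositive profiles minus mean background
    # intensity on the same section, in relative optical-density units.
    return section_img[cell_mask].mean() - section_img[background_mask].mean()

rng = np.random.default_rng(0)
sections = [rng.uniform(0, 255, (523, 697)) for _ in range(10)]   # 10 sections per animal
per_section = []
for img in sections:
    cell_mask = img > 200        # placeholder segmentation of stained cells
    background_mask = img < 50   # placeholder background region
    per_section.append(relative_optical_density(img, cell_mask, background_mask))
animal_mean_od = float(np.mean(per_section))
print(animal_mean_od)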
Immunoblotting
The SNpc was dissected from the brain according to the brain atlas [74]. All samples were weighed, frozen at −80 °C, and stored until the analysis. SNpc tissue was then homogenized in lysis buffer containing 20 mM Tris-HCl (pH 7.5), 150 mM NaCl, 0.5% Triton X-100, and 2 mM EDTA supplemented with a protease inhibitor cocktail (Sigma Aldrich, St. Louis, MO, USA) and a phosphatase inhibitor cocktail (Roche, Basel, Switzerland). Next, homogenized tissue was incubated on ice for 1 h until the lysis of the samples was complete. Following centrifugation (13,500× g for 10 min), the supernatant was used for protein quantification and further assays. Protein concentration was measured by the Lowry assay with BSA as a standard. For Western blotting, the protein supernatant was mixed 2:1 with loading buffer (0.0625 M Tris-HCl (pH 6.8), 10% glycerol, 2% SDS, 0.1 mM EDTA, 0.006% bromophenol blue, 10% β-mercaptoethanol) and heated at 95 °C for 7 min. Equal volume aliquots containing 30 µg of total protein were loaded onto 11% polyacrylamide gel and separated by electrophoresis with the Precision Plus Protein Dual Xtra Standards marker (BioRad, Hercules, CA, USA). Protein bands were then transferred onto PVDF membranes (pore size 0.2 µm; BioRad, Hercules, CA, USA) by wet transfer with a Trans-Blot device (BioRad, Hercules, CA, USA).
Protein levels were normalized to the GAPDH or β-Actin signal.The relative amounts of phospho-eIF2α (Ser51) or phospho-NFkB p65 (Ser536) were determined by adjusting for total eIF2α or NFkB p65 protein or for GAPDH.Densitometric analysis was performed in the open-source ImageJ 1.8 software (National Institutes of Health, New York, NY, USA).The ratios of the optical densities of specific protein bands to the total protein were compared to the mean of the control group.
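A minimal sketch of this normalization scheme, using hypothetical band densities rather than the actual ImageJ readouts:

import numpy as np

# Hypothetical band densities (arbitrary units) for one blot, four lanes.
target  = np.array([1.20, 1.35, 0.85, 0.90])   # protein of interest, e.g. CHOP
loading = np.array([2.00, 2.10, 1.95, 2.05])   # GAPDH or beta-actin
phospho = np.array([0.60, 0.72, 0.40, 0.38])   # e.g. phospho-eIF2alpha (Ser51)
total   = np.array([1.00, 1.05, 0.98, 1.02])   # total eIF2alpha

normalized = target / loading          # level relative to the loading control
phospho_ratio = phospho / total        # phosphorylated form relative to total protein
control_mean = normalized[:2].mean()   # lanes 1-2 assumed to be vehicle controls
percent_of_control = 100 * normalized / control_mean
print(percent_of_control, phospho_ratio)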
Statistics
All data were analyzed using GraphPad Prism 8 (GraphPad Software, San Diego, CA, USA).The distribution normality was checked using the Kolmogorov-Smirnov test.Multiple comparisons between the groups of rats were run using the two-way ANOVA test followed by Tukey's post hoc tests.Intergroup differences were considered statistically significant at p ≤ 0.05.All data were represented as the mean ± standard error of the mean (SEM) and as individual values.
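For readers who want to reproduce this statistical pipeline on their own data, an equivalent open-source sketch in Python (SciPy/statsmodels) is given below; the data frame, factor coding and outcome values are illustrative, not the study data:

import pandas as pd
from scipy import stats
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# One row per rat: 'lc' (0/1), 'grp78' (0/1), 'outcome' (e.g. a normalized protein level).
df = pd.DataFrame({
    "lc":      [0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0],
    "grp78":   [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1],
    "outcome": [100, 95, 105, 140, 150, 135, 110, 105, 112, 98, 102, 97],
})

# Normality check (Kolmogorov-Smirnov against a standard normal after z-scoring).
z = (df["outcome"] - df["outcome"].mean()) / df["outcome"].std(ddof=1)
print(stats.kstest(z, "norm"))

# Two-way ANOVA (LC factor x GRP78 factor) followed by Tukey's post hoc test.
model = smf.ols("outcome ~ C(lc) * C(grp78)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
groups = df["lc"].astype(str) + "/" + df["grp78"].astype(str)
print(pairwise_tukeyhsd(df["outcome"], groups))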
Conclusions
In an LC-induced rodent model, we achieved the first successful treatment of PD-like pathology using an intranasal delivery of the recombinant human protein GRP78 to the brain. We report that intranasally administered GRP78 rapidly enters affected brain regions and prevents the development of neurodegeneration in the nigrostriatal system during the preclinical stage of PD in rats. LC-induced disturbances, including ER stress-dependent apoptosis and the abnormal accumulation of monomeric phosphorylated pS129-α-syn, are alleviated with GRP78 administration. Moreover, exogenous GRP78 exhibits anti-inflammatory properties and can protect neurons against the excessive activation of microglia, as well as the increased production of pro-inflammatory cytokines, TNF-α and IL-6, by targeting NF-κB signaling pathways. Although further investigation into
Figure 1.Experimental design.The timing and sequence of LC and GRP78 injections and procedures are shown.Red arrows, LC-microinjections of the proteasome inhibitor lactacystin into the SNpc (0.4 µg/1 µL).Black arrows, GRP78-treatment with recombinant human heat shock protein GRP78 (1.6 µg/8 µL) or the corresponding vehicle, sterile PBS, administered intranasally 4 h and 28 h following each microinjection of LC or vehicle and 7 days after the last injection.
Figure 2. Labeled GRP78 penetrates the brain and is localized in DA neurons and microglial cells of the substantia nigra pars compacta (SNpc) 3 h after its intranasal administration in a rat model of Parkinson's disease.GRP78 protein labeled by Alexa-555 was administered intranasally to rats (n = 4) after a microinjection of lactacystin as described in the Materials and Methods.Brain sections were stained with (a) specific anti-TH antibodies (green signal), (b) anti-Grp78 antibodies specific to human protein (green signal), and (c) anti-Iba-1 antibodies (green signal).(d-f) Localization of labeled GRP78 is seen as a red signal.(g-i) Panels show co-localization of labeled GRP78 and anti-TH, anti-Grp78, or anti-Iba-1 signals.(j-l) Panels show magnified representative images of the colocalization within neurons and microglia cells marked by yellow box.Arrows indicate co-localization of labeled GRP78 with corresponding proteins.Images were obtained using confocal microscopy.Scale bars are 25 µm for neurons in the SNpc and 10 µm for microglia.
Figure 3. Exogenous GRP78 prevents abnormal accumulation of α-syn phosphorylated at S129 (pS129) in nigral tissue in a rat model of Parkinson's disease. Nigral content of (a) the soluble form of α-syn, (b) α-syn phosphorylated at S129 (pS129), (c) the phosphorylated to soluble α-syn ratio. Western blot analysis of nigral tissue was conducted with antibodies against the soluble and S129-phosphorylated forms of α-syn. Anti-GAPDH antibody staining was used as the loading control. (d) Representative Western blots. The results in panels (a-c) are presented as percentages of the control (100%). Bar charts indicate mean values with standard errors. The dots, squares, triangles and rhombuses show individual values per rat. A two-way ANOVA test followed by Tukey's post hoc analysis was performed to determine the effects of GRP78 therapy. Asterisks indicate significant differences between groups according to Tukey's post hoc tests.
Exogenous GRP78 Counteracts the Activation of the GRP78/eIF2α/CHOP/Caspase-3,9 Pro-Apoptotic UPR Signaling Pathway in the Lactacystin Model of Parkinson's Disease
Figure 5. Exogenous GRP78 inhibits microglia activation in a lactacystin rat model of Parkinson's disease. (a) Brain sections (10 µm) of the substantia nigra pars compacta (SNpc) were prepared according to the brain atlas and stained with antibodies against Iba-1 (1:500; rabbit, Novus Biologicals, Centennial, CO, USA). The images were obtained using a Zeiss Axio Imager A1 microscope (Carl Zeiss, Oberkochen, Germany) with a built-in video camera and Axio-Vision 4.8 software. Original images are shown in the upper panel. Scale bars are 100 µm. The second panel shows magnified images of microglia cells (zoom). The third panel shows magnified images of the morphology of microglia cells within the dotted box area (zoom). (b) Quantitative analysis was performed using 10-12 sections from each animal at the same level of the studied zones, separated by approximately 70 µm. The number of cells was counted in a standard area of tissue captured by a light microscope camera using a ×20 lens. The analysis was performed using the PhotoM freeware version 1.21 (http://www.t_lambda.chat.ru/ accessed on 11 December 2019). Bar charts indicate mean values with standard errors. The dots, squares, triangles and rhombuses show individual values per rat. Two-way ANOVA test followed by Tukey's post hoc analysis were performed to determine the effects of GRP78 therapy. Asterisks indicate significant differences between groups according to Tukey's post hoc tests: ** p < 0.01 vs. the vehicle group; # p < 0.05 vs. the LC group. Interaction factor for microglia in SNpc F (2, 23) = 2.099; Grp78 factor F (2, 23) = 5.466, p = 0.0284; LC factor F (2, 23) = 17.04, p = 0.0004.
"year": 2024,
"sha1": "4a731157809630d20a17530eab6c1d7b28193d42",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/25/7/3951/pdf?version=1712044620",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "69a4ec1aea02734c14c1d9e6a6a52c510ddeac0a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
Cisplatin-induced emesis: systematic review and meta-analysis of the ferret model and the effects of 5-HT3 receptor antagonists
Purpose The ferret cisplatin emesis model has been used for ~30 years and enabled identification of clinically used anti-emetics. We provide an objective assessment of this model including efficacy of 5-HT3 receptor antagonists to assess its translational validity. Methods A systematic review identified available evidence and was used to perform meta-analyses. Results Of 182 potentially relevant publications, 115 reported cisplatin-induced emesis in ferrets and 68 were included in the analysis. The majority (n = 53) used a 10 mg kg−1 dose to induce acute emesis, which peaked after 2 h. More recent studies (n = 11) also used 5 mg kg−1, which induced a biphasic response peaking at 12 h and 48 h. Overall, 5-HT3 receptor antagonists reduced cisplatin (5 mg kg−1) emesis by 68% (45–91%) during the acute phase (day 1) and by 67% (48–86%) and 53% (38–68%, all P < 0.001), during the delayed phase (days 2, 3). In an analysis focused on the acute phase, the efficacy of ondansetron was dependent on the dosage and observation period but not on the dose of cisplatin. Conclusion Our analysis enabled novel findings to be extracted from the literature including factors which may impact on the applicability of preclinical results to humans. It reveals that the efficacy of ondansetron is similar against low and high doses of cisplatin. Additionally, we showed that 5-HT3 receptor antagonists have a similar efficacy during acute and delayed emesis, which provides a novel insight into the pharmacology of delayed emesis in the ferret.
Introduction
It is generally accepted that nausea and vomiting (emesis) are components of a protective mechanism by which the human body defends itself against ingested toxins. However, the emetic reflex can be triggered inappropriately, and nausea and vomiting are also relatively common side effects of drugs in current use (e.g. morphine, anti-cancer chemotherapy) as well as dose-limiting toxicities, which may limit the development of novel chemical entities intended for the treatment of a range of diseases (e.g. phosphodiesterase-IV inhibitors for the treatment of asthma [135]). The multi-system nature of the emetic reflex coordinated in the brainstem and the behavioural and sensory expression of nausea have meant that, to date, preclinical studies of the mechanisms involved and identification of novel anti-emetic agents have involved studies in whole animals (conscious, anaesthetised or decerebrate) [61]. Nausea and vomiting are particularly associated with the treatment of cancer by cytotoxic drugs (e.g. cisplatin), symptoms which patients find particularly distressing and which impact upon compliance with treatment. In the absence of anti-emetic prophylaxis, cisplatin induces nausea and vomiting in virtually all patients [109]; the emetic response lasts up to 5 days on each cycle and is characterised by an intense acute phase lasting ~24 h and a less intense but more protracted delayed phase peaking during the period 48-72 h following the administration of cisplatin [72]. In the early 1980s, the ferret (Mustela putorius furo L.) was reported to develop an acute emetic response to high-dose cisplatin (8-10 mg kg−1) and was proposed as an alternative model to the dog, cat and monkey (commonly used at the time) to study cytotoxic drug-induced emesis and identify potential anti-emetic agents [40]. Subsequently, the acute cisplatin model was modified and the dose of cisplatin lowered to 5 mg kg−1 to investigate delayed emesis [114]. The ferret model of cisplatin-induced emesis was rapidly adopted for the investigation of new anti-emetic agents and was pivotal in establishing the anti-emetic efficacy of 5-hydroxytryptamine3 (5-HT3) [94] and tachykinin NK1 receptor antagonists [148], which are both currently in widespread use for the treatment of chemotherapy-induced nausea and vomiting [109].
The use and benefit of animal models in research is regularly questioned, and anecdotal evidence or unsupported claims, as opposed to quantitative support, are too often used as justifications [88,105]. There has recently been a growing interest in systematic reviews and meta-analyses to assess the validity of animal models (i.e. how preclinical research has informed clinical research) and their utility in drug discovery (i.e. evaluate data and inform the decision to carry out a clinical trial). The Nuffield Council for Bioethics [101] recommends that such reviews are undertaken to "evaluate more fully the predictability and transferability of animal models". Such analyses also have implications for the application of the principles of the 3Rs (Replacement, Refinement, Reduction) to animal experimentation [61,68] and should inform preclinical guidelines produced by regulators (e.g. [37]). Recently, systematic reviews and meta-analyses of animal models of stroke have been carried out. A retrospective study concluded that even though individual studies had reported beneficial effects of the calcium channel blocker nimodipine, overall the preclinical data available were not conclusive [62], which is consistent with the fact that this type of drug was without effect in humans [63] and highlights the necessity of quantifying animal data adequately before starting clinical trials. Later studies assessed the preclinical evidence of the effect of potential treatments in experimental stroke and characterised their neuroprotective properties in order to identify research priorities [78-80].
The cisplatin-induced emesis ferret models provide a unique opportunity to assess the value of systematic reviews in specific areas, because the wealth of data available in this relatively circumscribed area allows assessment of two characteristics of a model: the response to cisplatin itself, and the anti-emetic potential of agents that are currently used in humans. The aim of this systematic review is twofold: firstly, this study intends to provide an objective measure of the characteristics of cisplatin-induced emesis in the ferret, in terms of the latency, magnitude (number of retches and vomits) and profile of the emetic response. Secondly, the effect of 5-HT3 receptor antagonists in the ferret model will be quantified; the present study will assess the efficacy of ondansetron against the acute phase of emesis; additionally, we will compare the overall effect of 5-HT3 receptor antagonists against the acute and delayed phases of emesis.
This paper is the first systematic review and meta-analysis covering a model of emesis and anti-emetics. It provides evidence which supports the predictability of the model and identifies new features of the model not apparent from individual studies. Additionally, it shows the limitations of the model and identifies opportunities for enhanced animal welfare according to the principles of the "3Rs" formulated by Russell and Burch over 50 years ago [126].
Search strategy
Studies were identified from PubMed (1974 to March 2007) and Embase (1980 to March 2007) using the combination of words CISPLATIN and FERRET, hand searching of abstracts of scientific meetings, and personal files. All references of newly identified publications were also screened until no further eligible references were found. Language was not restricted. Values for data expressed graphically were either requested from authors or measured from the graphs. Corresponding authors were also contacted to obtain data that was not reported clearly enough in their publications.
Inclusion criteria:
• Report of cisplatin-induced emesis in the ferret
• Emetic response documented and quantified by at least one of the following: latency to onset of emesis (retching or vomiting), number of animals developing emesis, number of retches (R), vomits (V), retches and vomits (R+V) defined according to our definition and reported as mean only or mean ± SEM or SD, and number of ferrets per group.
Exclusion criteria:
• Number of animals not stated
• Emetic response investigated under anaesthesia
• Emetic response not reported as the number of animals developing emesis, or mean latency, or mean number of retches and vomits compatible with the standard definition of emesis.
Emesis was defined as retching (i.e. rhythmic abdominal contractions against a closed glottis) and vomiting (i.e. rhythmic abdominal contractions associated with the oral expulsion of solid or liquid materials from the gastrointestinal tract) [14,22,89]. Reports stating this definition in their methods section were included in this study; if the definition was absent or unclear, evaluation of the results reported and their inclusion in this quantitative review were left to the judgement of the investigator and discussed with co-workers. Reports were included:
• If the same team had published other reports clearly stating this definition, or a member of the team (most senior or corresponding author) was contacted to confirm the definition used to characterise emesis
• If the report referred to publications clearly stating this definition
• If the definition stated allowed the identification of the number of retches and/or vomits according to our definition.
The latency (time to onset of emesis) was a potential confounding factor, as many publications reported the latency as a mean for all the animals in the groups, including those free of emesis, in which case the latency was taken as the total duration of the observation time. All latencies were recalculated as the mean latency to the first retch or vomit in animals that developed emesis only. The latency was either measured as the time to the first retch, the time to the first vomit or the time to the first emetic episode. All the latencies reported were combined together, as it was considered that only a minimal period of time separates the first retch from the first vomit [144]; a more rigid approach, including only the studies with either one or the other measurement and excluding those reporting the latency to the first emesis or emetic episode, would have induced a greater error.
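One way such a latency correction can be performed when only the group mean, group size and number of responders are reported is the simple algebraic rearrangement sketched below (illustrative values; we do not claim this is the exact procedure applied when individual animal data were available):

def latency_in_responders(reported_mean, n_total, n_with_emesis, observation_time):
    # Recover the mean latency among responders only, given a group mean in
    # which emesis-free animals were scored at the full observation time.
    n_free = n_total - n_with_emesis
    total_latency = reported_mean * n_total - n_free * observation_time
    return total_latency / n_with_emesis

# Example: reported mean 2.0 h over 8 ferrets, 6 of which vomited, 4-h observation.
print(latency_in_responders(2.0, 8, 6, 4.0))   # -> 1.33 h among responders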
Meta-analysis: the ferret model of cisplatin-induced emesis
The number of retches (R), vomits (V), retches + vomits (R+V) and/or latency data from control groups (i.e. animals that received no other drug than cisplatin, or cisplatin and an inactive (i.e. non-emetogenic) vehicle) were extracted as mean, standard deviations (SD) and number of animals per group. Weighted mean and weighted mean of the SD were calculated, and a one-way ANOVA was carried out to compare the onset of emesis following different doses of cisplatin. Unless stated, all results are reported as mean ± SD. In order to identify variables modulating the emetic response, subgroup analyses were carried out according to criteria such as the vehicle used, duration of the observation period, the mode of administration of cisplatin (i.v. or i.p.), the use of anaesthesia and the recovery time prior to the emetic challenge, the strain (fitch or albino), sex and origin of the ferrets. This analysis was only carried out on the most common doses of cisplatin used to induce acute (10 mg kg−1) and acute and delayed emesis in the ferret (5 mg kg−1); the two doses were treated separately. Weighted means and the weighted mean of the SDs were calculated and profiles of emesis were constructed with GraphPad Prism® version 5.0 (GraphPad Software Inc., San Diego, USA). Differences were assessed by a one-way analysis of variance (ANOVA) or independent sample t tests as appropriate. Descriptive statistics and comparisons were carried out using SPSS® 14.00 (SPSS Inc., Chicago, USA) and CLINSTAT (M. Bland). Differences were considered statistically significant when P < 0.05.
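The pooling of control-group data described above (weighted mean and weighted mean of the SD, with group size as the weight) amounts to the following calculation; a sketch with invented example values:

import numpy as np

def weighted_mean(values, ns):
    values, ns = np.asarray(values, float), np.asarray(ns, float)
    return (values * ns).sum() / ns.sum()

# Example: latency (h) from three hypothetical control groups.
group_means = [1.4, 1.6, 1.5]
group_sds   = [0.3, 0.4, 0.2]
group_ns    = [8, 6, 10]

pooled_mean = weighted_mean(group_means, group_ns)
pooled_sd   = weighted_mean(group_sds, group_ns)   # weighted mean of the SDs, as in the text
print(pooled_mean, pooled_sd)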
Meta-analysis: the effect of anti-emetics
For the meta-analysis of the effects of anti-emetics, comparisons were only included if the effect of a prophylactic anti-emetic treatment was reported; the three outcomes measured were the number of R+V, the proportion of animals experiencing emesis and the latency to the onset of emesis. To calculate the effect size and its 95% confidence interval for the continuous outcomes (i.e. R+V and latency), the mean outcome for the treatment group, and the SDs in treatment and control groups, were expressed as a proportion of the outcome in the control group [80]. Actual data were used for dichotomous outcomes (i.e. the number of animals with emesis). When a control group was used to assess more than one treatment group, the number of animals in the control group was divided by the number of treatment groups and, if needed, adjusted to the next integer. This methodology is consistent with what has been done in another meta-analysis of animal data in a model of stroke [80]. The effect of ondansetron was examined for each of the three outcomes; subgroup analyses were carried out depending on the dose of cisplatin and duration of the observation period, the dose, timing and mode of administration of ondansetron, the mode of administration of cisplatin, the origin of the ferrets, and the quality score of the study. Additional analyses examined the effects of 5-HT3 receptor antagonists on the latency to the onset of cisplatin (10 mg kg−1)-induced emesis and on the acute and delayed R+V induced by 5 mg kg−1 cisplatin (criterion for variation: individual compound). In the latter analysis, comparisons on a given day were only included if the anti-emetic treatment started before or at the start of the 24-h period and was continued throughout the day.
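The normalization of continuous outcomes to the control group, and the splitting of shared control groups, can be sketched as follows (hypothetical numbers; this is not a re-implementation of RevMan):

import math

def normalized_outcome(mean_trt, sd_trt, mean_ctrl, sd_ctrl):
    # Express treatment mean and both SDs as a proportion of the control mean,
    # so that R+V counts from different studies become comparable.
    return mean_trt / mean_ctrl, sd_trt / mean_ctrl, 1.0, sd_ctrl / mean_ctrl

def split_control_n(n_control, n_treatment_arms):
    # When one control group serves several treatment arms, divide its n,
    # rounding up to the next integer as described in the Methods.
    return math.ceil(n_control / n_treatment_arms)

# Example: ondansetron group 30 +/- 20 R+V vs control 120 +/- 40 R+V.
print(normalized_outcome(30, 20, 120, 40))   # (0.25, 0.167, 1.0, 0.333)
print(split_control_n(8, 3))                 # 3 control animals attributed per comparison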
Methodological quality of individual studies was assessed according to criteria chosen to evaluate the reliability of the data extracted. These criteria were: no duplicate publication identified (confirmed or suspected), retch and vomit clearly defined or definition confirmed by authors, latency to the first retch or vomit given, SEM/SD given for the mean latency, number of retches and vomits or R+V given, SEM/SD given for the mean R+V, number of ferrets completely protected given (1 point per criterion fulfilled); origin, sex, strain and body weight of the ferrets given (1/2 point per criterion fulfilled). Each study was given a quality score out of a possible total of 9 points. The DerSimonian and Laird method was used to combine dichotomous (risk difference [RD]) and continuous data (weighted mean difference [WMD]). The random-effects model was chosen over the fixed-effect assumption because it incorporates inter-study differences into the analysis of the overall treatment efficacy [28]. The data were analysed with Review Manager (RevMan, Version 5.0 for Macintosh; Copenhagen: The Nordic Cochrane Centre, The Cochrane Collaboration, 2008). All effect estimates are reported as mean and 95% confidence intervals. Z tests were used to assess the overall effect of treatments, and Chi-squared (χ2) tests were used to assess the heterogeneity along with I2, which describes the percentage of the variability in effect estimates that is due to heterogeneity rather than chance (a value greater than 50% may be considered substantial heterogeneity). The differences between the treatment effects of sub-categories of a particular outcome were assessed by a Z test [1,87]; potential publication biases were assessed by funnel plots [143].
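For readers unfamiliar with the DerSimonian and Laird random-effects model and the I2 statistic, the standard calculation is sketched below (textbook formulae with invented effect sizes; the actual analyses were run in RevMan):

import numpy as np

def dersimonian_laird(effects, variances):
    # Pool per-study effect sizes (e.g. weighted mean differences) with
    # DerSimonian-Laird random effects; returns pooled effect, SE, tau^2, I^2.
    y, v = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / v                                   # fixed-effect weights
    fixed = (w * y).sum() / w.sum()
    q = (w * (y - fixed) ** 2).sum()              # Cochran's Q
    df = len(y) - 1
    c = w.sum() - (w ** 2).sum() / w.sum()
    tau2 = max(0.0, (q - df) / c)                 # between-study variance
    w_star = 1.0 / (v + tau2)                     # random-effects weights
    pooled = (w_star * y).sum() / w_star.sum()
    se = np.sqrt(1.0 / w_star.sum())
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, se, tau2, i2

# Example: three hypothetical comparisons of ondansetron vs control (normalized R+V).
print(dersimonian_laird([-0.9, -0.6, -0.75], [0.04, 0.06, 0.05]))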
Publications
As of March 2007, 182 publications were retrieved; 115 publications describing cisplatin-induced emesis in the ferret were identified, 32 publications were excluded and 83 publications contained usable data (Fig. 1; Tables 1, 2). A further 15 publications were excluded on the grounds that data was already reported elsewhere. The remaining 68 publications were either fully or partly included, as some papers presented original data and duplicate data together; in this case only the original data was extracted and the duplicate data was ignored.
Out of the 68 publications from which at least one outcome was extracted, 44, 10 and 9 publications reported the effect of at least one 5-HT3 receptor antagonist, NK1 receptor antagonist and glucocorticoid, respectively. In terms of outcome, 63 studies reported the latency, which was either measured as the latency to the first retch (13% of the publications reporting the latency), the latency to the first vomit (16%) or the latency to the first emetic episode (75%). Fifty-one studies reported the number of retches and vomits (R+V) in a given observation time; 37 and 36 studies respectively reported the number of retches and vomits separately. The number of animals with emesis during the duration of the observation time was reported in 45 publications.
Dose of cisplatin
The latency to the onset of emesis was dose dependent; the time of onset was significantly delayed following a dose of cisplatin of 5 mg kg−1, compared with higher doses (6-20 mg kg−1, one-way ANOVA followed by Bonferroni post-tests, see Fig. 2). Additionally, differences between the doses of 6-20 mg kg−1 were detected (P < 0.001, one-way ANOVA) and the latency shortened as the dose increased (Fig. 3a).
The latency to the onset of emesis was shorter when cisplatin was administered intravenously (1.16 ± 0.35 h, n = 277) compared to intraperitoneally (1.51 ± 0.29 h, n = 134, P < 0.001, independent sample t test). However, the route of administration was directly related to the use of anaesthesia, as all the animals injected with cisplatin intravenously had also received prior anaesthesia; anaesthesia (1.33 ± 0.24 h, n = 7; 148 ± 100 R+V, n = 7) was also found to increase the number of R+V (P < 0.05, one-way ANOVA and Bonferroni post-tests).
No differences were detected in latency or number of R+V over 4 h in groups of male ferrets only compared to groups of males and females (P > 0.05, independent sample t tests). The latency was significantly reduced in albino ferrets (1.07 ± 0.14 h, n = 29) compared to fitch ferrets (1.34 ± 0.20 h, n = 70) and mixed groups of albino and fitch ferrets (1.25 ± 0.36 h, n = 108, P < 0.05, one-way ANOVA and Bonferroni post-tests), but not enough data were available to compare the number of R+V.
The latency to the onset of emesis was significantly longer in ferrets bred in New Zealand (1.73 ± 0.37 h, n = 11) compared to animals bred in the UK (1.25 ± 0.44 h, n = 170) and in the USA (1.31 ± 0.19 h, n = 176, P < 0.05, one-way ANOVA and Bonferroni post-tests). No differences were detected in the number of R+V over 4 h (P > 0.05), but the number of R+V over 2 h was reduced in New Zealand ferrets (40 ± 41 R+V, n = 6) compared to UK animals (88 ± 32 R+V, n = 18, P = 0.007, independent sample t tests). As all the ferrets originating from New Zealand were challenged with an intraperitoneal dose of cisplatin, a sub-analysis was carried out only in animals administered cisplatin i.p.; the delay in latency was still significant in ferrets bred in New Zealand compared to animals bred in the UK (1.43 ± 0.46 h, n = 29, P < 0.05) but not to animals bred in the USA (1.51 ± 0.21 h, n = 84, P > 0.05, one-way ANOVA and Bonferroni post-tests).
The 5 mg kg⁻¹ cisplatin model (acute and delayed emesis model)
Fourteen studies investigated cisplatin-induced acute and delayed emesis in the ferret; cisplatin was administered i.p. in all studies and no study reported prior use of anaesthesia. A biphasic profile of emesis was observed; the acute phase started 10.51 ± 0.58 h (n = 156) after cisplatin administration, peaked after 12 h and a nadir was reached after 24 h (Fig. 3b). The delayed phase was more intense than the acute phase and reached a peak 48 h post cisplatin before gradually decreasing in intensity during the next 24 h, until 72 h post cisplatin, at which time a small amount of emesis still persisted. 105 ± 83 R+V (n = 215) and 340 ± 171 (n = 153) were observed during the acute and delayed phase, respectively (161 ± 98 R+V during day 2 and 179 ± 94 R+V during day 3, n = 130). Overall, 448 ± 231 R+V (n = 153) were observed during the entire 72-h period.
In animals that did not receive any vehicle, the latency was 11.76 ± 9.86 h (n = 98), and animals that received i.p. injections of vehicles such as saline, distilled water and 10% NaHCO3 had a latency of 5.52 h. Overall, the injection of a vehicle had an impact on the latency (P < 0.0001, one-way ANOVA) but no specific differences were detected compared to the group that did not receive any vehicle (P > 0.05, Bonferroni post-tests). None of the vehicles had a significant impact on the number of R+V during the acute or the delayed phase (P > 0.05, one-way ANOVA). Neither the strain nor the sex of the ferrets had an impact on the latency or the number of R+V (P > 0.05, one-way ANOVA).

Meta-analysis of the effect of ondansetron

Outcome: number of retches and vomits (R+V)

Five variants of the acute model of cisplatin-induced emesis were combined in this meta-analysis: emesis induced by 5 mg kg⁻¹ cisplatin with R+V quantified for 24 h, and emesis induced by 10 mg kg⁻¹ with R+V quantified for 24, 6, 4 and 2 h. Ondansetron significantly reduced the R+V in all variants of the model but one; as shown in Table 3, the reduction of R+V did not reach statistical significance when emesis was induced by 10 mg kg⁻¹ cisplatin and quantified for 24 h. There was a trend for the effect of ondansetron to increase with shorter observation times (41, 68, 70 and 93% reduction for 24, 6, 4 and 2-h observation periods, respectively) but this did not reach statistical significance. The R+V reduction was dose-dependent: doses of 1-10 mg kg⁻¹ afforded a more effective protection than lower doses of 0.1-0.5 mg kg⁻¹ (40 and 83%, respectively, Z = 2.52, P = 0.010). The regimen of ondansetron administration did not change the outcome; the effect of ondansetron was similar with i.v., i.p. and s.c. injections (Z tests, P > 0.05), and ferrets treated 30 min prior to cisplatin administration received the same degree of protection as ferrets treated at the time of cisplatin injection (Z = 0.04, P = 0.965, see Table 3). With longer observation periods, ondansetron three times daily was as effective as twice-daily injections (Z = 0.05, P = 0.959). Ondansetron had the same efficacy on the R+V induced by an i.p. or an i.v. dose of cisplatin (Z = 1.07, P = 0.287), and the origin of the animals or the quality score of the studies (see Table 1) did not influence the outcome (Z tests, P > 0.05).
Fig. 4 Efficacy of ondansetron on the number of retches + vomits during the acute phase of emesis induced by cisplatin 5 or 10 mg kg⁻¹. Point estimates and 95% confidence intervals for each of the ondansetron versus control comparisons, ranked by dose. The effect estimate was computed as the weighted mean difference (WMD) and expressed as the proportion of retches and vomits in the control group. An effect estimate of −1 indicates that emesis was abolished in the treatment group, 0 indicates that the treatment had no effect on the R+V response and an effect estimate >0 indicates that the treatment increased the number of R+V. The size of each square represents the weight of the comparison in the WMD calculation.

Table 3 Sensitivity analyses of the effect of ondansetron on the number of retches + vomits (R+V) induced by cisplatin (5 or 10 mg kg⁻¹). The effect estimate was computed as the weighted mean difference (WMD) and expressed as the proportion of retches and vomits in the control group; −1 indicates that emesis was abolished in the treatment group, 0 indicates no effect and >0 indicates an increase in R+V. The variables examined were the variant of the cisplatin model (5 or 10 mg kg⁻¹ cisplatin and the duration of the observation period), the mode of administration of cisplatin, the dose of ondansetron, the regimen of ondansetron administration (mode of delivery and timing relative to cisplatin administration), the animal origin (country where the animals were bred) and the quality score assigned to the study from which comparisons were extracted.

Outcome: number of animals with emesis

Data on the number of animals with emesis were extracted from 14 publications; 28 comparisons involving 256 ferrets were identified. The global estimate of the effect of ondansetron was −0.33 (−0.48 to −0.17), indicating that following the administration of cisplatin 5 or 10 mg kg⁻¹, ondansetron abolished the emetic response during the observation time in one-third of the ferrets (see Fig. 5). Overall, this effect was significant (Z = 4.09, P < 0.001) but substantial statistical heterogeneity (χ² = 115.13, df = 27, P < 0.001, I² = 76.5%) was detected between comparisons. Subgroup analyses revealed differences in estimates of efficacy between variants of the model. Whereas ondansetron had a significant effect against emesis induced by 10 mg kg⁻¹ cisplatin and quantified over short observation times (2-4 h), it did not show any significant effect against 10 mg kg⁻¹ cisplatin-induced emesis quantified for 6 and 24 h or 5 mg kg⁻¹ cisplatin-induced emesis quantified for 24 h (see Table 4). Efficacy increased as the dose increased, and only doses of ondansetron higher than 0.1 mg kg⁻¹ had a significant effect, but there was significant heterogeneity within those two subgroups (see Table 4). No statistical differences were detected between i.p., i.v. and oral (p.o.) administration of ondansetron (Z tests, P > 0.05), but χ² and I² tests revealed a high degree of heterogeneity in all 3 subgroups. Administered s.c., ondansetron was ineffective, but this could potentially be misleading as only 2 comparisons were included in that group and both reported the number of animals completely protected for 24 h, in which setting none of the animals were completely protected (see Table 4, Fig. 5). The regimen used did not significantly influence the number of animals completely protected by ondansetron.
There were no differences between the subgroup that received ondansetron at the time of cisplatin injection and the subgroups that received it 30 min and 1 h prior to cisplatin (Z tests, P > 0.05) but, once again, these results should be taken with caution as the heterogeneity within each subgroup was high. With longer observation times (24 h), ondansetron was equally ineffective given 2 or 3 times a day. Ondansetron appeared slightly more effective when cisplatin was injected i.v. compared to i.p., but this did not reach statistical significance (Z = 1.757, P = 0.079) and could be biased by the fact that in all the comparisons where animals were observed for 24 h, cisplatin was injected i.p. Furthermore, heterogeneity was, once again, highly significant in both groups (see Table 4). The origin of the animals and the quality score of the studies did not influence the effect of ondansetron (Z tests, P > 0.05); however, heterogeneity was high in all subgroups (see Table 4).
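For this dichotomous outcome the per-comparison effect is a risk difference; a toy calculation (with hypothetical counts, purely for illustration) shows how an estimate such as −0.33 arises and which variance is used for weighting:

```python
def risk_difference(emetic_t, n_t, emetic_c, n_c):
    """Risk difference for the outcome 'number of animals with emesis'.
    Example: 4/6 treated vs 6/6 control ferrets with emesis gives RD = -0.33,
    i.e. about one third of the treated animals were completely protected."""
    p_t, p_c = emetic_t / n_t, emetic_c / n_c
    rd = p_t - p_c
    variance = p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c  # used to weight comparisons
    return rd, variance
```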
Outcome: latency

Latency data were extracted from 10 full papers. Fifteen comparisons involving 131 ferrets assessed the effect of ondansetron on emesis induced by 10 mg kg⁻¹ cisplatin and 3 comparisons assessed the effect of ondansetron on the latency to the onset of emesis induced by 5 mg kg⁻¹ cisplatin (see Fig. 6). The global estimate of the effect of ondansetron on the latency was 0.86 (0.49-1.24), which means that the latency was nearly twice as long in the groups treated with ondansetron compared to the control groups. This effect was significant (Z = 4.45, P < 0.001) but a χ² test revealed a high degree of heterogeneity (χ² = 314.35, df = 14, P < 0.001), which was corroborated by the I² (96%). Subgroup analysis revealed that ondansetron statistically delayed the latency to the onset of emesis induced by 10 mg kg⁻¹ cisplatin but not 5 mg kg⁻¹ cisplatin (Table 5, Fig. 6). The effect of ondansetron was dose dependent and doses of 1 mg kg⁻¹ conferred a significantly higher protection, increasing the latency by about 200% (Z tests, P < 0.05). Ondansetron was more effective when given i.p. than p.o. or i.v. (Z tests, P < 0.05); however, this result needs to be taken with caution as ondansetron was injected i.p. in all the comparisons where the highest dose was given (see Fig. 6). No significant differences were observed with different treatment times, there was no difference in outcome if cisplatin was injected i.p. or i.v., and the origin of the ferrets did not influence the outcome (Z tests, P > 0.05). The outcome was not influenced by the quality of the study (see Table 1), as no differences were detected between studies scoring less than 5 out of 9, between 5 and 7/9, and 7.5 or higher (Z tests, P > 0.05). Funnel plots for the effect of ondansetron on the R+V and the number of animals with emesis were relatively symmetrical (see Fig. 7). The funnel plot for the latency was slightly asymmetrical, which reflects the high degree of heterogeneity detected for this outcome. Overall, no associations between treatment effect and sample size were detected, suggesting no evidence of publication bias.

Fig. 5 The size of each square represents the weight of the comparison in the total RD calculation.
Effect of 5-HT3 receptor antagonists on the latency to the onset of emesis induced by 10 mg kg⁻¹ cisplatin
The effects of 5-HT3 receptor antagonists on the latency to the onset of 10 mg kg⁻¹ cisplatin-induced emesis were investigated with 11 different anti-emetics (ondansetron, granisetron, tropisetron, indisetron, dolasetron, L-683,877, renzapride, zacopride, bemesetron, azasetron and ramosetron); these data were extracted from 22 studies, and 76 comparisons were reported, involving 587 ferrets. However, for 14 comparisons, because only one animal developed emesis in the treatment group, the point estimate and confidence interval could not be computed and these comparisons were therefore not included in the calculation of the effect estimate. Altogether, 5-HT3 receptor antagonists increased the latency by 72% (effect estimate: 0.72, 95% CI 0.56-0.87); this effect was highly significant (Z = 9.08, P < 0.0001) but showed a high degree of heterogeneity. The effect of 5-HT3 receptor antagonists on the daily R+V induced by 5 mg kg⁻¹ cisplatin was significant on each of the 3 days (Z = ..., ... and 4.44 for days 1, 2 and 3, respectively; P < 0.0001, see Fig. 8). There was no difference between the effect of 5-HT3 receptor antagonists on each of the 3 days (Z tests, P > 0.05); no statistical heterogeneity was detected (χ² tests, P > 0.05 and I² = 0% on each of the 3 days). The effects of granisetron and ondansetron were significant for each of the 3 days, whereas indisetron significantly reduced the R+V during days 1 and 3 but not day 2. No significant differences were detected between the effects of ondansetron, granisetron and indisetron for each of the 3 days (Z tests, P > 0.05).

Table 4 Sensitivity analyses of the effect of ondansetron on the number of animals with emesis following the administration of cisplatin (5 or 10 mg kg⁻¹). The effect estimate was computed as the risk difference (RD), i.e. the difference in the proportion of animals with emesis during the observation period; 0 indicates that the treatment had no effect on the number of animals with emesis, −1 indicates maximal effect. The variables examined were the variant of the cisplatin model (5 or 10 mg kg⁻¹ cisplatin and the duration of the observation period), the mode of administration of cisplatin, the dose of ondansetron, the regimen of ondansetron administration (mode of delivery and timing relative to cisplatin administration), the animal origin (country where the animals were bred) and the quality score assigned to the study from which comparisons were extracted.
Discussion and conclusions
Cisplatin-induced emesis in the ferret

We found that the latency to the onset of cisplatin-induced emesis was dose dependent, which is consistent with findings in other species such as humans [71], dogs [10] and pigeons [106]. A step-change was observed between 5 and 6 mg kg⁻¹, suggesting a difference in the mechanisms triggering emesis at low and high dose, possibly the activation of an additional mechanism (e.g. recruitment of less sensitive vagal afferent branches, area postrema). Prior anaesthesia and the route of administration of cisplatin were identified as confounding factors; the emetic response was also modulated by some of the vehicles used and by factors inherent to the ferrets such as strain and origin. The difference between intravenous and intraperitoneal cisplatin may only reflect prior anaesthesia, as these two factors were dependent in the present study. Certainly, anaesthesia had an impact in its own right, as differences were detected between injectable and volatile anaesthetics. The impact of the ferrets' origin could reflect a genuine difference between populations of ferrets, but it might also indicate differences between laboratories rather than a difference between animals; this cannot be determined from the present study. These findings however stress the relevance of choosing appropriate controls (e.g. vehicle control, sham-operated) and homogeneous groups of animals when using the ferret cisplatin model of emesis.

Fig. 6 Efficacy of ondansetron on the latency to emesis induced by cisplatin (5 or 10 mg kg⁻¹). Point estimates and 95% confidence intervals for each of the ondansetron versus control comparisons, ranked by dose. The effect estimate is the impact of the treatment on the latency expressed as a proportion of the latency in the control group. An effect estimate <0 indicates that the latency was shorter in the treatment group than in the control group, 0 indicates that the treatment had no effect on the latency and an effect estimate of 1 indicates that the treatment increased the latency by 100%. The size of each square represents the weight of the comparison in the WMD calculation. Note that in 3 comparisons, the effect estimate was not estimable as only one animal developed emesis in the group treated with ondansetron.

Table 5 Sensitivity analyses of the effect of ondansetron on the latency to the onset of emesis induced by cisplatin (5 or 10 mg kg⁻¹). The effect estimate was computed as the weighted mean difference (WMD) and represents the impact of the treatment on the latency expressed as a proportion of the latency in the control group; <0 indicates a shorter latency than in the control group, 0 indicates no effect and 1 indicates that the treatment increased the latency by 100%. The variables examined were the dose and mode of administration of cisplatin, the dose of ondansetron, the regimen of ondansetron administration (mode of delivery and timing relative to cisplatin administration), the animal origin (country where the animals were bred) and the quality score assigned to the study from which comparisons were extracted.

The profile of emesis induced by 5 mg kg⁻¹ i.p. cisplatin in the ferret was clearly biphasic, which is consistent with the profile of emesis observed in the clinic [84], but differences were observed in the timing and magnitude of the two phases.
Whereas the acute phase is more severe than the delayed phase in humans [72,84], with an onset 1-6 h following cisplatin infusion [55,71,84], the latency to the onset of the acute phase was greater in the ferret (>10 h) and its relative intensity compared to the delayed phase was lower. The incidence of emesis on each day could not be investigated in the ferret as all reports stated that 100% of the animals developed an emetic response but no distinction was made between the acute and delayed phases. A few studies have however suggested that the acute phase was not observed in all animals [115,147], which is also consistent with our recent observations [104] and suggests that whereas the incidence of emesis during the delayed phase is close to 100%, the incidence of acute emesis at this dose is lower. In humans treated with placebo anti-emetics, however, the incidence of emesis during the acute phase (98%, [71]) is higher than during the delayed phase (44-89% [43,72]).
Effect of 5-HT3 receptor antagonists
The efficacy of ondansetron in the acute model of cisplatin-induced emesis was assessed by measuring 3 outcomes: the number of R+V, the number of animals with emesis and the latency to the onset of emesis. Overall, all 3 outcomes permitted the detection of significant anti-emetic protection, which is consistent with findings in humans [24], but different variants of the model resulted in different levels of anti-emetic protection. Whereas all variants of the acute model reflected a similar reduction of R+V, ondansetron only delayed significantly the onset of emesis following 10 but not 5 mg kg⁻¹ cisplatin, and the number of animals with emesis was only reduced with observation periods no longer than 4 h. In the 10 mg kg⁻¹, 4-h variant of the model, half of the animals were completely protected from emesis, which was comparable to the percentage of patients free of emesis during the acute phase of high-dose (>50 mg m⁻²) cisplatin-induced emesis in human patients [58,65,85,124]. Overall, the acute phase of emesis induced by 5 mg kg⁻¹ cisplatin represented the clinical situation poorly, and the 10 mg kg⁻¹, 4-h model was more predictive of cisplatin-induced emesis in humans.

Fig. 8 Efficacy of 5-HT3 receptor antagonists on the daily number of retches + vomits (R+V) induced by 5 mg kg⁻¹ i.p. cisplatin during the acute (day 1) and delayed (days 2, 3) phases of emesis. Point estimates and 95% confidence intervals for each of the 5-HT3 receptor antagonist versus control comparisons, ranked by dose. The effect estimate was computed as the weighted mean difference (WMD) and expressed as the proportion of retches and vomits in the control group. An effect estimate of −1 indicates that emesis was abolished in the treatment group, 0 indicates that the treatment had no effect on the R+V response and an effect estimate >0 indicates that the treatment increased the number of R+V. The size of each square represents the weight of the comparison in the WMD calculation.

Fig. 7 Funnel plots for the effect of ondansetron on the number of retches + vomits (a), number of animals with emesis (b) and latency (c). For each comparison, the effect estimates are plotted on the x-axis and the corresponding standard errors are plotted on the y-axis.
Overall, 5-HT3 receptor antagonists reduced the emetic response to the same extent during the acute and delayed phases, which contrasts with findings reported in the majority of clinical studies, describing limited or non-significant effects of 5-HT3 receptor antagonists during the delayed phase [57,73]. This discrepancy may be explained by a difference in the outcome usually measured in humans (daily incidence, percentage of patients developing emesis) and ferrets (severity, number of R+V). Even though daily incidence and severity (measured by visual analogue scale) of the delayed phase emesis appear to be positively correlated in the absence of anti-emetic therapy [72], they can be uncoupled following anti-emetic treatment and the number of emetic episodes may be reduced while the incidence remains unchanged [100]. Alternatively, it is conceivable that the delayed phase of emesis in the ferret is more sensitive to 5-HT3 receptor antagonists than it is in humans. The latter would be consistent with a longer acute phase in the ferret (see [110] for details), implying that the mechanism regulating the acute phase (i.e. 5-HT-mediated activation of the abdominal vagal afferents) remains activated longer. The ferret model thus correctly identified the anti-emetic potential of 5-HT3 receptor antagonists against both the acute and delayed phases of cisplatin-induced emesis, but the magnitude of the anti-emetic effect during the delayed phase appears greater in the ferret than it is in humans.
Methodology
In the present study, the methodology used in the meta-analysis of human clinical trials was modified and adapted to animal research, and several concessions had to be made. First of all, the criteria used to select studies for inclusion into the systematic review and meta-analysis, and to assess quality, did not include randomization and blinding. Whereas these two parameters are considered essential for human clinical trials and it has been suggested that their absence favours positive findings in animal research [8], emesis is an objective measurement, which is not investigator-dependent, and we have no reason to believe that the inclusion of such studies in our analysis biased our findings. Additionally, randomization and blinding in the identified studies were too rarely reported to be used as inclusion criteria. The exclusion criteria were chosen to ensure the collection of reliable, clearly defined data. The majority of excluded studies (see Table 2) were removed because emesis was not quantified as latency, retches, vomits and/or incidence. We chose not to include outcomes such as "emetic episodes" or "bouts of emesis" in the present analysis because of the disparity of definitions and possible interpretations. This may however restrict the conclusions of our study.
Secondly, whereas clinical trials usually report tens or hundreds of patients in each study arm, we found that in the cisplatin ferret model, typically 4 to 8 animals were allocated to each treatment group. This may limit the relevance of such an analysis, designed to compare much bigger samples. Additionally, because a majority of studies compared one control group to several treatment groups (typically different doses of a compound), in order to maintain the data from different doses as distinct comparisons, the number of animals in the control group was divided by the number of treatment groups it was compared to. The limitation of such an approach is that in the effect estimate calculation, the weight of such comparisons is reduced, which benefits comparisons extracted from studies that only compared one control group to one treatment group and is not justified by the quality of the studies.
Conclusions
We demonstrated the potential of a meta-analysis to address the 3Rs (Replacement, Refinement and Reduction), developed by Russell and Burch as criteria for a humane use of animals in research [126]. By maximising the utilisation of animal data, thus extracting novel scientific information without increasing the number of animals, such an analysis addresses Reduction, as it reduces the future use of animals. Additionally, the effects of ondansetron on the 3 outcomes highlighted a logical Refinement of the model by reducing the observation period. The 10 mg/kg, 4-h variant of the cisplatin model stands out as the most appropriate to study the acute phase of cisplatin-induced emesis, whereas the 5 mg/kg, 24-72-h variant remains the model of choice to study cisplatin-induced delayed emesis.
The present study is a proof of concept. In an attempt to focus the scope of the analysis, it was limited to an animal model with one emetic (cisplatin), one species (the ferret) and one class of anti-emetic drugs (5-HT3 receptor antagonists); the drugs investigated are already successfully used in humans against chemotherapy-induced emesis. Globally, the effects in the ferret were consistent with clinical findings, which was expected as these drugs were originally developed in the ferret model; this demonstrates that a meta-analysis is an appropriate method to identify the anti-emetic potential of a drug and confirms that the ferret model of cisplatin-induced emesis is truly predictive and relevant to humans. This method can now be applied to investigate the effects of NK1 receptor antagonists, which were also developed using the ferret and which have recently been introduced into the clinic. However, recent studies suggest that there may be a discrepancy between the "broad spectrum" anti-emetic efficacy in the ferret and the efficacy in the clinic [3,25]. This method can also be used in other "model" species (e.g. dog) to reassess "older" compounds such as dopamine receptor antagonists or opioids, whose effects on cisplatin-induced emesis are less well characterised. It can also be directly applied to other emetic models (different emetogen and/or different species) and adapted to assess the relevance of models arguably predictive of nausea and emesis such as pica in the rat and conditioned taste aversion [2,77]. | 2017-08-02T19:04:42.641Z | 2010-05-28T00:00:00.000 | {
"year": 2010,
"sha1": "1c0e70b6656fac69e551297998fce82d07af6198",
"oa_license": "CCBYNC",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00280-010-1339-4.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "91e66157573062f2584c6ccaf6e26d423ae7e397",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
59467890 | pes2o/s2orc | v3-fos-license | Mikromani's Artichoke (Cynara Cardunculus Var. Scolymus) - A Mediterranean Nutraceutical
Globe artichoke is considered one of the most important vegetable crops in the European and non-European countries of the Mediterranean basin. The Mediterranean region is well known for the 'Mediterranean diet', with attributed health benefits based on the consumption of fruits and vegetables, olive oil, etc. The artichoke has been recognized for the treatment of several ailments, and its edible parts reveal therapeutic activity. In our case we have investigated the Micromani's artichoke, which is a local variety of the Micromani area in the South region of Peloponnese. In the present work, a nutritional determination of vitamins such as vitamin C and folic acid, minerals, fibers and total phenolics was carried out. key words: Mikromani's, Artichoke, Considered, Mediterranean.
Introduction
Globe artichoke is considered one of the most important vegetable crops in Italy (364,871 t), Spain (199,100 t), France (42,465 t) and Greece (31,600 t), as well as in non-European countries of the Mediterranean basin 1 (FAO 2012). Globe artichoke is also cultivated to a lesser extent in North Africa, South America, and the United States. Globe artichoke shows important nutritional characteristics due to its particularly high content of bioactive phenolic compounds, fiber and minerals. The economic use of the crop mainly focuses on the consumption of the edible immature (flower) heads, commonly referred to as 'heads'. These edible heads are eaten as fresh, canned or frozen vegetables. More recently, their demand has increased because they are highly reputed as healthy foods. Artichokes are grown and produced locally and prepared into dishes, which often represent local specialties. In our case we have the Micromani's artichoke, which is a local variety in the Micromani area in the South region of Peloponnese. The artichoke is a popular vegetable in both Greece and other Mediterranean countries 2 and has been known since ancient times for its medicinal properties 3,4. The Mikromani's artichoke is a local variety (population) with specific genetic material from Mikromani, grown on an area of 650 acres by 50 producers in the Messinia prefecture, South region of Peloponnese (Greece). Artichokes are an important crop for the Messinia region and the production reaches three million (flower) heads per year. Artichokes are packed frozen or fresh and promoted in supermarkets in the domestic market and abroad. Due to the considerable interest in the development of natural antioxidants from botanical sources, research has focused on the qualitative and quantitative determination of the artichoke phenolic fraction, as well as on elucidating the mechanisms underlying its therapeutic activity. Many studies confirm the popular use of artichoke for the treatment of several ailments and reveal that this therapeutic activity is probably attributed to the phenolic substances, which may inhibit free radical-mediated processes. Micromani's artichoke could also be processed in an alternative form such as a dietary supplement or nutraceutical, transferring a high added value to the product. In the present study, the composition of nutritional ingredients such as vitamins, minerals and total polyphenols in heads of this local artichoke variety was determined and alternative ways of product processing were suggested.
Materials and Methods
Five samples of immature (flower) heads were received from each of nine different cultivated areas in order to obtain a credible and representative sampling. In total, forty-five (45) samples of fresh artichoke from Mikromani were collected. The five samples from each of the nine areas were homogenized into one sample taken as representative of that area. Each of the nine samples was mashed and the mashed material was used for the determination of vitamin C by the titration method of DPI, and of folic acid using extracts prepared in sodium acetate buffer following the method of Gregory et al. (1984) 5. Total phenolics content was determined by the Folin-Ciocalteau method 6. The extract (0.2 ml) was transferred into a 10.0 ml volumetric flask containing 4.0 ml water; next, 0.5 ml Folin-Ciocalteu's reagent and, after 1 min, 2.0 ml of a 20% aqueous solution of sodium carbonate were added. The volume was made up to 10.0 ml with distilled water. After 30 min, absorbance was measured at 760 nm against the reference solution. The results are averages of five measurements. The total phenolics concentration was calculated from a calibration curve (R² = 0.9954), using gallic acid as standard (0.001-0.006 mg/ml). The results are expressed as gallic acid equivalents (mg GAE/g). The antioxidant potential was measured by the DPPH method. The DPPH assay was performed as described before, with some modifications 7. Briefly, 100 µL of the methanolic solution (10 mg/mL) was added to 3.5 mL of a 0.06 mM methanolic DPPH radical solution 8. The decrease in absorbance was determined at 516 nm until it reached a plateau (after 30 min), in the dark. The DPPH antioxidant capacity was determined using a Trolox standard curve and results were expressed as µmol Trolox equivalent per 100 g dried plant (mmol eq. Trolox/100 g). The DPPH (1,1-diphenyl-2-picrylhydrazyl) was obtained from Sigma-Aldrich Chemie GmbH, Germany.
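To make the quantification step explicit, the conversion from absorbance to gallic acid equivalents via the linear standard curve could be coded as below (a sketch only; the curve coefficients, dilution factor and volumes are illustrative placeholders, as the paper does not report them all):

```python
def conc_from_calibration(absorbance, slope, intercept):
    """Concentration from a linear standard curve A = slope * c + intercept,
    e.g. the gallic acid curve read at 760 nm or the Trolox curve for DPPH."""
    return (absorbance - intercept) / slope

def total_phenolics_mg_gae_per_g(absorbance, slope, intercept,
                                 extract_volume_ml, sample_mass_g, dilution=1.0):
    """Total phenolic content expressed as mg gallic acid equivalents (GAE)
    per g of sample, assuming the whole extract derives from sample_mass_g."""
    c_mg_per_ml = conc_from_calibration(absorbance, slope, intercept) * dilution
    return c_mg_per_ml * extract_volume_ml / sample_mass_g
```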
The minerals Fe, Se, Zn, Cu, Mg, Mn, Ca and K were determined by atomic absorption and flame emission spectrometry according to the methods of AOAC (2003). Finally, fiber content was determined by the AOAC 2009.01 method (Codex Alimentarius Commission).
Results and Discussion
The total dietary fiber ranged from 6.15 to 6.47 g, just above that of the Italian artichoke. Micromani's artichoke had a significantly high folic acid content, reaching the value of 72 mg/100 g (Table 1). The total phenols content in Micromani's artichoke showed a relative abundance of phenolic phytochemicals. In fact, the nine tested samples yielded average values of 1,789.43 ± 101.16 to 1,932.29 ± 109.13 mg/100 g, expressed as gallic acid equivalents (GAE). The antioxidant capacity of artichoke heads by the DPPH assay gave quite high values, exceeding 5 × 10³ mmol eq. Trolox/100 g (Table 2).
According to various studies, artichoke flower heads have a high content of vitamin C (10 mg/100 g fresh weight) and minerals (K 360 mg/100 g fw; Ca 50 mg/100 g fw) 9. In the present work we observed a higher content of minerals compared to other studies. Leaves and heads of artichoke have been found to be rich in polyphenols, fiber and minerals 4. In our case, the vitamin C content of some samples of Micromani's artichoke heads reached the value of 13 mg/100 g f.w.
Nutritional and pharmaceutical properties of artichoke heads are linked to their special chemical composition, which includes high levels of polyphenolic compounds, fiber and minerals. According to the analysed results, the present work shows that Mikromani's artichoke can be considered a product of high nutritional value. Thus, in our case, the product called "Micromani's" artichoke could be processed in an alternative form such as a dietary supplement or nutraceutical, giving the product a high added value. In the present study, the composition of nutritional ingredients such as vitamins, minerals and total polyphenols of artichoke heads from this local variety was determined and alternative ways of product processing were suggested. | 2018-12-29T10:25:37.469Z | 2016-04-29T00:00:00.000 | {
"year": 2016,
"sha1": "acbb1640eb7e3fbe8787591dd9fbd7ed5020b211",
"oa_license": null,
"oa_url": "https://doi.org/10.12944/crnfsj.4.1.03",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "acbb1640eb7e3fbe8787591dd9fbd7ed5020b211",
"s2fieldsofstudy": [
"Medicine",
"Agricultural and Food Sciences",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
235683363 | pes2o/s2orc | v3-fos-license | Comparison of fission and quasi-fission modes
Quantum shell effects are known to affect the formation of fragments in nuclear fission. Shell effects also affect quasi-fission reactions occurring in heavy-ion collisions. Systematic time-dependent Hartree-Fock simulations of 50Ca+176Yb collisions show that the mass equilibration between the fragments in quasi-fission is stopped when they reach similar properties to those in the asymmetric fission mode of the 226Th compound nucleus. Similar shell effects are then expected to determine the final repartition of nucleons between the nascent fragments in both mechanisms. Future experimental studies that could test these observations are discussed.
Introduction
Nuclear fission and quasi-fission are a priori very different reaction mechanisms. On the one hand, fission occurs when a heavy nucleus splits into two (or more) fragments. The fissioning nucleus can be initially in its ground-state, as in spontaneous fission, or in an excited state, as in neutron-induced fission or in fission following fusion of two heavy ions. In the latter two cases, a compound nucleus is formed with equilibrated internal degrees of freedom in such a way that the fission process only depends on its excitation energy and angular momentum. On the other hand, quasi-fission is an out-of-equilibrium mechanism occurring when two heavy collision partners transfer a significant amount of nucleons through mass equilibration, before separating in fission-like fragments without the intermediate formation of a compound nucleus [1] (see [2] for a recent experimental review).
Nevertheless, both processes also exhibit some similarities. For instance, the total kinetic energy of the fragments is well approximated by the Viola systematics [3,4], indicating a slow, damped relative motion of the fragments. In addition, the timescale for quasi-fission reactions [1,5,6] is of the same order as the minimum average timescale for the evolution from the compound system to the formation of the final fragments which is about 20 − 50 zs (1 zeptosecond = 1 zs= 10 −21 s) [7]. Another similarity is that both reaction mechanisms are impacted by quantum shell effects. In fission, shell effects are able to drive the system away from mass symmetric fission, while in quasi-fission, they are expected to stop the mass equilibration process.
The purpose of this work is to compare such quasi-fission and fission modes that are driven by shell effects. Although several shell effects are expected to occur in the compound system on its way to fission [8], our focus is on those in the nascent fragments that are responsible for the final repartition of protons and neutrons between the fragments. Neutron-induced actinide fission [9] and fission of neutron-deficient actinides [10][11][12] reveal the presence of an asymmetric fission mode producing heavy fragments with Z ≈ 54 protons. Octupole (pear shape) deformed shell effects at Z = 52 and 56 [13] have been invoked to explain the constancy of the heavy fragment charge distribution centroid. Spherical shell effects in the 132 Sn region with magic numbers Z = 50 and N = 82 are also known to induce a symmetric fission mode in neutron-rich fermium isotopes [14]. In addition, other deformed shell effects are being investigated in the near- and sub-lead region [15][16][17][18] to explain the asymmetric fission observed in this region [19,20]. Furthermore, spherical shell effects in 208 Pb are predicted to induce a super-asymmetric mode in some superheavy nuclei (SHN) [21][22][23][24][25][26][27]. However, no experimental confirmation of the latter exists so far due to the difficulty of creating superheavy compound nuclei [28].
Shell effects have also been invoked to explain quasi-fission fragment mass distributions [29][30][31][32][33][34][35][36][37][38]. In particular, mass equilibration is often stopped, in reactions forming SHN, when a heavy fragment in the doubly magic 208 Pb region is produced, even at energies well above the Coulomb barrier [34,39,40]. The first experimental confirmation of this effect was only recently obtained through measurement of X-rays from the quasi-fission fragments, indicating an excess of fragments with the proton magic number Z = 82 [36]. Although this observation of a quasi-fission mode produced by shell effects could potentially be associated with the predicted super-asymmetric mode in SHN fission, there has been so far no observation (either experimentally or in numerical simulations) of quasi-fission modes that could be identified with known fission modes.
Our purpose is then to investigate quasi-fission modes in a heavy-ion reaction which, in the case of fusion, would produce a compound nucleus with known fission modes. We choose the 50 Ca+ 176 Yb reaction at an energy of 13% above the Coulomb barrier. The choice for this reaction is motivated by the fact that its compound nucleus, 226 Th, is known experimentally to have two fission modes, one symmetric and one asymmetric, both with similar yields [10][11][12]. Our goal is then to investigate if quasi-fission is able to populate one or both of these fission modes. Our theoretical modelling is based on the Hartree-Fock (HF) self-consistent mean-field theory with a Skyrme energy density functional (EDF), which is known to account properly for shell effects in nuclear systems [41].
Results
Our approach is based on three steps. First, we study the fission modes in 226 Th. Although theoretical modelling of fission is still an ongoing challenge [42], microscopic approaches are commonly used to investigate fission modes. Here, we construct a potential energy surface (PES) with the constrained-HF method with BCS pairing correlations. This PES is used to confirm that our choice of EDF leads to two fission modes, associated with a symmetric and an asymmetric valley. Second, we perform a systematic study of 50 Ca+ 176 Yb collisions with the time-dependent Hartree-Fock (TDHF) theory, searching for quasi-fission trajectories. Finally, we search for potential quasi-fission modes and compare them with the fission ones. This theoretical approach is motivated by the fact that the same EDF is used to describe both nuclear structure and reaction dynamics, and by the now well-established applicability of TDHF to study quasi-fission in a broad range of systems [6,34,39,40,[43][44][45][46][47][48] (see [49][50][51][52] for recent reviews of TDHF applications to heavy-ion reactions). In particular, the approach has no free parameters, as its only phenomenological input is the Skyrme EDF, whose parameters are usually determined from properties of some nuclei and of infinite nuclear matter. We chose the SLy4d parametrisation [53], which can be used in static calculations as well as to simulate heavy-ion collisions.
To investigate the fission modes of 226 Th with the SLy4d Skyrme functional, a potential energy surface is constructed from mean-field solutions under constraints on the quadrupole moment of the nucleon density ρ(r), fixing the elongation of the system, and on the octupole moment, fixing its asymmetry. For this purpose we use the SkyAx code, which solves the constrained HF equations with BCS pairing correlations and axial symmetry [54]. The resulting PES (Fig. 1) exhibits an asymmetric fission valley; such octupole-deformed shell effects usually lead to Z_H ≈ 52-56 protons in the final heavy fragments [55,56]. This is the valley explored by the system if the octupole moment is not constrained (solid line). We also see that the system may return to symmetric shapes (q30 = 0) for little additional cost in energy (dashed line), leading to symmetric elongated fragments. Indeed, the difference in energies between the saddle point to return to the symmetric valley (overcome by the dashed line) and the first saddle point is only 1.2 MeV. We therefore expect both symmetric and asymmetric fission modes to occur with similar probabilities. These results are in good agreement with theoretical predictions using other EDFs (see, e.g., [57][58][59]), as well as with experimental observations indicating similar yields for both modes at low excitation energy [11,12].
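For reference, the constrained multipole moments take the standard axial forms sketched below; the spherical-harmonic normalisation factors (e.g. the sqrt(5/16π) and sqrt(7/16π) prefactors) are an assumption here, as the paper does not state its convention explicitly:

```latex
q_{20} \propto \int \mathrm{d}^{3}r\, \rho(\mathbf{r})\,\bigl(2z^{2}-x^{2}-y^{2}\bigr),
\qquad
q_{30} \propto \int \mathrm{d}^{3}r\, \rho(\mathbf{r})\, z\,\bigl(2z^{2}-3x^{2}-3y^{2}\bigr)
```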
Our goal is now to investigate quasi-fission modes in 50 Ca+ 176 Yb collisions which, in the case of fusion, would form the 226 Th compound nucleus. Quasi-fission is known to rapidly increase with the charge product Z 1 Z 2 of the reactants [2]. Although experimental signatures of quasi-fission have been found in systems with Z 1 Z 2 as small as 736 in the 16 O+ 238 U reaction [60], the lightest system in which quasi-fission reactions have been observed in TDHF calculations is 50 Cr+ 180 W (Z 1 Z 2 = 1776) [44]. As TDHF predicts the most likely outcome for a given initial configuration, only a small range of orbital angular momenta L (or, equivalently, impact parameters) might lead to TDHF trajectories with quasi-fission characteristics in the 50 Ca+ 176 Yb system as it has a relatively small charge product Z 1 Z 2 = 1400.
In this work, the TDHF3D code is used with a plane of symmetry (the z = 0 reaction plane) [53]. BCS correlations are included in the initial static calculations to avoid spurious deformations in open-shell nuclei. These correlations are then treated with the frozen occupation approximation in the time evolution. While the 50 Ca mean-field ground-state is found to be spherical, the 176 Yb ground-state is obtained with a prolate deformation β2 ≈ 0.33, and thus its orientation is expected to impact the reaction mechanism [61]. The centre-of-mass energy of the reaction is E_c.m. = 172 MeV, corresponding to approximately 13% above the Coulomb barrier V_B ≈ 151.8 MeV according to the systematics of Swiatecki et al. [62]. This energy is large enough to ensure that all initial orientations of the prolately deformed 176 Yb may lead to contact between the collision partners and then potentially contribute to quasi-fission [60]. The initial distance between the centres of mass of the collision partners is 22.6 fm. As the angle of emission of the fragments is unknown prior to a calculation, large Cartesian grids of 72 × 72 × (28/2) × ∆x³ with a mesh size ∆x = 0.8 fm are used to allow for a full description of the exit channel with well separated final fragments. We performed 40 TDHF calculations with four initial orientations of the 176 Yb deformation axis (forming angles of 0, 45, 90, and 135 degrees with respect to the axis joining the initial centres of mass), and with a ∆L = 2 step in orbital angular momentum, for a total of 14,000 CPU hours on Intel Xeon Scalable 'Cascade Lake' processors. The results are compiled in Supplemental Material Table 1.

Fig. 2 The surfaces represent the initial (blue) and final (red) isodensities at half the saturation density ρ0/2 = 0.08 fm⁻³. The solid and dashed lines represent the evolution of the centres of mass of the light and heavy fragments, respectively. The star symbols indicate the position on the trajectory used to represent the isodensity in Fig. 1. The x and y scales correspond to the full numerical box.
An example of a resulting quasi-fission trajectory is shown in Fig. 2 for an orbital angular momentum L = 82ℏ and an orientation of θ ≈ 62.0 degrees between the 176 Yb initial velocity vector and its deformation axis. The position of the fragments is obtained at each time by computing the centres of mass of the density distributions on each side of the neck. The resulting trajectories are represented by the solid and dashed lines in Fig. 2. We see that the system undergoes more than a full rotation, during which about 37 nucleons (on average) are transferred from the heavy fragment to the light one. The total contact time, defined as the time during which the neck density exceeds half the saturation density ρ0/2 ≈ 0.08 fm⁻³, is τ ≈ 22.9 zs for this collision. This contact time and the large amount of mass transfer between reactants are typical of quasi-fission reactions [1,5]. Quasi-fission trajectories were searched for up to approximately 30 zs contact times. Although a slow quasi-fission component with longer contact times is observed experimentally [2], this upper limit is of the order of the longest quasi-fission times observed in TDHF calculations [6]. We therefore consider that the system has fused when the contact time reaches τ ∼ 31 zs (unless an increase of elongation indicates a likely quasi-fission at a later time, in which case the calculation is run up to τ ∼ 35 zs), which occurs essentially below a critical angular momentum L_c that depends on the orientation of the target. Collisions that lead to contact with the side of 176 Yb are found to have a smaller critical value L_c ∼ 60ℏ, while collisions with its tip lead to L_c ∼ 80ℏ. At large L, only a few nucleons are exchanged in quasi-elastic collisions, occurring at L_q ∼ 68ℏ (∼ 96ℏ) in collisions with the side (tip) of 176 Yb. For a given orientation, quasi-fission is obtained for L_c ≲ L ≲ L_q. (Note that in a few cases we observe quasi-fission for L < L_c; see Supplemental Material Tab. 1.) The numbers of protons and neutrons in the outgoing fragments are plotted in Fig. 3 as a function of the contact time. A correlation is observed at short contact times (τ ≲ 13 zs) where nucleons are transferred from the heavy to the light fragment. At longer contact times, however, this correlation is lost, with constant numbers of protons and neutrons Z_L ≈ 36, N_L ≈ 52, Z_H ≈ 54 and N_H ≈ 84 in the light and heavy fragments, respectively, indicating a stop of the mass equilibration process. Interestingly, this occurs when the fragments have reached the same numbers of neutrons and protons as the asymmetric fission fragments of 226 Th, which is a first indication that these fission and quasi-fission modes are similar.
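The contact-time criterion lends itself to a small post-processing routine such as the following (my sketch; sampling the neck density from the TDHF output is assumed and is not part of the published TDHF3D code):

```python
def contact_time_zs(times_zs, neck_densities, rho_half=0.08):
    """Total contact time in zeptoseconds: the total duration during which the
    density in the neck between the fragments exceeds half the saturation
    density (rho_0/2 = 0.08 fm^-3).  Assumes equally spaced time samples."""
    dt = times_zs[1] - times_zs[0]
    return dt * sum(1 for rho in neck_densities if rho > rho_half)
```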
Fig. 4 Experimental TKE of fission fragments from Ref. [11] and TDHF predictions of quasi-fission fragment TKE (triangles).

As most of the shell effects fixing the final asymmetry in nascent fission fragments are deformed shell effects, it is important to compare the shapes of the fragments as well as their numbers of protons and neutrons. Experimentally, the shape of the system at scission is inferred indirectly through the total kinetic energy (TKE) of the fragments. Indeed, for a similar mass and charge partition, the TKE is larger for more compact fragments, while elongated fragments lead to lower TKE. Figure 4 provides a comparison between experimental TKE of fission fragments and TKE of quasi-fission fragments obtained from TDHF by summing the kinetic and Coulomb energies between the fragments (see, e.g., [63]). We see that, for the fragments which have reached the Z_L/Z_H ≈ 36/54 partition, the TKE are similar in both processes, indicating similar shapes of the systems at scission. However, for more asymmetric splits, quasi-fission leads to smaller TKE than fission, which could be attributed to differences in the dynamics. Indeed, larger asymmetries in quasi-fission are obtained for the most peripheral collisions (larger L), inducing shapes which can significantly differ from those in fission. The fact that quasi-fission fragments with the Z_L/Z_H ≈ 36/54 partition have a similar shape to those produced in the asymmetric fission mode of 226 Th is further supported by a comparison of the densities near scission in Fig. 1.
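The TKE prescription described here (and in the supplemental table note) amounts to adding the instantaneous kinetic energy of the fragments to their residual point-charge Coulomb repulsion; a minimal sketch, using the known constant e²/(4πε₀) ≈ 1.44 MeV·fm, is:

```python
E2 = 1.44  # e^2 / (4*pi*eps0) in MeV*fm

def tke_mev(e_kin_mev, z_light, z_heavy, separation_fm):
    """Total kinetic energy of the two outgoing fragments: kinetic energy at the
    last TDHF iteration plus the remaining Coulomb repulsion between the fragments,
    treated as point charges separated by `separation_fm` (sketch of the
    prescription described in the text, not the authors' implementation)."""
    return e_kin_mev + E2 * z_light * z_heavy / separation_fm
```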
The observation that the mass equilibration is stopped when the fragments have reached proton and neutron numbers, as well as shapes, similar to those in the asymmetric fission mode is an indication that both modes should have the same origin in terms of shell effects. In particular, the octupole-deformed shell effects at Z_H ≈ 52-56, which were invoked as a mechanism allowing the heavy fragment to acquire pear shapes for a small cost (if any) in energy [13], could also be responsible for stopping the mass equilibration process in quasi-fission.
Conclusions
Like fission, quasi-fission is affected by shell effects through valleys in the potential energy surfaces [64,65]. Here, we have shown that quasi-fission may populate the asymmetric fission mode of 226 Th. However, the symmetric fission mode, which has yields similar to the asymmetric one in 226 Th, is not observed in our TDHF simulations of quasi-fission. A possible explanation is that the shell effects responsible for the asymmetric fission mode are strong enough to stop mass equilibration in every quasi-fission trajectory. Alternatively, longer contact times may be needed for quasi-fission trajectories leading to symmetric fragments. In that case, beyond-mean-field fluctuations and correlations that build up over time may be required. It would be interesting to investigate this system with the stochastic mean-field approach which, in addition to incorporating such fluctuations as demonstrated in the case of fission [66], might allow the system to explore paths with smaller probabilities thanks to these fluctuations (whereas in TDHF, only the most likely mean-field drives the dynamics).
Experimentally, quasi-fission properties are often investigated by comparing reactions forming similar compound nuclei from different entrance channels [67][68][69]. A simultaneous investigation of quasi-fission and fission modes in the same system could be achieved by comparing 48,50 Ca+ 176 Yb (beams of 50 Ca with sufficient intensity should be available at FRIB [70]) in which one expects both quasi-fission and fusion-fission reactions, with 16,18 O+ 208 Pb (forming the same compound nuclei) in which quasi-fission is expected to be negligible.
An experimental confirmation of strong similarities between (at least some) fission and quasi-fission modes could help find indications of the existence of theoretically predicted fission modes in nuclei that are experimentally difficult to produce, such as superheavy nuclei. Naturally, quasi-fission could not entirely replace the experimental investigation of fission fragment distributions, for the following reasons: (i) there is no guarantee that a mode observed in quasi-fission would also be present in the fission of the compound nucleus; (ii) the relative abundance of competing quasi-fission modes could be very different from that of fission modes; (iii) not all fission modes are necessarily expected to be produced in quasi-fission.
Acknowledgments
We thank D. J. Hinde for his continuous support to this work, as well as R. Bernard.

Supplemental Material Table 1: 50 Ca + 176 Yb collisions at E_c.m. = 172 MeV. The 176 Yb deformation axis has an angle of 0, 45, 90 or 135 degrees with the line connecting the centres of mass of the nuclei in the TDHF initial condition. θ is the angle between the 176 Yb velocity vector in the initial TDHF condition and its deformation axis. L is the initial orbital angular momentum of the collision. The contact time τ is given in zeptoseconds (1 zs = 10⁻²¹ s). The calculations are stopped when the contact time exceeds τ ∼ 31 zs (unless the system is on its way to split into two fragments, in which case the calculation is run up to τ ∼ 35 zs). The numbers of protons and neutrons in the heavy and light fragments are obtained from integration of the proton and neutron densities in the outgoing fragments. The total kinetic energy (TKE) is the sum of the kinetic energy of the fragments and of their Coulomb potential energy assuming point-like fragments in the last TDHF iteration. | 2021-07-01T01:42:09.536Z | 2021-06-30T00:00:00.000 | {
"year": 2021,
"sha1": "c6fc8a61a719059c8a3d29bb1ef17a75cc800fa1",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.physletb.2021.136648",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "c6fc8a61a719059c8a3d29bb1ef17a75cc800fa1",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
252509047 | pes2o/s2orc | v3-fos-license | Transnational migrants and the socio-spatial superdiversification of the global city Tokyo
Tokyo illustrates a particularly interesting case of differential inclusions of transnational migrants in urban spaces, as the novel turn in migration policy, in coordination with urban economic development, has induced the arrival and diversification of migrant populations into the city. With the recent historic opening of the country to lower-skilled labour migration, as well as measures to (re-)attract the global economy, thus incentivising transnational corporate professionals to relocate to specific national economic zones within the city, Tokyo is in a new socio-spatial diversification process. With a non-ethno-focal lens on transnational migration and focusing on upper-class transnational corporate migrants, this article discusses diversification regarding the newer arrivals of migrants who are differently included in the urban spaces as compared to older generations of migrants. It delivers novel accounts of a diversifying transnational migrant group's socio-spatial patterns within Tokyo, which illustrate the dynamics of differential inclusions resulting from the superdiversification of urban societies. The article gives new insights into the socio-spatial diversification dynamics of transnational urban spaces in a long-neglected but highly topical Asian arrival city, and conceptually reflects on such localised superdiversification of urban spaces on a global scale.
Introduction
Taking Japan as an example for a study on migration issues might appear unusual at first sight, especially as discourses on the myth of homogeneity are persistent in the country (Oguma, 2002; Onuma, 1993; Weiner, 1997). However, there is a clear change as regards migration issues in Japan, which can be seen as the start of a superdiversification (Vertovec, 2007). Particularly in urban arrival cities, such as Tokyo, there has been a shift from ethno-focal migration to a more diverse notion of transnational migration (Liu-Farrer, 2020; Vertovec, 2019; Yamamura, 2018). In fact, the change in Japan's migration policy to open its doors to lower-skilled migration, as a belated countermeasure to the labour shortage caused by the shrinking and ageing society, is now causing changes in the differential inclusions in cities (Ye, 2017, 2019; Yeoh, 2006), including in Tokyo. Whereas, spatially, ethnic towns and the concentration of ethnic communities in less privileged or even deprived areas of Tokyo have been much described and discussed (Ishikawa, 2021; Lie, 2004; Oishi, 2008), the interesting phenomenon of newer arrivals from the diversified group of high-status migrant professionals of transnational corporations has not yet found much attention.
By focusing on the particular case of transnational corporate elites (highly skilled, affluent and mobile) as part of the increasing diversification of transnational migration, this article brings a novel perspective to the discourse on differential inclusion in cities. It connects differential inclusion and the socio-spatial diversification of transnational professionals' patterns and brings them into the larger context of spatial superdiversification with its local and global dynamics. It contributes to the research on transnational professionals from a migration perspective and discusses their differential inclusion specifically in Tokyo but also in the network of global cities on a larger scale.
Following the research question of how newer arrivals, viewed beyond the ethnic lens, are included in Tokyo's urban spaces differently from older generations of migrants defined by ethnic group affiliation, this article discusses how such migration-led diversification results in socio-spatial differential inclusions in arrival cities, both locally and from a global urban perspective. 1 It delivers novel accounts of non-ethno-focally defined transnational migrant professionals' socio-spatial patterns within Tokyo, which illustrate the dynamics of differential inclusion resulting from the superdiversification of urban societies. It thus calls for an approach which brings together the socio-spatial (super-)diversification of global cities and the changing constellations of the network of such arrival cities from a dynamic perspective.
In the first section, the spatial superdiversification of global cities is discussed against the background of transnational migrations. Based on the socio-spatial patterns of transnational migration from above, specifically transnational financial professionals in Tokyo, it will be illustrated that transnational spaces reach beyond simple understandings of ethnic towns or locations of workplaces, to encompass a larger area used for different socio-spatial practices. Reflecting on the diversification processes already found within this particular social group, it is then argued that socio-spatial diversification is indeed occurring in manifold ways through the different transnational migrations, where these are not limited to one specific global city but are spread throughout the network of global cities. The main contributions of this article are empirical evidence on the differential inclusions of new arrivals within the city of Tokyo, which demonstrates the diversity of socio-spatial patterns of transnational migrants, and a conceptual reflection on the spreading of this phenomenon of local socio-spatial (super-)diversification within the global scale of the network of global arrival cities.
Socio-spatial diversification and transnational migration
The aim of this article is to further develop the theoretical understanding of urban diversity, which encompasses literature on differential inclusion (Ye, 2016; Yeoh, 2006) but also on superdiversity (Meissner, 2015; Vertovec, 2007). By bringing together research on transnational professionals and expatriates (Beaverstock, 2013: 2917; Cranston, 2016), especially in the context of global cities (Yeoh and Chang, 2001; Yeoh and Willis, 2005), and delivering novel empirical data on an under-researched group of transnational corporate elites (at least regarding their socio-spatial patterns beyond their business roles), the article contributes both to the discussion of transnational professionals and their differential inclusion in cities and to the further contextual embedding into the debate on the 'spatiality of superdiversity' (Yamamura, 2022a).
Contemporary societies are increasingly characterised by the 'diversification of diversity', or superdiversity (Vertovec, 2007). Whereas scholars still tend to focus more on the variety of ethnicities as the main aspect of such superdiversity (Vertovec, 2019), the concept goes beyond different ethnic or national backgrounds, and points also to legal status, socio-economic status, skill level, religion, language and sex as part of the diversity of individual characteristics of a population. In fact, what is diversifying is the constellation not only of the migrant population but also of non-migrants who are affected by diversification. Such 'growing complexity, acceleration of changes and increased interconnectedness across societies as well as diversification of migrants' (Grzymala-Kazlowska and Phillimore, 2018: 179) challenge our understanding of the world. Diversity has indeed become a phenomenon widely discussed in research (Meissner, 2015; Vertovec, 2012). It is also becoming an important paradigm in policymaking, challenging policy-makers to accommodate different aspects of such complexifying societal contexts for the different social groups (López Peláez et al., 2022), especially with regard to issues of integration (Crul, 2016; Grzymala-Kazlowska and Phillimore, 2018) but also urban planning and policies (Oliveira and Padilla, 2017; Pemberton, 2017). Vertovec's (2007) seminal work takes London as an example for the novel dynamics of superdiversity, and the urban as a lens through which to study superdiversity has become the norm in research (see also Meissner and Vertovec, 2015). 'Cities are the sites of negotiations of differences' (Geldof, 2016: 127) and are where the different facets of diversity, be they related to religion, migration or any other social aspect, come together (e.g. Becci et al., 2017; Geldof, 2016; Phillimore, 2013). Urban diversity has thus become an increasingly important research field not only for urban planning and design but also for the social sciences in the broader sense. Recent discourses in the field of urban diversity have begun focusing on the diversity of cities and neighbourhoods (Yamamura and Lassalle, 2020). They examine particularly issues of conviviality and social encounter in mixed cities or neighbourhoods (Heil, 2020; Valentine, 2008; Vincent et al., 2017; Wessendorf, 2013, 2014; Wilson, 2017; Ye, 2016) and 'everyday multiculturalism' (Wise and Velayutham, 2010). This thread of research mostly focuses on the opportunities embedded in the social encounters of people of different backgrounds and the urban places in which these encounters take place.
The discourse on the urban spaces of superdiversity thus concentrates on the common places of encounter within neighbourhoods (Yamamura and Lassalle, 2020). The concept of 'commonplace diversity' (Wessendorf, 2013) describes the development of high levels of conviviality and inclusivity in urban space to a degree that 'ethnic, religious and linguistic diversity [is] experienced as a normal part of social life and not as something particularly special' (Wessendorf, 2013: 407). While it is suggested that 'social spaces [...] play an important role in the process of familiarisation with people who are different and in getting accustomed to communicating across difference' (Wessendorf, 2013: 410), the urban spaces and the differential inclusion in cities remain abstract and the actual 'spatiality of superdiversity' (Yamamura, 2022a) under-researched. Recent research at the nexus of transnationalism and urban studies highlights the spatial expression of diversity in urban spaces, such as in debates around planetary or transnational gentrification (Fernandez et al., 2016; Hayes and Zaban, 2020; Lees et al., 2016; Sigler and Wachsmuth, 2016), where transnational migrants play a key role in closing global rent gaps and contribute to the transformation of urban spaces locally. However, while discussing the 're-spatialisation of global inequality' (Hayes and Zaban, 2020: 3010), the question of the actual differential diversification in the cities that arises from such migration patterns remains unanswered. In fact, though transnational migration from above, i.e. of transnational corporate professionals (Beaverstock, 2013, 2017; Carroll and Fennema, 2002; Faulconbridge, 2007, 2008; Morgan, 2001) or even the transnational capitalist class (Robinson, 2017; Sklair, 2001), has been much discussed for its economic and societal roles (Hoyler et al., 2018), little is known about its impact on urban transformations through the migrants' own socio-spatial practices. Studies on gated communities (Atkinson and Blandy, 2013; Blakely and Snyder, 1997) or on the concept of upper-class citadels (Marcuse, 2000) are classical approaches to differential inclusions, or rather exclusions, of upper-class residents and often also of upper-class migrants in city spaces. Yet studies of socio-spatial differentiations within cities that go beyond physically exclusive spaces are still rare. There have been calls to investigate them further as part of discourses on global migration and transnationalism, particularly in the context of these privileged migrants' implications for the destination sites (Croucher, 2012; Kunz, 2016; van Bochove and Engbersen, 2015), which translate into their differential inclusion in cities. Empirics are especially limited when it comes to transnational migrants beyond the ethno-focal groups.
Linking to the critique of ethno-focality in transnationalism research, and also taking the superdiversity lens to contemporary migration into cities (Meissner and Vertovec, 2015; Vertovec, 2007; Wimmer and Schiller, 2003), this article delivers a novel view on the differential inclusion of highly mobile transnational professionals and their socio-spatial patterns, treating them not simply as economic actors but also as social beings in the cities. In such globally connected cities, the dual migration of transnational migrants from above and below is thought to be channelled more strongly than in other arrival cities (Sassen, 2001; see also Yeoh, 2013; Yeoh and Chang, 2001). The superdiversity approach calls for looking at the spatial diversification that results from the different transnational migration types and patterns in urban spaces (Yamamura, 2022a, 2022b), i.e. the differential inclusion in cities.

Observing Tokyo as an arrival city

Japanese society, led by the capital city Tokyo, has long lost its unrivalled dominant role in Asia, now even lagging behind through its 'Lost Decades' of economic downturn while other strong Asian countries and cities, such as Singapore or Hong Kong, have emerged as the new Asian global cities. Winning a stronger role and eventually heading the global economy in the Asian region (again) is a clear focus of the Tokyo metropolitan government (Tokyo Metropolitan Government, n.d.). As has often been observed in Asian developmental states' involvement in global city-making, and particularly in Tokyo's past (Hill and Fujita, 2000; Kim, 2000; Kamo, 2000; Olds and Yeung, 2004; Perry et al., 1997; Saito, 2003; White, 1998), the Japanese and Tokyo metropolitan governments are currently putting forward several measures, in close collaboration and coordination, aiming to bring Tokyo back onto the global economic stage. The global city of Tokyo is being 're-made', supported inter alia by the national government designating a national strategic zone with deregulation and other incentives to attract foreign companies and talents back to Tokyo. Gaining momentum also through the organisation of the mega-event of the Tokyo Olympics 2020, 2 for which public and private investments flowed into urban revitalisation and development projects across the whole metropolitan area, Tokyo is currently undergoing a major urban transformation.
With the new turn in migration policies allowing not only highly skilled but also lower-skilled migration, with the aim of alleviating severe labour shortages, 3 Japan, and in particular Tokyo, appears to be on the verge of an important turning point. The urban population is separating slowly but surely from the myth of the homogeneous Japanese society (the 'myth of homogeneity' or the 'monoethnic myth', see Murphy-Shigematsu, 1993; Onuma, 1993; Weiner, 1997). As recent research shows (Liu-Farrer, 2020), the share of the foreign population is increasing slowly, yet the range of nationalities and legal statuses has been diversifying. 4 As of 2020, approximately 2.3% of Tokyo's population were foreigners, above the national average of 1.7%, a figure that has been increasing over the last decade. With larger population groups of Chinese and Koreans but also of other Southeast Asian nationals in the metropolitan region, Tokyo has areas with relatively high percentages of foreign residents, such as Shinjuku (over 10%) or Minato (7%), compared with less than 2% for Japan as a whole. In this respect, ethnic towns and other ethnic group-based changes in neighbourhoods within Tokyo have been well researched, with different transnational or multicultural groups as study objects (Ishikawa, 2021; Lie, 2004; Oishi, 2008). However, the dynamics and complexity of spatial diversification that are not yet thoroughly covered empirically are the migration and spatial inclusion of the migrant group of transnational professionals. Additionally, their highly qualified and more exclusive migration to Tokyo is further accompanied by novel migration policies aimed at lower-skilled labour migrants who service them in specific designated neighbourhoods. Against this background, it is no overstatement to point out that there is a migrant-led diversification underway, a novel migration trend of new(er) arrivals in Japan (Liu-Farrer, 2020) transforming urban society and space, making Tokyo an interesting arrival city with regard to urban differential inclusion.
Methods and empirical case
This article is based on extensive empirical research in Tokyo, with problem-centred qualitative interviews (Witzel and Reiter, 2012) with 45 highly skilled, highly affluent and highly mobile transnational corporate elites. 5 These professionals were chosen because they worked specifically for transnational corporations in the financial industry, in higher managerial positions such as Vice President, CFO or CTO, and so were a group of transnational migrants involved in corporate decision-making processes of transnationalisation. Further, they had extensive experience abroad with short- and long-term assignments, and so qualified as highly mobile transnational migrants with social practices crossing national borders. The duration of stays ranged from one and a half years to long-term residence of five years and even multiple residences (in Japan for the second time). Due to the specificity of this group, no self-initiated expatriates, family migrants or highly educated but middling migrants 6 were included in the sample. They were mostly highly skilled migrants, business visa holders or inter-company transferees.
To incorporate critique on the ethno-focal lens within transnationalism research voiced by Glick-Schiller et al. (2006), transnational migrants were selected not according to their ethnicities or nationalities but according to their affiliation to transnational corporations of the financial industry. The diversity of ethnic backgrounds was thus high, with ethnicities and nationalities including US-American, UK British including Scottish and English, Australian, German, Japanese, Indian, Singaporean, French, Italian, Korean, Chinese-Singaporean and Slovak (including mixed backgrounds). Japanese transnational professionals were also included in the sample as long as the profile as a highly mobile higher management professional in a transnational corporation in the financial industry applied. Despite these diverging ethnic backgrounds, the interviewees make their own transnational group through their professional affiliation. By such industrial and professional commonality, the selection allows intragroup comparisons (Denzin and Lincoln, 2011).
Interviews lasted between 30 minutes and two and a half hours (average 45-60 minutes); the time constraint was unavoidable due to the occupational context of transnational financial professionals. Another constraint was the use of telephone interviews due to professionals' high mobility. As is common in executive and elite research, access to the field was limited, so gatekeepers and the snowballing technique were used to reach professionals with the required characteristics (Desmond, 2004;Hertz and Imber, 1995;Littig, 2009).
The 'problem' of the problem-centred interviews was the socio-spatial patterns of the transnational professionals as migrants. They were interviewed about their socio-spatial patterns in both their business and private lives, consisting of social activities and interactions with different social groups and the location choices for these socio-spatial practices. Such practices go beyond work or residential locations, also encompassing locations for social mingling and interacting with other foreigners or with locals, e.g. going out after work, doing sports, socialising with befriended families or running daily errands. The interviews first focused on socio-spatial patterns in Tokyo and then on general patterns of travel, both business and private, as well as travel to other global cities. Interviewees' reflective discussions of their socio-spatial patterns also encompassed reflections on changes in their life courses, as well as on their cultural or professional identities and their perceptions and evaluations of their lifestyles.
The interviews with transnational migrants were further complemented by expert interviews. Representatives from the five largest real estate and relocation companies servicing transnational corporate professionals were interviewed for their perspectives on the residential locations of their customers. Interviews with employees of the district administration of the Minato ward and of the main transportation company were also conducted to understand the larger infrastructural policies, or nonexistence thereof, in which foreign residents are embedded.
The interviews were conducted mainly in English but, where requested or naturally occurring, Japanese or German were also spoken. In some cases, interviewees code-switched during the interview or even within sentences. The interviews were recorded and transcribed for further analysis. Data were selectively coded according to Mayring's (1994) approach to qualitative content analysis. From the content analysis, two ideal-typical patterns emerged: the gaijin ghetto and the Pro-Tokyoite. The first can be read as a homogenisation of these transnational professionals as diversified new arrivals, distinctly different from ethno-focal 'old' migration; the second as a localising group within these new arrivals, leading to a socio-spatial differential inclusion within the cities that mirrors their superdiversification. There were indeed differences, nuances and some fluidity within the group; however, these two ideal types emerged as dominant in the spectrum of different socio-spatial patterns.
Local socio-spatial patterns of transnational professionals in Tokyo
Based on qualitative interviews with 45 transnational corporate professionals, and adopting the non-ethno-focal lens on transnational migration introduced above, this article presents the local socio-spatial patterns of this specific migrant group, who are transnational in their corporate work contexts but also transnational in their private lives as migrant individuals. What makes their socio-spatial patterns distinct from those of other migrants is primarily their affiliation to transnational corporations and thus to a transnational community different from each of their own ethnic or national backgrounds. The corporate and even industrial affiliation, that is, to the financial industry or, more broadly, to advanced producer services for transnational corporations, is the linchpin of their socio-spatial patterns. With the resources they have, particularly the financial capital to maintain their living standards but also social networks within this particular transnational capitalist class, they constitute their own transnational spaces in areas located differently from ethnic towns within the city, their so-called gaijin ghetto, which is the place of non-ethno-focal diversification in Tokyo.
Transnational expat bubbles or the gaijin ghetto
The gaijin ghetto is the place people normally associate with the 'expat area', 7 with at least bilingual (English and the local language, here Japanese) services and Western goods available for purchase. Shops and restaurants are directed towards foreign residents, as is the housing, which is equipped with amenities adapted to a Western lifestyle. These areas are very limited in geographical extent, covering merely 4 km² within the upper-class parts of the central Yamanote area. 8 They specifically span an imagined triangle between Roppongi and Azabu in the north-eastern corner and Hiroo and Ebisu in the south-western corner, with Shibuya as the north-western corner. They also include a few suburban exclaves of expatriate-dominated residential areas, such as Denenchofu, 9 and work-related areas in the newly developed waterfront area around Tennozu Isle and Odaiba. Everyday practices are characterised by socialising with other transnational professionals and their families, little interaction with Japanese in the local neighbourhoods 10 and attending events of institutions and organisations related to the consulate or the corporation, such as sports or other leisure activities within the community of transnational professionals.
Characterised by such a multilingual environment and linguistic landscape, one transnational professional describes the gaijin ghetto as follows:

Oh, people call it the expat bubble. If you want to, you never have to interact with anybody Japanese. You can go to the American schools. There are two American ... I call them American ... there are two Western grocery stores. Ehm, you don't ever have to explore, but we were really quite good at it. (KW45)

Beyond the local economy adapting to these residential groups, the interviews with transnational professionals also clarify the connectedness of these socio-spatial patterns not only to the availability of services and products but also to a lifestyle related to the availability of resources:

Well, I think, you know, first of all there's, I don't know which comes first, the chicken or the egg, but if you want to live in a luxury house or apartment, one that's large and has all the types of amenities you're used to, the greatest concentration of them are in the Azabu area, some are now, yeah, and there's another Gaijin area ... But, so if you prefer a certain type of apartment, you're looking for an apartment that is at least - whatever - 200 square metres and has top appliances and several bedrooms and so forth ... the greatest concentration of them is in the areas, and then the schools are nearby, the shopping is nearby, the American club is nearby, I work in, ehm, nearby, in, near Miyako Hotel, Ark Hills. So, everything is, there's the greatest concentration of housing and the other amenities and schools and that's what attracts people to these areas. (DS37)

Transnational professionals are supported by different 'expat packages', that is, the services and resources provided by their corporations to assist them to integrate in the local city. Depending on corporate position, the extent of support differs, yet the key role in the socio-spatial inclusion of these migrants in the city is played by intermediary actors of the auxiliary industry of real estate and relocation companies. They direct them into areas such as the gaijin ghetto, and businesses and service providers aimed at this customer base concentrate in these areas:

So I didn't want to live somewhere where I had to spend a long time commuting, ehm, so that's why I said, okay, all right, I am not doing that somewhere and that was where I spoke to the real estate agent about interviews to do a few places in Minato, Azabu, a couple of places in my current neighbourhood [...] It was an agency that actually does handle a lot of expat-type people, but they were pretty good, they spoke English which was pretty important for me at that time. (HU29)

The involvement of such real estate or relocation agents in the socio-spatial dynamics of differential inclusion of these transnational migrants also becomes clear from the narrative of the service provider side:

Well, these customers from overseas want their comfort; that is where we take them.
Differences between transnational professionals: Pro-Tokyoite
The gaijin ghetto or the transnational expat bubbles may sound familiar from journalistic reports and anecdotal evidence, as well as from research on gated communities (Atkinson and Blandy, 2013;Blakely and Snyder, 1997), on the urban upper class concentrating in specific residential areas (Marcuse, 2000) and on transnational professionals or expatriates (Beaverstock, 2018;Farrer, 2018;Kunz, 2018;Spiegel et al., 2019;Yeoh and Willis, 2005). A novel sociospatial pattern distinct from these 'typical' expatriate patterns that emerged in the research was that of the Pro-Tokyoite. This pattern is a clear distinction within the already diversified transnational professionals' group. These Pro-Tokyoites are part of the transnational professionals group, yet by their more pro-localising behaviours and social interactions, they lead to different socio-spatial patterns within the urban space and result in the urban forms of differential inclusion of these transnational migrants into the otherwise predominantly Japanese (upper-class) areas beyond the gaijin ghetto.
The Pro-Tokyoite type is a more diversifying pattern, but it also covers the gaijin ghetto due to similar residences and workplaces, and thus also social interactions with co-workers; however, transnational migrants of this pattern also venture out of the small ghetto. The locations are still centred in the Yamanote area but are more dispersed and also extend into areas which gaijin ghetto migrants characterised as being too Japanese or too local. These areas are still upper class but are more dominated by Japanese peers. The spatial difference goes hand in hand with these professionals' social practices and their attitudes regarding socio-spatial preferences. For example, whereas the gaijin ghetto type stays close to the transnational professionals' community and places their children in international schools, Pro-Tokyoites also place their children in international schools but, if possible, in those with more emphasis on a bicultural curriculum. Further, instead of sending their children to extracurricular activities organised by the international school or even the consulate, they prefer, for example, to send them to art classes at a local art school:

Actually it is funny, we are talking to our daughter, our older one, about joining an art class that is run through a Japanese school and in a Japanese part of town. And I think she will be doing that and it is part of our goal for her to get more exposure to Japanese language and culture. (LK25)

Such behaviour can also be seen in the leisure activities of the parents and the whole family. The parents take yoga classes with locals instead of going to the Tokyo American Club or a corporate club. It is interesting to note that language proficiency is not a prerequisite for such behaviour. Even with just a few broken Japanese phrases, they try Japanese restaurants with no English menu or waiting staff, following advice they have actively sought from local Japanese co-workers instead of asking for recommendations from foreign co-workers:

We have a lot of help with searching for things. A lot of it comes from recommendations from Japanese spouses of friends, who were friends with these couples. Some of them ... come from clients [who] recommend areas or specific places. Certainly, the entire staff here speaks Japanese and, you know, is able to help make reservations in an easier way. So, if I call the ryokan 11 hotel, I say: please tell them that we do not speak Japanese so don't ask us many questions while we are there unless [laughs] they speak English. We go to restaurants and again, well, we don't know whether they speak English or not before we go, or if they don't have an English menu we'll just learn how to say 'Osusume wa nandesuka' 12 very quickly, so that we can usually get fed anywhere we go. (AB10)

Indeed, language skills are crucial for inclusion in the local social environment, yet it is more an attitude of openness that distinguishes the Pro-Tokyoite from those of the gaijin ghetto. In fact, interestingly, there were Pro-Tokyoites with little or even no Japanese language skills, as well as, vice versa, fluent Japanese speakers among the gaijin ghetto. Similarly, having a Japanese spouse was not necessarily a prerequisite for, or causally related to, the tendency towards one or the other socio-spatial pattern. 13 As the following quote demonstrates, venturing out of the gaijin ghetto is an intentional choice for a different lifestyle, a 'conscious effort' (BM21), as an interviewee pointed out.
Yet it also shows how the class-based socio-spatial pattern clearly remains a dominant characteristic for these transnational migrants too:

Well, I didn't wanna live in an expaty kind of ghetto. I didn't want to live somewhere like that. And I just liked the area. It's a beautiful area and it's more local, you know. Don't get me wrong, it's a high neighbourhood, like three movie stars in my building, but they were all Japanese, right? (JN17)

As the Pro-Tokyoite's socio-spatial patterns show, such different inclusion of transnational migrants beyond the ethnic towns, but also beyond the expatriate ghetto, is a complex, interwoven diversification of socio-spatial patterns. The Pro-Tokyoite's transnational spaces are not exclusive and are more fluid, depending on the capacities and resources of the transnational professionals:

We also say that we, where we shop for food varies ... we go to Japanese supermarkets when, often because ingredients we can find were better or more interesting and we were willing to [...] experiment a lot with that sort of thing. But there are definitely times where we feel like we need to just take it easy and we do not want to challenge ... we can go to the National Azabu where everything is products from around the world, that is very easy, everybody speaks English. So, there is definitely a [...] it is easier to feel safe and when you do not feel like making the effort of really being in a place where you do not speak the language, you can make it easy. (AB34)

What these empirical findings demonstrate is that the socio-spatial patterns of transnational migration from above (encompassing both the gaijin ghetto and the Pro-Tokyoite) and from below diverge from each other. It is not the whole of Tokyo that is affected by migrant-led diversification but only specific areas, so the socio-spatial divergence of migrants from above and below is not surprising. The crucial insight that these empirics bring is that diversification processes are already visible within the rather small social group of transnational managerial elites, which is presumably reflected in all social groups along the socio-economic strata. Beyond the common-sense picture of ethnic communities living distinctly next to each other, there is an increasing diversification of socio-spatial patterns within these groups. This diversification thus also impacts the urban landscape and social practices within arrival cities globally.
Differential inclusions in arrival cities as a global phenomenon
The implication of the Tokyo narratives is that there is also a replication of such transnational socio-spatial patterns in other cities that are affected by the global transient migration or sojourns of transnational professionals. This is exemplary especially within the network of global cities.
Universal convergence: Homogenisation to gaijin ghettoes
The gaijin ghetto elaborated above has probably come as little surprise, because the phenomenon is well known to global professionals and business expatriates. The topic has been much discussed in the context of transnational professionals and global city research. Mass media, non-fiction authors and academics alike have been reporting on these 'expats' or global elites. Yet the socio-spatial implications of their social behaviour have not been fully explored. In line with research on globalising cities and, more recently, on transnational gentrification as more than a local phenomenon, one of the interviewees with gaijin ghetto patterns states:

Well, just because if you live, like, in most business hotels there, a template, and they offer very much the same thing. I mean, that's the business model, like, India is an emerging market but they have Hiltons, they have Grand Hyatts and everything, same stuff. [...] And so that is generally where I'd be and you look at the room, they look the same. Menus, I mean, maybe they've got a little bit of the local food but generally you have the same basic Westernized stuff on there, plus local so it's pretty ... you can get pretty lost and not get a sense of where you are. (IN04)

The gaijin ghetto can be regarded as part of this ubiquitous trend of Westernisation or, more generally, globalisation. Mobile professionals, like the interviewees from transnational financial corporations, find such spaces within Tokyo but also in the respective global cities on their business trips and other assignments. Such spaces show increasing similarity and, more importantly with respect to these professionals' usually limited time and resources, are associated with convenience. The dispersion of the 'same' businesses and the 'same' services was described by one interviewee as 'Roppongisation'. This neologism expresses the reproduction of places like Roppongi, one of the core gaijin ghetto places, in other global arrival cities:

And yeah, it's true that Ebisu or Roppongi, what I can see, is quite similar to what I see in Singapore.
[...] Maybe they [Jakarta] will try to copy what they see in Singapore. But now, it is the centre of Jakarta: some of the shopping malls or the office buildings, all these restaurants, they are quite similar. Even this Roppongisation gets through but that takes much longer. (MM09)

The overall context of the production of such gaijin ghetto transnational spaces lies in the dynamics of the global economy, in which transnational corporate strategies and, as part of them, transnational professionals are embedded. As already noted, mobility within a restricted time frame is typical for the industry, as is the dependence of these migrants on the motions of the global economy. It becomes more than clear how much the speed and frequency of migration, and the attached transnational socio-spatial patterns, depend on global flows of capital. The dynamics of the global economy, in the form of the shifting foci of the global cities, can also be observed in the transnational professionals. In the course of this global economic development, shifts not only of flows of capital but also of transnational migrants, and with them their social networks, can be observed. As one of the interviewees, who had left for Singapore, recounted: 'Virtually all the expats had left. All of my friends had dwindled away until, finally, it was my turn to go' (MG04).
The overall locations of the gaijin ghettos are constantly on the move on a global scale, owing to the global mobility of these transnational corporate professionals but also to shifts in the balance of the global economy. As can be seen, for example, with Hong Kong nowadays, with London and Brexit a few years ago, or even with Tokyo and its changing significance in the global economy, the balance or status of cities in the global circuit is changing (see global city discourses, Derudder, 2012). Moreover, discussions on global city-regions have also shown shifts within metropolitan regional development and thus strong dynamics (Scott, 2001; Yamamura, 2019). Even on the local side, urban development projects, such as new waterfront areas, show such dynamics. Last but not least, the different dimensions of diversification on the transnational migrants' side, including changes in career or in socialisation due to family or friendship, lead to changes in the urban forms of differential inclusion.
However, the nature of the gaijin ghetto itself remains rather constant. What remains 'constant' is the general tendency of the socio-spatial pattern, where clustering or cocooning of expats occurs in specific high-end areas. The gaijin ghettos can be found in virtually any global city where transnational professionals sojourn (see also debates on transnational gentrification, Hayes and Zaban, 2020; Sigler and Wachsmuth, 2016). By reproducing their social networks and socio-spatial patterns in the cities to which they move, they contribute to a kind of global socio-spatial convergence of the global cities (see Figure 1, depicting the similar 'cocoons' in the global cities within the network). This trend could also be called Roppongisation, based on the previous narratives. What Roppongi is to Tokyo, Holland Village or Orchard are to Singapore, Happy Valley and Southside (e.g. Stanley and Tai Tam) are to Hong Kong, and Canary Wharf and South Quay are to London (Butler, 2007; Choi et al., 2020; Pow, 2017; see Figure 1). Interestingly enough, this convergence is itself already diverse, in that these transnational spaces are produced by a transnational professional class which is socio-economically homogeneous, with its own lifestyle and culture of globalism (Yamamura, 2022a). Yet this social group itself is increasingly diverse in its composition of ethnic and national backgrounds. This homogenisation amongst professionals of multicultural and multiethnic backgrounds is in line with recent transnational corporate strategies aimed at creating a global mindset (see the 'universalist perspective of HRM', as in Brewster, 2007).
The universality of the superdiversification of Pro-Tokyoites
The other side of the spectrum of the transnational socio-spatial patterns is fairly different from these high-speed and constantly dynamic, yet characteristically universal, gaijin ghettos. As discussed in previous sections, the socio-spatial characteristics of Pro-Tokyoite transnational spaces emerged as a distinct and novel phenomenon. Although there must have always been adventurous individuals venturing out of comfort zones or allowed spatial areas, the collectivised socio-spatial pattern of the pro-localising persons within the particular group of transnational elites has not been empirically and conceptually discussed in recent research. The Pro-Tokyoite space consists of proximity to the local upper-class socio-spatial patterns, yet also overlaps with those of the gaijin ghetto (see comparison of Figures 1 and 2, with the same core 'cocoon' in both, yet diverging areas going beyond them in Figure 2). The overall attitudes of Pro-Tokyoite transnational migrants tend to be different and the social interactions and building of networks include a larger diversity of people. In particular, they include local non-migrants but also transnational migrants from different countries, encompassing also locals who are returnees or who have previous migration experience.
The diversity of people is also reflected in the even stronger overall diversification of transnational space on a global scale. The Pro-Tokyoites do not only intermingle with the Japanese in Tokyo. When they move to other global cities, which they do as much as other transnational professionals of the gaijin ghetto type, they also start to build ties with the local people there. Thus, Pro-Tokyoite as a term becomes a mere archetype of the universal trend of transnational professionals merging with the local peer group. They produce transnational spaces through their social practices, yet with uniquely local characteristics. So the Pro-Tokyoite will also become a Pro-Parisien, Pro-New Yorker, Pro-Hong Konger, etc. wherever they move within the global cities network (see Figure 2). Although the dynamics of the transnational space of the Pro-Tokyoite are rather constant in terms of upper-class lifestyles, the accumulation of these Pro-Global-Cityites, so to speak, brings about a group of transnational migrants each with multi-layered and highly diverse socio-spatial patterns. The Pro-Tokyoites themselves are a superdiverse group of transmigrants, each being a hybrid in that their socio-spatial patterns overlap with the gaijin ghetto, especially due to their involvement in the transnational business communities and their corporate context. They also venture out to the more local upper class, not adopting the universalist view but instead incorporating local cultures and socio-spatial practices. At the same time, the group of these transnational migrants, though similar in their Pro-Tokyoite attitudes and behaviours, is still diverse in itself. The multiple Pro-Global-Cityite characters accumulate with the experience of transnational migrations and also differ in constellation depending on the individuals' professional and personal migration destinations (as depicted by the multi-shade areas beyond the gaijin ghetto circles in Figure 2). Additionally, further diversity is added depending on each individual's own ethnic, religious and cultural background. These Pro-Tokyoites thus illustrate the diversity within transnational corporate migrants, or the extent of superdiversity.
The dynamics of transnational migrations for urban superdiversity
As the gaijin ghetto and the Pro-Tokyoite are ideal types on a spectrum of socio-spatial patterns, it also needs to be emphasised that such socio-spatial practices, that is, localised and localisable interactions and activities among transnational migrants, are dynamic. The dynamism of differential inclusion in urban space is multidimensional and closely related to discourses on class-based transnational migration patterns (Yamamura, 2022a).
As noted in the introduction to the Tokyo case, it is not only the new and preferential arrival of transnational professionals that causes novel dynamics in the city space. It is also the lower-skilled migration specifically aimed at servicing upper-class migrants within the city. On the one hand, there are schemes in Japan that allow domestic workers to be brought over by highly skilled professionals. 14 On the other hand, since 2015 the foreign housekeeper scheme has allowed the hiring of low-skilled migrants in specific national strategic zones, explicitly accommodating the needs and services of the global upper class. Such migration policies can be found not only in Japan but also in other arrival cities and countries (e.g. Singapore, Hong Kong, Canada). These policies not only enable but also enhance polarised transnational migration at both ends of the socio-economic strata. This reflects the dual migration of 'transnational elites' and 'permanent underclasses' into world cities (Friedmann and Wolff, 1982; Sassen, 1988). Through such polarised new arrivals of transnational migration, intersecting spaces are produced. The differential inclusions of migrants in the cities are a combination of transnational migrants from below working in these upper-class areas, partly also as live-in domestic workers, while being socialised and networked in ethnic towns, following the 'old' arrivals' patterns (Ye, 2017; Yeoh, 2006). Another aspect of differential inclusion in arrival cities is the dynamics among the transnational professionals themselves, which, as for other migrants, is connected to their industrial and socio-economic class affiliation. Those migrants who lose their jobs or positions due to economic downturns cannot maintain the lifestyle and socio-spatial patterns on the lower income. This was the case with one of our interviewees, 15 who had kept his job but on a local contract excluding him from the privileges of 'expat packages'. With a change in available capital, issues arise around rent, international school fees and generally the living standards of these upper-class neighbourhoods. Relocation within the city, usually to a less central area, would be the consequence of such dynamics, resulting also in changes to the social networks and practices bound to those locations.
The case of such transnational migrants, who start losing their status as upper-class transnational professionals and thus become upper middle-class migrants, in a certain sense reflects the point that Sassen (2011) and other scholars make in the discourse on the difference between the global city and the global city-region (see also Pain, 2011; Scott, 2001). Although this is less a positive case of a more equal distribution of capital for the middle class than a dramatic change in the lives of transnational families falling victim to global economic dynamics, their pattern can still be regarded as exemplary of the middle classes who are drawn out of the city centre. As urban sociologists such as Fainstein (2011) have noted, such migrants are forced to relocate, not fully to the periphery but to the less central parts of the metropolitan area, as a consequence of high living costs which they can no longer afford with decreased monetary resources. With the forced relocation comes the risk of children having to change schools, which in turn is connected to changes in social communities and socio-spatial patterns. These dramatic situations, which have occurred during the recent economic crises, reflect what Fainstein described as the vulnerability of residents in 'those global cities whose fortunes are particularly tied to financial markets' (Fainstein, 2011: 295) to these dynamics.
Differential inclusion in arrival cities: Spatial superdiversity
This multidimensionality of migrations into cities nowadays, which Vertovec has conceptualised as superdiversity (Vertovec, 2007, 2017), creates an overall phenomenon in arrival cities which can be called spatial superdiversification. As part of the socio-spatial forms of differential inclusion in these cities, they can be said to experience an almost exponential diversification. They diversify through the multiple and frequent transnational migrations and mobilities of people themselves, but also through the intersections of transnational spaces of an even larger diversity of other international and transnational migrants. 16 Features of social practices from the Pro-Tokyoites' other urban contexts start to mix together as the ethnically and culturally diverse group of transnational professionals diversifies even more. Such multi-layered diversity leads to a genuine socio-spatial superdiversification of these diverse global cities. Moreover, as dynamic as these transnational migrants' lives can be, changes in their careers (intended or unintended) but also, importantly, in their private lives, e.g. intergroup marriages or friendships, can lead to diversification of their socio-spatial patterns. Changes in family socialisation through children or spouses can substantially contribute to differing socio-spatial patterns within the same city. Especially in the long run, with further generations of transnational migrants growing up in these transnational spaces, such as third-culture children, 17 the multidimensional superdiversification of transnational spaces, in particular spanning the increasing number of globally connected cities, will progress even more.
The spatiality of superdiversity needs to be understood as a dynamic process, and is thus more adequately perceived and discussed as a spatial superdiversification. The socio-spatially different patterns and the consequential spatial diversification are inherently connected to the different contexts of differential inclusion in arrival cities. They are dynamic in nature, shifting spatially and over time, whereas the socio-spatial dynamics are processes that are influenced by, but also find their spatial expressions at, both the global and local levels. The urban phenomenon of spatial superdiversity is one of differential inclusion because the different migration types that cause multidimensional diversification are inherently connected to policies and legal schemes that differently include the migrant groups in the cities (Ye, 2016; Yeoh, 2006). Indeed, in addition to policies differentially including migrants, with schemes privileging highly affluent and highly skilled migrants but also channelling accompanying domestic workers with such corporate professionals, it must also be noted how much the corporate context contributes to the reproduction of the geographies of differential inclusion, and to superdiversification in (global) cities.
These socio-spatial forms of differential inclusion also bring new perspectives on the hierarchisation of spaces in these arrival cities. The hierarchies run not along ethnicity but along the line of institutional-political embeddedness, as migration policies coupled with economic policy give a structural framework to the different migration schemes. Such approaches have recently been taken up by scholars working on the migration industries of highly skilled migrants (Cranston, 2016; Koh and Wissink, 2018). Access to global but also local mobilities is bound to the privileged legal status of corporate professionals. Moreover, the corporate hierarchy of positions and the attached privileges, especially financial but also social capital, regarding access to local resources become crucial for the forms of differential inclusion in arrival cities. Indeed, the global and local contexts and spatiality of superdiversification demonstrate the inherent connection between these socio-spatial dynamics and the socio-economic, class-based behavioural patterns of transnational migrations related to the differential inclusion of arrival cities.
Conclusion
The differential inclusion of new, or at least increasingly diverse, arrivals in specific cities, such as global cities, creates hubs for the diversification of migration inflows. The phenomena of the gaijin ghetto and the Pro-Tokyoite appear to be spreading universally, especially in the context of the transnational spaces spanning arrival cities. The multiple layering of such socio-spatial patterns contributes to an even stronger socio-spatial diversification in these cities. Such differential socio-spatial inclusions in urban society no longer occur along an ethno-focal line but along a superdiverse one, according to socio-economic status, industrial or corporate affiliation and even lifestyle: the socio-spatial forms of differential inclusion depend less on ethnic background and more on socio-economic and socio-cultural capital. The intersections of different transnational migrant groups in specific areas of the arrival city add to the dynamics of spatial superdiversity. Indeed, as the concept of superdiversity implies the diversification of the migrant population itself, with different socio-economic or legal statuses and further dimensions of individual diversity (Vertovec, 2007), spatial superdiversity can be expected to expand beyond the group of privileged migrants to the overall urban population.
The multi-layering also refers to the fact that each transmigrant has a different set of global cities they have migrated to before, thus bringing a diversity of Pro-Global-Cityite experiences of socio-spatial patterns. With all of the different personal backgrounds and the multitude of experiences in arrival cities, this ultimately creates a spatial diversity in these cities unique to the current global migration era. In addition to the diversity already observable within this group of transnational corporate professionals of the financial industry, which as the socio-economic elite of global society is marginal in terms of total population, the diversity becomes exponentially greater if all other transnational migrant groups and each sub-group's socio-spatial patterns are taken into consideration. This article delivers novel insights into an otherwise elusive group of transnational corporate elites regarding their under-researched socio-spatial patterns beyond their role as economic actors in the network of global cities. The novelty of this research also lies in the attempt to analyse such socio-spatial patterns as they can be observed locally and to contextualise them within the global phenomenon of urbanisation. By providing this empirical evidence on transnational corporate elites and analysing it from a transnational migration perspective, as well as discussing spatiality in cities, the article offers a novel socio-spatial take on superdiversity and brings spatial superdiversification into debates on urban diversity, particularly on differential inclusion in cities. The forms of differential inclusion in these arrival cities do not only encompass the framework of migration policies (Ye, 2016; Yeoh, 2006) and migration industries (Yamamura, 2022b) but are also connected to, and contextualised in, the corporate context. The corporate context shapes the reproduction of specific geographies of superdiversity within the network of global cities. This article is thus not solely an empirical contribution on the global city of Tokyo. It contributes theoretically to the research on transnational professionals from a migration perspective and discusses their local socio-spatial forms of differential inclusion specifically in Tokyo but also in the network of global cities on a larger scale. Its main contribution lies in its discussion of the actual different socio-spatial forms of differential inclusion and in bringing spatial superdiversification into the larger structural debates on differential inclusion.
In fact, as a future research agenda, it will be not only academically but also socio-politically interesting to look further into such intersecting (or 'merging') spaces where different migrant groups, viewed beyond the ethno-focal lens, cross socio-spatial paths. This would lead to an even more fine-grained understanding of the socio-spatial forms of differential inclusion of migrant-led diversity in cities. Moreover, the multi-scalar embeddedness, particularly of the political-institutional frameworks for these different types of migrants, is another avenue of research that could bring insights into trends in global urbanisation with regard to homogenisation on the one hand and differentiation on the other, leading to new socio-spatial diversifications in arrival cities.
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author received no financial support for the research, authorship, and/or publication of this article.

Notes

[...] Covid-19 pandemic, when the Japanese government closed its borders and introduced drastic measures regarding immigration but also regarding the arrival of foreigners in general.
5. The choice of the financial industry was motivated by the prominent role of transnational corporate professionals in contributing to the economy and structures of global cities (Hoyler et al., 2018, on global city-makers).
6. This category includes those such as English teachers or highly educated but recent graduates of higher education institutions.
7. See, for example, van Bochove and Engbersen (2015); Kunz (2018); Yamamura (2018), for debates on highly skilled (temporary) migrants' specific local spaces.
8. The Yamanote area is the 'upper town' of Tokyo, both physio-geographically and socio-economically. Historically, this is the area where the feudal elite lived, in contrast to the lower town of the labourers and lower classes closer to the swampy bay area (see more in Seidensticker, 1983).
9. Beyond the inner Tokyo areas, there are also more nationality-based 'exclaves', such as around the German School in Tsuzuki-ku (specifically around the Centre Minami/Nakamachidai stations) in Yokohama.
10. This does not exclude socialisation with other Japanese transnational professionals, who can be similarly living in the gaijin ghetto with comparable socio-spatial patterns.
11. Ryokan (Japanese): traditional Japanese accommodation which typically features tatami (rice straw)-matted rooms and communal (hot spring) baths and, compared to other types of accommodation, is usually more expensive.
12. 'Osusume wa nandesuka' (Japanese): translates to 'What is the recommendation?'.
13. As mentioned in the context of the spectrum of socio-spatial patterns as well as the sample of transnational corporate professionals, there were also nuanced differences in socialisation and socio-spatial patterns among those with Japanese national backgrounds or those Japanese with international experience.
ORCID iD
14. In the interviews, domestic workers or helpers were not mentioned, except for au-pairs and babysitters. It must also be noted that 43 out of the 45 transnational corporate professionals were male, many of them with trailing wives; this is, especially regarding the high managerial positions within the financial industry, a rather representative sample but it also resulted in the omission of details on housework issues. 15. As the sample of transnational corporate professionals in this study encompassed professionals of higher managerial positions, such case was very rare, yet this one case demonstrates well how a change in status can result in a 'merging' of social groups, with mobility on the socioeconomic strata going hand in hand with changes in sociospatial patterns. 16. It must also be noted that there are also unpredictable yet realistic potentials of changes with regard to border controls and/ or migration policies, such as during the Covid-19 pandemic, that can contribute to inhabitation of diversification. At the same time, the limitation of new inflows and the overall immobility and social isolation can also lead to novel collaborative dynamics between foreign and local populations. 17. That is, children of mobile transnational families who, through the interactions with peer transnational children, develop their own social practices different from their parents' cultures and socio-spatial patterns. | 2022-09-25T15:16:44.346Z | 2022-09-23T00:00:00.000 | {
"year": 2022,
"sha1": "72d9db53f5ccbd6d1ea756080477f1a025f9aabb",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1177/00420980221114213",
"oa_status": "HYBRID",
"pdf_src": "Sage",
"pdf_hash": "dd31089154b5f35056d6441f8eadcf36d6b2f5e2",
"s2fieldsofstudy": [
"History"
],
"extfieldsofstudy": []
} |
92999205 | pes2o/s2orc | v3-fos-license | Temporal variation of renal function in people with type 2 diabetes mellitus: A retrospective UK clinical practice research datalink cohort study
Abstract Aim To characterize the longitudinal variability of estimated glomerular filtration rate (eGFR) in people with type 2 diabetes mellitus (T2DM), including variation between categories and individuals. Methods People with T2DM and sufficient recorded serum creatinine measurements were identified from the Clinical Practice Research Datalink (T2DM diagnosis from 1 January 2009 to 1 January 2011 with 5 years follow‐up); eGFR was calculated using the CKD‐EPI equation. Results In total, 7766 individuals were included; 32.8%, 50.2%, 12.4%, 4.0% and 0.6% were in glomerular filtration rate (GFR) categories G1, G2, G3a, G3b and G4, respectively. Overall, eGFR decreased by 0.44 mL/min/1.73 m2 per year; eGFR increased by 0.80 mL/min/1.73 m2 between index and year 1, then decreased by 0.75 mL/min/1.73 m2 annually up to year 5. Category G1 showed a steady decline in eGFR over time; G2, G3a and G3b showed an increase between index and year 1, followed by a decline. Category G4 showed a mean eGFR increase of 1.85 mL/min/1.73 m2 annually. People in categories G3‐G4 moved across a greater number of GFR categories than those in G1 and G2. Individual patients' eGFR showed a wide range of values (change from baseline at year 5 varied from −80 to +59 mL/min/1.73 m2). Conclusion Overall, eGFR declined over time, although there was considerable variation between GFR categories and individuals. This highlights the difficulty in prescribing many glucose‐lowering therapies, which require dose adjustment for renal function. The study also emphasizes the importance of regular monitoring of renal impairment in people with T2DM.
| INTRODUCTION
Diabetes is a leading cause of chronic kidney disease (CKD) 1 and it is expected that between 40% and 50% of people with type 2 diabetes mellitus (T2DM) will be affected by CKD in their lifetimes. [2][3][4] However, only a small number of glucose-lowering therapies can be used safely in people with renal impairment without requiring a dose adjustment. 5 Therefore, renal function is an important factor to consider when prescribing glucose-lowering medications in people with T2DM.
Previous research has shown that renal function, as measured by estimated glomerular filtration rate (eGFR), can vary considerably, especially among people with diabetes. [6][7][8][9][10][11][12][13][14] These studies have also suggested that eGFR improvement among people with T2DM is possible, 11 leading to increased complexity when considering optimal treatment. Published studies have tended to investigate renal variation at the population or category level, with one such study reporting eGFR trends in the UK. 11 There are no recent studies reporting patient-level variation in renal function in a T2DM population.
Using primary care clinical records, this study aims to further characterize the longitudinal variability of eGFR in a cohort of people with T2DM with availability of consistent eGFR measurements over a period of 5 years to further explore eGFR trends and patterns over a longer period, including analysis at the individual patient level.
| Data source
Patient records were obtained from the UK Clinical Practice Research Datalink (CPRD), a primary care database that includes data from general practices throughout the UK. As of November 2018, the database contained anonymized data for approximately 10 million people, with over 1 in 10 practices in the UK contributing data. 15 CPRD data have been used in over 2000 peer-reviewed publications 15 , and have been found to be broadly representative of the UK population in terms of age, sex, ethnicity and body mass index (BMI). 16 Medical records are updated monthly from participating practices, including complete clinical information, pathology tests, anthropometric data, referral and prescription records. CPRD is linked to Hospital Episode Statistics (HES), a database containing details of all hospital admissions, accident and emergency attendances and outpatient appointments, to improve ethnicity recording for glomerular filtration rate (GFR) estimation. 17
| Study population
Individuals were identified in CPRD based on their first diagnosis code of T2DM (codes are reported in the supporting information). Eligibility criteria included diagnosis of T2DM between 1 January 2009 and 1 January 2011; individuals also had to have a measure of serum creatinine after T2DM diagnosis (index measurement) and at least one measure of serum creatinine recorded in 5 yearly intervals post-first serum creatinine after diagnosis. In addition, the following inclusion criteria were applied: individuals must have at least 12 months' registration in practice prior to the index date; belong to an "up-to-standard" practice at the index date; have a record of ethnicity (identified through HES linkage, or CPRD if unavailable in HES). Individuals with a history of type 1 diabetes mellitus were excluded from the analysis.
| Renal function classification
Renal function was measured via eGFR using the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation. To estimate GFR, the CKD-EPI equation requires data for serum creatinine, age, sex and ethnicity (see equation in the supporting information). 18 The CKD-EPI equation was selected as it is the formula recommended by the National Institute for Health and Care Excellence (NICE). 19 Individuals were grouped into GFR categories, as adopted by NICE guidelines, according to their eGFR at baseline and follow-up. 19 These are G1 (≥90 mL/min/1.73 m2), G2 (60-89 mL/min/1.73 m2), G3a (45-59 mL/min/1.73 m2), G3b (30-44 mL/min/1.73 m2) and G4 (15-29 mL/min/1.73 m2). Category G5 (<15 mL/min/1.73 m2) was also considered, but none of the study population had an eGFR that fitted within this group.
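For illustration, the sketch below shows how an eGFR value can be derived from a serum creatinine measurement and then mapped onto the GFR categories above. It uses the 2009 CKD-EPI creatinine equation as commonly published; the exact formulation used by the study is in its supporting information, and creatinine values reported in µmol/L (as is typical in UK records) would first need dividing by 88.4 to obtain mg/dL. The example patient is hypothetical.

```python
def ckd_epi_egfr(scr_mg_dl, age, female, black):
    """eGFR (mL/min/1.73 m2) from the 2009 CKD-EPI creatinine equation.
    scr_mg_dl: serum creatinine in mg/dL."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = 141 * min(ratio, 1.0) ** alpha * max(ratio, 1.0) ** -1.209 * 0.993 ** age
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

def gfr_category(egfr):
    """Map an eGFR value onto the NICE/KDIGO GFR categories used in the study."""
    if egfr >= 90:
        return "G1"
    if egfr >= 60:
        return "G2"
    if egfr >= 45:
        return "G3a"
    if egfr >= 30:
        return "G3b"
    if egfr >= 15:
        return "G4"
    return "G5"

# Hypothetical patient: 67-year-old white woman, serum creatinine 1.1 mg/dL
egfr = ckd_epi_egfr(1.1, 67, female=True, black=False)
print(round(egfr, 1), gfr_category(egfr))  # roughly 52 -> "G3a"
```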
Baseline characteristics, including age at T2DM diagnosis, age at the index date, BMI, HbA1c, systolic blood pressure, diastolic blood pressure and eGFR, were compared among individuals included and excluded from the analysis using Student's t-test. Renal function was described for each yearly interval based on the last recorded value per year and compared with baseline using mean values, counts and percentages to identify the raw change in eGFR as well as individual category changes. The analysis was performed using Stata version 14.
On average, there were no relevant differences in the baseline characteristics of those included and excluded from the analysis in terms of age, BMI, HbA1c, systolic and diastolic blood pressure and eGFR (Table S1).
During follow-up, patients changed GFR categories 1.5 times on average [standard deviation (SD) 1.6]. Those with reduced renal function below 60 mL/min/1.73 m 2 (G3 and higher categories) changed GFR categories more often compared with people with eGFR ≥60 mL/min/1.73 m 2 (G1 and G2) ( Table 2). In particular, people in categories G1 and G2 changed GFR categories 1.3 times on average (SD 1.6 and 1.5, respectively), and people in categories G3a, G3b and G4 changed GFR categories 2.6 (SD 1.7), 2.1 (SD 1.7) and 2.9 (SD 1.9) times, respectively. Table S4). Change in renal function in the GFR categories and at the individual patient level showed the same trends as reported in the main analysis (Table S4 and Figure S4, respectively).
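As a rough illustration of how the number of GFR category changes per patient can be counted from the yearly eGFR values (the last recorded value per year), a small sketch follows. The paper does not state the exact counting rule, so this assumes a change is counted whenever the category of one yearly value differs from the previous year's; the patient trajectory is invented.

```python
GFR_CUTS = [(90, "G1"), (60, "G2"), (45, "G3a"), (30, "G3b"), (15, "G4")]

def gfr_category(egfr):
    # First threshold the value meets, otherwise G5 (<15 mL/min/1.73 m2)
    return next((label for cut, label in GFR_CUTS if egfr >= cut), "G5")

def category_changes(yearly_egfr):
    """Number of times the GFR category of the last recorded eGFR per year
    differs from the previous year's category."""
    cats = [gfr_category(e) for e in yearly_egfr]
    return sum(prev != curr for prev, curr in zip(cats, cats[1:]))

# Invented trajectory: index value followed by five yearly follow-up values
print(category_changes([62, 58, 61, 47, 52, 44]))  # -> 4
```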
| Sensitivity analyses
Finally, we also looked at eGFR trends according to ACR. The results in both categories (A1 and A2) followed a similar trend to that observed in the main analysis (Figure S5). This may also produce some bias in the results, as we are unable to identify how the practices included in our study perform against any clinical quality metrics. However, because all the practices included met the "up-to-standard" metric, it is probable that each possessed a reasonable level of quality and was suitable for research. | 2019-04-04T13:02:50.012Z | 2019-05-06T00:00:00.000 | {
"year": 2019,
"sha1": "2586056584049abf3bce5c14f17ba3041ab01095",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1111/dom.13734",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "76753b99b2baa14969c46a7f66c031b31c48c268",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
258428095 | pes2o/s2orc | v3-fos-license | Shifting paradigms: Reframing coverage of antiobesity medications for plan sponsors
With the recent approvals of highly effective glucagon-like peptide-1 (GLP-1) receptor agonists for weight loss, there has been renewed interest in the debate over pharmacy coverage for antiobesity medications. All stakeholders, including plan sponsors, pharmacy benefit managers (PBMs), and drug manufacturers, have responsibility to ensure that people living with obesity have access to weight management medications. Managed care pharmacy has a role to play in the accessibility of antiobesity medications through benefit design. Typical PBM benefit elections exclude the category of weight loss medications and offer the plan sponsor to opt-in for coverage. Best practices for PBMs should be to include the category of weight loss medications as a core element of coverage. Switching to an opt-out election will encourage plan sponsors to include the coverage as a standard benefit offering. PBMs can also ensure the appropriate patient populations are receiving treatment by enforcing utilization management parameters or authorization criteria. Plan sponsors provide varying coverage of antiobesity medications, despite studies demonstrating that even a modest 5% reduction in weight from baseline translates to improved health outcomes and reduced medical costs. Employers have struggled in trying to promote healthy lifestyles and weight loss for employees by offering wellness programs and financial incentives, amounting to an estimated $8 billion industry. Historically, these programs have failed to demonstrate evidence of value and result in no significant changes in weight among their targeted employees. Employer’s wellness dollars would be better reallocated to providing coverage of antiobesity medications such as GLP-1 receptor agonists with demonstrated, sustained weight loss in clinical studies. Perhaps the largest concern facing plan sponsors is the estimated budgetary impact of providing coverage for these agents. Drug manufacturers can step in to encourage formulary uptake of high-cost antiobesity medications. Manufacturers could consider outcomes-based contracts to link coverage and reimbursement to real-world performance to temper plan sponsors’ apprehension. Plan sponsors are already shouldering the increased costs in medical expenditures associated with obesity. By partnering with drug manufacturers, plan sponsors would be able to reduce their financial risk and see improvements in their medical spend. As the prevalence of obesity continues to rise in the United States, these and other collaborative best practices are essential to ensure equitable treatment options and to protect the sustainability of the health care system. The increase in prevalence of obesity is a major focus of concern among national and global health organizations. Obesity is a common, chronic disease that affects adults and children and is a serious health-risk. The Centers for Disease Control and Prevention defines obesity as body mass index (BMI) (weight in kilograms divided by height in meters squared) greater than 30 for adults and BMI-forage in the 95th percentile or greater for children.1 In the past 2 decades, the prevalence of obesity in adults in the United States increased from 30.5% to 42.4%.2 During the same period, the prevalence of severe obesity in adults (defined as BMI >40) nearly doubled from 4.7% to 9.2%. 
All states and territories are affected by obesity, with the highest prevalence in the South and the Midwest, and it disproportionately impacts racial and minority groups.3 Obesity is associated with poorer health outcomes and increased medical expenditures. Cardiovascular disease, hypertension, type 2 diabetes mellitus, hyperlipidemia, stroke, certain cancers, sleep apnea, liver and gallbladder disease, osteoarthritis, and gynecological problems are associated with obesity.4
The Diabetes Prevention Study demonstrated that a 5% reduction in weight from baseline translates to improvement in health outcomes. 5 Adults with obesity are estimated to double their medical expenditures compared with their reference-weight counterparts, on average incurring $2,505 higher annual medical costs with increasing costs associated with higher severity of obesity. 6 Third-party payers paid for 88.5% of the total cost increase, with the largest increases in inpatient services and prescription drug expenditures. 6 (Figure 1) The direct medical costs of obesity among adults in the United States were estimated to be $260.6 billion in 2016. 6 Indirect (nonmedical) costs of obesity have also been studied, with time away from work owing to obesity being the most commonly measured. Estimates in 2008 of the national costs of obesity-attributable absenteeism in the United States range from $3.38 billion to $6.68 billion ($4.5 billion to $8.9 billion in 2022 USD) annually. 7
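As a small aside on the arithmetic used throughout this commentary, the sketch below applies the adult cut-points quoted in the introduction (CDC definitions: BMI ≥30 obesity, BMI ≥40 severe obesity). The weight and height are hypothetical; the paediatric BMI-for-age percentile definition is not covered here.

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2

def adult_bmi_class(value):
    """Adult cut-points cited in the article (CDC): >=30 obesity, >=40 severe obesity."""
    if value >= 40:
        return "severe obesity"
    if value >= 30:
        return "obesity"
    return "below the obesity threshold"

value = bmi(104.0, 1.75)
print(round(value, 1), adult_bmi_class(value))  # 34.0 obesity
```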
Plan Sponsors Should Consider Coverage of Antiobesity Medications
Given the high-cost burden on third-party payers for obese patients relative to reference-weight patients, and the significant advances in obesity treatment with GLP-1-related agents, plan sponsors should cover antiobesity medications to reduce excess weight and improve health for their members.
For decades, physicians, plan sponsors, and the public debated whether obesity is a chronic illness, a risk factor for other disease states, or a poor lifestyle choice and lack of willpower in people. With advances in technology leading to discovery of hormonal regulators directly impacting obesity, multiple professional organizations have published position statements defining obesity as a chronic illness. 8 Since the approval of the first antiobesity medication in the 1930s, dozens more have entered the market but were subsequently withdrawn because of severe adverse side effects and safety issues, including death. 9 Further, patients on antiobesity medications struggled to sustain clinically significant weight loss at chronically administered, tolerable doses. 9 In 2007, the US Food and Drug Administration (FDA) provided guidance that a 5% or higher weight loss should be used to demonstrate efficacy, a target that most antiobesity medications, although able to achieve short-term, were not able to achieve long-term. 9,10 This likely relates to the pathophysiology differences between patients who are in active weight loss and those maintaining weight loss. Treating obesity as a disease state remains a challenge for many patients and providers, who are faced with few treatment options. Without safe and effective medications to target weight loss, patients contend with nonpharmacologic interventions such as diet and exercise, often with disappointing results. Bariatric surgery is still the most effective, albeit invasive, treatment option, with sustained weight loss approaching 30% from baseline. 9 In recent years, the discovery and approval of glucagonlike peptide-1 (GLP-1) receptor agonists changed the obesity treatment landscape. Semaglutide, the first weekly GLP-1 agonist approved for chronic weight management, demonstrated significant long-term weight loss. Not only did individuals in the treatment arm experience a decrease in body weight of 14.9% vs 2.4% in placebo, but the medication was generally well tolerated, a stark contrast from the previous decades of weight management pharmacotherapy. 11 Results from the phase 3 SURMOUNT-1 trial demonstrated that tirzepatide, a dual glucose-dependent insulinotropic polypeptide/GLP-1 agonist, achieved sustained weight loss after 68 weeks between 15% and 20.1% across 3 doses compared with a 3.1% decrease with placebo. 12 The sustained weight loss results patients experienced with recently approved GLP-1 agonist-based agents far exceeds what patients historically achieved by 2 to 4 times. 9 Social media visibility, combined with direct-to-consumer advertising, fueled demand for GLP-1 agonists across the United States, which, along with supply chain issues, resulted in limited availability of semaglutide for several months in 2022. The media's attention to drug shortages also highlighted the lack of plan-sponsored coverage of these agents for most people. And even when there is coverage, patients usually have higher out-of-pocket costs. This combination of scenarios is thought to worsen existent health disparities because obesity rates are higher in lowincome communities. 13,14 Coverage of antiobesity medications varies across plan sponsors. A 2018 study of individually purchased health plans found that covered agents were generally for older drugs, which have lower efficacy and more adverse effects, whereas newer therapies tended to be covered with higher cost shares. 
15 The largest employer-sponsored health care program in the United States, the Federal Employee Health Benefit (FEHB) program, completed a survey of its plans that revealed limited or no coverage for antiobesity medications. The FEHB recently issued communication for its 2023 plans that FEHB carriers are not allowed to exclude antiobesity medications from coverage based on a benefit exclusion or a carve out. 16
Although plan sponsors may have varying coverage of antiobesity medications, they use a variety of wellness programs, which typically include a weight loss/weight management component. Studies show mixed results regarding the ability of wellness programs to significantly impact health and economic outcomes for both patients and employers. A 2013-2015 study of employer-based financial incentives for weight loss exposed the failures of such workplace programs, resulting in no significant changes in weight among the study groups. 18 Despite the paucity of evidence demonstrating value, the wellness program industry, estimated as an $8 billion industry, is used by approximately 82% of large firms and 53% of small employers in the United States. 19 Employers' wellness dollars would be better spent going toward evidence-based weight loss interventions, such as providing coverage of antiobesity medications through their employee prescription benefits. A percentage of wellness program dollars could be reallocated to help cover the increase in pharmacy spend on antiobesity medications, with regular reassessment intervals to track improvement in other chronic diseases over time. To avoid disparities because of high out-of-pocket costs for patients, employers could offer programs to reduce coinsurance or bypass deductibles based on weight management program engagement. This could be a program to promote accountability, such as patients tracking their weight loss journey or medication adherence.
Best Practices on Implementing Coverage of Antiobesity Medications
One of the largest concerns facing plan sponsors is the estimated budgetary impact of providing coverage for the GLP-1 agonist-based medications. Although some of the other commonly used agents, such as phentermine, have been around for decades and are relatively inexpensive, the newer GLP-1 agonist-based agents indicated for weight loss retail for approximately $1,500 monthly. At annual costs approaching $18,000, with potentially a large percentage of members meeting the FDA-labeled indication, plan sponsors are concerned that there may not be enough health care dollars. 17 The Institute for Clinical and Economic Review (ICER) report published in October 2022 concluded that although semaglutide 2.4 mg provides an incremental or better clinical rating when compared with lifestyle modification, the current cost far exceeds commonly used industry cost-effectiveness thresholds. ICER estimates that to be considered cost-effective, semaglutide 2.4 mg would need to be discounted between 44% and 57% from current prices. 17
FIGURE 1. Adults With Obesity. BMI = body mass index; PBM = pharmacy benefit manager.
FIGURE 2. Recommended Best Practices.
Drug manufacturers may want to consider outcomes-based contracts to encourage formulary uptake of high-cost antiobesity medications. Outcomes-based contracts, in which drug manufacturers and plan sponsors link coverage and reimbursement to real-world performance, are an emerging trend in European and overseas single-payer markets. 21 To date, their use has been limited in the US private sector, despite opportunities in which outcomes-based contracts can reduce a plan sponsor's risk. Drug manufacturers and plan sponsors should continue to explore these outcomes-based arrangements.
In January 2023, the drug manufacturer Eisai announced pricing based on societal value for its new Alzheimer disease treatment, lecanemab, to promote broader patient access and support health care system sustainability. 22 For promising antiobesity medications, manufacturers should similarly tie the medication's price to real-world outcomes. Payers would likely want to see real-world evidence demonstrating not only sustained weight loss but also lower health care resource utilization and improvements in biomarkers of obesity-related comorbidities.
In summary, all stakeholders, including plan sponsors, PBMs, and drug manufacturers, are tasked with the responsibility to ensure that people living with obesity have access to weight management medications.
Pharmacy benefit managers (PBMs) can play a role in the uptake of antiobesity medications ( Figure 2). Typical PBM benefit design excludes the category of antiobesity medications on a standard plan offering. The plan sponsor has to opt-in for coverage for its members, sending the message to plan sponsors that coverage of antiobesity medications is an unnecessary benefit. Instead of framing coverage as an add-on, PBMs should set the default benefit design to include antiobesity medications and allow the option for plan sponsors to opt-out. This will encourage plan sponsors to adopt antiobesity medications as a standard benefit election.
Plan sponsors considering the addition of antiobesity medications to their formulary coverage have several options to ensure the appropriate patient populations receive treatment. Although these best practices are great first steps, additional innovative solutions are necessary. Managed care professionals must consider the potential unintended consequences of widening disparities while attempting to close the affordability gap. Obesity affects the lives of almost half of the US population and accounts for diminished health outcomes and increased health expenditures, disproportionately affecting low-income communities. To ensure sustainability of the health care system and to reverse the rising prevalence of obesity and obesity-related comorbidities, stakeholders can take steps to encourage effective and equitable treatment options.
| 2023-05-02T06:17:41.105Z | 2023-05-01T00:00:00.000 | {
"year": 2023,
"sha1": "5970e4e50312d2c86b2b3ed66dedbf5e4a3894b0",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "ed580f2f70a510b98426492fb75ad131d66e17b3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233248943 | pes2o/s2orc | v3-fos-license | The Effects of the FIFA 11+ and 11+ Kids Training on Injury Prevention in Preadolescent Football Players: A Systematic Review
Background. Most football players (58%) around the world are younger than 18 years, and almost three quarters of these young players are under the age of 14 years. The characteristics of football injuries in children aged 7-12 years differ from those of young and adult players. Objectives. The aim of this systematic review was to evaluate the effects of the FIFA 11+ and 11+ Kids warm-up programs in preventing injuries in pre-adolescent football players. Methods. The PubMed and Science Direct databases were searched using the terms FIFA 11+, 11+ kids, injury prevention, football, and pre-adolescent. A total of 520 studies were identified, of which 10 met the inclusion criteria of the review. Methodological quality of the studies was assessed with the PEDro score. Results. The 11+ Kids exercises reduce injury and improve physical fitness factors such as balance, jumping activities and lower limb isokinetic strength. Although the 11+ exercises are designed for players aged over 14 years, they result in an improvement in movement patterns, stability, and trunk muscle endurance. The methodological quality of the randomized studies ranged from 4 to 7 (out of 10), with a mean score of 5.6, indicating moderate methodological quality. Conclusion. The 11+ program alone or in combination with the newly-developed 11+ Kids program may be helpful in preventing injury and improving performance, especially if implemented for a longer period or with more exercise sessions per week.
INTRODUCTION
Most football players (58%) around the world are younger than 18 years, and almost three quarters of these young players are under the age of 14 years (1). The characteristics of football injuries in children aged 7-12 years differ from those of young and adult players. For example, the rate of bone and upper extremity injuries in children is higher than that of older players (2,3). This is probably due to lower skill (4), reduced muscle strength (5), and lower muscle endurance and coordination (6). Anterior cruciate ligament injuries also begin to increase between the ages of 10 and 12 years (7). Children at these ages exhibit risky motor patterns during landing activities (8,9), including decreased knee flexion and increased knee valgus (10). Risk factors are traditionally divided into two categories: intrinsic (athlete-related) and extrinsic (environmental) (11). In terms of the prevention and management of sports injuries, risk factors are further divided into modifiable and non-modifiable factors. To prevent or reduce sport injuries, it is necessary to manipulate the modifiable factors (neuromuscular and biomechanical risk factors) (12) to ultimately reduce the risk of injury. Physical fitness is among the intrinsic and modifiable risk factors. The important components of physical fitness are strength, muscle endurance, cardiorespiratory endurance, coordination, balance, flexibility, and body composition (13). Studies have indicated that people with lower levels of physical fitness are at a higher risk of injuries (13). In recent years, many preventive programs have been designed and implemented to prevent football injuries (14)(15)(16)(17)(18). Soligard et al. (2008) stated that the 11+ program can prevent injuries in young female football players and can generally reduce one-third of injuries (19). Several studies have investigated the "11+" injury prevention program in players aged 14 years and older and reported a reduction (between 32 and 72%) in the incidence of lower extremity injuries (20)(21)(22). These programs have had relative success in preventing injuries and are widely accepted and applied by coaches and players. In addition to preventing injuries, they are effective in improving the performance and physical strength of football players (17). The 11+ program has been reported to have a significant effect on speed (23), dribble speed, shooting accuracy, agility, and vertical jump of football players (24). Zarei et al. observed a significant improvement in the Sargent jump, Bosco repetitive jump, and dynamic balance tests after one season of "11+" exercise in male adolescent football players; significant improvements were not observed in the Illinois agility test, 20-yard and 40-yard sprint, Yo-Yo test, flexibility, and dribbling (25). Taghizadeh et al. reported that the 11+ exercise program significantly increased flexion and extension strength of the dominant and non-dominant leg and dominant-leg balance in the posterior and posterior-lateral directions (18). Recently, specialists at the FIFA Medical Assessment and Research Center (FMARC) designed the "FIFA 11+ Kids" program, taking into account pubertal status and the most common injuries in children (26). This exercise program has been designed to enhance spatial orientation, prediction and attention, increase body stability and movement coordination, and finally train proper landing techniques (26).
The main goal of this program is to manipulate intrinsic risk factors such as muscle strength and balance to reduce the risk of injury. Muscle weakness is thought to be an important risk factor for injury in children; thus, two separate sections of the 11+ Kids program are allocated to plyometric and jump exercises. Rossler et al. (2015) investigated the effects of this program on neuromuscular function in pre-adolescents compared to a conventional warm-up program and indicated its effectiveness in enhancing their motor function (27). Hence, prevention programs that are effective in late adolescence or in adult players should also be adapted for younger groups, taking into account the injury profile and maturity status of pre-adolescents (2). A number of studies have been conducted on the effects of FIFA injury prevention exercises on pre-adolescents, and contradictory results have been obtained. The benefits of injury prevention exercise in pre-adolescents have therefore remained unclear. Hence, this review study was designed to investigate the effects of the FIFA 11+ and 11+ Kids warm-up programs in preventing injuries in pre-adolescent football players.
MATERIALS AND METHODS
This systematic review has been reported using the PRISMA guidelines (28) (Figure 1). The researchers searched a combination of keywords in the PubMed and Science Direct databases. The keywords included FIFA 11+, 11+ kids, injury prevention, football, and pre-adolescent. We included studies published from 2006 (when FIFA 11+ was launched) to 2019. Two reviewers examined the abstracts and titles independently according to the inclusion criteria. Ultimately, 10 papers met the inclusion criteria of the review.
Inclusion Criteria. Studies conducted on pre-adolescent players under the age of 14 years.
All studies were randomized controlled trials or case-control studies.
Exclusion Criteria. Subjects over 14 years.
Training programs other than the FIFA exercises. Review articles and case reports. To evaluate methodological quality, the PEDro scale was used for randomized studies (37). The score of each study was determined by two authors. The PEDro scale includes 11 items, and the first item evaluates external validity; this item is usually not included in the study evaluation. Thus, the evaluation in the present study was based on items 2 to 11, according to the Moher et al. guidelines. A score of 1 was given for the option "yes" and a score of zero for the option "no". Studies scoring 0 to 4 were considered of poor methodological quality, 5 or 6 moderate, and 7 or above high.
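To make the scoring rule concrete, the sketch below totals one point per "yes" answer on PEDro items 2-11 and maps the total onto the quality bands used in this review; the item-by-item answers are hypothetical.

```python
def pedro_score(answers):
    """Total PEDro score: one point per 'yes' on items 2-11
    (item 1, external validity, is not counted)."""
    return sum(1 for item in range(2, 12) if answers.get(item, False))

def quality_band(score):
    """Quality bands used in this review: 0-4 poor, 5-6 moderate, 7+ high."""
    if score <= 4:
        return "poor"
    if score <= 6:
        return "moderate"
    return "high"

# Hypothetical item-by-item answers for one trial
answers = {2: True, 3: False, 4: True, 5: False, 6: False,
           7: False, 8: True, 9: True, 10: True, 11: True}
total = pedro_score(answers)        # 6
print(total, quality_band(total))   # 6 moderate
```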
RESULTS
The methodological quality of the randomized studies ranged from 4 to 7 (out of 10), and the mean score of the studies was 5.6, indicating moderate methodological quality. Table 1 presents the scores of the reviewed articles according to the PEDro scale.
The details of the reviewed articles are presented in Table 2. All studies were randomized controlled trials. The age range of the participants was 9 to 14 years. The FIFA warm-up exercise programs included the 11+ Kids and 11+ exercises, which differed in duration, frequency, and content. Some researchers evaluated physical factors (29)(30)(31)(38) and others evaluated movement pattern through the FMS (31,34). In one study, isokinetic strength was examined (32), and in two other studies, the incidence of injuries and costs were examined (33,35). Lower extremity angles and torques were assessed in tests such as preplanned cutting, double-leg jump, and single-leg jump in the study conducted by Thompson et al. (2017) (12).
Table 2 (fragments):
• Sixteen young soccer players (aged 10 years); RCT; 1) exp: 11+ FIFA program, 2) con: conventional training; two sessions per week for five weeks; outcomes: standing long jump performance and body stability. Significant improvements in the stability index in both groups; training had no effect on standing long jump performance.
• The overall injury rate in the intervention group was reduced by 48% compared with the control group; severe injuries were reduced by 74% and lower extremity injuries by 55%.
• Functional Movement Screen: there were no significant differences between the post-intervention results of the EG and the CG.
• Rossler et al. (35), 2018: cluster randomized controlled trial, under-9 to under-13 age groups.
• RCT; 2) con: conventional training; outcomes: preplanned cutting, unanticipated cutting, double-leg jump, and single-leg jump tasks; lower extremity joint angles and moments. No significant differences in the change in peak knee valgus moment were found between the groups for all activities; improvement in peak ankle eversion moment after training during preplanned cutting, unanticipated cutting, and the double-leg jump, compared with the control group.
DISCUSSION
The studies conducted on the effectiveness of the 11+ Kids exercises have shown how a simple warm-up program can reduce the rate of the injuries and the medical costs for both boys and girls under the age of 14 years (33,35).In general, the teams that implemented the 11+ exercise program had 30 to 70 percent fewer injured players (19,39,40).The effect of the 11+ Kids exercises on the lower extremity injuries was consistent with the results of other prevention programs presented as warm-up exercises. In a randomized controlled trial among 4,564 Swedish players aged 12 to 17 years old, Walden et al. (41) reported that neuromuscular warm-up program consisted of 6 trunk and lower extremity fitness and jumplanding exercises significantly reduced the incidence of anterior cruciate ligament (ACL) injuries in adolescent female football players, while a number of other randomized controlled interventions also showed that prevention programs targeting the football players could reduce the rate of injuries (42). The FIFA 11+ kids program two times per week for 4 weeks lead to small to moderate improvements in some [dynamic postural control, agility run, and jumping (standing long jump, CMJ, and DJ) measures] but not all [20 m sprint time, slalom dribble, wall volley, and ROMs (with the exception of the knee flexion ROM) measures] of the physical performance parameters analyzed (30).This was in line with the study conducted by Zarei et al., in which a significant difference was observed in the balance and triple hop and no significant differences was observed between two groups in the skills of wall volley and slalom dribble. In the balance test, these results were not in line with those of the study conducted by Parsons et al. (38).Pre-adolescents did not show a progress in the dynamic balance scores. One of the most important reasons can be considered the lack of a similarity in some stages of the 11+kids and 11+ exercises. 11+ Kids exercises mainly focus on improving coordination, balance, landing technique, strengthening the leg muscles and core stability muscles and may be more appropriate than 11+exercises for the pre-adolescents (36).Two and three balance exercises, especially on one leg and jump exercises (Exercises 1, 2 and 3) might be reasons for the success of 11+ Kids exercises in enhancing the dynamic balance and jump tests in the pre-adolescent players (36). The balance exercises increase the neural adaptation and inhibitory stimulation of spinal reflexes, such as stretching reflexes, and increase the co-contraction pattern in the agonist and antagonist muscles, ultimately leading to improved balance (43). Similar improvements in postural control (anterior Y balance), agility, and jumping activities have been reported after twice weekly for 10 weeks in a large cohort of young football players (27).However, in contrast to this article, the same authors found progresses in the slalom dribble and wall volley tests. A possible explanation for these inconsistent results may be different duration of the intervention phase, the number of participants, and the level of physical activity. Therefore, 4 weeks of 11+ Kid exercise program may not result in exercise responses in the speed tests and specific coordinated activities of slalom dribble and wall volley (29).Nemati et al. used the FMS test to evaluate the underlying movement patterns and concluded that the FIFA 11+ program significantly increased FMS scores in the intervention group compared to the control group. 
Also, 57% of the subjects in the intervention group obtained scores above 14, while no changes was observed in the control group (31).These results can be compared with the observations of Kiesel et al. and Bodden et al., which obtained an increase of 52% and 66%, respectively, in people who scored higher than 14 (10,44).These results were in contrast to those of the research conducted by Baeza et al,, which did not find a significant difference between the two groups. However, in the intervention group, after 6 weeks of 11+ exercise program, a progress was observed in 4 tests out of 7 tests. A significant increase in the overall scores and the scores above 14 showed a possible reduction in the injury based on the clinical outcomes (34).
FIFA 11+ is a specific football warm-up program that can improve strength, balance, core stability and proprioception (26). It can improve the movement quality and movement patterns of regular football players. Thompson et al. evaluated the changes in biomechanical risk factors for anterior cruciate ligament injury after participation in the 11+ program in pre-adolescent players and observed a reduction in maximal knee valgus torque in the double-leg jump. However, there were no differences between the two groups in maximal knee valgus torque in the single-leg jump and cutting tests (12). Since the methodological quality of this study is poor, its results should be interpreted with caution. In another study conducted by Zarei et al., the effect of the 11+ Kids exercise program on isokinetic strength was examined and showed positive effects on the hip adductors, knee flexors, and ankle evertors and invertors in the intervention group compared to the control group (32). Of the ten reviewed studies, none applied blinding of the therapist or the subjects, and only five addressed blinding of the assessor. Therefore, in order to obtain more reliable results, it is suggested that future studies consider these issues. Among the reviewed studies, most articles evaluated physical factors and the rate of injuries in pre-adolescents. It is recommended that future studies investigate the effect of the 11+ Kids and 11+ exercise programs on biomechanical and neuromuscular risk factors in children aged below 14 years.
CONCLUSION
Based on the available evidence, the FIFA 11+ and 11+ Kids programs for pre-adolescents can potentially influence some of the factors related to sport injuries, benefiting players by positively manipulating documented internal risk factors in favor of preventing sport injuries. The 11+ Kids exercises reduce injury and improve physical fitness factors such as balance, jumping activities and lower limb isokinetic strength. However, no significant differences were found between groups in slalom dribbling, the Illinois agility test, sit-and-reach, standing long jump, 20-yard sprint, plank, and side plank.
Although 11+ FIFA exercises are designed for players aged over 14 years, they result in an improvement in movement patterns, stability, and trunk muscle endurance for children. 11+ kids exercises do not focus on soccer skills and cannot be expected to improve dribble speed. Exercises are also performed at low speeds and few changes in direction, so that they cannot make significant progress in agility. Therefore, using the 11+ program alone or in a combination with the newly-developed 11+ Kids program may improve performance of players and may contribute to a reduction of injury risk, especially if implemented for a longer period or with more exercise sessions per week. Further studies are needed to examine the effects of comprehensive football warm-up exercises on the injury prevention in preadolescents.
APPLICABLE REMARKS
• To improve players' performance and minimize injury risk and medical costs, coaches and trainers are recommended to implement the FIFA 11+ and 11+ Kids training programs at a pre-adolescent age. • Football players should increase their awareness of injury prevention strategies, become familiar with the FIFA warm-up programs and, moreover, learn how to perform each exercise with proper movement patterns. | 2021-04-16T05:40:17.677Z | 2020-12-01T00:00:00.000 | {
"year": 2020,
"sha1": "7ce09ba85f34c69c331e93bad7d28135a2b44d56",
"oa_license": "CCBYNC",
"oa_url": "http://aassjournal.com/files/site1/user_files_dbc6fd/maedeh-A-11-1131-1-ccbc082.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "7ce09ba85f34c69c331e93bad7d28135a2b44d56",
"s2fieldsofstudy": [
"Medicine",
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
49230429 | pes2o/s2orc | v3-fos-license | Attenuation of inflammatory responses by (+)-syringaresinol via MAP-Kinase-mediated suppression of NF-κB signaling in vitro and in vivo
We examined the anti-inflammatory effects of (+)-syringaresinol (SGRS), a lignan isolated from Rubia philippinensis, in lipopolysaccharide (LPS)-stimulated RAW 264.7 cells using enzyme-based immuno assay, Western blotting, and RT-PCR analyses. Additionally, in vivo effects of SGRS in the acute inflammatory state were examined by using the carrageenan-induced hind paw edema assay in experimental mice. As a result, treatment with SGRS (25, 50, and 100 μM) inhibited protein expression of lipopolysaccharide-stimulated inducible nitric oxide synthase (iNOS), cyclooxygenase-2 (COX-2), and nuclear factor kappa B (NF-κB) as well as production of nitric oxide (NO), prostaglandin E2 (PGE2), tumor necrosis factor-alpha (TNF-α), interleukin-1beta (IL-1β), and interleukin-6 (IL-6) induced by LPS. Moreover, SGRS also reduced LPS-induced mRNA expression levels of iNOS and COX-2, including NO, PGE2, TNF-α, IL-1β, and IL-6 cytokines in a dose-dependent fashion. Furthermore, carrageenan-induced paw edema assay validated the in vivo anti-edema effect of SGRS. Interestingly, SGRS (30 mg/kg) suppressed carrageenan-induced elevation of iNOS, COX-2, TNF-α, IL-1β, and IL-6 mRNA levels as well as COX-2 and NF-κB protein levels, suggesting SGRS may possess anti-inflammatory activities.
(iNOS) along with production of other pro-inflammatory cytokines, such as interleukins (IL-1β and IL-6) and tumor necrosis factor-α (TNF-α). These pro-inflammatory biomarkers are known as important mediators of inflammatory responses 5 . Activation of nuclear factor-kappa B (NF-κB) plays a significant role in the regulation of protein expression levels of iNOS and COX-2, which eventually produce nitric oxide (NO) and prostaglandin E2 (PGE2) 6,7 . NF-κB is involved in the trans-activation of a number of genes as an important transcriptional factor, which regulate both immune-inflammatory and acute-inflammatory responses, including the cell survival and tumorigenesis 8,9 . Furthermore, there is a great deal of involvement of cytokines such as tumor necrosis factor (TNF)-α, interferon (IFN)-γ and interleukin (IL)-6 in the development of diseases associated with inflammation and inflammatory responses 10 .
Mitogen-activated protein kinases (MAPKs) such as extracellular signal-regulated kinase (ERK), p38 mitogen-activated protein kinase (p38 MAPK), and c-Jun NH2-terminal kinase (JNK), comprising a group of signaling pathways, play vital roles in the regulation of cell differentiation and growth, and their phosphorylation is known to be a critical component in the production of NO and pro-inflammatory cytokines in activated macrophages 9,11 . Thus, mounting research has focused on identifying safe candidate materials with a preventive ability to treat inflammatory diseases through their diverse inhibitory action against upstream signaling events involved in the expression profile of inflammatory genes.
Rubia philippinensis is a rambling, low-climbing perennial herb that grows in the southern part of Vietnam. Local communities have long utilized this medicinal plant to treat ordinary ailments such as wounds, inflammation, and skin infections 12 . Previous investigations of the species have resulted in the purification of arborinane triterpenoids, which show promising effects on the prevention and treatment of atherosclerosis 13 . Additionally, rubiarbonone C, a well-known chemical entity isolated from R. philippinensis, has been shown to inhibit abnormal proliferation and migration of vascular smooth muscle cells, which play an important role in the pathophysiology of atherosclerosis. The mechanism by which rubiarbonone C regulates vascular remodeling was further clarified through focal adhesion kinase (FAK), MAPK, and STAT3 Tyr705 14 . In searching for bioactive components from R. philippinensis, (+)-syringaresinol was also isolated in this study as a major compound. In addition, Cai et al. 15 reported the isolation and characterization of (+)-syringaresinol from Acanthopanax koreanum along with some other phytoconstituents, including eleutheroside E, tortoside A, and hemlarlensin, which were capable of inhibiting NFAT, a cytoplasmic protein playing a significant role in the induction of immune responses.
There is an increasing demand for natural products as herbal medicines because they are less toxic, affordable, easily available, and associated with fewer adverse effects in humans. As a part of prior research examining the biological potential of effective phytochemicals, and to minimize the side effects of commercial anti-inflammatory drugs such as non-steroidal anti-inflammatory drugs (NSAIDs), a lignan, (+)-syringaresinol (SGRS), isolated in this study from R. philippinensis, was assessed for its potent anti-inflammatory effects both in vitro and in vivo. In addition, relevant targets involved in the regulation of inflammatory responses were studied to estimate the precise anti-inflammatory mode of action of SGRS. The current research focused on evaluating the detailed anti-inflammatory mechanism of SGRS in terms of its effect on LPS-stimulated macrophages and the associated MAPK signaling pathways. The findings demonstrate that SGRS attenuated the inflammatory response via down-regulation of NF-κB by activating p38 and JNK proteins in RAW 264.7 cells.
Materials and Methods
Plant materials. Root
Preparation of nuclear extracts.
After dishes were washed with ice-cold PBS, cells were scraped and transferred to microtubes. Cells were swollen by adding lysis buffer [10 mM HEPES (pH 7.9), 10 mM KCl, 1.5 mM MgCl 2 , 1 mM dithiothreitol, 0.2% NP-40, and protease inhibitor cocktail (Roche Diagnostics, Indianapolis, IN, USA)] and then incubated for 10 min on ice and centrifuged 15,000 × g for 5 min at 4 °C. Pellets containing crude nuclei were re-suspended in 50 μL of extraction buffer (20 mM HEPES (pH 7.9), 1.5 mM MgCl 2 , 1 mM dithiothreitol, 420 mM NaCl, 20% glycerol, and protease inhibitor cocktail), incubated for 30 min on a shaker at 4 °C, and centrifuged at 16,000 × g for 10 min in order to obtain supernatants (nuclear extracts).
Carrageenan-induced paw edema.
Eight-week-old ICR mice were obtained from Central Lab Animals, Inc. (Seoul, Korea) and housed in an air-conditioned animal room at a temperature of 23 ± 1 °C, humidity of 55 ± 5%, and 12 h/12 h light/dark cycle with ad libitum access to water and standard laboratory diet. The animals were acclimatized for 1 week and randomly divided into five groups of five mice each. The experiment was conducted in accordance with the guidelines for animal experiments issued by the Kyungpook National University and approved by the Institutional Animal Care and Use Committee of Kyungpook National University (KNU-2017-0035). Mice (N = 25) were randomly divided into four groups (five animals/group): treatment naïve control group (group-1), CA control group (group-2), indomethacin group (group-3), and 50 mg/kg/day SGRS group (group-4). For oral administration, SGRS (dissolved in 40% polyethylene glycol) was administered at the dose of 50 mg/kg/day for 4 consecutive days. For positive control, standard anti-inflammatory drug, indomethacin was employed. For induction of acute phase inflammation, a subcutaneous injection of carrageenan (1%) was administered (60 μL per animal) into the right hind paws of mice after 1 h SGRS or vehicle treatment. A plethysmometer was used for measuring the paw volumes hourly for 4 h after carrageenan injection, after which mice were euthanized. The right hind paw skin was then expunged and immediately frozen in a nitrogen tank for RT-PCR and Western blotting analyses. mRNA analysis by semi-quantitative RT-PCR. To evaluate mRNA expression levels, RAW 264.7 cells were pre-treated with SGRS (25, 50, and 100 μM) for 30 min before incubation with LPS (1 μg/mL) for 6 h. Total RNA was isolated with TRIzol Reagent (Invitrogen Co., Carlsbad, CA, USA) according to the manufacturer's instructions. Semi-quantitative RT-PCR reactions were conducted as previously reported with minor modifications 17 . In brief, to prepare a cDNA pool from RNAs, total RNA (2 μg) was transcribed using an RT-&GO Mastermix (MP Biomedicals, Seoul, Korea), and the product was used as the PCR template. Reverse transcription PCR (RT-PCR) was performed using a PCR Thermal Cycler Dice TP600 (TAKARA Bio Inc., Otsu, Japan) using the specific primer sequences. Information on specific oligonucleotide primers used in this study for mouse transcripts is given in Table S1. For the visualization of PCR products, ethidium bromide staining was preformed following electrophoresis. An Image Lab ™ Software (version 5.2.1) was used for analyzing the bands.
Western blot analysis. Macrophage RAW 264.7 cells were pretreated using above-mentioned concentrations of SGRS or vehicle for 2 h followed by stimulation with LPS (1 μg/mL) for 6 h. Primary and secondary antibodies were obtained commercially (Santa Cruz Biotechnology, Cruz, CA, USA). Ten micrograms of total proteins were separated by SDS-PAGE. Proteins were electro-transferred to nitrocellulose membranes after electrophoresis, blocked with 5% non-fat milk in TBST buffer, and blotted with each primary antibody (1:1000) and with corresponding secondary antibody (1:5000). The antigen-antibody reaction was detected using an ECL solution system (Perkin Elmer). An Image Lab ™ Software (version 5.2.1) was used for analyzing the bands. The membrane was stripped by using stripping buffer (Restore TM Western Blot Stripping Buffer, Thermo Scientific, Rockford, IL, USA) for the screening of various biomarkers. Briefly, the membrane was immersed completely in the stripping buffer for 15 min in shaking motion. The stripping buffer was then carefully, but thoroughly washed with TBST twice for 10 min. Then the membrane was again blocked with 5% non-fat milk in TBST buffer, and blotted with β-actin primary antibody (1:1000) and with corresponding secondary antibody (1:5000). The Statistical analysis. Data were presented as the mean ± SD followed by one-way ANOVA analysis. The value (p < 0.05) was considered significant for the differences. For all data analyses, Window's SPSS Software (Version 10.07 (SPSS, Chicago, IL, USA) was used.
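For readers who want to reproduce the kind of group comparison described above outside SPSS, the sketch below runs a one-way ANOVA on three hypothetical sets of replicate measurements (e.g., nitrite readings for control, LPS, and LPS plus SGRS wells) using SciPy; the numbers are invented and only illustrate the test, not the study's data.

```python
import numpy as np
from scipy import stats

# Invented nitrite readings (three replicate wells per group)
control  = np.array([4.1, 3.8, 4.4])
lps      = np.array([18.9, 20.3, 19.5])
lps_sgrs = np.array([11.2, 10.5, 12.0])   # LPS + 100 uM SGRS

f_stat, p_value = stats.f_oneway(control, lps, lps_sgrs)
for name, grp in [("control", control), ("LPS", lps), ("LPS + SGRS", lps_sgrs)]:
    print(f"{name}: {grp.mean():.1f} +/- {grp.std(ddof=1):.1f}")   # mean +/- SD
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4g}")
if p_value < 0.05:
    print("groups differ significantly at p < 0.05")
```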
Results
Identification and characterization of (+)-syringaresinol (SGRS). The spectroscopic data of the isolated compound are shown in Fig. S1. The 13 C NMR spectrum (Fig. S2) showed a total of 18 signals typical of lignan derivatives along with four aromatic methoxy groups (δ C 56.7). Based on the analysis of the NMR spectroscopic data and the specific rotation value, the compound was identified as (+)-syringaresinol ( Fig. 1) 15 .
Effect of SGRS on production of inflammatory mediators in LPS-induced RAW264.7 cells. Different dosages of SGRS (25, 50, and 100 μM) were used to evaluate its inhibitory effects on LPS-induced production of NO and PGE 2 in RAW 264.7 cells. Compared to untreated control cells (column 1 of Fig. 2A), treatment with LPS significantly increased production of NO (column 2 of Fig. 2A). However, treatment with SGRS significantly and dose-dependently reduced NO production (columns 3-5 of Fig. 2A). In addition, we examined the effect of SGRS on LPS-induced production of PGE 2 (Fig. 2B). Compared to control cells (column 1 of Fig. 2B), LPS caused an increase in PGE 2 production (column 2 of Fig. 2B), whereas SGRS treatment significantly reduced PGE 2 production (columns 3-5 of Fig. 2B) in a concentration-dependent manner. In contrast, SGRS did not affect cell viability, as measured by MTT assay, at the concentrations that inhibited the LPS-induced inflammatory response (Fig. S3). These results indicate that SGRS inhibits the LPS-induced inflammatory response without affecting cell viability.
To investigate whether the inhibitory effect of SGRS on NO and PGE 2 production was due to inhibition of the corresponding gene expression, mRNA and protein expression levels of inducible nitric oxide synthase (iNOS) and cyclooxygenase-2 (COX-2) were evaluated by RT-PCR and Western blot assays. As shown in Fig. 2C,D, LPS treatment markedly augmented transcription and translation of iNOS and COX-2, whereas pretreatment with the indicated concentrations of SGRS significantly attenuated LPS-induced iNOS and COX-2 mRNA and protein levels in a concentration-dependent manner (Fig. 2E). These data suggest that SGRS acts principally by suppressing NO and PGE 2 production through regulation of gene transcription in activated macrophages.
Effect of SGRS on the production of pro-inflammatory cytokines in LPS-induced RAW264.7 cells. Next, we investigated whether or not SGRS inhibits production of the pro-inflammatory cytokines
TNF-α, IL-1β, and IL-6 in LPS-stimulated RAW 264.7 cells by enzyme immunoassay. Compared with untreated controls (column 1 of Fig. 3A-C), LPS significantly increased production of TNF-α, IL-1β, and IL-6 in the culture supernatants of RAW 264.7 cells. However, treatment with SGRS significantly inhibited production of TNF-α, IL-1β, and IL-6 in a concentration-dependent manner (columns 3 to 5, Fig. 3A-C).
Since SGRS significantly inhibited LPS-induced production of pro-inflammatory cytokines (TNF-α, IL-1β, and IL-6), we performed RT-PCR to determine whether or not these inhibitory effects were related to changes at the mRNA level. As illustrated in Fig. 3D, mRNA levels of TNF-α, IL-1β, and IL-6 were markedly up-regulated in response to LPS, whereas treatment with SGRS inhibited mRNA expression in a dose-dependent manner. These results suggest that SGRS is effective in the inhibition of pro-inflammatory cytokine production via gene transcriptional regulation of TNF-α, IL-1β, and IL-6 in activated macrophages.
Effect of SGRS on upstream signaling for NF-κB activation in LPS-induced RAW264.7 cells.
This study also investigated whether SGRS has the ability to block activation of the NF-κB pathway, because regulation of inflammatory mediators in LPS-stimulated macrophages is transcriptionally controlled by NF-κB. Phosphorylation of inhibitory kappa B (IκB) and its subsequent degradation in response to various stimuli is a critical step in NF-κB activation 20 ; therefore, the effects of SGRS on LPS-induced phosphorylation and degradation of IκBα protein were investigated by immunoblotting analysis. SGRS inhibited LPS-induced phosphorylation of IκB in the cytosol in a dose-dependent manner (Fig. 4A). The freed, activated NF-κB dimer (p50/p65) can translocate from the cytosol to the nucleus upon dissociation of IκB-α from NF-κB. Thus, to evaluate more specifically whether SGRS can affect the nuclear translocation of NF-κB, Western immunoblotting analysis for NF-κB was conducted using nuclear extracts of LPS-stimulated RAW 264.7 macrophages. Exposure to LPS alone significantly increased the amount of NF-κB in the nucleus (column 2, Fig. 4B). SGRS inhibited LPS-induced nuclear translocation of NF-κB dose-dependently (columns 3-5, Fig. 4B). An NF-κB-driven reporter construct in LPS-stimulated RAW 264.7 cells was also employed, since in this system luciferase reporter activity mediated by NF-κB is greatly induced 11 . Consistent with the data above, NF-κB-driven luciferase activity in LPS-stimulated RAW264.7 cells was significantly reduced by SGRS in a dose-dependent manner (Fig. 4C), indicating that the compound blocked NF-κB activity. These findings point to NF-κB as a likely component of the mode of action by which SGRS suppresses NO, PGE 2 , and pro-inflammatory cytokines in activated macrophages.
SGRS attenuates MAPK phosphorylation in LPS-stimulated RAW264.7 cells. To confirm
whether inhibition of NF-κB activation is mediated through MAPK pathways, we examined the effect of SGRS on LPS-stimulated phosphorylation of ERK1/2, JNK, and p38 MAPK in RAW264.7 cells. As depicted in Fig. 4D, LPS markedly induced phosphorylation of ERK1/2, JNK, and p38. Pre-treatment with SGRS significantly inhibited LPS-stimulated phosphorylation of p38 MAPK and JNK, whereas phosphorylation of ERK remained unchanged (data not shown). The degree of inhibition differed for each MAPK, with the maximum inhibitory effect exerted on JNK. These results indicate that signal transduction by p38 and JNK might be effectively blocked by SGRS in activated macrophages.
Inhibitory effects of SGRS on carrageenan-induced mouse hind paw edema. Treatment of mice
with carrageenan resulted in significantly increased paw swelling in comparison with the control. However, pretreatment with indomethacin (10 mg/kg/day, p.o.), a positive control, significantly reduced paw edema formation. Similarly, treatment with SGRS (50 mg/kg/day, p.o.) significantly reduced paw edema volume (Fig. 5A). In addition, as expected, SGRS treatment significantly mitigated mRNA expression of inflammatory mediators (iNOS and COX-2) and various pro-inflammatory cytokines (TNF-α, IL-1β, and IL-6) compared to levels in the CA insult groups (Fig. 5B). Subsequently, protein expression of COX-2 and NF-κB was also suppressed in the SGRS-treated group compared to levels in the CA insult groups (Fig. 5C). These data suggest that SGRS attenuated carrageenan-induced inflammation in mice, likely via suppression of NF-κB signaling.
Discussion
(+)-Syringaresinol (SGRS) is a naturally occurring lignan found in various plants, including flax seed (Linum usitatissimum), sesame seed (Sesamum indicum), Brassica vegetables, and grains (rye bran, wheat bran, oat bran, and barley bran) [21][22][23] . In this study, SGRS was isolated from the roots of R. philippinensis using various chromatographic techniques and characterized based on the spectral data analyses 15 . SGRS has also been characterized from the roots of A. koreanum with an ability to inhibit nuclear factor of activated T-cell protein 15 . Recently, SGRS has shown a significant potential to act as a neuromodulating agent by suppressing synaptic transmission via presynaptic transmitter release modulation 24 . Also, a furofuran-like lignan, syringaresinol-4-O-β-d-glucoside showed a potential efficacy in the treatment of lipid and glucose-based metabolic disorders 25 . Moreover, the role of SGRS has also been confirmed in the induction of mitochondrial biogenesis via activating PPARβ pathway in muscle cells 26 . However, the molecular mechanism responsible for the anti-inflammatory action of SGRS has not been evaluated so far. Hence, we evaluated anti-inflammatory effects of SGRS in LPS-induced RAW 264.7 cells as well as in a murine model of carrageenan-induced acute edematous inflammation in order to elucidate the relevant mechanism of action of SGRS.
Inflammation refers to a complex of biological responses to toxic stimuli that occur during host defense 27 . Production of NO by NOS after carrageenan administration is significantly involved in the progression of inflammation, and NO produced by iNOS acts later to maintain the inflammatory response 28 . More specifically, high levels of NO generated by inducible NO synthase (iNOS) have been defined as cytotoxic molecules in inflammation and endotoxemia 29 . Several studies have investigated the anti-inflammatory effects of lignans from various plant sources on the production of NO/iNOS, PGE 2 /COX-2, TNF-α, and IL-1β in murine macrophages such as RAW 264.7 cells and observed down-regulation of inflammation-associated gene transcription [30][31][32] . In the present study, SGRS significantly inhibited LPS-induced production of NO and PGE 2 in RAW 264.7 cells (Fig. 2A,B). Furthermore, SGRS attenuated LPS-induced transcription as well as translation of iNOS and COX-2 (Fig. 2C,D). These findings indicate that the inhibition of NO and PGE 2 production by SGRS might be due to the inhibition of iNOS and COX-2 up-regulation during macrophage activation by LPS.
TNF-α and IL-1β are pro-inflammatory cytokines involved in the pathogenesis of carrageenan-induced inflammation 33 . IL-6 also interacts with a variety of target cells and is associated with diverse immunological reactions 34 . In our study, we found that both the secretion (Fig. 3A-C) and the mRNA expression (Fig. 3D) of LPS-stimulated pro-inflammatory cytokines were significantly inhibited by treatment with SGRS. These findings indicate that inhibition of pro-inflammatory cytokines by SGRS may offer an ideal means of treating inflammatory disorders.
Expression of iNOS, COX-2, and pro-inflammatory cytokines is regulated at the transcriptional level by NF-κB, which acts as their major transcriptional regulator 7 . In a resting cell, IκB-α retains NF-κB in the cytoplasm by masking nuclear localization sequences on NF-κB subunits 9 . Since IκB dissociates from NF-κB upon phosphorylation, its content in the cytosol reflects the status of NF-κB: a higher IκB level indicates cytoplasmic localization of NF-κB, while a higher p-IκB level indicates nuclear localization of NF-κB. Many chemo-preventive and anti-inflammatory agents have been shown to reduce inflammatory symptoms by suppressing NF-κB expression. Our results show that treatment with LPS resulted in increased levels of NF-κB and p-IκBα, whereas treatment with SGRS reduced p-IκBα levels (Fig. 4A) and inhibited nuclear translocation of NF-κB (Fig. 4B). These findings suggest that treatment with SGRS inhibited NF-κB activation by suppressing the p-IκBα level and the nuclear translocation of NF-κB in LPS-induced RAW 264.7 cells. Taken together, the current study shows that inhibition of NF-κB activation by SGRS is associated with reduced induction of iNOS, COX-2, TNF-α, IL-1β, and IL-6.
It is well known that extracellular stimuli can activate members of the MAPK family, serine/threonine kinases that mediate signal transduction from the cell surface to the nucleus. The intracellular signals arising from MAPK cascades invariably lead to the activation of molecules that ultimately cause activation of NF-κB 35 . Hence, inhibition of any or all three MAPKs can be sufficient to block the inflammatory response. A number of lignans with anti-inflammatory properties have been reported to efficiently block LPS-induced phosphorylation of MAPKs 20,36 . However, the mode of action of lignans depends upon the substitution pattern in the core structure as well as the resultant derivatives, which may target various proteins to bring about anti-inflammatory effects 37 . The present study demonstrated that SGRS significantly reduced the phosphorylation and degradation of IκB-α, thereby inhibiting translocation of NF-κB subunits from the cytosol into the nucleus 20 . Saucerneol F, a new tetrahydrofuran-type sesquilignan isolated from Saururus chinensis, has been shown to directly inhibit IKK activity by oxidizing a critical cysteine residue and thereby inhibiting IκB-α phosphorylation 20 . A similar mechanism of action may be responsible for SGRS inhibiting the phosphorylation of IκB-α and regulating the transcriptional activity of NF-κB. Further, the present study showed that SGRS prevented phosphorylation of p38 and JNK in response to LPS stimulation; however, inhibition of JNK phosphorylation was dominant compared with inhibition of p38 phosphorylation (Fig. 4D). These findings suggest that SGRS-mediated inhibition of MAPKs leads to transcriptional inactivation of NF-κB, which in turn down-regulates COX-2 and iNOS expression and suppresses cytokine production. Thus, our findings suggest that SGRS can modulate NF-κB directly via IκB modification or indirectly via MAPK inhibition.
Carrageenan, an important phlogistic agent, can induce a variety of inflammatory responses, such as neutrophil-mediated production of free radicals and mediators, neutrophil infiltration, paw edema, and increased capillary permeability 38,39 . A growing body of research has established the carrageenan-induced hind paw acute edematous inflammation assay as an ideal animal model for evaluating the anti-inflammatory potential of a drug molecule 39 . In our study, treatment with SGRS (50 mg/kg/day) resulted in a significant reduction of mouse paw edema volumes (Fig. 5A). In addition, SGRS treatment also mitigated mRNA expression of iNOS, COX-2, TNF-α, IL-1β, and IL-6 in carrageenan-induced mice compared to the control (Fig. 5B). Subsequently, Western blot analysis revealed that SGRS treatment also suppressed COX-2 as well as NF-κB protein expression in carrageenan-induced mice (Fig. 5C). These findings suggest that the inhibitory mechanism of SGRS against LPS-induced NO, PGE 2 , TNF-α, IL-1β, and IL-6 production in RAW 264.7 cells may represent an important molecular action underlying the inhibition of carrageenan-induced paw edema formation. A systematic mechanism of the anti-inflammatory effects of SGRS is summarized in Fig. 6.
This study reports for the first time the isolation and characterization of the lignan (+)-syringaresinol (SGRS) from Rubia philippinensis and demonstrates its anti-inflammatory efficacy in LPS-stimulated RAW 264.7 cells in vitro and in a carrageenan-induced hind paw edema assay in experimental mice. Collectively, these results demonstrate that SGRS is an active ingredient of R. philippinensis that mediates anti-inflammatory effects by down-regulating NF-κB expression through interference with JNK and p38 phosphorylation and by reducing mRNA levels of iNOS, COX-2, TNF-α, IL-1β, and IL-6, thus suggesting its significant therapeutic potential. | 2018-06-16T13:08:48.032Z | 2018-06-15T00:00:00.000 | {
"year": 2018,
"sha1": "4e08886ce6de412b6f18e37729130ad58f03412b",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-018-27585-w.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4e08886ce6de412b6f18e37729130ad58f03412b",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
526726 | pes2o/s2orc | v3-fos-license | Role of Mitochondrial Oxidative Stress in Glucose Tolerance, Insulin Resistance, and Cardiac Diastolic Dysfunction
Background Diabetes mellitus (DM) is associated with mitochondrial oxidative stress. We have shown that myocardial oxidative stress leads to diastolic dysfunction in a hypertensive mouse model. Therefore, we hypothesized that diabetes mellitus could cause diastolic dysfunction through mitochondrial oxidative stress and that a mitochondria‐targeted antioxidant (MitoTEMPO) could prevent diastolic dysfunction in a diabetic mouse model. Methods and Results C57BL/6J mice were fed either 60 kcal % fat diet (high‐fat diet [HFD]) or normal chow (control) for 8 weeks with or without concurrent MitoTEMPO administration, followed by in vivo assessment of diastolic function and ex vivo studies. HFD mice developed impaired glucose tolerance compared with the control (serum glucose=495±45 mg/dL versus 236±30 mg/dL at 60 minutes after intraperitoneal glucose injection, P<0.05). Myocardial tagged cardiac magnetic resonance imaging showed significantly reduced diastolic circumferential strain (Ecc) rate in the HFD mice compared with controls (5.0±0.3 1/s versus 7.4±0.5 1/s, P<0.05), indicating diastolic dysfunction in the HFD mice. Systolic function was comparable in both groups (left ventricular ejection fraction=66.4±1.4% versus 66.7±1.2%, P>0.05). MitoTEMPO‐treated HFD mice showed significant reduction in mitochondria reactive oxygen species, S‐glutathionylation of cardiac myosin binding protein C, and diastolic dysfunction, comparable to the control. The fasting insulin levels of MitoTEMPO‐treated HFD mice were also comparable to the controls (P>0.05). Conclusions MitoTEMPO treatment prevented insulin resistance and diastolic dysfunction, suggesting that mitochondrial oxidative stress may be involved in the pathophysiology of both conditions.
Although diastolic dysfunction appears to play an important role, there has been poor understanding of the underlying pathophysiology. 2 Several epidemiologic studies have shown that type 2 diabetes mellitus (DM) and hypertension are closely associated with heart failure with preserved ejection fraction. 3,4 We have shown that increased oxidative stress in cardiomyocytes causes S-glutathionylation of the myofibrillar protein, cardiac myosin binding protein C (cMyBP-C), leading to hypertension-induced diastolic dysfunction. 5 Administration of tetrahydrobiopterin (BH 4 ), a cofactor of nitric oxide synthase (NOS), can prevent S-glutathionylation of cMyBP-C and diastolic dysfunction. 6 Furthermore, we have shown that other conditions that can increase oxidative stress in cardiomyocytes, such as angiotensin II exposure and mitochondrial manganese superoxide dismutase (MnSOD) depletion, also lead to diastolic dysfunction. 5 Mitochondrial oxidative stress plays a major role in the pathophysiology of type 2 DM and its complications. 7,8 In humans and animal models, insulin resistance and type 2 DM are associated with increased production of free radicals or impaired antioxidant defenses. [7][8][9] Therefore, we hypothesized that DM leads to mitochondrial oxidative stress in cardiomyocytes and S-glutathionylation of cMyBP-C, leading to diastolic dysfunction.
High-Fat Diet (HFD)-Induced Obesity and Insulin Resistance
Animal care and interventions were provided in accordance with the National Institutes of Health Guide for the Care and Use of Experimental Animals, and all animal protocols were approved by the Institutional Animal Care and Use Committees of the University of Illinois at Chicago and Lifespan. Six-week-old male C57BL/6J mice were purchased from Jackson Laboratory (Bar Harbor, MA). The HFD group was fed a 60 kcal % fat diet (Research Diets Inc, New Brunswick, NJ) for 8 weeks. The age- and gender-matched control group was fed normal chow (Harlan, Indianapolis, IN) for 8 weeks. MitoTEMPO (2-(2,2,6,6-tetramethylpiperidin-1-oxyl-4-ylamino)-2-oxoethyl-triphenylphosphonium chloride) was administered at 0.5 mg/kg twice a day intraperitoneally for 8 weeks while the mice continued the HFD. Pioglitazone was administered at 30 mg/kg by oral gavage once a day for 8 weeks. Following 8 weeks of HFD, mice underwent cardiac magnetic resonance (CMR) followed by euthanasia to harvest tissues for ex vivo studies. Body weight and food intake were determined weekly for 8 weeks during the midportion of the light cycle. Preweighed food was placed in the food hoppers and measured on a per-cage basis every week. Food intake was determined as grams consumed per day.
Measurement of Plasma Glucose and Insulin
Serum glucose levels were measured by a glucometer (ACCU-CHEK; Roche Applied Science, Indianapolis, IN) after drawing blood from the tail vein. After euthanasia, blood was also collected by cardiac puncture and centrifuged to separate plasma. Plasma insulin level was measured using an enzyme-linked immunosorbent assay kit (Millipore, Billerica, MA). Glucose tolerance tests were performed after an 8-hour fast.
Myocardial Tagged Magnetic Resonance Imaging
While mice were receiving general anesthesia using 1% to 1.5% isoflurane, myocardial tagged CMR was performed on a 600-MHz Bruker Avance console (Bruker Biospin, Billerica, MA) equipped with an actively shielded 14.1-T, 89-mm-bore vertical magnet and a 1000-mT/m, 110-μs rise-time microimaging gradient system. 10 Three short-axis cine slices (1 mm thickness) were acquired covering the entire left ventricle (LV) with cardiac and respiratory gating. From these cine images, LV volume and mass were calculated by contouring the endo- and epicardium using Osirix imaging software (Geneva, Switzerland). In addition, these cine images allowed accurate timing of end-systole, which was defined as the smallest LV cavity volume. A myocardial tagged midventricular short-axis image was obtained using a cardiac- and respiratory-gated spatial modulation of magnetization sequence. 11 After tagging-grid generation, multiple tagged images were acquired from end-systole throughout LV diastole with a temporal resolution of 5 ms. Image analyses were processed using Matlab (MathWorks, Natick, MA). Serial motions of the tagging grids were tracked manually. Deformed tagging square-like elements were divided into 2 adjacent triangles for homogeneous strain calculations from the reference time point of end-systole. 12,13 The maximal circumferential strain (Ecc) rate during the rapid filling phase was calculated to assess diastolic function. 14,15
Echocardiography
Mitral inflow velocity (E) and longitudinal tissue velocity of the mitral anterior annulus (E′) were assessed in the subcostal 4-chamber view using a Vevo 770 high-resolution in vivo imaging system (Visual Sonics, Toronto, Canada). 6 During image acquisition, mice were anesthetized with 1% to 1.5% isoflurane so that a heart rate of 350 to 390 beats/min was maintained.
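The homogeneous strain calculation described in the tagged CMR paragraph above (each deformed tagging element split into two triangles, strain referenced to end-systole) can be illustrated with a minimal sketch. This is not the authors' Matlab code; the vertex coordinates, circumferential direction, and use of the 5-ms frame spacing below are illustrative assumptions.

```python
# Minimal sketch (not the authors' Matlab code) of homogeneous strain from a
# tracked tagging triangle: the deformation gradient F maps reference edge
# vectors (end-systole) to deformed edge vectors, and the Lagrangian strain
# E = 0.5*(F^T F - I) is projected onto an assumed circumferential direction.
import numpy as np

def circumferential_strain(ref_pts, def_pts, circ_dir):
    """ref_pts, def_pts: (3, 2) triangle vertices; circ_dir: circumferential vector."""
    dX = np.column_stack([ref_pts[1] - ref_pts[0], ref_pts[2] - ref_pts[0]])
    dx = np.column_stack([def_pts[1] - def_pts[0], def_pts[2] - def_pts[0]])
    F = dx @ np.linalg.inv(dX)                 # homogeneous deformation gradient
    E = 0.5 * (F.T @ F - np.eye(2))            # Lagrangian (Green) strain tensor
    c = circ_dir / np.linalg.norm(circ_dir)
    return float(c @ E @ c)                    # Ecc component

# Hypothetical vertices (mm) at end-systole and 5 ms later during early filling.
ref = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
deformed = np.array([[0.0, 0.0], [1.04, 0.0], [0.0, 0.99]])
ecc = circumferential_strain(ref, deformed, circ_dir=np.array([1.0, 0.0]))
print(f"Ecc = {ecc:.4f}; Ecc rate over 5 ms = {ecc / 0.005:.2f} 1/s")
```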
Invasive Hemodynamic Measurement
With mice under general anesthesia using 1% to 1.5% isoflurane, a pressure-volume catheter was inserted into the right common carotid artery and advanced into the LV. Inferior vena cava occlusion was performed via a diaphragm incision. Following calibration of volume and parallel conductance, baseline hemodynamic measurements were obtained. Multiple pressure-volume loops were acquired during compression of the inferior vena cava. The end-diastolic pressure volume relationship was calculated using linear regression. 5 Following isolation, cardiomyocytes were washed in buffer containing Na 2 HPO 4 1.2, HEPES 10, and MgSO 4 1.2 mmol/L with 0.1% BSA at serially increasing Ca 2+ concentrations (0.2, 0.5, and 1 mmol/L), and then suspended in Modified Eagle's Medium with 1% insulin-transferrin-selenium, 0.1% BSA, and 1% glucose in a 95% O 2 /5% CO 2 incubator at 37°C. 6 The mechanical properties of cardiomyocytes were assessed using an IonOptix Myocam System (IonOptix Inc., Milton, MA). 16 Unloaded cardiomyocytes placed on a glass slide for 5 minutes were imaged with an inverted microscope and perfused with a normal Tyrode's buffer (NaCl 133, KCl 5.4, MgCl 2 5.3, Na 2 PO 4 0.3, HEPES 20, glucose 10 mmol/L, pH 7.4) containing 1.2 mmol/L calcium held at 37°C with a temperature controller. Cardiomyocytes were paced with 10 V, 4-ms square-wave pulses at 1.0 Hz, and sarcomere shortening and relengthening were assessed using the following indices: diastolic sarcomere length (μm), peak fractional shortening (%), relaxation time constant τ (calculated from a fit of the form a 0 + a 1 e −t/τ , where t = time; s), relengthening time (s), and maximum relaxation velocity (dL/dt, μm/s).
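For the end-diastolic pressure-volume relationship described in the invasive hemodynamics paragraph above, the slope is obtained by linear regression of end-diastolic pressure on end-diastolic volume across the loops acquired during caval occlusion. The sketch below is illustrative only; the pressure and volume values are invented and this is not the authors' analysis code.

```python
# Illustrative sketch of the EDPVR slope calculation: a straight line is fit to
# end-diastolic pressure-volume points collected from successive loops during
# IVC occlusion. The values below are invented for demonstration only.
import numpy as np

ed_volume_uL = np.array([28.0, 26.5, 25.0, 23.0, 21.5, 20.0])   # end-diastolic volumes
ed_pressure_mmHg = np.array([6.2, 5.6, 5.1, 4.4, 3.9, 3.4])     # end-diastolic pressures

slope, intercept = np.polyfit(ed_volume_uL, ed_pressure_mmHg, deg=1)
print(f"EDPVR slope = {slope:.3f} mmHg/uL (higher slope = stiffer ventricle)")
```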
Cardiomyocytes were loaded with 1 μmol/L fura-2 acetoxymethyl (AM) ester for 15 minutes and de-esterified for an additional 15 minutes at 37°C. After loading, cells were washed twice and then imaged with an inverted microscope. To mimic HFD conditions, cells were perfused at 37°C with a modified Tyrode's buffer containing 1.2 mmol/L calcium, 1.0 mmol/L pyruvate, and a 1% fatty acid mixture (Sigma-Aldrich) including 2 ng/mL arachidonic acid; 10 ng/mL each of linoleic, linolenic, myristic, oleic, palmitic, and stearic acids; 0.22 μg/mL cholesterol from New Zealand sheep's wool; 2.2 μg/mL Tween-80; 70 ng/mL tocopherol acetate; and 100 μg/mL Pluronic F-68. Cardiomyocytes were paced at 1.0 Hz with 10-ms pulses, and fluorescence measurements were recorded with a dual-excitation fluorescence photomultiplier system. Cardiomyocytes were exposed to light emitted by a 75-W xenon lamp and passed through either a 340- or 380-nm wavelength filter. The emitted fluorescence was detected at 510 nm. To account for any interference, the background fluorescence for each cardiomyocyte was determined by moving the cardiomyocyte out of the field of view and recording the fluorescence from the bath solution alone. The time course of the fluorescence signal decay was fit to a single exponential equation, and the time constant (τ) was used as a measure of the rate of intracellular Ca 2+ decay.
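Both the sarcomere relaxation time constant and the Ca2+ decay time constant described above come from fitting a single exponential of the form a0 + a1·e^(−t/τ). A minimal curve-fitting sketch with a synthetic trace is shown below; it is not the authors' IonOptix analysis, and the amplitudes, noise level, and time base are assumptions.

```python
# Sketch of the single-exponential fit used for the time constant (tau) of
# intracellular Ca2+ decay (and, analogously, sarcomere relengthening):
# f(t) = a0 + a1 * exp(-t / tau). The synthetic trace below stands in for a
# background-subtracted fura-2 ratio transient.
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, a0, a1, tau):
    return a0 + a1 * np.exp(-t / tau)

t = np.linspace(0, 0.5, 100)                         # seconds after the peak
true = mono_exp(t, a0=1.0, a1=0.6, tau=0.12)
signal = true + np.random.default_rng(0).normal(0, 0.01, t.size)

popt, _ = curve_fit(mono_exp, t, signal, p0=(1.0, 0.5, 0.1))
print(f"fitted tau = {popt[2]*1000:.1f} ms")
```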
Measurement of Mitochondrial Oxidative Stress
Following isolation of cardiomyocytes as described above, cells were stained with both MitoSOX (5 μmol/L) and MitoTracker Green (100 nmol/L) (Molecular Probes, Carlsbad, CA) for 15 minutes at 37°C and washed twice. Cardiomyocytes were evaluated by flow cytometry using the CyAn ADP analyzer (Beckman Coulter, Brea, CA). Five thousand cardiomyocytes were selected by appropriate forward and side scatter gating. After a second gating by pulse width, MitoSOX fluorescence was detected. Unstained cells were used as a reference standard. The mean fluorescence intensity was obtained from the MitoSOX histogram. For confocal microscopy, isolated cardiomyocytes were attached to a laminin-coated glass dish. Confocal images were obtained using a 63× magnification objective on an LSM 510M (Carl Zeiss, Inc, Thornwood, NY). 17 The cell-permeant dye 2′,7′-dichlorodihydrofluorescein diacetate (H 2 DCFDA; Life Technologies, Grand Island, NY) was used to measure generalized oxidative stress in cardiomyocytes. Isolated cardiomyocytes (10 000 cells) from each group (N=5 in each group) were plated in laminin-coated plates with MEM medium including 1% insulin-transferrin-selenium, 5% FBS, and 1% lipid mixture (Sigma) with or without freshly dissolved antioxidants (mitoTEMPO, 10 μmol/L; BH 4 , 10 μmol/L; or apocynin, 100 μmol/L) for 1 hour. After washing twice with plating medium, cells were incubated with H 2 DCF-DA (5 μmol/L) for 15 minutes at 37°C and then washed twice with medium. Fluorescence intensity was read every 2 minutes for 20 minutes using a microplate reader (SynergyMx, Winooski, VT) at 495/530 nm and 37°C. After reading, cells were fixed with 4% paraformaldehyde for 15 minutes, and DAPI (0.5 μg/mL) was added. Fluorescence data were normalized to DAPI-positive cell counts.
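The plate-reader readout described above lends itself to a simple rate calculation: fit the 20-minute fluorescence time course with a line and normalize the slope to the DAPI-positive cell count for that well. The sketch below is illustrative only; the well readings and cell count are invented.

```python
# Sketch of the H2DCF-DA readout: the rate of fluorescence accumulation over the
# 20-minute kinetic read is estimated as the slope of a linear fit and then
# normalized to the DAPI-positive cell count for that well. Numbers are invented.
import numpy as np

time_min = np.arange(0, 22, 2)                                   # read every 2 min
fluorescence = np.array([200, 260, 330, 395, 455, 520,
                         590, 650, 715, 780, 845], dtype=float)  # a.u., one well
dapi_positive_cells = 8500

rate_au_per_min = np.polyfit(time_min, fluorescence, deg=1)[0]
normalized_rate = rate_au_per_min / dapi_positive_cells
print(f"ROS accumulation rate = {rate_au_per_min:.1f} a.u./min "
      f"({normalized_rate:.5f} a.u./min per DAPI+ cell)")
```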
Immunoblotting and Immunoprecipitation
Proteins (30 μg) were isolated from the frozen ventricles (N=5-6 in each group), separated on a 4% to 12% SDS-PAGE gel, and transferred onto a 0.2-μm polyvinylidene difluoride membrane for detection of S-glutathionylation of cMyBP-C. Myofibrils were prepared from mouse hearts as described previously. 6 Myofibrils were separated on a 4% to 12% SDS-PAGE gel and transferred onto a 0.2-μm polyvinylidene difluoride membrane. After blocking the membrane in 5% nonfat dry milk with 2.5 mmol/L N-ethylmaleimide for 1 hour, an anti-glutathione mouse monoclonal primary antibody (Virogen, Watertown, MA) was applied to detect S-glutathionylation, and blots were analyzed with Quantity One imaging analysis software (Bio-Rad). For slot blots of 3-nitrotyrosine, a slot blot system was used (Bio-Rad). After hydration of the nitrocellulose membrane with Tris-tricine transfer buffer (25 mmol/L Tris, 192 mmol/L glycine, 20% methanol, pH 8.3), total lysates (5 μg) were blotted onto a 0.2-μm nitrocellulose membrane and vacuum dried. Proteins on slot blots were fixed at 25 V, 1.3 A for 5 minutes using a semidry transfer system (Turbo, Bio-Rad). An anti-3-nitrotyrosine antibody (Abcam) was used to detect protein tyrosine nitration. The following procedures were the same as for immunoblotting.
Total lysates (200 μg) were incubated with monoclonal MnSOD (SOD2) antibody (2 μg, Abcam) and 10 μL of antibody capture affinity ligand (Millipore, capture and release reversible immunoprecipitation kit) for 30 minutes at room temperature. After washing of the spin column, precipitated MnSOD was eluted and immunoblotted with acetylated lysine antibody (Cell Signaling #9441). The immunoblotting procedure was the same as previously described.
Statistical Analysis
Descriptive statistics are mean±SEM or mean±SD where indicated. Comparisons for each group were performed using the nonparametric Mann-Whitney test (Wilcoxon rank sum) or unpaired Student t tests. For comparisons between multiple groups, one-way ANOVA with post hoc Bonferroni's multiple comparison test comparing all groups was used for single cardiomyocyte data. Ecc rate was correlated with invasive hemodynamics and echocardiographic tissue Doppler imaging by the nonparametric Spearman correlation method. All data analyses were performed using GraphPad Prism 5.0, Origin 8.5, or SPSS 16.0. Significance was defined as P<0.05. *denotes P<0.05, **denotes P<0.01, and ***denotes P<0.001.
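As an illustration of the tests listed above, the following sketch runs a Mann-Whitney U test, a one-way ANOVA with Bonferroni-corrected pairwise comparisons, and a Spearman correlation on made-up data; it is not the authors' Prism/Origin/SPSS analysis, and all group values are hypothetical.

```python
# Illustrative Python equivalents (not the authors' Prism/SPSS analyses) of the
# statistical tests described above: Mann-Whitney U for two-group comparisons,
# one-way ANOVA with Bonferroni-corrected pairwise t tests for the single-cell
# data, and Spearman correlation between Ecc rate and another diastolic index.
# All data below are made up.
import numpy as np
from itertools import combinations
from scipy import stats

control = np.array([7.4, 6.9, 7.8, 7.1, 7.5])
hfd = np.array([5.0, 4.7, 5.3, 4.9, 5.2])
hfd_mito = np.array([6.8, 7.0, 6.5, 7.2, 6.9])

# Two-group comparison (nonparametric).
u, p = stats.mannwhitneyu(control, hfd, alternative="two-sided")
print(f"Mann-Whitney control vs HFD: U = {u:.1f}, p = {p:.4f}")

# One-way ANOVA followed by Bonferroni-corrected pairwise t tests.
groups = {"control": control, "HFD": hfd, "HFD+MitoTEMPO": hfd_mito}
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")
pairs = list(combinations(groups, 2))
for a, b in pairs:
    t, p_raw = stats.ttest_ind(groups[a], groups[b])
    print(f"{a} vs {b}: Bonferroni-corrected p = {min(p_raw * len(pairs), 1.0):.4f}")

# Spearman correlation, e.g., Ecc rate versus EDPVR slope across animals.
ecc_rate = np.concatenate([control, hfd])
edpvr_slope = np.array([0.20, 0.22, 0.19, 0.21, 0.20, 0.36, 0.39, 0.35, 0.37, 0.38])
rho, p_rho = stats.spearmanr(ecc_rate, edpvr_slope)
print(f"Spearman rho = {rho:.2f}, p = {p_rho:.4f}")
```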
HFD-Induced Metabolic Syndrome and Type 2 DM
HFD mice developed significant obesity following 8 weeks of a HFD as shown in Table 1 and Figure 1. Random serum glucose was significantly elevated compared to controls. Although fasting serum glucose levels were similar in both groups, fasting serum insulin levels were significantly higher in HFD mice, indicating insulin resistance. Glucose tolerance tests showed significantly increased serum glucose in the HFD mice compared with the control as follows: 566±20 versus 375±24 mg/dL at 30 minutes, 595±6.1 versus 256±15 mg/dL at 60 minutes, 597±3.7 versus 229±23 mg/dL at 90 minutes, and 509±42 versus 249±4.4 mg/dL at 120 minutes (N=5 in control, N=7 in HFD) following intraperitoneal glucose administration. These results demonstrate that HFD mice developed metabolic alterations similar to metabolic syndrome and type 2 DM.
HFD Induced Diastolic Dysfunction
To evaluate the effect of these metabolic alterations on cardiac structure and function, myocardial tagged CMR was performed. HFD mice developed increased LV mass (Table 2) despite comparable systolic blood pressure. Systolic function was preserved as shown by similar ejection fractions in both groups. In addition, cardiac output and cardiac index were also similar in both groups. These results show that HFD mice developed concentric LVH with preserved systolic function compared with controls.
Diastolic function was assessed using 3 different modalities as demonstrated in Figures 2 and 3. CMR enabled direct measurement of myocardial strain during diastole (Figure 2A). Ecc rate was significantly reduced during diastole in HFD mice (4.85±0.15 1/s) compared with controls (7.02±0.59 1/s, P<0.05), indicating significant relaxation impairment in HFD mice. During HFD feeding, longitudinal assessment of diastolic function by myocardial tagged CMR revealed progression of diastolic dysfunction ( Figure 2C). LVH also progressed similarly in the HFD group ( Figure 2D). Echocardiography showed E/E′ was significantly increased in the HFD group compared with controls (42±1.4 versus 8.6±3.3, P<0.01, Figure 3A through 3D). Finally, invasive hemodynamic assessment revealed that the slope of the end-diastolic pressure volume relationship was significantly higher in the HFD mice (0.37±0.04) compared with the control (0.21±0.03, P<0.05; Figure 3E). Ecc rate was highly correlated with invasive hemodynamics ( Figure 3F) and echocardiographic tissue Doppler imaging by nonparametric Spearman correlation ( Figure 3G). All 3 modalities indicated that HFD mice developed significant diastolic dysfunction compared with control animals.
A Mitochondrial-Targeted Antioxidant Improved Glucose Tolerance and Insulin Resistance
The MitoTEMPO-treated HFD group had similar body weight to the controls ( Figure 4A). Glucose tolerance tests revealed significantly reduced serum glucose levels in the MitoTEMPOtreated HFD group at 60 minutes after intraperitoneal glucose challenge when compared with the nontreated HFD group ( Figure 4B). Despite differences in glucose tolerance, 6-hour fasting serum glucose levels were not significantly different between groups ( Figure 4C). Nevertheless, the 6-hour fasting serum insulin levels were significantly elevated in the HFD group and significantly reduced in the MitoTEMPO-treated group ( Figure 4D).
MitoTEMPO Reduced Reactive Oxygen Species (ROS), Preserved Mitochondrial Ultrastructure, Prevented MnSOD Acetylation, and Altered NOS
Confocal microscopy and flow cytometry indicate that HFD mice have significantly increased mitochondrial and cytosolic reactive oxygen species (ROS) compared to controls without significant changes in mitochondrial mass as measured by MitoTracker (Figure 7). Mitochondrial superoxide was measured by MitoSOX, and general cytosolic hydrogen peroxide (H 2 O 2 ) was measured by H 2 DCF-DA. Quantitative assessment of the corresponding flow cytometry data showed a significant increase in the MitoSOX signal from the HFD mice (137±7) compared with the controls (91±3, P<0.01) or the MitoTEMPO-treated group (100±6, P<0.01; Figure 7B). General cellular ROS levels were measured using H 2 DCF-DA and were significantly elevated in HFD hearts (HFD, 3473±200 a.u. versus control 1562±37 a.u., P<0.001, Figure 8). MitoTEMPO-treated HFD mice had a reduced ROS level (1985±145, P<0.001). The major ROS source was detected using each scavenger, including mitoTEMPO for mitochondrial ROS, BH 4 for NOS uncoupling, allopurinol for xanthine oxidase, and apocynin for NADPH oxidase. The rates of hydrogen peroxide accumulation were significantly inhibited by mitoTEMPO and BH 4 , indicating mitochondria and uncoupled NOS are the major ROS sources. To further verify the mitochondrial ROS effect, we measured the NADH level. The ratio of [NADH]/[NAD + ] was increased in HFD mice (3.60±0.78 HFD versus 2.02±0.23 control, P<0.05), consistent with increased mitochondrial oxidative stress (Figure 9). 18 Consistent with a general increase in cellular oxidation, total protein 3-nitrotyrosine level was increased with HFD and decreased by mitoTEMPO treatment (Figure 10).
Mitochondrial ultrastructure of HFD mouse hearts from electron microscopy showed evidence of morphological abnormalities ( Figure 11). MitoTEMPO treatment improved mitochondrial ultrastructure. Paralleling the ultrastructural changes, MnSOD lysine-acetylation levels were increased significantly in the HFD group as compared to controls, and this increase was prevented by mitoTEMPO treatment (Figure 12). Since acetylation decreases MnSOD activity, 19 this observation may provide an explanation for the increase in mitochondrial ROS with DM.
Previously, we have implicated reduced nitric oxide (NO) in the pathogenesis of diastolic dysfunction. 5,16 To evaluate the role of NO in HFD-induced diastolic dysfunction, NOS levels and regulatory phosphorylation were assessed. Figure 13 shows that mitoTEMPO was sufficient to suppress most of the cardiac oxidative stress produced by HFD. Nevertheless, HFD did cause changes in the NOS/NO pathway. Figure 7 shows that BH 4 reduced ROS to a lesser extent than MitoTEMPO. In Figure 13, HFD reduced eNOS S1177 phosphorylation without alteration in T495 phosphorylation, suggesting that eNOS activity may be downregulated with HFD. The change in eNOS 1177 was partially reversed by mitoTEMPO. There was no change in eNOS expression with HFD, but HFD reduced nNOS expression slightly. This was reversed by mitoTEMPO. These results suggested an interplay between mitochondrial ROS and the NO system. Therefore, we treated cardiomyocytes isolated from diabetic hearts with an NO donor, SNAP (S-Nitroso-N-Acetyl-D,L-Penicillamine). SNAP increased resting sarcomere length, decreased diastolic relaxation time, and improved fractional shortening (Figure 14), suggesting that reduced NO may be part of the pathology of diastolic dysfunction in DM.
Discussion
Epidemiological risk factors for diastolic dysfunction include type 2 DM, hypertension, obesity, and age. Diastolic dysfunction is observed in 40% to 75% of asymptomatic, normoglycemic patients with type 2 DM patients. 2,20,21 Previous reports indicate that diastolic dysfunction represents the earliest preclinical manifestation of diabetic cardiomyopathy and that this can progress to symptomatic heart failure. 22 In this study, we demonstrated HFD-induced insulin resistance results in diastolic dysfunction at the organ and cellular levels. Diastolic dysfunction was associated with mitochondrial oxidative stress, mitochondrial morphological changes, and myofilament cMyBP-C S-glutathionylation. Treating HFD mice with a mitochondrial antioxidant, MitoTEMPO, was able to prevent HFD-induced diastolic dysfunction.
We have demonstrated that hypertension-associated diastolic dysfunction is caused by uncoupled NOS-dependent oxidative stress leading to S-glutathionylation of cMyBP-C, slower myofilament relaxation kinetics, and increased myofilament Ca 2+ sensitivity. 6 In this study, HFD-associated diastolic dysfunction was also associated with increased oxidative stress and S-glutathionylation of cMyBP-C. Moreover, the oxidative stress appears to have arisen as the result of mitochondrial dysfunction. This is consistent with the known effect of HFD on mitochondrial ROS and suggests that oxidative modification of cMyBP-C may be a final common mechanism to cause diastolic dysfunction associated with hypertension or DM. 23 There has been considerable debate about the most suitable noninvasive method for evaluating diastolic function in small animals. 5 Our experiments showed close correlation of both CMR and echocardiography, compared to the "gold standard" of invasive hemodynamics.
Several small studies have suggested that elevated serum glucose may not be a cause of diastolic dysfunction. [24][25][26][27] In this study we have shown that although pioglitazone lowered blood glucose levels, it did not improve diastolic dysfunction. On the other hand, mitoTEMPO was able to improve glucose tolerance and diastolic dysfunction, suggesting that glucose lowering alone was insufficient to prevent impaired cardiac relaxation.
Our findings suggest that HFD-mediated diastolic dysfunction was associated with mitochondrial morphological changes and increased ROS production. Recent studies have shown that insulin resistance is accompanied by reduced mitochondrial function and enhanced mitochondrial oxidative stress. 28 In addition to preserving diastolic function, our experiments showed that mitochondrial superoxide scavenging protects HFD mice from weight gain, insulin resistance, and LVH. This suggests that mitochondrial superoxide may mediate insulin resistance, glucose intolerance, weight gain, LVH, and diastolic dysfunction in response to excessive calorie intake. These results are similar to a study showing that metallothionein overexpression or resveratrol administration can prevent diabetic mice from developing diastolic dysfunction and LV hypertrophy. 29,30 Our work is consistent with that of Anderson et al, who suggest that HFD is linked to insulin resistance by mitochondrial ROS production, specifically H 2 O 2 . 31 In our case, we found that HFD resulted in excess mitochondrial and cytoplasmic oxidative stress that contributed to diastolic dysfunction.
Another potential mechanism for diastolic dysfunction in type 2 DM might be altered calcium transients. Nevertheless, it appears that changes in calcium cycling did not play a major role in diastolic dysfunction or the effect of mitoTEMPO in this HFD mouse model. This is consistent with reports by Flagg et al. 29,32 However, this mechanism cannot be entirely excluded.
We did not perform a detailed investigation of potential side effects of MitoTEMPO in the current study. Nevertheless, none of the MitoTEMPO-administered HFD mice showed any signs of significant systemic toxicity, and MitoTEMPO-treated control animals showed no changes in hemodynamic parameters, consistent with our previous use of this drug. 17
Conclusions
We have shown that a HFD leads to insulin resistance, glucose intolerance, mitochondrial ROS production, modification of cMyBP-C, and diastolic dysfunction. These changes can be prevented by a mitochondria-targeted antioxidant. This suggests that mitochondrial ROS contributes to glucose intolerance and diastolic dysfunction. Mitochondrial antioxidants may have a role in treatment or prevention of diastolic heart failure. | 2016-08-09T08:50:54.084Z | 2016-05-01T00:00:00.000 | {
"year": 2016,
"sha1": "b55762d04854d08bc597bcc9bd4e19cfcf7ce8c8",
"oa_license": "CCBYNC",
"oa_url": "https://www.ahajournals.org/doi/pdf/10.1161/JAHA.115.003046",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b55762d04854d08bc597bcc9bd4e19cfcf7ce8c8",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |