HOME-BASED TELEREHABILITATION FOR COMMUNITY-DWELLING PERSONS WITH STROKE DURING THE COVID-19 PANDEMIC: A PILOT STUDY
Objectives: To determine the feasibility and safety of asynchronous telerehabilitation for community-dwelling persons with stroke in the Philippines during the COVID-19 (SARS-CoV-2) pandemic, and to evaluate the change in participants' telerehabilitation perceptions, physical activity, and well-being after a 2-week home-based telerehabilitation programme using a common social media application.
Design: Pilot study.
Participants: Nineteen ambulatory, non-aphasic adult members of a national university hospital stroke support group in the Philippines.
Methods: Pre-participation screening was performed using the Physical Activity Readiness Questionnaire. The participants were medically cleared prior to study enrollment. They then engaged in telerehabilitation by watching original easy-to-follow home exercise videos prepared and posted by the study authors on a private group page on Facebook™ every other day for 2 weeks. Descriptive statistics were performed.
Results: All 19 participants (mean age: 54.9 years) completed the programme with no significant adverse events. The majority of subjects improved their telerehabilitation perceptions (based on the Telepractice Questionnaire), physical activity levels (based on the Simple Physical Activity Questionnaire), and perceived well-being (based on the Happiness Scale).
Conclusion: Asynchronous telerehabilitation using a common low-cost social media application is feasible and safe for community-dwelling persons with chronic stroke in a lower-middle-income country.
LAY ABSTRACT: The COVID-19 (SARS-CoV-2) pandemic led us to find alternative ways to connect patients and healthcare providers despite physical distance. For instance, telerehabilitation via available telecommunication technologies can be used to provide consultation and therapy services to persons living with disability. In resource-limited countries, such as the Philippines, telerehabilitation was not widely practiced prior to the pandemic, due to several factors, such as lack of acceptance and high costs. This pilot study demonstrates the feasibility, effectiveness, and safety of telerehabilitation using a common low-cost social media application for patients with chronic stroke. Nineteen adult members of a stroke support group safely completed a 2-week telerehabilitation programme by watching original easy-to-follow home exercise videos posted on a private group page on Facebook™. The majority of subjects had positive experiences with the programme, and had improved perceptions of telerehabilitation, physical activity levels, and perceived well-being after 2 weeks.
Telerehabilitation remains an underutilized technology in the practice of rehabilitation medicine in the Philippines, a lower-middle-income archipelagic country in Southeast Asia, despite its potential to overcome the barriers of distance, costs, and limited healthcare resources (1, 2). Stakeholders (e.g. patients, healthcare providers, policymakers) have apprehensions about the use of telemedicine in general due to concerns about feasibility, data privacy, safety, cost-effectiveness, and evidence, among others (3, 4). To date, telerehabilitation has limited evidence for specific patient populations.
Studies show that stroke survivors are prone to recurrent stroke and cardiac disease (5). A modifiable risk factor common to these diseases is physical inactivity, which may be prevalent during the home quarantine period due to coronavirus disease 2019 (COVID-19). The cornerstone of prevention of cardiovascular events among stroke survivors is the combination of appropriate pharmacological and non-pharmacological treatment, including rehabilitation (6). However, during COVID-19 measures, outpatient centre-based rehabilitation was restricted for a time in many healthcare settings worldwide, especially in resource-limited areas.
As an alternative to centre-based rehabilitation, telerehabilitation could be used to promote physical activity among community-dwelling persons with stroke while at home, using a common social media application. According to a systematic review, home-based rehabilitation should be the trend in providing rehabilitation for people with stroke living in the community (7). However, the included studies used various interventions and lacked data on adverse events and experiences of stakeholders.
Mobile device ownership and internet use have increased in the Philippines in recent years (8). The Philippines is known as the social media capital of the world; Filipinos across different demographics primarily use the internet to access various, mostly free or low-cost, social media platforms (9). This pilot study leverages available and relatively inexpensive telecommunication technologies to support the health maintenance of community-dwelling persons with stroke, mainly through asynchronous (i.e. store-and-forward) telerehabilitation. The aim of this study was to determine the feasibility and safety of a short-course telerehabilitation programme for a stroke support group in the country's national university hospital. The study also determined any change in the participants' perceptions of telerehabilitation, physical activity level, and perceived well-being after the 2-week intervention, and their telerehabilitation experiences and recommendations. These data could potentially contribute to the growing body of evidence on the use of this emerging rehabilitation technology, especially in resource-limited countries, such as the Philippines, wherein telerehabilitation was neither accepted nor implemented widely pre-pandemic (2, 10).
METHODS
This was a pretest-posttest study approved by the University of the Philippines Manila Research Ethics Board (number: 2020-412-01). Inclusion criteria were: age ≥ 18 years; stroke survivor; member of the Rehabilitation Medicine Stroke Support Group at Philippine General Hospital; internet access at home; non-aphasic; community ambulant. Exclusion criteria were: unable to personally complete an online form to provide consent; no adult companion at home to ensure safety during exercises.
Pre-intervention
Individual pre-participation screening, using the Physical Activity Readiness Questionnaire and medical evaluation, was administered by a rehabilitation medicine resident and consultant through a video-based teleconsultation. Only participants medically cleared for exercise were deemed eligible and oriented to the study accordingly. They then provided their clinicodemographic information and baseline responses to the following: Telepractice Questionnaire, Simple Physical Activity Questionnaire (SIMPAQ), and Happiness Scale. Subsequently, each patient and their adult companion were taught simple self-monitoring and safety measures, such as obtaining the blood pressure (when a home device was available), heart rate, respiratory rate, and Borg's rating of perceived exertion.
Intervention
All participants commenced the telerehabilitation programme on the same day. They were asked to watch, using their available gadget, and follow the same set of home exercises demonstrated in original videos, which were newly recorded and uploaded by the authors to Facebook™ every other morning for 2 weeks. Adapting the recommendations of the American Heart Association for stroke survivors (5), the exercises consisted of flexibility (stretching), strengthening (using makeshift weights readily available at home), aerobic (involving large-muscle activities), and neuromuscular (balance) programmes. They were made simple and easy to follow, mostly consisting of calisthenic exercises. Appendix S1 contains more details about the exercises. Each video contained step-by-step exercise demonstrations, along with precautions and safety measures. The participants were instructed to perform the exercises on their own for 30 min every other weekday (for a total of 6 sessions) at their most preferred time (supervised as needed by an adult companion) and to indicate their completion after each session in the private group chat. For clarifications regarding the exercises or any related untoward events, the participants could message or call the telerehabilitation team at any time. They could also utilize a private group chat to interact with the team and their fellow participants.
Post-intervention
After 2 weeks, the participants were interviewed again using the same questionnaires they answered at baseline. In addition, they were asked about their telerehabilitation experience and recommendations.
Outcome measures and statistical analysis
To determine the telerehabilitation perceptions of participants and compare them at baseline and post-intervention, we adapted the validated 6-item Telepractice Questionnaire (11), which we translated into Filipino in consultation with a language professor and telerehabilitation experts. Higher per-item and overall summative scores indicate better perception and acceptance of telerehabilitation.
The SIMPAQ is a short clinical physical activity measurement tool that can be administered within 3-8 min (12). It evaluates combined physical activities across different domains (e.g. sleep time; leisure time; home-related activities; walking/exercise; and sedentary periods), providing a snapshot of a 24-h period in the past 7 days. The methods of asking about and computing the duration of each activity were standardized by adhering to the SIMPAQ instruction manual (12).
Lastly, well-being was assessed using the validated Happiness Scale, answerable with any number from 0 to 10, with 10 being the happiest (13). All data encoding and descriptive statistical analysis were performed in Microsoft Excel for Mac (version 16.70, Microsoft 365, Redmond, Washington, USA).
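For illustration only, the pre/post descriptive analysis described above (which the authors performed in Excel) can be sketched in a few lines of Python. The per-participant scores below are hypothetical placeholders, not the study data; only the group-level pattern (scores improving after the intervention) mirrors the paper.

```python
# Minimal sketch of a pre/post descriptive summary (mean ± SD), assuming
# hypothetical per-participant Happiness Scale scores for N = 19 subjects.
import statistics

def describe(label, pre, post):
    # Report baseline vs. post-intervention mean ± SD for one outcome measure.
    print(f"{label}: {statistics.mean(pre):.1f} ± {statistics.stdev(pre):.1f}"
          f" -> {statistics.mean(post):.1f} ± {statistics.stdev(post):.1f}")

happiness_pre  = [7, 8, 6, 9, 5, 7, 8, 10, 6, 7, 9, 8, 5, 7, 8, 9, 6, 10, 7]
happiness_post = [9, 9, 8, 10, 7, 8, 9, 10, 8, 9, 10, 9, 7, 8, 9, 10, 8, 10, 9]

describe("Happiness Scale (0-10)", happiness_pre, happiness_post)
```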
RESULTS
Nineteen out of 50 members of the Rehabilitation Medicine Stroke Support Group met the eligibility criteria. Table I shows a summary of their clinicodemographic profile. All participants had chronic stroke with ictus from 1999 to 2015 and were community ambulant.
Prior to the COVID-19 lockdown in Manila, the majority (58.8%) of subjects had attended centre-based physical therapy sessions at least once a week. Some (11.8%) had stopped attending their sessions due to travel and therapy expenses before the pandemic. Nonetheless, 82.4% performed some form of physical activity at home (e.g. walking, stretching, home chores/errands). During the COVID-19 lockdown, however, 76.5% continued exercising at home, while others did not, due to various reasons (e.g. not motivated; not supervised; did not know what exercises were safe).
The majority (82.4%) preferred to use Facebook™ to engage in telerehabilitation, due to availability and ease of use, while the remainder preferred to use YouTube™. None of the patients preferred using Telegram™, WhatsApp™, Viber™, Twitter™, Instagram™, or other platforms.
Prior to telerehabilitation, the participants generally felt neutral about the quality of virtual care compared with in-person care (Table II). After 2 weeks, however, the majority of subjects had highly favourable telerehabilitation perceptions. In terms of physical activity, improvements were observed in all 5 SIMPAQ domains post-intervention (Table III). Of note, the mean number of days per week that the participants exercised improved from 2.5 to 5.6, and their sedentariness decreased. Regarding their happiness level, the mean score of participants improved from 7.4 ± 2.1 to 8.8 ± 1.8. Even after the study period, some of the participants continued to perform the exercises and remained active in the group chat, motivating or reminding their fellow stroke survivors to exercise.
Furthermore, there was an increasing trend in the participants' adherence to the telerehabilitation programme over time. Only 14 participants were able to exercise during the first session, but there was complete attendance during the fifth and sixth sessions. The reasons for missing a session included: family problems; feeling unwell post-vaccine; and knee pain.
DISCUSSION
Telerehabilitation is a feasible and effective service delivery model for providing ongoing rehabilitation care to persons with physical, cognitive, and/or social impairments (14). The use of telerehabilitation increased out of necessity during the COVID-19 pandemic, leading various healthcare settings to be more resourceful and innovative. This study exemplifies how telerehabilitation using familiar low-cost technologies can positively impact persons living with stroke despite limited resources.
The Stroke Support Group of the Department of Rehabilitation Medicine in the national university hospital of the Philippines is a non-profit organization established nearly 2 decades ago, comprising > 50 stroke survivors, who assist each other to become healthy and productive individuals amid their disability. Their regular in-person activities, such as group therapy sessions and social gatherings, were suspended due to COVID-19. However, through telerehabilitation, the participants in this study were able to reconnect in an enjoyable and interactive way, despite their physical distance. The remote telerehabilitation team not only provided technical assistance when needed, but also acted as a virtual coach to each participant, ensuring their safety, progress, and motivation.
Stroke remains one of the leading causes of long-term disability worldwide. Survivors are often deconditioned in the acute phase of stroke and thereafter may be predisposed to a sedentary lifestyle. This situation worsened because of stay-at-home policies: the Philippines had one of the longest COVID-19 lockdowns, and, in addition, social isolation may negatively affect people's overall well-being.
Despite the increasing number of stroke telerehabilitation studies, including a Cochrane review suggesting that telerehabilitation is not inferior to in-person therapy, the field is still emerging (15). A recent systematic review identified different challenges influencing stroke telerehabilitation delivery and recommended strategies to overcome them, such as: "adequate training and technical infrastructure; shared learning and consistent reporting of cost and usability and acceptability outcomes" (16). In the Philippines, wide-scale implementation of telerehabilitation is hampered by several human, organizational, and technical factors. Scepticism or resistance to overcoming the long tradition of in-person healthcare, a lack of clear national telehealth policies, and unstable internet connectivity were found to be the most common barriers that need to be considered when re-evaluating and rebuilding telerehabilitation capacities amid and beyond the pandemic (2). The current study found that participants generally improved their telerehabilitation perceptions after a 2-week trial, suggesting that the scepticism is reversible. Using a familiar mobile application that does not entail high internet cost and bandwidth may be a pragmatic solution. The caveat, however, is its inherent data privacy risk, which makes it less than ideal for telemedicine. Nonetheless, we minimized this risk in the current study by performing telerehabilitation asynchronously (i.e. watching recorded exercise videos) rather than synchronously (i.e. live videoconferencing), and by properly orienting the participants on risk-mitigation measures (e.g. avoiding taking screenshots of chats and posting them on social media) and obtaining their consent.

This study had several limitations: small sample size; recruitment bias (i.e. participants who agreed to join the study were mildly impaired and probably the ones who were already motivated to exercise); lack of longer follow-up (i.e. the duration of intervention effects was uncertain); and lack of generalizability of findings. Nonetheless, the data that this study gathered will hopefully contribute to the limited evidence on low-cost telerehabilitation from resource-limited countries and catalyse larger-scale studies with more robust methodologies. Incorporating educational infographic materials, asynchronous exercise videos, and private group chats for social interaction and technical and clinical support seemed to yield positive results across different biopsychosocial health-related outcomes. Such preliminary findings appear to be consistent with those of a pilot test performed in the USA, wherein the authors found that a home-based telerehabilitation programme was a viable treatment for patients with chronic stroke (17). However, the methods employed in the current study were not as advanced or elaborate as in the US study, given that our pilot test was performed with limited resources and in the context of the COVID-19 pandemic. Large-scale studies on telerehabilitation are necessary to further explore the experiences of other stakeholders across various demographics and health conditions using more secure and standard telerehabilitation protocols. Based on the experiences and recommendations of the current study participants, we also suggest the following: individualized telerehabilitation programmes customized to each patient's needs; a mix of synchronous and asynchronous sessions (i.e. synchronous sessions can give real-time feedback on exercise performance); and a telerehabilitation starter kit containing a user manual, vital signs monitoring equipment, and multi-purpose exercise gadgets.
In conclusion, the COVID-19 pandemic catalysed and continues to enhance the awareness of stakeholders (patients, families, carers, and healthcare providers) regarding the utility of telerehabilitation for various conditions, including stroke. Whether in high- or low-income countries, telerehabilitation can be feasible and safe in overcoming barriers to in-person rehabilitation, such as distance, time, costs, staffing, resources, and even viral spread. However, we have yet to address the various challenges hindering the full potential of telerehabilitation as we move beyond its mere necessity due to the pandemic.
Table I. Characteristics of the participants (N = 19)

Table II. Participants' perceptions of telerehabilitation based on the Telepractice Questionnaire before and after the 2-week telerehabilitation programme (N = 19)

Table III. Participants' physical activity level based on the Simple Physical Activity Questionnaire before and after the 2-week telerehabilitation programme (N = 19)
"year": 2023,
"sha1": "1ef36ff7599ca74dcb722accc871f9061fd4ea6e",
"oa_license": "CCBYNC",
"oa_url": "https://medicaljournalssweden.se/jrm/article/download/4405/21581",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d160f00090fc2fd0524787c1ea5b912d2045451d",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": []
} |
Osteoclast-derived extracellular vesicles are implicated in sensory neurons sprouting through the activation of epidermal growth factor signaling
Background: Different pathologies affecting the skeletal system have been reported to display altered bone and/or cartilage innervation profiles, leading to deregulation of tissue homeostasis. The patterning of peripheral innervation is achieved through the tissue-specific expression of attractive or repulsive axonal guidance cues in specific space and time frames. During the last decade, emerging findings have attributed to extracellular vesicles (EV) a central role in peripheral tissue innervation. However, to date, the contribution of EV to controlling bone innervation is totally unknown. Results: Here we show that the sensory neuron outgrowth induced by the bone-resorbing cells (osteoclasts) is promoted by osteoclast-derived EV. The EV-induced axonal growth is achieved by targeting epidermal growth factor receptor (EGFR)/ErbB2 signaling/protein kinase C phosphorylation in sensory neurons. In addition, our data also indicate that osteoclasts promote sensory neuron electrophysiological activity, reflecting a possible pathway in nerve sensitization in the bone microenvironment; however, this effect is EV independent. Conclusions: Overall, these results identify a new mechanism of sensory bone innervation regulation and shed light on the role of osteoclast-derived EV in shaping/guiding bone sensory innervation. These findings provide opportunities for the exploitation of osteoclast-derived EV-based strategies to prevent and/or mitigate pathological uncontrolled bone innervation. Supplementary Information: The online version contains supplementary material available at 10.1186/s13578-022-00864-w.
Introduction
The innervation pattern is achieved by a series of chemoattractant and chemorepellent cues secreted at the peripheral tissues, guiding the axonal projections to form functional circuits [1-3]. Axonal terminals have the machinery to accurately respond to these molecules, ensuring the correct establishment of peripheral connections [2-4].
In the bone tissue, nerve terminals display an important regulatory mechanism for bone development, turnover, and regeneration [1, 4-8]. Importantly, the neuroskeletal interaction is bidirectional, as bone-resident cells are acknowledged to modulate sensory neurons by promoting or inhibiting axonal growth. We have demonstrated that the differentiation of human mesenchymal stem cells to osteoblasts (bone-forming cells) leads to marked impairment of their ability to promote axonal growth [1]. The mechanisms by which osteoblasts provide this non-permissive environment for axons include paracrine-induced repulsion [stimulation of Semaphorin 3A, Wnt4, and Sonic hedgehog (Shh) expression] and loss of neurotrophic factor expression [drastic reduction of nerve growth factor (NGF) and brain-derived neurotrophic factor (BDNF) production] [1]. On the other hand, recent studies reported that osteoclasts (bone-resorbing cells) are implicated, via netrin-1 signaling, in the pathological sensory innervation of the subchondral bone and endplates in inflammatory mouse models of osteoarthritis [9] and intervertebral disc degeneration [10]. Exuberant pathological nerve sprouting has been associated with pain development, mainly in cancer-related metastases [11-13].
Research examining neuronal activity has reported that extracellular vesicles (EV) are an important communication route between neurons and the surrounding cells/microenvironment. In the central nervous system, vesicles released from neurons and glial cells have been implicated in mediating synaptic plasticity, neuronal survival, and neuroprotection [14-16]. In the peripheral nervous system, microglia-derived EV were also reported to promote synaptic refinement and to instruct neurons upon inflammatory stimuli [17]. Schwann cell-derived EV, internalized by axons, enhance axonal regeneration after nerve injury [14, 18]. Moreover, neurons respond to EV derived from other cellular populations, as seen in the increased neurite growth of cortical neurons in response to mesenchymal stem cell-derived EV [19-21] and the enhanced differentiation of a neuroblastoma cell line upon exposure to EV from adipocyte-derived Schwann cell-like cells [22].
Osteoclasts secrete EV into the bone microenvironment, in physiological and pathological conditions [23], and these have been identified as key players underlying osteoclast-osteoblast communication [24-27]. Osteoclast-derived EV were shown to either enhance or block osteoblast differentiation depending on their cargo. EV containing miRNA-214 were demonstrated to downregulate alkaline phosphatase, osteocalcin, and collagen type 1 alpha [25], while osteoclast EV carrying RANK can bind to the osteoblast surface, activating the transcription factor Runx2, which promotes bone formation [27].
The current study examined the outgrowth, signaling pathway activation, and electrical activity of sensory nerve fibers under the effect of osteoclast-derived EV. Skeletal nerve fiber density varies with changes in skeletal diseases, and increased pain is often associated with neural ingrowth [12, 28-31]. Our results elucidate novel mechanisms explaining the modulation of peripheral nerve growth by osteoclasts, essential for pursuing new targets for bone pain therapies.
Sensory neuron outgrowth under the effect of osteoclasts is not mediated by neurotrophins
To explore how peripheral sensory nerve axonal growth can be modulated by the osteoclast lineage, lumbar dorsal root ganglia (DRG) were exposed to the secretome of osteoclasts at different stages of differentiation (evaluation of osteoclast differentiation in Additional file 1: Fig. S1). Secretome from mature osteoclasts provided greater support for axonal development than pre-osteoclast secretome, indicating that the maturation state of osteoclasts influences their neurotrophic potential (Fig. 1A, B). The mature osteoclast secretome demonstrated an approximately threefold stronger influence on sensory neuron growth than the control of alpha-MEM medium (OCm) supplemented with receptor activator of nuclear factor kappa-B ligand (RANKL) and macrophage colony-stimulating factor (M-CSF), cytokines described to modulate axonal outgrowth [32, 33]. The secretome from bone marrow stromal cells (BMSC), known for its neurotrophic potential [1, 34, 35], was also evaluated, and no differences were observed when compared with the NGF-supplemented neurobasal and alpha-MEM controls (Additional file 1: Fig. S2).
Osteoclasts were described to induce nerve outgrowth through netrin-1 action under inflammatory conditions [9, 10]. To assess whether the sensory axonal growth induced by the osteoclast secretome under a homeostatic state was neurotrophin dependent, we analyzed the expression levels of neurotrophic factors with a consolidated capability to promote axonal growth [36]. No differences were observed between pre-osteoclast and mature osteoclast expression levels. All the analyzed neurotrophic factors (NGF, BDNF, GDNF, netrin-1, netrin-3, netrin-4, and netrin-5) presented low levels of gene expression (Fig. 1C). The results were further validated by measuring the protein levels of NGF, BDNF, netrin-1, neurotrophin-3, and neurotrophin-4/5 in the conditioned medium by enzyme-linked immunosorbent assay (ELISA) (Additional file 1: Fig. S3). NGF, BDNF, netrin-1, and NT-3 were not detected in the mature osteoclast secretome, despite our observation of higher neurite outgrowth in embryonic DRG explant cultures. BDNF was detected in the pre-osteoclast conditioned medium and NT-4/5 in both conditions, still at low concentrations.
Osteoclast-derived extracellular vesicles (EV) are directly involved in the sensory neuron axonal outgrowth

EV depletion from the osteoclast secretome impaired axonal growth
It is increasingly appreciated that cells can release growth factors in and/or on the surface of EV/exosomes [37, 38]. We hypothesized that osteoclast-derived EV could play a crucial role in axonal outgrowth. To test this, we exposed DRG to EV-depleted osteoclast secretome and measured axonal sprouting. The EV-enriched fraction was characterized by Western blot (WB), transmission electron microscopy (TEM), and nanoparticle tracking analysis (NTA). The EV isolated from the osteoclast secretome stained positive for the CD81, CD63, and CD9 specific markers (Fig. 2A). Cytochrome c was absent in the EV samples, indicating that the EV preparations were not contaminated with cellular debris (not shown). EV were visualized by negative staining for TEM (Fig. 2A, white arrowheads), presenting a size ranging from 40 to 200 nm. The analysis of the size and concentration of the vesicles by NTA confirmed a normal distribution with a mean size of 141.8 ± 2.7 nm and a concentration of 4.90 × 10^11 particles/mL (Fig. 2B). DRG were exposed to EV-depleted osteoclast secretome to address the impact on axonal growth. A significant decrease in axonal sprouting was observed in the absence of EV (Fig. 2C, D). This suggests that the EV cargo plays a role in the neurotrophic potential of the osteoclast secretome.

Fig. 1 Sensory neuron axonal outgrowth is promoted by the osteoclast secretome. (A) Representative images of dorsal root ganglia (DRG) outgrowth after treatment (stained for βIII-tubulin, scale bar 500 µm). Fresh osteoclast medium [OCm: alpha-MEM supplemented with 10% fetal bovine serum (FBS), receptor activator of nuclear factor kappa-B ligand (RANKL), and macrophage colony-stimulating factor (M-CSF)], pre-osteoclast conditioned medium (Pre-OC), and mature osteoclast conditioned medium (OC) were used to stimulate embryonic DRG cultures. (B) Automatic axonal outgrowth area quantification; data represented as a violin plot; **p ≤ 0.01; ***p ≤ 0.001; ****p ≤ 0.0001. (C) Gene expression analysis of neurotrophic factors expressed by pre-osteoclasts (Pre-OC) and mature osteoclasts (OC), normalized to the glyceraldehyde 3-phosphate dehydrogenase (GAPDH) housekeeping gene: nerve growth factor (NGF), brain-derived neurotrophic factor (BDNF), glial cell line-derived neurotrophic factor (GDNF), and netrins 1, 3, 4, and 5; data represented as scatter dot plot, mean ± SD.

We further evaluated whether the effect was equally observed at the nerve terminals. We cultured DRG in microfluidic platforms to recapitulate the in vivo state, where the sensory neuron cell soma is confined to the DRG, apart from the axonal terminals in the bone microenvironment [1, 39, 40]. The reduced height of the microchannels, combined with a higher volume in the somal compartment of the microfluidic devices, creates a sustained unidirectional flow from the somal to the axonal compartment. This ensures the retention of the stimuli in the axonal compartment, therefore producing a localized effect on axon terminals. We observed that the axonal sprouting was reduced in the conditions without EV (EV-dep) when compared with the total osteoclast secretome (OC) (Fig. 2E). Neurite outgrowth was measured with AxoFluidic, an algorithm that we designed to quantify neurite projection within microfluidic platforms [39]. AxoFluidic fits the spatial dependence of the axons that effectively cross the microchannels with the decay function f(x) = A · exp(−x/λ), where the constant A represents the amount of axons that arrive at the axonal compartment and λ represents the scale of spatial decay (associated with the length of the neurite).
Both constants (A and λ) were significantly reduced in the conditions where nerve terminals were exposed to EV-depleted secretome, when compared to the full secretome (Fig. 2F). This indicates that the lack of EV in the osteoclast secretome leads to fewer and shorter neurites in the axonal compartment.
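As a rough illustration of this kind of fit (not the AxoFluidic implementation itself), the following Python sketch fits invented axon counts at increasing distances to the decay function f(x) = A·exp(−x/λ) with SciPy; the distances and counts are placeholders, not data from the study.

```python
# Minimal sketch of an exponential decay fit, assuming hypothetical axon
# counts measured at increasing distances from the microchannel exit.
import numpy as np
from scipy.optimize import curve_fit

def decay(x, A, lam):
    # f(x) = A * exp(-x / lambda): A ~ axons entering the axonal compartment,
    # lam ~ spatial decay scale (a proxy for neurite length).
    return A * np.exp(-x / lam)

x_um = np.array([0, 100, 200, 300, 400, 500], dtype=float)  # distance (um)
counts = np.array([52, 31, 20, 12, 8, 5], dtype=float)      # axons counted

(A, lam), _ = curve_fit(decay, x_um, counts, p0=(counts[0], 200.0))
print(f"A = {A:.1f} axons entering; lambda = {lam:.0f} um spatial decay")
```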
Osteoclast-derived EV promote axonal growth
To evaluate the direct interaction of osteoclast-derived EV with sensory neurons, we exposed the axonal terminals of DRG, cultured in the microfluidic devices, to the EV-enriched fraction isolated from the mature osteoclast secretome. Axons were exposed to a concentration of 10^11 EV/mL resuspended in neurobasal medium. We show that the osteoclast-derived EV were able to promote axonal growth, as depicted in Fig. 2G, supporting our previous observations. Under osteoclast-derived EV exposure, the number of axons crossing the microchannels was similar to the positive control, while the quantification of neurite length revealed a significant increase (Fig. 2H).
Total osteoclast secretome and the osteoclast-derived EV-enriched fraction activate EGFR-related signaling pathways in sensory neurons
The activity of different receptor tyrosine kinases (RTK) has been implicated in neuronal development, growth, survival, and axonal regeneration [41]. To understand the mechanisms activated in the context of axonal outgrowth under total osteoclast secretome and EV-depleted stimuli, we determined the phosphorylation/activation levels of RTK and downstream molecules in the DRG neurons.
EGFR family signaling pathway is involved in the axonal growth of DRG sensory neurons under osteoclast secretome stimulation
DRG protein lysates exposed to osteoclast secretome were screened to quantify the phosphorylation level of over 30 RTK. An overview of the possible signaling pathways activated upon osteoclast secretome stimuli was obtained from the array (Fig. 3A). Epidermal growth factor receptor (EGFR), ErbB2, and platelet-derived growth factor receptor alpha (PDGFRα) displayed higher activation levels (Fig. 3A). Low levels of TrkA, TrkB (absent), and TrkC phosphorylation were observed (Fig. 3B), further confirming the low contribution of the NGF, BDNF, NT-3, and NT-4/5 neurotrophins to the osteoclast-mediated axonal growth. The activation of the ErbB2 receptor in DRG neurons could be triggered by heterodimerization with EGFR, since ErbB2 is an orphan receptor with no characterized ligand, which can be activated by spontaneous homodimer formation (in overexpressing cells) or by heterodimerization with another ligand-bound or EGF-family transactivated receptor [42, 43]. In the context of nerve repair, these results are in agreement with the literature, where ErbB receptor expression was shown to be increased in DRG upon lesion [44]. Therefore, osteoclasts might promote axonal outgrowth through EGFR family signaling, described to be involved in neuronal repair.

Fig. 2 (panels B-H). (B) NTA showing the concentration vs. size distribution (diluted in filtered PBS 1:500); lines represent 3 runs. (C) Representative images of DRG treated with osteoclast secretome (OC) and EV-depleted osteoclast secretome (EV-dep); staining for βIII-tubulin, scale bar 500 µm. (D) Quantification of the axonal sprouting area of DRG; data represented as box and whiskers (median, whiskers represent minimum to maximum range); ****p ≤ 0.0001. (E) Representative images of DRG cultures in microfluidic devices; nerve terminals exposed to complete osteoclast secretome (OC) and EV-depleted osteoclast secretome (EV-dep); axons stained against βIII-tubulin; scale bar: 1 mm. (F) Quantification of the axonal growth using the AxoFluidic algorithm; the data were given by the spatial dependence decay function f(x) = A · exp(−x/λ) of the axons that can effectively cross the microchannels, where the constant A represents the entry into the axonal compartment and λ the scale of spatial decay, as a measure of neurite length. (G) Representative images of DRG cultures in the microfluidic platforms; nerve terminals exposed to neurobasal control (NB) and osteoclast-derived EV (EV+); axons stained against βIII-tubulin; scale bar: 1 mm. (H) Quantification of the axonal growth using the AxoFluidic algorithm. Results are presented as bar ± SD; ns: non-significant; *p ≤ 0.05; **p ≤ 0.01; ***p ≤ 0.001. Each dot represents a microfluidic device analyzed from at least three independent experiments.
To correlate the contribution of EGFR and ErbB2 signaling to the axonal outgrowth induced by osteoclast secretome stimuli, receptor-mediated inhibition using pharmacological blockers was performed. Erlotinib is an EGFR inhibitor that reversibly binds to the intracellular tyrosine kinase domain of the receptor; still, it has been shown to inhibit both the EGFR and ErbB2 signaling pathways [43, 45-47]. Our results show that the neurotrophic effect of osteoclasts was reduced in the presence of the high Erlotinib concentration (Fig. 3C, D), without compromising cell viability and metabolic activity (Additional file 1: Fig. S4), suggesting that both EGFR/EGFR homodimers and EGFR/ErbB2 heterodimers might contribute to the osteoclast-mediated effect on axonal outgrowth.
ErbB2 phosphorylation is reduced upon EV depletion while protein kinase C (PKC) phosphorylation is increased after EV exposure
To understand whether EV depletion modulates the EGFR/ErbB2 phosphorylation levels, we evaluated the phosphorylation state of the EGFR family in DRG after exposure to EV-depleted osteoclast secretome. Remarkably, a significant decrease in both EGFR and ErbB2 phosphorylation levels was observed in the absence of osteoclast-derived EV (Fig. 3E, F), supporting the contribution of this signaling pathway to the EV osteoclast-mediated axonal growth. No alteration in the activation level of PDGFRα was observed upon DRG stimulation with EV-depleted secretome (Fig. 3F).
To strengthen our hypothesis on the involvement of osteoclast-derived EV in the activation of the EGFR/ErbB2 signaling pathway in axonal growth, protein kinase C (PKC) phosphorylation levels were quantified at the growth cones in microfluidic devices.
To maximize our experimental readout, we allowed sensory axons to accumulate in the axonal compartment, where we performed a starving period with plain neurobasal medium for 5 h. Afterward, terminals were stimulated for 10 min with osteoclast-derived EV. The phosphorylation levels were normalized to the growth cone area stained for growth-associated protein-43 (GAP-43). PKC was shown to be preferentially stimulated by EGFR/ErbB2 heterodimers over the Akt downstream pathway [48, 49]. We observed an accumulation of phosphorylated PKC at the growth cones stained for GAP-43. A significantly higher phosphorylation level was detected at the nerve terminals exposed to osteoclast-derived EV, as depicted in Fig. 3G, H.
To unravel whether the osteoclast lineage expresses ligands that could activate these signaling pathways, a personalized PrimePCR array was designed targeting the EGF receptor family ligands. Gene expression was normalized to the GAPDH housekeeping gene, followed by a fold-change calculation relative to the pre-osteoclast expression levels. The results indicate that differentiated osteoclasts express higher amounts of heparin-binding EGF (Hb-EGF), while pre-osteoclasts express higher amounts of neuregulin-4. Independently of the differentiation stage, both express amphiregulin and neuregulins 1 and 2 (Additional file 1: Fig. S5). Further proteomic analysis to confirm the presence of these proteins in the EV cargo will be a valuable input to the osteoclast-DRG crosstalk.
Osteoclast-derived EV are internalized by sensory neurons
Several studies describe how EV can interact with the recipient cell: by interacting with surface receptors at the nerve terminals, by fusing with the neuronal cell membrane, or by internalization [50]. Herein, we labelled the osteoclast-derived EV (with the lipophilic marker PKH26) and tracked the mobilization of the EV added to the axonal compartment in the microfluidic chips (Fig. 4A). We observed that the sensory neurons with internalized EV stained positive for calcitonin gene-related peptide (Fig. 4B), characteristic of neuropeptidergic fibers. To understand the kinetics of interaction between the EV and sensory terminals, live imaging of EV internalization was performed over 2 h (controlled temperature and CO2). The uptake of the osteoclast-derived EV was observed after 45-60 min of incubation (Fig. 4C). An increase in the fluorescence intensity, homogeneously distributed throughout the neurite extension, was observed with the increased incubation period (up to 2 h live, Fig. 4C). After 1 h of incubation, 5% of the neurites had taken up EV, while after 2 h of incubation the internalization reached almost 20% of the total fibers (Fig. 4E). For longer exposure periods, cells were kept in the incubator and fixed after 24 h. EV-positive signal was observed at the axonal, microchannel, and somal compartments, inside the neurites, suggesting anterograde transport of the vesicles towards the cell soma (Fig. 4D). After 24 h of incubation, the percentage of neurites loaded with EV reached one-third (33%) (Fig. 4E). Ortho-projected and zoomed images of the axonal side, microchannels, and somal side highlight the selective EV internalization within the sensory neurons in the same microfluidic device (Fig. 4D, unlabelled neurites marked with an asterisk). PKH26-positive EV only entered the neurons when the EV pellet was used, ruling out a transfer of excess dye. No free EV were detected at the somal compartment (Additional file 1: Fig. S6).

Fig. 3 Epidermal growth factor receptor (EGFR) activation. (A) Screening of receptor tyrosine kinase (RTK) phosphorylation levels in DRG cultures exposed to osteoclast secretome; images of the X-ray films; for the analysis, 100 µg of protein lysate from 3 independent experiments (n = 3) was pooled; elliptical shapes highlight the spots corresponding to epidermal growth factor receptors (EGFR, ErbB2, and ErbB3; light green) and platelet-derived growth factor receptor alpha (PDGFRα; light purple). (B) Heatmap representing the relative spot intensity of the activated receptors calculated from the pixel density, showing the primary activation of two families: EGFR and PDGF. (C) Pharmacological inhibition of EGFR and ErbB2 with increasing doses of Erlotinib; representative images of DRG treated with different concentrations of Erlotinib for 72 h (βIII-tubulin in green, nuclei in blue; scale bar 500 µm). (D) Quantification of axonal outgrowth of sensory neurons blocked with the EGFR inhibitor Erlotinib at different concentrations added to osteoclast conditioned medium; data represented as violin plot; *p ≤ 0.05. (E) RTK phosphorylation levels in DRG cultures exposed to osteoclast secretome (OC, blue) and EV-depleted secretome (EV-dep, light orange); images of the X-ray films. (F) Mean spot intensity of the activated receptors EGFR, ErbB2, and PDGFRα for DRG exposed to OC and EV-dep; data represented as bars with individual values (n = 4), mean ± SD; ns: non-significant; ****p ≤ 0.0001. (G) Representative images of sensory neuron growth cones exposed to neurobasal (NB) vs. the EV-enriched fraction (EV+), stained against growth-associated protein (GAP-43, red) and phosphorylated PKCα (green); scale bar: 10 µm. (H) Quantification of the integrated intensity of phosphorylated PKCα at the growth cones exposed to NB vs. EV+, normalized to the growth cone area calculated through GAP-43 staining; results presented as scatter dot plot; ***p ≤ 0.001.
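For illustration, the internalization percentages reported above reduce to a simple per-timepoint ratio of EV-positive neurites to total neurites. The counts in this sketch are hypothetical, chosen only so the output approximates the percentages in the text.

```python
# Minimal sketch of the internalization quantification: fraction of neurites
# that are PKH26 (EV) positive at each timepoint. Counts are hypothetical.
counts = {  # timepoint: (EV-positive neurites, total neurites counted)
    "1 h": (5, 100),
    "2 h": (19, 98),
    "24 h": (34, 103),
}
for timepoint, (positive, total) in counts.items():
    print(f"{timepoint}: {100 * positive / total:.0f}% neurites with internalized EV")
```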
Sensory neuron electrical activity is triggered by the osteoclast secretome but not mediated by the EV
To unravel the electrophysiological implications of axonal exposure to the total osteoclast secretome and osteoclast-derived EV, a combination of substrate-integrated microelectrode arrays (MEAs) with custom-made microfluidic chambers [51] was used. MEAs enable non-invasive, and thus repeatable, recordings of extracellular action potentials. Although recordings of random DRG cultures have been previously demonstrated [52], these were not adapted to the study of peripheral innervation.
Here, we employed microElectrode-microFluidic (µEF) devices, which allowed us to monitor axonal activity with high fidelity [51, 53]. Regardless of the explant position (outside or inside the array area, Fig. 5A), in most cases we could only detect activity within the microchannels. This allowed us to directly compare baseline and post-treatment levels of axonal activity, as most axons within the microchannels are expected to have extended to the axonal compartment.
Electrophysiology recordings show that DRG exposed to the osteoclast secretome present an increased mean firing rate (MFR) when compared with the baseline recorded immediately before treatment (Fig. 5C). The control treatment, supplemented with NGF, did not produce a significant effect on the MFR. Curiously, the firing rate of the sensory neurons remained unchanged upon the addition of the osteoclast EV-enriched fraction (Fig. 5D, E). Different timepoints were tested, up to 24 h, but no alterations were detected (Additional file 1: Fig. S7). These results suggest that the sensory neuron electrophysiological activity is triggered by soluble factors present in the total osteoclast secretome; however, a similar effect was not reproduced by the EV alone. The mechanisms supporting this effect require further elucidation.
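As a sketch of the MFR comparison described above (not the authors' analysis pipeline), the snippet below computes the mean firing rate of simulated spike trains over a fixed recording window, contrasting a sporadic baseline with a denser post-treatment train; the spike counts are invented for illustration.

```python
# Minimal sketch of a mean-firing-rate (MFR) comparison per microchannel,
# assuming simulated spike timestamps instead of real uEF recordings.
import numpy as np

rng = np.random.default_rng(0)

def mfr(spike_times_s, duration_s):
    # Mean firing rate in Hz: spike count divided by window length.
    return len(spike_times_s) / duration_s

duration = 60.0  # seconds analysed per recording
baseline = np.sort(rng.uniform(0, duration, size=12))  # sporadic spiking
post_oc = np.sort(rng.uniform(0, duration, size=40))   # after OC secretome

print(f"baseline MFR: {mfr(baseline, duration):.2f} Hz")
print(f"post-treatment MFR: {mfr(post_oc, duration):.2f} Hz")
```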
Discussion
Neuronal axonal growth is mediated by different classes of neurotrophic factors, which include classic neurotrophins (e.g., NGF, BDNF, NT-3/4) [54], pro-inflammatory cytokines (e.g., IL-1β, TNF-α) [55-57], and other soluble molecules secreted by different cells in response to their surrounding microenvironment. Osteoclasts were shown to induce axonal growth via netrin-1 under inflammatory conditions [9, 10]. In our experimental setup, neither NGF, BDNF, GDNF, other neurotrophins, nor netrins were found to be significantly expressed by the osteoclast lineage. Still, the expression profile of these factors might completely change when osteoclasts are under pathological conditions, such as inflammation or tumor, whether simulated in vitro or in animal models.
During the last decade, emerging findings have attributed to EV a central role in peripheral tissue innervation, but in contexts of harsh conditions such as the tumor microenvironment and neuroinflammation [58]. As an example, the neurotrophic-promoting activity of EV was demonstrated in pathological sensory axonogenesis in squamous cell carcinoma [59]. EV secreted by the cells of the tumoral microenvironment enhance sensory innervation through the EphrinB1 guidance molecule, and pharmacological blockade of EV release attenuates tumor innervation in vivo [59]. The physiological role of EV in establishing correct peripheral tissue innervation under homeostatic conditions, and the mechanistic understanding of the EV-mediated guiding of axonal projections, are still poorly understood. Here we demonstrate that, under non-pathological conditions, osteoclast-induced axonal growth is dependent on the secreted EV, providing a new mechanism for the interplay between sensory terminals and bone-resorbing cells. We confirmed these data by testing either the EV-depleted osteoclast secretome or the osteoclast-derived EV directly on DRG sensory neurons. The EV depletion from the osteoclast secretome revealed a negative impact on axonal extension. The opposite effect was observed when the osteoclast-derived EV-enriched fraction was added to the axonal terminals, promoting extensive axonal growth.
Zhang et al. demonstrated that both the cell soma and axons of cortical neurons were able to take up mesenchymal cell-derived EV. The internalization was impaired by botulinum neurotoxin, showing the involvement of the soluble N-ethylmaleimide-sensitive factor attachment protein receptor (SNARE) complex [20]. In our study, we were interested in the specific interaction of osteoclast-derived EV with the axonal terminals, since only the axonal projections are present at the bone microenvironment, in close contact with bone cells and their secreted products [60]. Therefore, compartmentalized microfluidic devices were applied to culture sensory neurons, allowing spatial and fluidic separation of the cell soma from distal axons [1, 39], to evaluate the EV internalization process. We showed that the osteoclast-derived EV were taken up by the axonal terminals, revealing 5% internalization during the first hour and increasing to 30% after 24 h. Neurons were demonstrated to internalize EV by endocytosis, with accumulation within endosome-like structures [15, 18]. Given the distribution of the fluorescence intensity inside the neuronal extensions, we suggest that similar mechanisms take place in our experimental setup. Interestingly, despite the homogeneity of the nerve fiber phenotype present in culture (CGRP-positive fibers), the EV were not equally internalized by the sensory neurons, since after the 24 h exposure there were fibers completely clear of EV. Previous studies reported that DRG neurons selectively take up EV from glial cells rather than EV from fibroblasts [14]. In our experimental setup the EV source is the same (osteoclasts); still, the internalization process can be mediated by specific interactions between the EV cargo and receptors at the nerve fiber terminals. This remains an open question that needs further investigation.

Fig. 5 (panels C-E). (C) Activity maps in which each pixel corresponds to an electrode and the mean firing rate (MFR) is color-coded; representative raster plots of 60 s of activity are shown below each activity map, each row corresponding to the spike raster plot from the central electrode of a single microchannel. (D) Before-after plot of every active microchannel after treatment; ns = not significant, *0.01 < p < 0.05, **0.001 < p < 0.01, ***p < 0.001, ****p < 0.0001. (E) Scatter dot plots of the active microchannels' MFR at 30 min post-treatment (OC: osteoclast secretome; EV+: osteoclast-derived EV); data from 35 to 61 microchannels from 3 to 5 independent µEFs.
There is an extensive discussion in the literature concerning the direct involvement of EGFR activation/inhibition in axonal regeneration [61-65]. EGFR phosphorylation has been implicated in signaling the inhibition of axonal growth in the central nervous system [61, 62]. At the periphery, Koprivica et al. showed that EGFR inhibitors effectively promoted neurite outgrowth from cultured DRG [66]. However, we and others have previously demonstrated an increased expression of the EGFR family in DRG after lesion [44, 67, 68], suggesting a possible role in neuronal regeneration. In this study we demonstrate that EV-depleted osteoclast secretome produced not only a significant decrease in axonal growth but also a significant reduction in EGFR family phosphorylation. Our data indicate that EGFR signaling has a role in the axonal outgrowth promoted by the osteoclast secretome. EGFR inhibition with Erlotinib, described to inhibit both EGFR and ErbB2 receptor kinases [43, 46, 47], resulted in a significant reduction in the axonal outgrowth area. Our findings are consistent with prior observations showing that the phosphorylation of EGFR enhances neurite outgrowth [61, 69-74]. Our data strongly indicate that the osteoclast-derived EV activate similar mechanisms at the axonal growth cone, as a significant increase in PKCα phosphorylation was observed. In fact, EGFR/ErbB2 heterodimers were reported to preferentially stimulate PKC, whereas ErbB2/ErbB3 heterodimers preferentially stimulate the Akt signaling pathway [48, 49]. In a tumoral context, it was shown that EV can incorporate in their cargo EGFR receptors or EGFR ligands to deliver to recipient cells, promoting metastases or inducing resistance in drug-sensitive cells [75-78]. Unraveling the osteoclast-derived EV cargo will further elucidate the mechanism behind the EGFR activation. Our results largely support the hypothesis that EGFR activation is associated with an enhancement of axonal growth.
An abnormal increase in sensory nerve fiber axonal growth has been demonstrated in skeletal diseases. Sensory nerve fibers undergo remarkable sprouting and pathological reorganization, which drive pain [12, 28-31]. In pathological scenarios, such as fracture, bone cancer, or osteoporosis, there is an imbalance between bone formation and bone resorption, and alterations in the innervation pattern are often observed, suggesting a dynamic crosstalk within the bone microenvironment [12, 28, 30, 79]. Neurotransmitters and axonal guidance cues have been shown to affect bone cells, particularly osteoclast activity. Calcitonin gene-related peptide (CGRP) has been shown to suppress osteoclast maturation and activity in vitro [80], whereas substance P (SP) can drive RANKL-independent osteoclastogenesis [81]. Semaphorin 3A (Sema3a) is a vital axonal guidance cue which has been shown to inhibit osteoclastogenesis and promote osteoblast differentiation [6]. Cells in the bone microenvironment release mediators responsible for the activation of sensory nerves, triggering electrical signal propagation towards central pathways and thus evoking pain [82, 83]. To understand whether osteoclasts, under physiological conditions, induce electrical signal propagation in sensory neurons, we measured the electrophysiological activity levels upon stimulation with the osteoclast secretome. We employed microElectrode-microFluidic (µEF) devices to precisely expose only the nerve terminals to the stimuli, while recording the electrical propagation through the neuronal extensions towards the cell soma. Unlike central nervous system neurons in culture (e.g., hippocampal neurons), DRG neurons did not fire in bursts but rather exhibited a baseline activity with sporadic spontaneous spiking. Under normal conditions, this relatively low level of spontaneous activity also occurs in vivo [84]. The greater effect observed in the increase of the MFR was related to the secretome from mature osteoclasts. The firing rate of the sensory neurons remained unchanged upon the addition of the osteoclast EV-enriched fraction. These results suggest that the sensory neuron electrophysiological activity is triggered by soluble factors present in the total osteoclast secretome, reflecting a possible pathway to be addressed to understand nerve sensitization in the bone microenvironment. Stimulation of cortical neurons with glial EV was shown to increase the number of action potentials with unaltered spike amplitude [85]. Yet, a similar effect was not reproduced in the sensory neurons through stimulation with osteoclast EV alone. This observation may indicate that osteoclast-derived EV are associated with sensory neuron extension but not directly with their neuronal activity. It would be relevant to collect osteoclast-derived EV from the bone microenvironment of inflammatory or metastatic mouse models and elucidate the role of EV in electrical signaling activation and propagation, related to nociception/pain, since the EV cargo is modified depending on cellular and microenvironmental factors.
Overall, our study provides a new mechanism for sensory nerve growth mediated by the bone-resorbing cells, osteoclasts. We demonstrated that this effect is dependent on the EV released by these cells and achieved by targeting EGFR/ErbB2 signaling/protein kinase C phosphorylation in sensory neurons. Our data also indicate that osteoclasts promote neuronal firing rate electrical activity in sensory neurons, but this effect is EV independent.
Animals
All animal procedures were approved by the i3S ethics committee and by the Portuguese Agency for Animal Welfare (Direção-Geral de Alimentação e Veterinária) in accordance with the EU Directive (2010/63/EU) and Portuguese law (DL 113/2013). Mice were housed at 22 °C with a 12 h light/dark cycle with ad libitum access to water and food. Adult C57Bl/6 male mice (7 weeks old) and pregnant females were sacrificed in a carbon dioxide chamber to obtain the primary cells (bone marrow, osteoclast lineage, and sensory neurons).
Bone marrow cell culture
Bone marrow stromal cells (BMSC) were isolated from tibiae and femur by flushing the bone marrow with α-MEM (Gibco, Thermo Fisher Scientific, Waltham, MA, USA) containing 10% (v/v) heat-inactivated (30 min at 56 °C) fetal bovine serum (FBS, Gibco, Thermo Fisher Scientific) and 1% (v/v) penicillin/streptomycin (P/S, Gibco, Thermo Fisher Scientific). Cells were plated in 75 cm^2 tissue culture flasks. Non-adherent cells were removed after 3 days, and fresh medium was added. Cells were expanded from the colony-forming units for 1 week. Afterward, cells were detached with trypsin and seeded into 48-well plates at a density of 5 × 10^4 cells/cm^2. No differentiation factors were added. The conditioned medium was collected after 24 h, centrifuged at 140 × g, 4 °C, 5 min, and stored at −80 °C. The conditioned medium was divided into small aliquots (500 µL-1 mL) before freezing to avoid repeated freeze/thaw cycles.
Osteoclasts culture
Bone marrow cells were isolated from tibiae and femur by flushing the bone marrow with α-MEM containing 10% (v/v) FBS and 1% (v/v) P/S. To generate primary osteoclast precursors, the bone marrow mononuclear cell suspension was treated with red blood cell lysis buffer (ACK lysing buffer, #A1049201, Gibco, Thermo Fisher Scientific) for 1 min at room temperature (RT) and, after centrifugation, cells were plated in 10 cm diameter Petri dishes with 10 ng/mL macrophage colony-stimulating factor (M-CSF, PeproTech, London, UK) for 24 h. Afterward, the M-CSF concentration was increased to 30 ng/mL for an additional 3 days. Adherent cells were then detached with a cell scraper and seeded at a density of 5 × 10⁴ cells/cm² (in a 48-well plate, 1 mL of medium per well) in the presence of 30 ng/mL M-CSF alone or 30 ng/mL M-CSF and 100 ng/mL receptor activator of nuclear factor kappa-B ligand (RANKL, PeproTech) [86,87]. Conditioned medium from pre-osteoclasts (M-CSF only) was collected after 24 h, centrifuged at 140×g, 4 °C, 5 min, and stored at −80 °C. The cells exposed to both M-CSF and RANKL had the medium renewed at day 3 of culture, which was then collected after 24 h, corresponding to the mature osteoclast condition.
qRT-PCR analysis
Total RNA was extracted using the Direct-zol™ RNA MiniPrep according to the manufacturer's protocol (Zymo Research). RNA final concentration and purity (OD260/280) were determined using a NanoDrop 2000 instrument (NanoDrop Technologies). RNA was reverse transcribed into cDNA using the NZY First-Strand cDNA Synthesis Kit (NZYTech), according to the manufacturer's protocol. For the analysis of neurotrophin expression levels, a personalized PrimePCR array (Bio-Rad Laboratories) was performed. qRT-PCR experiments were run using an iCycler iQ5 PCR thermal cycler (Bio-Rad Laboratories) and analyzed with the iCycler iQ™ software (Bio-Rad). Target gene expression was quantified using the cycle threshold (Ct) values, and relative mRNA expression levels were calculated as 2^(Ct reference gene − Ct target gene). GAPDH was used as the reference gene. Both target and reference genes were amplified with efficiencies of 100 ± 5%.
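As a minimal illustration of the quantification described above, the following Python sketch computes relative expression from cycle-threshold values using the stated 2^(Ct reference gene − Ct target gene) relation, with GAPDH as the reference gene; the Ct values shown are hypothetical placeholders, not data from this study.

```python
# Minimal sketch: relative mRNA expression from qRT-PCR cycle-threshold (Ct) values,
# computed as 2^(Ct_reference - Ct_target) with GAPDH as the reference gene.
# The Ct values below are hypothetical placeholders, not data from the study.

ct_values = {
    "Gapdh": 18.2,   # reference gene
    "Ngf":   27.5,   # example target neurotrophin
    "Bdnf":  29.1,   # example target neurotrophin
}

reference_ct = ct_values["Gapdh"]

for gene, ct in ct_values.items():
    if gene == "Gapdh":
        continue
    relative_expression = 2 ** (reference_ct - ct)
    print(f"{gene}: relative expression = {relative_expression:.2e}")
```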
Extracellular vesicles (EV) depletion from the osteoclast secretome and characterization
Osteoclasts were isolated and differentiated as described in the previous section. To obtain the supernatant for EV isolation, medium was prepared with EV-depleted FBS (obtained by ultracentrifugation). Cells were cultured with standard culture medium with 10% FBS, which was replaced by medium containing 1% EV-depleted FBS 24 h prior to medium collection. All steps for the EV depletion were conducted under sterile conditions and in line with the published Minimal Information for Studies of Extracellular Vesicles 2018 guidelines [88] and as described elsewhere [89]. Briefly, the secretome was collected and centrifuged at 1000×g for 10 min to clear the cell debris, then at 2000×g for 10 min, followed by 10,000×g for 30 min. The supernatant was then ultracentrifuged at 100,000×g using a 70Ti rotor (Beckman Coulter Genomics) for 120 min. The pellet containing exosomes was then washed with filtered PBS, ultracentrifuged overnight, and stored at −80 °C. All centrifugation steps were performed at 4 °C [90]. The supernatant was stored at −80 °C for the experiments using the EV-depleted secretome.
The osteoclast-derived EV enriched fraction was further characterized by nanoparticle tracking analysis (NTA) and transmission electron microscopy (TEM), as previously described [91]. Briefly, for size and particle concentration evaluation, EV suspensions were diluted 1:500 in filtered PBS 1× and analyzed by NTA in a NanoSight NS300 device with NTA3.0 software. For the TEM negative staining, 10 µL of samples were mounted on Formvar/carbon film-coated mesh nickel grids (Electron Microscopy Sciences, Hatfield, PA, USA) and left standing for 2 min. The liquid in excess was removed with filter paper, and 10 µL of 1% uranyl acetate were added on to the grids and left standing for 10 s, after which liquid in excess was removed with filter paper. Visualization was carried out on a JEOL JEM 1400 TEM at 120 kV (Tokyo, Japan). Images were digitally recorded using a CCD digital camera Orious 1100 W (Tokyo, Japan) at the i3S Scientific Platforms Histology and Electron Microscopy.
Dorsal root ganglia (DRG) culture
Embryonic DRG were obtained from 16.5-day-old (E16.5) C57BL/6 murine embryos, harvested and maintained in ice-cold Hank's balanced salt solution (HBSS, Invitrogen). Ganglia were accessed through the dorsal side of the embryo after spinal cord removal. The meninges were cleaned, lumbar DRG from the L1 to L6 level were dissected, and the roots were cut. The ganglia were kept in cold HBSS until seeding. Controls with neurobasal supplemented with NGF and with α-MEM supplemented with M-CSF and RANKL (no contact with cells) were performed (Fig. S8). To assess the impact of EV depletion on axonal growth, DRG organotypic cultures were exposed to EV-depleted osteoclast secretome.
b. Axonal-specific exposure in microfluidic devices.
Commercially available microfluidic devices (Merck Millipore and Xona Microfluidics) were adapted for explant DRG culture and assembled, as previously described [67], on top of glass slides coated with 0.1 mg/mL poly-d-Lysine (PDL, Sigma-Aldrich) at 37 °C and 5 μg/mL laminin (Sigma-Aldrich). Cultures were left undisturbed for 24 h. At this time, the medium from the axonal side was substituted by total osteoclast secretome or EV-depleted secretome.
To assess the effect of the osteoclast-derived EV enriched fraction on axonal growth, EV were resuspended in neurobasal medium at a concentration of 10¹¹ EV/mL, corresponding to the initial EV concentration in the total secretome. The DRG culture was left undisturbed for an additional 72 h. A higher volume on the somal side was set to induce a slow net flow of liquid from the somal to the axonal compartment, thus ensuring that the conditioned medium was restricted to the axonal compartment.
Quantification of axonal growth
Axonal outgrowth was quantified after 72 h of treatment with EV, EV-depleted secretome, total secretome and controls [92]. To quantify axonal outgrowth in microfluidic platforms, neurite outgrowth was measured with AxoFluidic, an algorithm designed to quantify neurite projection within these platforms [39]. The data were described by the spatial decay function f(x) = A · exp(−x/λ) of the axons that effectively cross the microchannels, where the constant A represents the number of axons that enter the axonal compartment, and λ is the spatial decay scale, used as a measure of neurite length.
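For readers who wish to reproduce the decay analysis on their own outgrowth measurements, the sketch below fits f(x) = A · exp(−x/λ) to binned axon counts with SciPy; the data are synthetic and the AxoFluidic algorithm itself is not reproduced here.

```python
# Minimal sketch of fitting the spatial decay function f(x) = A * exp(-x / lam)
# to axon counts along the axonal compartment. Synthetic data stand in for
# AxoFluidic output; A estimates the number of axons entering the compartment
# and lam the spatial decay scale (a proxy for neurite length).
import numpy as np
from scipy.optimize import curve_fit

def decay(x, A, lam):
    return A * np.exp(-x / lam)

# Hypothetical distances (um) from the microchannel exits and axon counts.
x = np.linspace(0, 500, 26)
rng = np.random.default_rng(0)
counts = decay(x, A=40.0, lam=150.0) + rng.normal(0, 1.0, x.size)

popt, pcov = curve_fit(decay, x, counts, p0=(counts[0], 100.0))
A_fit, lam_fit = popt
print(f"A = {A_fit:.1f} axons, lambda = {lam_fit:.1f} um")
```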
Analysis of phospho-receptor tyrosine kinase (RTK) activation
A proteome profiler mouse phospho-RTK array kit (#ARY014, R&D Systems, USA) was used to quantify the phosphorylation level of 39 RTKs. After 72 h of treatment with conditioned medium and controls, the protein lysate of DRG was quantified and analyzed. According to the manufacturer's instructions, the same amount of protein (100 µg) was added to each array membrane. Each array membrane was exposed to X-ray film using a chemiluminescence detection system (Amersham, GE Healthcare). The film was scanned using a Molecular Imager GS800 calibrated densitometer (Bio-Rad, Hercules, USA), and pixel density was quantified using Quantity One 1-D Analysis Software, v 4.6 (Bio-Rad). The results were presented as the mean spot intensity, which corresponds to the mean of the two spots for each receptor within the same membrane array.
Pharmacological inhibition of epidermal growth factor receptor (EGFR) and ErbB2
Embryonic DRG were cultured in 15-well slides for 24 h. Erlotinib, an EGFR and ErbB2 inhibitor [47], was added to the conditioned medium at 10 nM, 100 nM, 1 µM, 10 µM, and 100 µM and tested on DRG cultures for 72 h. Afterward, axonal outgrowth and cell viability (Additional file 1) were measured.
Quantification of protein kinase C (PKC) phosphorylation
To assess the phosphorylation status of PKCα in growth cones, the DRG were cultured in microfluidic devices for 72 h to allow axons to accumulate in the axonal compartment [1]. At this time point, a starving period was performed only in the axonal compartment with plain neurobasal medium for 5 h. Throughout the starving period, a volume difference between the axonal and the somal compartments was maintained to prevent the diffusion of the complete medium from the somal to the axonal side. Axons were stimulated for 10 min with EV in neurobasal without NGF at a concentration of 10¹¹ EV/mL, corresponding to the initial EV concentration in the total secretome, and immediately fixed afterward. PKCα phosphorylation at the growth cones was assessed by incubating DRG with primary antibodies directed against growth-associated protein-43 [GAP-43 (Abcam)] and p-PKCα (Santa Cruz Biotechnology), diluted 1:1000 and 1:250, respectively, in blocking solution, overnight at 4 °C. Afterward, cells were washed and incubated for 1 h at RT with the secondary antibodies (Invitrogen), diluted 1:1000 in blocking solution. Images were captured with a widefield inverted microscope DMI6000 FFW (Leica Microsystems) equipped with LAS X software (Leica Microsystems) at the i3S Advanced Light Microscopy Platform. Growth cones were randomly chosen based on GAP-43 fluorescence, without observing the p-PKCα intensity. Total p-PKCα fluorescence was measured with ImageJ software, and the background intensity of each image was subtracted. For each selected growth cone, we determined the total GAP-43 and p-PKCα fluorescence per area.
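The per-area fluorescence measurement can be expressed compactly in code; the sketch below, with synthetic images and a hypothetical growth-cone mask, mirrors the background subtraction and per-area quantification performed in ImageJ and is not the original analysis pipeline.

```python
# Minimal sketch of the growth-cone quantification described above: subtract the
# image background and report total GAP-43 and p-PKCalpha fluorescence per unit
# area within a growth-cone mask. Arrays are synthetic placeholders; the study
# performed the equivalent steps in ImageJ.
import numpy as np

rng = np.random.default_rng(1)
gap43 = rng.normal(100, 5, (64, 64))      # GAP-43 channel (a.u.)
p_pkca = rng.normal(80, 5, (64, 64))      # p-PKCalpha channel (a.u.)
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 20:40] = True                 # hypothetical growth-cone ROI

def per_area_intensity(img, roi):
    background = np.median(img[~roi])     # simple background estimate
    corrected = np.clip(img - background, 0, None)
    return corrected[roi].sum() / roi.sum()

print("GAP-43 per-area intensity:", per_area_intensity(gap43, mask))
print("p-PKCalpha per-area intensity:", per_area_intensity(p_pkca, mask))
```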
EV labelling and internalization assay
Osteoclast-derived EV (or PBS as negative control) were labelled with PKH26 dye (0.5 μM, Sigma-Aldrich) for 5 min at RT, and washed in VivaSpin® centrifugal columns (10 kDa cut-off). Labelled EV were added to the axonal compartment of DRG in microfluidic devices, at the same concentration present in the total osteoclast secretome (10¹¹ EV/mL). Internalization was followed live for 120 min by laser scanning confocal microscopy (Leica TCS-SP5 AOBS) under a controlled environment (temperature and CO₂). Samples were fixed and analyzed after 24 h of exposure.
DRG exposed to osteoclast-derived EV were stained against calcitonin gene-related peptide (CGRP). Briefly, after fixation, permeabilization and blocking as previously mentioned, cells were incubated with the primary antibody directed against CGRP (Sigma-Aldrich), diluted 1:8000 in blocking solution, overnight at 4 °C. Afterward, cells were washed and incubated for 1 h at RT with the secondary antibody (Alexa Fluor 488, Invitrogen), diluted 1:1000 in blocking solution. Images were acquired by laser scanning confocal microscopy (Leica TCS-SP5 AOBS). To quantify the percentage of EV internalization, neurites were semi-automatically traced with the Simple Neurite Tracer plugin for ImageJ.
Microelectrode-microfluidic cultures and electrophysiology recordings
The microfluidic devices (molds provided by INESC) were fabricated by mixing the polydimethylsiloxane (PDMS) elastomer (Sylgard® 184, Dow Corning) with a curing agent (10:1, w/w), degassed and cured over the mold at 65 °C for 2 h. Custom microElectrode-microFluidic (µEF) devices were prepared as described previously [51]. Briefly, PDMS microfluidic chambers were aligned on top of microelectrode array (MEA) chips (MultiChannel Systems MCS GmbH, Germany), with 252 recording electrodes (30 µm in diameter and pitch of 100 µm) organized in a 16 × 16 grid. Microfluidic chambers had an appropriate microchannel spacing for compartmentalization and probing of axonal activity. Microfluidic chambers were also adapted by adding an extra smaller reservoir (Ø 3 mm), which allowed the seeding of the DRG in a central position relative to the electrode matrix [39]. µEF devices were composed of two separate compartments connected by 16 microchannels of 700 μm length × 9.6 μm height × 10 μm width. Each microchannel was probed with 7 electrodes, thus every axon extending to the axonal compartment was electrophysiologically monitored. After mounting, µEFs were sequentially coated with PDL (0.01 mg/mL) and laminin (5 μg/mL). The unbound laminin was aspirated, and chambers were refilled with complete neurobasal medium and left to equilibrate at 37 °C. Isolated embryonic DRG explants were placed and cultured as described before. DRG explants extended axons to the axonal compartment within the first 5 days in vitro (DIV). Then, treatments and recordings were performed at 6 DIV. This time point was chosen following preliminary studies that showed adequate electrophysiological maturation and culture viability at this stage [53]. Recordings at a sampling rate of 20 kHz were performed using a MEA2100 recording system (MCS GmbH, Germany). In every recording session, the temperature was maintained at 37 °C by an external temperature controller. After removing the cultures from the incubator, recordings only started after 5 min of habituation to avoid an effect due to mechanical perturbation. Then, a baseline recording (5 min) was obtained. Afterward, the medium from the axonal side was gently removed and replaced by 100 μL of treatment medium. The larger volume present on the somal compartment maintained a hydrodynamic pressure difference, inhibiting any flow from the axonal to the somal compartment.
Post-treatment recordings (30 min) were started as soon as the baseline stabilized following the liquid flow perturbation (less than 1 min). Raw signals were high-pass filtered (200 Hz), and spikes were detected by a threshold set to 5× the standard deviation (SD) of the electrode noise. Spike data analysis was carried out in MATLAB R2018a (The MathWorks Inc., USA) using custom scripts. The mean firing rate (MFR) of each microchannel was calculated by averaging the MFR of the 5 inner electrodes (typically electrode rows 10–14), due to their superior signal-to-noise ratio. Microchannels with an MFR of at least 0.1 Hz at a given time point at 6 DIV were considered active and included in the analysis.
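As an illustration of the processing chain described above, the following Python sketch applies a 200 Hz high-pass filter, detects threshold crossings at 5× the noise SD, and computes a mean firing rate for one electrode; it is a simplified stand-in for the custom MATLAB scripts used in the study, and a channel-level MFR would additionally be averaged over the 5 inner electrodes of each microchannel.

```python
# Minimal sketch of the spike-detection pipeline described above (the study used
# custom MATLAB scripts): high-pass filter the raw trace at 200 Hz, detect
# negative-going crossings of a 5x-SD threshold, and compute the mean firing rate.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 20_000  # sampling rate (Hz)

def detect_spikes(raw, fs=FS):
    b, a = butter(2, 200 / (fs / 2), btype="highpass")
    filtered = filtfilt(b, a, raw)
    threshold = 5 * np.std(filtered)                 # proxy for 5x SD of the noise
    crossings = np.where(filtered < -threshold)[0]
    if crossings.size == 0:
        return crossings
    # keep only crossings separated by at least ~1 ms from the previous one
    keep = np.insert(np.diff(crossings) > fs // 1000, 0, True)
    return crossings[keep]

def mean_firing_rate(spike_indices, duration_s):
    return spike_indices.size / duration_s

# Hypothetical 5-minute trace for one electrode (noise only).
rng = np.random.default_rng(2)
trace = rng.normal(0, 5, FS * 300)
spikes = detect_spikes(trace)
mfr = mean_firing_rate(spikes, 300)
print(f"MFR = {mfr:.3f} Hz; active channel: {mfr >= 0.1}")
```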
Statistical analysis
All experiments were run in triplicate and repeated at least 3 times. Data analysis was performed using GraphPad Prism 8.2.0 for Windows (GraphPad Software, San Diego, CA, USA). Normality of the data was assessed. Statistical differences between groups were calculated using one-way analysis of variance: the Kruskal-Wallis test followed by Dunn's post-test for multiple comparisons for non-parametric distributions, and one-way ANOVA for normal distributions. The non-parametric Mann-Whitney test was used to identify statistical differences when only two groups were being compared. Differences between groups were considered statistically significant when *0.01 < p < 0.05, **0.001 < p < 0.01, ***p < 0.001, ****p < 0.0001. | 2022-08-15T13:44:50.697Z | 2022-08-14T00:00:00.000 | {
"year": 2022,
"sha1": "748a616f5d3c6c7d6d91bc4dccc494eb447cabcd",
"oa_license": "CCBY",
"oa_url": "https://cellandbioscience.biomedcentral.com/counter/pdf/10.1186/s13578-022-00864-w",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "07810a34c7c51a20b28e29acd8a92e1f6460c7a7",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
257095087 | pes2o/s2orc | v3-fos-license | The effects of government policies targeting ethics and governance processes on clinical trial activity and expenditure: a systematic review
Governments have attempted to increase clinical trial activity in their jurisdictions using a range of methods, including simplifying the ethics review and governance process of clinical trials. This study's objective was to systematically review the effects of government actions targeting ethics reviews or governance processes on clinical trial activity. The data sources of PubMed, Scopus, Sage, ProQuest, Google, Google Scholar and reference lists were all searched between 9/8/20 and 6/9/20. From these sources, 1455 potentially eligible reports were reviewed and full text assessments were done for 295. Thirty-eight reports providing data on 45 interventions—13 targeting ethics review and 32 targeting governance processes—were included. There were data describing effects on a primary or secondary outcome (the number of clinical trials or expenditure on clinical trials) for 39/45 of the interventions. 23/39 (59%) reported positive effects, meaning a greater number of trials and/or expenditure on clinical trials (6/11 ethics, 17/28 governance), 7/39 (18%) reported null effects (4/11 ethics, 3/28 governance) and 9/39 (23%) reported adverse effects (1/11 ethics, 8/28 governance). Positive effects were attributable to interventions that better defined the scope of review, placed clear expectations on timelines or sought to achieve mutual acceptance of ethics review outcomes. Adverse effects were mostly caused by governance interventions that unintentionally added an extra layer of bureaucracy or were developed without full consideration of the broader clinical trial approval system. Governments have an opportunity to enhance clinical trial activity with interventions targeting ethics reviews and governance processes but must be aware that some interventions can have an adverse impact.
Introduction
Randomised controlled clinical trials are gold standard research investigations designed to generate high-quality data about ways to prevent, detect or treat medical conditions (NHMRC National Health and Medical Research Council, Australian Clinical Trials, 2021). If done well, the evidence that derives from clinical trials forms the basis for the implementation of new health interventions, clinical guidelines and government policy. Clinical trials have also become important sources of employment and external investment for some jurisdictions (DOH Department of Health, 2021), as well as providing a means for the community to access novel therapies earlier.
The regulation and governance of clinical trials has evolved in a piecemeal fashion in most jurisdictions and the responsibilities of different parties are often poorly defined. Processes may be overlapping, bureaucratic and highly varied across clinical sites, requiring reduplication of effort, enormous resources, and extended timelines. A 2013 Government of Australia review found that 'Australia has become one of the most expensive locations for clinical trials in the world and is inefficient in ethics approvals and governance processes' (McKeon et al., 2013). The effect of overlapping and bureaucratic approval processes for clinical trials can prevent researchers accessing new medicines for evaluation, reduce investment in the health sector and cost lives. In Australia, for example, regulatory delay is estimated to be the cause of up to 60 premature deaths each year in oncology patients because research is slowed and patient access to novel therapies is delayed (Whitney and Schneider, 2011). Similarly, a UK study found that delays in approving studies frequently stretched to over a year, with extended and inefficient use of trial coordinator time being borne by studies (Hackshaw et al., 2008). And in Japan, Konishi et al. highlighted the example of a medical device that was required to have a Japanese trial arm added, resulting in 4 years' delay of device approval compared with US timelines (Konishi et al., 2018).
Ethics review and governance have been the target of multiple government interventions designed to increase clinical trial activity (Zhang et al., 2015;Kong, 2007;Madhani, 2010;Sarma and Manisha, 2018;Srinivasan, 2009). Ethics review describes the formal evaluation of the moral grounding of the proposed research project and governance the processes used by institutions to ensure that they are accountable for research conducted under their auspices. In general, interventions have attempted to simplify and harmonise ethics and governance systems and while some interventions have been successful (Konishi et al., 2018), others have not (Berge et al., 2015). The objective of this paper was to systematically collate and summarise evidence describing the effects of interventions that have sought to increase clinical trial activity by reforming ethics review or governance processes.
Methods
This systematic review was conducted in accordance with the Cochrane Handbook for Systematic Reviews of Interventions (Higgins et al., 2021). The guiding question was: 'What are the effects of government actions targeting ethics or governance processes on clinical trial activity?' The protocol was registered with the International Prospective Register of Systematic Reviews (PROSPERO) under registration number CRD42020191510 as the slightly broader question 'What are the effects of government actions on clinical trial activity?'. Other government actions such as tax credits or funding initiatives will be addressed in a separate publication, due to the large number of retrieved studies, which made reporting in one manuscript infeasible.
Search strategy. The search strategy was developed in consultation with the UNSW Library research service, where key search terms were identified ('clinical trials' and 'public policy' as free-text keywords). These terms were combined using the Boolean operator 'AND' to complete searches of the PubMed, Scopus, Sage, ProQuest and Google Scholar databases. This was followed by a search of the internet for grey literature done using the same terms in the search engine Google. Finally, a hand search of the references of all included reports was done. No time or language restrictions were placed on the search parameters.
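As a purely illustrative example (not part of the original search workflow), the combined query could be submitted programmatically to PubMed through the public NCBI E-utilities endpoint, as sketched below.

```python
# Illustrative sketch only: the review combined the free-text keywords with the
# Boolean operator AND across several databases. This example submits the same
# query to PubMed via the public NCBI E-utilities esearch endpoint.
import requests

query = '"clinical trials" AND "public policy"'
resp = requests.get(
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
    params={"db": "pubmed", "term": query, "retmode": "json", "retmax": 20},
    timeout=30,
)
resp.raise_for_status()
ids = resp.json()["esearchresult"]["idlist"]
print(f"{len(ids)} PubMed IDs retrieved:", ids[:5])
```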
The reports identified from the searches of PubMed, Scopus, Sage and ProQuest were exported to Covidence, which automatically removed duplicate entries. The reports identified from Google Scholar were exported to Publish or Perish. The Google search engine results, as well as the reports identified from the hand searches of reference lists, were recorded in an Excel spreadsheet and duplicates were excluded by hand.
Study inclusion criteria. Studies were eligible for inclusion if they (1) reported on a policy intervention of interest (ethics review or governance process); (2) provided some report on the impact of the intervention; and (3) the intervention was implemented by a national or sub-national jurisdiction. Studies that analysed a jurisdiction's clinical trial sector or the laws and regulations that contributed to ethics review and/or governance processes but did not report on the effects of a specific intervention were excluded. 'Governance processes' were taken to include all approvals necessary for a trial to be initiated at a site, except ethics evaluation processes. These might include site contracts, regulatory submissions and site-required initiations. Studies that identified the implementation of an eligible intervention but failed to report on an outcome of interest were recorded in the listings but noted to have missing outcome data.
Study selection. Two authors (SC and ER) independently screened all potentially eligible studies. For the studies identified from PubMed, Scopus, Sage and ProQuest this comprised an initial review of titles and abstracts, with review of the full text articles done only for those that passed initial screening. For the studies identified from Google Scholar and using the Google search engine the screening was a single-step process. Where one reviewer included or excluded a study in contradiction to the second reviewer, the study was discussed and consensus was reached about whether it was eligible.
Data extraction. Two authors (SC and ER) independently extracted data from each eligible study into separate copies of the same spreadsheet. Once both authors had completed the data extraction process every item of data was compared and discrepancies were reconciled by discussion. The study characteristics extracted were country, year of publication, intervention (ethics or governance), impact of each intervention on outcomes of interest (number of trials, expenditure on trials, other assessment of impact).
Quality assessment. As these were intervention studies, the quality of each was assessed using four parameters, as advised by the Cochrane Handbook for Systematic Reviews (Higgins et al., 2021). The four parameters were: confounding bias, which arises when there are systematic differences in the care provided to the experimental intervention and comparator groups that represent a deviation from the intended interventions; selection bias, which arises when later follow-up is missing for individuals initially included and followed, or when individuals are excluded because of missing information about intervention status or other variables such as confounders; information bias, introduced by either differential or non-differential errors in the measurement of outcome data; and reporting bias, representing selective reporting of results from among multiple measurements of the outcome, analyses or subgroups in a way that depends on the findings.
Outcomes. The primary outcome of interest was the number of clinical trials. Secondary outcomes were financial impact and community access to quality healthcare. Community access to quality healthcare was discontinued as an outcome since there was little reporting on it. 'Financial impact' was measured by expenditure on clinical trials, which was defined as funding for trial activity from any source, although most data related to expenditure on trials by multinational healthcare companies. For both the primary and secondary outcomes the effects were reported as positive, null or adverse.
Data synthesis. Outcome data about effects on the number of trials, expenditure on trials and other outcomes were described inconsistently and using different metrics across studies. To enable the effects of interventions on each outcome to be summarised, the effect of each intervention on each outcome was documented as positive (when a favourable impact was identified and the number of trials or expenditure on trials increased), null (when no impact on the number of trials or expenditure on trials was identified), adverse (when a negative effect on the number of trials or expenditure on trials was identified) or missing. The number of studies reporting each form of outcome was summarised and presented in tabular and graphical formats.
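A minimal sketch of this tabulation step is shown below using pandas; the rows are illustrative placeholders rather than the extracted study data.

```python
# Minimal sketch of the synthesis step: code each intervention's effect on an
# outcome as positive, null or adverse and tabulate counts by intervention type.
# The records below are illustrative placeholders, not the extracted data.
import pandas as pd

records = [
    {"intervention": "ethics",     "effect": "positive"},
    {"intervention": "ethics",     "effect": "null"},
    {"intervention": "governance", "effect": "positive"},
    {"intervention": "governance", "effect": "adverse"},
    {"intervention": "governance", "effect": "positive"},
]

df = pd.DataFrame(records)
summary = df.groupby(["intervention", "effect"]).size().unstack(fill_value=0)
print(summary)
```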
Results
Identified studies. There were a total of 1455 potentially relevant reports identified in the database searches (Fig. 1). One hundred and fifty-four reports were retrieved from peer-reviewed databases and examined in Covidence. 9820 were identified from Google Scholar and the first 980 (10%) were exported to Publish or Perish for title and abstract review. The first 100 titles reviewed yielded 20 studies for full text review, with this number continually diminishing to only 3 studies in the last 80 reviewed (Supplementary Appendix 1). An additional 200 reports were identified from the Google search engine and were similarly reviewed and recorded in Excel. One hundred and seventy-four of these reports were identified as potentially relevant and their bibliographies were reviewed, resulting in an additional 94 potentially relevant reports. The bibliographies of these 94 reports were then examined and a further 27 potentially relevant reports were identified for review. In total 295 reports were deemed relevant for full text review, with 257 excluded as failing to meet the inclusion criteria. This left 38 reports with data describing 45 distinct interventions. Fourteen of these reports were published in the last 5 years, 10 between 5 and 10 years ago and 14 more than 10 years ago (Supplementary Appendix 2).
After conducting the quality assessment of the included papers and accounting for the potential confounding bias associated with before-and-after studies, as well as the results of the selection, information and reporting bias assessments, we conclude that the overall quality of evidence was fairly low.
All reports were some form of 'before-after comparison', mostly with little formal description of methodology. The background settings within which the different interventions were tested varied considerably across the studies.
Characteristics of the interventions and the available outcome data. Of the 45 interventions identified, 13 targeted ethics review and 32 targeted governance processes (Table 1). The interventions were distributed across 12 countries and jurisdictions (Fig. 2). The country with the most interventions was India (1 ethics and 8 governance) followed by the UK (2 ethics and 6 governance). There were no interventions identified in Latin America or the Middle East. Only one intervention was identified for Europe though there were four reports about different aspects of that initiative.
The 13 ethics interventions comprised 4 interventions based on a single application model, 3 based on a mutual acceptance of review model, 2 based on the implementation of guidelines to standardise the application format, 2 based on streamlined approval and 2 others. The 32 governance interventions were 13 attempts to implement regulatory changes, 4 to implement a coordinating centre, 4 based upon a single application, 3 based on scope guidelines, 3 based on streamlining of the approval process and 5 others (Table 2).
There were 39/45 interventions for which there was a positive, null or adverse effect identified. The other 6 studies reported on the intervention form only (2 ethics (Care ACoSaQiH, 2020; Thompson, 2014) and 4 governance (Thompson, 2014; Madhani, 2010; Mani, 2006; Care ACoSaQiH, 2020)), with no data on impact provided. Among the 39 interventions for which an outcome was recorded there was reporting on numbers of clinical trials for 38 (11 for ethics and 27 for governance) and expenditure on clinical trials for 5 (0 for ethics and 5 for governance).
Effects of interventions targeting ethics reform. Of the 11/13 attempts to reform ethics systems for which outcome data were available, 6 were positive (Care ACoSaQiH, 2020; Sarma and Manisha, 2018), four were null (Zannad et al., 2019; Industry CDo., 2011; Evans and Zalcberg, 2016; Kong, 2007) and one was adverse (Warlow, 2005) (Table 3). The positive effects were mostly derived from interventions that implemented 'scope guidelines', placed 'defined timeline' expectations on review processes or established 'mutual acceptance' of review outcomes across ethics committees. For the four interventions reporting null effects this was attributed primarily to the interventions being of sound design but not being delivered with the fidelity intended (Zannad et al., 2019; Industry CDo., 2011; Evans and Zalcberg, 2016; Kong, 2007). For example, the lack of enabling technology or infrastructure meant that the impact of the reforms was muted (Zannad et al., 2019). The adverse effect of an ethics intervention (Warlow, 2005) was observed in the United Kingdom and was attributed to the introduction of a new submission format, which researchers found time-consuming to complete and which ethics committees were incompletely equipped to assess. The defining characteristics that led to this negative result were a single centralised application process and an inadequate consideration of the wider research environment (Table 4).
Effects of interventions targeting governance reform. Of the 28/32 interventions targeting governance reform for which outcome data were available, seventeen (ATIC Australian Trade and Investment Commission, 2018; Caulfield, 2001; Zhang et al., 2015; Kong, 2007; Mossialos et al., 2016; Choudhury and Saberwal, 2019; Srinivasan et al., 2009; McGee, 2006; Srinivasan, 2009; Ippoliti and Falavigna, 2014; Konishi et al., 2018; Chen, 1998; Haffner, 1994; Care ACoSaQiH, 2020; Sarma and Manisha, 2018) were positive, three were null (Fudge et al., 2010; Industry CDo., 2011) and eight were adverse (Van Oijen et al., 2017; Reith et al., 2013; Berge et al., 2015; Newman et al., 2016; Ikegami and Campbell, 1999; Warlow, 2005) (Table 3). The positive effects were mostly derived from two intervention strategies that overlapped with those effective in ethics review reform ('scope guidelines' and 'defined timelines'). Scope guidelines limited the number of ambiguities in the process and fixed timelines held review bodies to defined schedules. Additionally, the introduction of 'co-ordinating bodies' that facilitated the governance review process across the various responsible organisations in a jurisdiction also delivered positive outcomes. Once again, the null governance interventions were considered primarily to be a consequence of failure to achieve uptake of the intervention as planned, rather than the intervention format being fundamentally flawed. The eight governance interventions reporting adverse effects were mostly initiatives based upon standardised protocols that were too prescriptive or resulted in duplication of effort. The European Union Clinical Trials Directive, for example, was intended to standardise governance processes with legislated EU-wide regulations.
Ultimately the Directive was legislated in many countries but with differences across jurisdictions. The consequence was that multicountry clinical trials were required to understand and adhere to multiple different criteria across European Union sites with significant adverse implications for timelines and resources (Reith et al., 2013). The Directive was an example that contained each of the characteristics common amongst the negative results (i.e., a single centralised application process, inadequate consideration of wider research environment as well as a focus on retention of local control - Table 4).
Discussion
Governments have a clear opportunity to enhance clinical trial activity with interventions targeting ethics review and governance processes. However, the form of both ethics and governance interventions needs to be selected carefully to ensure they are effective. For both sets of interventions there were multiple examples of failures whereby no impact was achieved, and this appears mostly to have occurred because the interventions, while well-conceived, were not delivered as planned. There were also several examples of interventions that actually impeded clinical activity because the implemented interventions were not well designed (Table 4). Interest in efficient clinical trial processes is increasing as governments around the world seek to capture the health and economic benefits of foreign and domestic research investment in their jurisdictions. India's share of the global clinical trials market, for example, grew from 0.9 per cent in 2008 to 5 per cent in 2013 and China has experienced similar expansion as those countries took advantage of their large populations, rapidly developing workforce, and relatively low cost of business. At the same time, the share of clinical trial activities in the United States and other developed countries has been declining (Mondal and Abrol, 2015), spurring these more established markets to re-examine their own policy settings in an effort to retain valuable business.
Two forms of government intervention that were identified as more likely to be effective for both ethics and governance reform were the introduction of 'scope guidelines' and 'defined timelines'. The former seeks to place clear boundaries around the breadth of the assessment required to be done by the ethics or governance agency and thereby achieve focus on the key actions required. Scope guidelines introduced in India in the early 2000s were credited with defining ambiguous topics and demarcating the responsibilities of sponsors, ethics committees and investigators, which resulted in enhanced throughput and increased numbers of approved trials (Sarma and Manisha, 2018). 'Defined timeline' interventions were primarily about placing clear targets on the acceptable maximum duration of each step in the passage of clinical trials through approval processes, with accompanying reporting on the timelines achieved. There is the potential that these amendments might erode or decrease the quality of decisions made, and efforts by the Indian government have been criticised on these grounds (Barnes et al., 2018). 'Mutual acceptance' interventions were also effective as an ethics reform measure (Care ACoSaQiH, 2020) and there was some evidence that the establishment of a central 'co-ordinating body' for the support of governance approval could bring benefits. The 'co-ordinating body' approach is importantly different to the 'single application' strategy, with the former seeking to facilitate governance processes across multiple entities, rather than trying to centralise all processes in a single body. A theme central to multiple interventions was the intended reduction of administrative burden. In general, this was viewed as a positive objective and, where achieved, was associated with positive outcomes. However, unintended effects sometimes resulted when programmes were not implemented as anticipated. The European Union sought to harmonise member state administration processes through the European Union Clinical Trials Directive (2001/20/EC) (European Union, Directive 2001/20/EC, 2001). Contrary to expectations, between 2003 and 2007, the average time from protocol finalisation to initiation of recruitment increased from 144 days to 178 days, rather than declining (Berge et al., 2015). Investigation revealed that in multiple jurisdictions Directive initiatives were layered on top of existing regulations rather than replacing them, because local ethics and governance bodies proved unwilling to divest responsibility to the Directive. This resulted in a more complex, variable and onerous system for clinical trialists to negotiate, which was exactly opposite to the goal intended. It was for these reasons that the Directive was repealed by Regulation 536/2014 (European Union, 2014). The United States embarked on a similar effort to streamline the ethics review for multisite clinical trials (HHS UDoHaHS, 2017), which has left some commentators doubtful that the centralisation of the review process will allow ethics committees to guarantee the protection of research participants (Tusino and Furfaro, 2021).
As with the interventions that had adverse effects, interventions that had no effect also carried a cost: while clinical trial activity was not reduced, there was an opportunity cost for each. The European and Developing Countries Clinical Trials Partnership and World Health Organization efforts to improve the administration of clinical trials throughout Africa are an example of a resource-intensive intervention with null effects. Poor clinical research infrastructure and suboptimal access to technology (Zannad et al., 2019) were identified as the primary causes of project failure. The engagement of all relevant parties and a system-wide approach to enhancing clinical trial activity appears to be another factor important to success. There are several well-documented instances where one part of the system acting alone to introduce enhancements resulted in an adverse outcome. On several occasions, processes introduced to improve patient safety or patient rights (Ikegami and Campbell, 1999; Newman et al., 2016; Kwon and Jung, 2018), while having laudable objectives, failed the clinical trial system because of insufficient consultation. A requirement implemented in Japan without adequate consultation, under which physicians alone could obtain consent for trial participation, did little to improve the quality of information received by trial participants but became a major new barrier to recruitment (Ikegami and Campbell, 1999). By contrast, 'scope guidelines' implemented in Japan following negotiation amongst researchers, ethics and governance bodies were deemed highly effective at removing ambiguities and accelerating review processes (Nakamura, 2003). In this latter case the full engagement of health administrators, researchers and research coordinators in a whole-of-system approach to the reforms was deemed central to their success. Governance interventions focused on retaining local administrative control were relatively common and were frequently associated with reduced clinical trial activity (Van Oijen et al., 2017; Warlow, 2005; Hackshaw et al., 2008; Haynes et al., 2010; Hudson et al., 2016).
While most countries enacted ethics or governance process changes to improve efficiency and reduce regulatory burdens, some countries utilised regulatory changes to implement powerful one-off interventions. China, for example, now requires that companies wishing to market their product in China include a given number of participants recruited locally within their clinical trial programmes (Zhang et al., 2015; Kong, 2007; Mossialos et al., 2016). The mandated inclusion of local study participants has likely been an important part of the decision by many large international companies to establish or grow their presence in China. In India, it was indirect action on the reform of national intellectual property safeguards that was central to encouraging foreign companies to establish a presence in India and do more local clinical trials (McGee, 2006).
Strengths and limitations.
This review benefitted from the broad and systematic search of the literature undertaken to capture all relevant information. The algorithms used by internet search engines can weight results towards user characteristics such as geography and language, and this may have worked against the detection of reports from countries such as China and Korea; these are two markets that have significant clinical trial activity and have implemented significant reforms, but for which relatively few search results were returned. Additionally, most of the included studies were set in English-speaking jurisdictions, and this may have been due to the exclusive use of English search terms and the algorithms. It is also possible that the search results were influenced by publication bias, which it was not possible to formally test for, given the limited data available across the constituent studies. Detail about the forms of intervention and nature of the evaluations was frequently sparse, and categorising the interventions and outcomes was consequently difficult. For example, many studies referred only obliquely to 'regulatory reforms', meaning that large numbers of interventions were categorised non-specifically as 'regulatory changes'. The inclusion of grey literature ensured that more relevant data were included, but the quality of reporting was more varied, and this presented analytic challenges (Reith et al., 2013). It was also not possible to search every possibly relevant result returned from the grey literature searches because of the very large numbers. The standardised and duplicated extraction of information from the identified reports served to maximise the quality of the available data, and the semi-quantitative approach to summarising information nonetheless provided clearer insights than are possible from even a high-quality narrative review approach (Care ACoSaQiH, 2020). The studies came from only a small number of jurisdictions that are not representative of the globe, though there was a mix of higher- and lower-income countries included. Additionally, the study design may have omitted various interventions that did not include evaluations of the impact on numbers of clinical trials or relevant expenditures (such as legal acts). As such there is some uncertainty about the extent to which the main conclusions are generalisable across other countries, though it seems likely that key themes such as the reduction of bureaucracy and the need for effective implementation of selected interventions will be common across jurisdictions. Table 4 attempts to identify characteristics common to positive and adverse interventions despite these differences in culture, levels of development, health infrastructure and population types (Table 5).
Conclusion
Our data show that governments can pursue clinical trial reform programmes targeting ethics and governance processes with a reasonable expectation of increasing clinical trial activity and expenditure. Where governments achieve greater clinical trial activity there is also a reasonable expectation that the research sector, the health system, the community, and the economy will benefit and there is a high likelihood that the costs of reform processes will be offset. There is, however, also a clear risk that incompletely implemented reforms will fail and that poorly conceived programmes will make processes more onerous and reduce clinical activity.
Data availability
All data generated or analysed during this study are included in this published article and its supplementary information files. | 2023-02-23T14:31:52.053Z | 2022-08-08T00:00:00.000 | {
"year": 2022,
"sha1": "7f0dfeef25669ac03f37f8faff29c8055ca5b35a",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41599-022-01269-3.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "7f0dfeef25669ac03f37f8faff29c8055ca5b35a",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": []
} |
234679004 | pes2o/s2orc | v3-fos-license | Advances in Machine and Deep Learning for Modeling and Real-time Detection of Multi-Messenger Sources
We live in momentous times. The science community is empowered with an arsenal of cosmic messengers to study the Universe in unprecedented detail. Gravitational waves, electromagnetic waves, neutrinos and cosmic rays cover a wide range of wavelengths and time scales. Combining and processing these datasets that vary in volume, speed and dimensionality requires new modes of instrument coordination, funding and international collaboration with a specialized human and technological infrastructure. In tandem with the advent of large-scale scientific facilities, the last decade has experienced an unprecedented transformation in computing and signal processing algorithms. The combination of graphics processing units, deep learning, and the availability of open source, high-quality datasets, have powered the rise of artificial intelligence. This digital revolution now powers a multi-billion dollar industry, with far-reaching implications in technology and society. In this chapter we describe pioneering efforts to adapt artificial intelligence algorithms to address computational grand challenges in Multi-Messenger Astrophysics. We review the rapid evolution of these disruptive algorithms, from the first class of algorithms introduced in early 2017, to the sophisticated algorithms that now incorporate domain expertise in their architectural design and optimization schemes. We discuss the importance of scientific visualization and extreme-scale computing in reducing time-to-insight and obtaining new knowledge from the interplay between models and data.
Introduction
This chapter provides a summary of recent developments harnessing the data revolution to realize the science goals of Gravitational Wave Astrophysics. This is an exciting journey that is powered by the renaissance of artificial intelligence, and a new generation of researchers that are willing to embrace disruptive advances in innovative computing and signal processing tools.
In this chapter, machine learning refers to a class of algorithms that can learn from data to solve new problems without being explicitly re-programmed. While traditional machine learning algorithms, e.g., random forests, nearest neighbors, etc., have been used successfully in many applications, they are limited in their ability to process raw data, usually requiring time-consuming feature engineering to preprocess data into a suitable representation for each application. On the other hand, deep learning algorithms can learn patterns from unstructured data, finding useful representations and automatically extracting relevant features for each application. The ability of deep learning to deal with poorly defined abstractions and problems has led to major advances in image recognition, speech, computer vision applications, robotics, among others [1].
The following sections describe a few noteworthy applications of modern machine learning for gravitational wave modeling, detection and inference. It is the expectation that by the time this chapter is published, the ongoing developments at the interface of artificial intelligence and extreme-scale computing will have leapt forward, making this chapter a reminiscence of a fast-paced, evolving field of research. The chapter concludes with a summary of recent applications at the interface of deep learning and high performance computing to address computational grand challenges in Gravitational Wave Astrophysics.
Machine learning and numerical relativity for gravitational wave source modeling
One of the first examples of gravitational wave source modeling was introduced by Einstein. He derived an approximated version of his field equations [2] to confirm that general relativity accurately predicts the precession of the perihelion of Mercury [3]. Shortly after Einstein published his general theory of relativity, Karl Schwarzschild found an exact solution to Einstein's field equations, known as the Schwarzschild metric [4]. This analytical solution describes the gravitational field outside of a spherical mass that has no charge and no spin, under the assumption that the cosmological constant is zero. Soon afterwards, Reissner and Nordström derived an analytical solution that describes the gravitational field exterior to a charged, nonspinning spherical mass [5]. Nearly five decades later, and with the understanding that these metrics describe the gravitational field outside black holes, Roy Kerr discovered the analytical solution that describes uncharged, spinning black holes [6]. Shortly thereafter, the Kerr metric was extended to the case of charged, spinning black holes-the Kerr-Newman metric [7].
While these analytical solutions provided tools to extract new insights from general relativity, there were astrophysical scenarios of interest that required novel approaches. The development of approximate solutions to Einstein's equations, such as the post-Newtonian [8] and post-Minkowskian formalisms [9], provided a better understanding of gravitationally bound systems like neutron star mergers that were prime targets for gravitational wave detection. Still, a detailed study of gravitational wave emission in the strong, highly dynamical gravitational field of black hole mergers required a complete numerical solution of Einstein's field equations. In the late 1990s, and after decades of mathematical and numerical developments, the Binary Black Hole Grand Challenge Alliance, funded by the US National Science Foundation, successfully simulated a head-on binary black hole collision [10]. The first successful evolution of binary black hole spacetimes, including calculations of the orbit, merger, and gravitational waves emitted, was reported in [11]. Afterwards, other numerical relativity teams reported similar accomplishments [12,13].
Within a decade, numerical relativity matured to the point of harnessing high performance computing with mature software stacks to study the physics and gravitational wave emission of binary black hole mergers covering a wide range of astrophysical scenarios of interest [14–17]. These resources have been used extensively to develop semi-analytical waveform models that describe the inspiral-merger-ringdown of binary black hole mergers [18–20] and to inform the design of algorithms for gravitational wave detection [21].
In time, the need to produce accurate and computationally efficient waveform models became apparent. This need has led to the adoption of surrogate models [23–26] and traditional machine learning techniques, such as Gaussian emulation [22,27]. The development of fast waveform generators has had a significant impact on gravitational wave parameter estimation studies. This is because Bayesian parameter estimation that utilizes Markov-chain Monte Carlo requires O(10⁸) waveform evaluations. Surrogate waveforms are tailored for this task, since each waveform may be produced within 50 ms with typical mismatches of order 10⁻³ with other state-of-the-art waveform approximants [23].
Fig. 1 Interpolation error, σ, of a Gaussian Process Emulator on the amplitude (top panel) and phase (bottom panel) for three training datasets that describe quasi-circular, non-spinning binary black hole mergers: D^1_α (red), D^2_α (green), and D^3_α (blue). Each numerical relativity waveform in these sets has n = 2800 time samples. The (symmetric) mass ratio values for the points in Q_1, Q_2 and Q_3 are indicated respectively by red, green, and blue dots along the central horizontal panel. Each iteration of the training set reduces the error in the amplitude and phase interpolations. This figure was produced by the authors of this chapter in the published article [22].
Large-scale numerical relativity waveform catalogs have also been used as datasets to train Gaussian emulators to produce highly accurate modeled waveforms in record time [22]. It is also known that traditional machine learning methods, such as surrogate models or Gaussian emulation, present several challenges when trying to organize data in high-dimensional spaces [28]. In stark contrast, deep neural networks excel at learning high-dimensional data and provide additional advantages such as improving convergence and performance when combined with extreme-scale computing.
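To make the emulation idea concrete, the sketch below trains a Gaussian process regressor on a toy amplitude-versus-mass-ratio relation with scikit-learn; it is a simplified stand-in for the published emulators trained on numerical relativity catalogs, and the data are synthetic.

```python
# Minimal sketch of Gaussian process emulation of a waveform quantity (here a toy
# "amplitude" as a function of mass ratio). Synthetic data stand in for the
# numerical relativity catalogs used by the published emulators.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)
q_train = np.linspace(1.0, 10.0, 15).reshape(-1, 1)          # mass ratio samples
amp_train = 1.0 / q_train.ravel() + rng.normal(0, 1e-3, 15)  # toy amplitude

gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0) + WhiteKernel(1e-6),
                              normalize_y=True)
gp.fit(q_train, amp_train)

q_test = np.array([[3.7], [7.2]])
mean, std = gp.predict(q_test, return_std=True)
print("predicted amplitude:", mean, "+/-", std)
```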
While rapid progress has taken place in the modeling of binary black hole mergers, matter systems such as binary neutron star mergers and black hole-neutron star mergers continue to present significant challenges [29]. It is expected that numerical relativity will meet these challenges within the next few years with the production of open source numerical relativity software that will bring together experts across the community. The adoption of deep learning to accelerate the description of physics that requires sub-grid scale precision, such as turbulence, is also in earnest development [30].
In summary, numerical relativity has played a key role in the development of waveform models and signal processing algorithms that enabled the discovery of gravitational waves. Numerical relativity and machine learning have been combined to produce modeled waveforms for production-scale gravitational wave analyses. We also see a new trend in which numerical relativity and deep learning have been combined for waveform production at scale. This approach may well provide a solution for high-dimensional signal manifolds. Another exciting trend is the use of deep learning to replace compute-intensive modules in numerical relativity software that describes matter systems.
Machine learning for gravitational wave data analysis
In this section we review machine learning applications in the context of parameter estimation, rapid characterization of compact binary remnants, and signal denoising. Parameter Estimation Machine learning applications for gravitational wave inference have been developed to overcome the computational expense and poor scalability of established Bayesian approaches. Estimating the posterior probability density functions of astrophysical parameters that describe gravitational wave sources is a computationally intensive task. This is because these systems span a 15-D parameter space, thereby requiring a large number of modeled waveforms to densely sample this signal manifold. In the previous section we discussed the importance of harnessing machine learning methods to produce modeled waveforms at scale. In addition to massive waveform generation, low to moderate signal-to-noise ratios and complex noise anomalies may require additional follow-up studies, thereby demanding additional computational resources. In order to mitigate these computational challenges, parameter estimation algorithms such as LALInference [31] and PyCBC Inference [32] use nested sampling [33] or Markov Chain Monte Carlo [34]. These techniques usually take days to weeks to produce posterior samples of gravitational wave sources' parameters. Thus, it is timely and relevant to explore new approaches to further reduce time-to-insight. This is ever more urgent as the international network of gravitational wave detectors continues to increase its sensitivity, thereby increasing the detection rate of observed events.
Machine learning solutions to accelerate parameter estimation include Gaussian process emulation [35], nested sampling [36], and nested sampling combined with neural networks [37]. In the latter case, likelihood calculations are accelerated by up to 100x for computationally demanding analyses that involve long signals, such as neutron star systems.
Rapid characterization of compact binary remnants
Multi-Messenger searches demand real-time detection of gravitational wave sources, accurate sky localization, and information regarding the nature of the source, in particular whether the progenitor may be accompanied by electromagnetic or neutrino counterparts. To address the latter point, i.e., to ascertain whether the remnant is a neutron star, or whether in the case of a neutron star-black hole merger the black hole remnant is surrounded by an accretion disk of tidally disrupted material from the neutron star, it may be possible to use information provided by low-latency detection algorithms. However, it is known that these estimates may differ from accurate but hours- to days-long parameter estimation studies. To start addressing these limitations, a supervised nearest neighbors classification method was introduced in [38]. This method infers, in a fraction of a second, whether a compact binary merger will have an electromagnetic counterpart, thus providing time-critical information to trigger statistically informed electromagnetic follow-up searches.
On the other hand, the ever-increasing catalog of detected gravitational wave sources provides the means to infer the mass and spin distributions of stellar mass compact binary systems. These studies will shed new light on the stellar evolution processes that may lead to the formation of these astrophysical objects, or on whether these objects are formed from a mixture of different populations [39]. Agnostic studies that involve Gaussian mixture models are ideally suited to enable data-driven analyses [40].
Signal denoising
Gravitational wave signals are contaminated by environmental and instrumental noise sources that are complex to model and difficult to remove. Several methods have been explored for data cleaning and noise subtraction, ranging from the basic Wiener filter that optimally removes linear noise [41] to machine learning applications that can effectively remove linear, non-linear, non-Gaussian and non-stationary noise contamination [42,43]. Machine learning methods employed to remove noise from gravitational wave signals include Bayesian methods [44], dictionary learning [45,46], principal component analysis [47], and total-variation methods based on L1-norm minimization techniques that were originally developed in the context of image processing, but were subsequently adapted to clean signals in the time and frequency domains [48].
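A minimal sketch of L1 total-variation denoising for a 1-D time series, in the spirit of the methods cited in [48]; this is a crude subgradient scheme on a toy signal, not the production algorithm applied to detector data.

```python
# Minimize 0.5*||x - y||^2 + lam * sum_i |x[i+1] - x[i]| by subgradient descent.
import numpy as np

def tv_denoise(y, lam=0.5, step=0.1, n_iter=500):
    x = y.copy()
    for _ in range(n_iter):
        grad = x - y                          # gradient of the data-fidelity term
        d = np.sign(np.diff(x))               # subgradient of the TV (L1) term
        grad[:-1] -= lam * d                  # d/dx_k of |x[k+1] - x[k]|
        grad[1:] += lam * d                   # d/dx_k of |x[k] - x[k-1]|
        x -= step * grad
    return x

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 400)
clean = np.sin(2 * np.pi * 5 * t) * np.exp(-3 * t)    # toy damped oscillation
noisy = clean + 0.3 * rng.normal(size=t.size)
denoised = tv_denoise(noisy)
print(f"rms error: {np.sqrt(np.mean((denoised - clean) ** 2)):.3f}")
```

The regularization weight `lam` trades smoothness against fidelity; production analyses tune it against the detector noise level.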
This succinct summary of traditional machine learning applications in gravitational wave astrophysics suggests that part of this community has been engaged in harnessing advances in computing and signal processing to address computational grand challenges in this field. The following section shows how this process has been accelerated with the rise of artificial intelligence from the early 2010s.
Deep Learning for gravitational wave data analysis
Gravitational wave data analysis encompasses a number of core tasks, including detection, parameter estimation, data cleaning, glitch classification and removal, and signal denoising. In this section we present a brief overview of the rapid rise of artificial intelligence for gravitational wave astrophysics.
Detection
Existing algorithms for signal detection include template matching, where the physics of the source is used to inform the classification of noise triggers, and to identify those that describe gravitational wave sources [49,50]. For other events, where the underlying astrophysics is unknown or too complex to capture in modeled waveforms, searches take advantage of burst methods, which make minimal assumptions about the morphology of gravitational wave signals [51]. Continuous wave sources, such as isolated neutron stars, emit signals that, although well known, are very weak and long. This combination makes their search and detection very computationally intensive [52].
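The core of template matching can be sketched in a few lines: an FFT-based matched filter that slides one template over one data segment and returns an SNR time series. White noise is assumed here for simplicity; real searches weight the correlation by the detector's power spectral density.

```python
# FFT-based matched filter for a single template in white noise.
import numpy as np

def matched_filter_snr(data, template):
    """Circular cross-correlation of data with template, SNR-normalized."""
    d_f = np.fft.rfft(data)
    h_f = np.fft.rfft(template)
    corr = np.fft.irfft(d_f * np.conj(h_f), n=len(data))
    return corr / np.sqrt(np.sum(template ** 2))   # unit variance under pure noise

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024, endpoint=False)
template = np.sin(2 * np.pi * 20 * t) * np.hanning(t.size)
data = rng.normal(size=t.size) + np.roll(template, 200)  # injection at offset 200
snr = matched_filter_snr(data, template)
print(np.argmax(np.abs(snr)))    # SNR peak should sit near sample 200
```

A production search repeats this over a bank of hundreds of thousands of templates, which is precisely the cost that motivates the deep learning approaches below.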
As mentioned above, as advanced LIGO and the international gravitational wave detector network gradually reach design sensitivity, core data analysis studies will outstrip the capabilities of existing computing facilities if we continue to use poorly scalable and compute-intensive signal processing methods. Furthermore, gravitational wave astrophysics is not the only discipline with an ever-increasing need for computing resources. The advent of other large-scale scientific facilities such as the Square Kilometer Array, the High Luminosity Large Hadron Collider, or the Legacy Survey of Space and Time, to mention a few [53][54][55], will produce datasets with ever-increasing complexity and volume. Thus, a radical approach in terms of computing and signal processing is needed to maximize and accelerate scientific discovery in the big data era.
To contend with these challenges, a disruptive approach that combines deep learning and high performance computing was introduced in [56]. This idea was developed to address a number of specific challenges. To begin with, the size of modeled waveform catalogs used for template matching searches imposes restrictions on the science reach of low-latency searches. Thus, it is worth exploring a different methodology that enables real-time gravitational wave detection without sacrificing the depth of the signal manifold that describes astrophysical sources. It turns out that there is indeed a signal processing tool, deep learning [57], that encapsulates information in a hierarchical manner, bypassing the need to use large catalogs of images or time-series for accelerated inference. The second key consideration in the use of deep learning for gravitational wave detection is the fact that real signals may be located anywhere in the data stream broadcast by detectors. Thus, the neural networks in [56,58] introduced the concept of time-invariance. A third consideration is that there is no way to predict the signal-to-noise ratio of real events. Thus, the methods presented in [56,58] showed how to adapt curriculum learning [59], originally developed in the context of image processing, to do classification or detection of noisy and weak signals. The key idea behind this approach consists of training the model by first exposing it to signals with high signal-to-noise ratio, and then gradually increasing the noise content until the signals become noise-dominated. The combination of the aforementioned innovations led to the realization that neural networks could indeed detect modeled gravitational wave signals embedded in simulated advanced LIGO noise with the same sensitivity as template matching algorithms, but orders of magnitude faster with a single, inexpensive GPU. In addition to these results, the authors in [56] showed how to modify a neural network classifier and use transfer learning to construct a neural network predictor that provides real-time point-parameter estimation results for the masses of the binary components, mirroring a similar capability of established low-latency detection analyses. This seminal work was then extended to the case of real gravitational wave signals in advanced LIGO noise [58]. About a year later, different teams reported similar classification results in the context of modeled signals in simulated LIGO noise [60].
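A minimal sketch of the curriculum schedule described above, assuming a toy 1-D convolutional classifier and a single sine template rather than the published architectures of [56,58]: training batches start loud and are gradually driven toward the noise-dominated regime.

```python
# Curriculum learning: expose the classifier to high-SNR injections first,
# then lower the SNR epoch by epoch.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=16), nn.ReLU(), nn.AdaptiveAvgPool1d(8),
    nn.Flatten(), nn.Linear(16 * 8, 2),          # classes: noise-only vs. signal
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

t = torch.linspace(0, 1, 512)
template = torch.sin(2 * torch.pi * 20 * t)
template = template / template.norm()            # unit-norm, so SNR scales amplitude

for epoch in range(10):
    target_snr = 10.0 - epoch                    # curriculum: SNR 10 -> 1
    labels = torch.randint(0, 2, (64,))
    batch = torch.randn(64, 1, 512)              # unit-variance white noise
    batch += target_snr * labels.float().view(-1, 1, 1) * template
    opt.zero_grad()
    loss = loss_fn(model(batch), labels)
    loss.backward()
    opt.step()
```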
A metric for the impact of the seminal ideas laid out in [56,58] is given by the number of research teams across the world that have reproduced and extended these studies [60][61][62][63][64][65][66][67]. It is also worth mentioning, however, that while these studies demonstrated the scalability, computational efficiency and sensitivity of neural networks for gravitational wave detection, it is still essential to demonstrate the use of these signal processing tools for searches that span a high-dimensional signal manifold, and to apply them to process large datasets.
Deep learning has been applied to the detection of neutron star mergers [68][69][70], forecasting of neutron star inspirals and neutron star-black hole mergers [71,72], continuous wave sources [73][74][75], signals with complex morphology [61], and to accelerate waveform production [76,77]. The rapid progress and maturity that these algorithms have achieved within just three years, at the time of writing this book, suggest that production-scale deep learning methods are on an accelerated track to become an integral part of gravitational wave discovery [78,79].
Signal Denoising
The first deep learning application for the removal of noise and noise anomalies for gravitational wave signal processing was introduced in [80]. This study described how to combine recurrent neural networks with denoising auto-encoders to clean up modeled waveforms embedded in real advanced LIGO noise. The different components of this Enhanced Deep Recurrent Denoising Auto-Encoder (EDRDAE) are shown in Figure 2. To provide optimal denoising performance for low signal-to-noise ratio signals, this model incorporated a signal amplifier layer, and was trained with curriculum learning. Another feature of this model is that while it was originally trained to denoise signals that describe quasi-circular, non-spinning black hole mergers, it was able to generalize to signals that describe eccentric, non-spinning black hole mergers, whose morphology is much more complex than that of the training dataset. This study showed that deep learning approaches outperform traditional machine learning methods such as principal component analysis and dictionary learning for gravitational wave signal denoising [80]. The first application of deep learning for signal denoising and de-glitching of actual gravitational wave observations was introduced in [81]. The model proposed for this analysis consists of a repurposed WaveNet architecture (see the left panel of Figure 3), which was originally developed for forecasting and human speech generation [82]. The data used to train this network consist of one-second-long time-series modeled waveforms, sampled at 8192 Hz, that describe quasi-circular, non-spinning binary black hole mergers. Upon encoding time- and scale-invariance, this model was used to denoise several gravitational wave signals, as shown in the right panel of Figure 3, and to demonstrate its effectiveness at removing noise and glitches from simulated signals embedded in real advanced LIGO noise. This model was also used to denoise quasi-circular, spinning, precessing binary black hole mergers, furnishing evidence for the ability of the model to generalize to new types of signals that were not used in the training stage.
Data cleaning
Recent developments for data cleaning include [83], in which deep learning is applied to gravitational wave detector data and data from on-site sensors monitoring the instrument to reduce the noise in the time-series due to instrumental artifacts and environmental contamination. This approach is able to remove linear, nonlinear, and non-stationary coupling mechanisms, improving the signal-to-noise ratio of injected signals by up to ∼ 20%.
Parameter Estimation
Uncertainty quantification is a rapidly evolving area in deep learning research. Thus, it is natural that a number of methodologies have been investigated to constrain the astrophysical parameters of gravitational wave sources.
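One generic, lightweight uncertainty-quantification technique from the deep learning literature is Monte Carlo dropout; it is sketched below as an illustration only, and is not necessarily among the more sophisticated methods (Bayesian networks, normalizing flows) reviewed in this section.

```python
# Monte Carlo dropout: keep dropout active at inference time and interpret the
# spread over stochastic forward passes as a crude parameter uncertainty.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(512, 128), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(128, 2),                  # e.g., point estimates of two masses
)

x = torch.randn(1, 512)                 # stand-in for a whitened strain segment
model.train()                           # .train() keeps dropout stochastic
with torch.no_grad():
    samples = torch.stack([model(x) for _ in range(100)])
mean, std = samples.mean(dim=0), samples.std(dim=0)
print(mean, std)                        # std is the MC-dropout uncertainty proxy
```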
For instance, in [84], Bayesian neural networks were used to constrain the astrophysical properties of real gravitational wave sources before and after the merger event, showcasing the ability of neural networks to measure the final mass and spin of remnant sources by directly processing real LIGO data. Conditional variational auto-encoders [85] and multivariate Gaussian posterior models [86] have been used to construct posterior distributions of modeled signals embedded in simulated LIGO noise. In [87], the authors introduce the use of auto-regressive normalizing flows for rapid likelihood-free inference of binary black hole mergers that describe an 8-D parameter space. This analysis, originally applied to modeled signals in stationary Gaussian noise, was extended to cover the 15-D parameter space for GW150914 [88]. Deep learning has also been explored to characterize compact binary populations [89].
Deep Learning for the detection and characterization of higher-order waveform multipole signals of eccentric binary black hole mergers
It has been argued in the literature that gravitational wave observations of eccentric binary black hole mergers will provide the cleanest evidence of the existence of compact binary populations in dense stellar environments, such as galactic nuclei and core-collapsed globular clusters [90].
The importance of including higher-order waveform modes for the detection of eccentric binary black hole mergers has been studied in the literature [91]. It has been found that, as in the case of quasi-circular mergers, higher-order modes play a significant role in the detection of asymmetric binary black hole mergers [92]. For instance, Figure 4 shows the increase in signal-to-noise ratio due to the inclusion of higher-order modes for a variety of astrophysical scenarios. These results show that for comparable mass ratio systems, represented by the numerical relativity waveform E0001 (see Table 1), higher-order modes do not alter the amplitude of the ℓ = |m| = 2 mode, thereby having a negligible contribution to the signal-to-noise ratio of these systems. However, for the asymmetric mass ratio systems represented by P0020 and P0024, the inclusion of higher-order modes leads to a significant increase in the signal-to-noise ratio of these systems.
In Figure 4 the signal-to-noise ratio distributions are presented as a function of the source's sky location, (α, β), mapped into a Mollweide projection: (ϑ, ϕ) → (π/2 − α, β − π). The reference frame (θ, φ) is anchored at the center of mass of the binary system, and determines the location of the detector. In this reference frame, θ = 0 coincides with the total angular momentum of the binary, and φ indicates the azimuthal direction to the observer. Furthermore, the top panels in Figure 4 show that the inclusion of (ℓ, |m|) modes significantly modifies the ringdown evolution of ℓ = |m| = 2 waveforms. This finding is in line with studies that indicate the need to include (ℓ, |m|) modes for tests of general relativity using ringdown waveforms [93,94].
Having identified a collection of waveforms in which the inclusion of higher-order modes induces the most significant modification to the ℓ = |m| = 2 mode, corresponding to the maximum gain in signal-to-noise ratio, the authors in [91] injected these signals into simulated and real advanced LIGO noise, and used neural networks to search for them. Their findings are shown in Figure 5. In a nutshell, deep learning models can identify these complex signals, even though they were trained with quasi-circular waveforms. Future studies may explore whether neural networks improve their sensitivity when they are trained with datasets that describe eccentric mergers.
Deep Learning for the characterization of spin-aligned binary black hole mergers
Deep learning has been used to study the properties of the gravitational wave signal manifold that describes quasi-circular, spinning, non-precessing binary black hole mergers [95]. This study explored how neural networks handle parameter space degeneracies, and their ability to measure the individual spins, effective spin and mass ratio of black hole mergers by directly processing waveform signals in the absence of noise.
The model introduced in [95] was trained, validated and tested with ℓ = |m| = 2 waveforms produced with the NRHybSur3dq8 surrogate model; the training dataset is illustrated in Fig 6. The entire data set is ∼ 1.5 TB in size, and mpi4py is used to parallelize data generation.
Neural network architecture
The neural network architecture consists of two fundamental components: a shared root consisting of layers slightly modified from the WaveNet [82] architecture, and two branches consisting of fully connected layers that take in features extracted from the root to predict the mass ratio and the individual spins of the binary components, respectively, as illustrated in Fig 7.
Physics-inspired optimization scheme
The effective one-body Hamiltonian for moderately spinning black holes, which involves the coefficients σ_1 ≡ 1 + 3/(4q) and σ_2 ≡ 1 + 3q/4, and the effective spin parameter were used to train the model and to provide tight constraints on the individual spins s^z_i. The performance of the neural network was assessed by computing the overlap O(h, s) between every waveform in the testing dataset, h(θ_i), with ground-truth parameters θ_i, and the signal s that best describes h according to the neural network model, i.e., s(θ̂_i) with θ̂_i the predicted parameters, using the relation O(h, s) = (h, s)/√((h, h)(s, s)), where (·, ·) denotes the waveform inner product. This study [95] has shown that while vanilla neural networks provide uninformative predictions for the astrophysical parameters of black hole mergers, physics-inspired models provide accurate predictions. Thus, these approaches may be investigated in the context of parameter estimation to further constrain existing measurements for the spin distribution of observed events.
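The overlap diagnostic used above reduces to a normalized inner product in the noise-free, time-domain setting; a minimal sketch follows (production analyses use noise-weighted inner products, and the chirps below are toy stand-ins for surrogate waveforms).

```python
# Overlap between a ground-truth waveform h and the network's best-match waveform s.
import numpy as np

def overlap(h, s):
    return np.dot(h, s) / np.sqrt(np.dot(h, h) * np.dot(s, s))

t = np.linspace(0, 1, 2048)
h = np.sin(2 * np.pi * 30 * t ** 2)      # toy chirp at the true parameters
s = np.sin(2 * np.pi * 30.3 * t ** 2)    # toy chirp at the predicted parameters
print(f"overlap = {overlap(h, s):.4f}")  # 1.0 would mean a perfect match
```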
As in the case of gravitational wave detection, deep learning applications for inference are progressing at a very rapid pace. The extension of existing neural network models to characterize real signals with a broader range of reported signal-to-noise ratios, in particular at the low end, will mark a major milestone on this exciting front.
The importance of developing novel signal processing tools and computing approaches is underscored by the computing needs of established, though poorly scalable and compute-intensive, algorithms, which burned about 500M CPU core-hours in astrophysical searches, follow-up studies and detector characterization analyses during the third observing run. Furthermore, the second observing run indicates that about 10M CPU core-hours of computing were needed for O(10) detected events. In the scenario of a third generation gravitational wave detection network with three interferometers, the number of observed events per year may be of order O(10^3), and thus the computing needs will grow by 3 orders of magnitude. In brief, it is essential to pursue innovation in signal processing tools, computing methodologies and hardware architectures if we are to realize the science goals of gravitational wave astrophysics [97].
Deep Learning for the classification and clustering of noise anomalies in gravitational wave data
While deep learning is now customarily used to extract information from complex, noisy, and heterogeneous datasets, it is worth exploring and removing known sources of noise from experimental datasets. This is particularly relevant in the context of gravitational wave astrophysics, since noise anomalies, or glitches, tend to contaminate and even mimic real gravitational wave signals.
The Gravity Spy project aims to identify, classify and excise instrumental and environmental sources of noise that decrease the sensitivity of ground-based gravitational wave detectors [98]. A sample of glitches classified by Gravity Spy is shown in Figure 9. As citizen science efforts continue to increase the number of glitches classified through Gravity Spy, it may be possible to automate their classification, or to utilize human-in-the-loop machine learning methods. An initial approach for glitch classification was presented in [98]. This method consisted of using the small and unbalanced dataset of Gravity Spy glitches to train a neural network model from the ground up. A method for automatic glitch classification was introduced in [99]. This approach combined deep learning with transfer learning for glitch classification. Specifically, this study showed that models such as Inception, ResNet-50, and VGG that have been pre-trained for real-object recognition using ImageNet may be fine-tuned through transfer learning to enable optimal classification of small and unbalanced datasets of spectrograms of glitches curated by Gravity Spy. This approach provided state-of-the-art classification accuracy, reduced the length of the training stage by several orders of magnitude, and eliminated the need for hyperparameter optimization. More importantly, both ResNet-50 and Inception-v3 achieved a classification accuracy of 98.84% on the test set despite being trained independently via different methods on different splits of the data, and obtained 100.00% accuracy when considering the top five predictions. This means that for any given input, the true class can be narrowed down to within five classes with 100.00% confidence. This is particularly useful, since the true class of a glitch is often ambiguous, even to human experts. Finally, this study [99] also showed that neural networks may be truncated and used as feature extractors for unsupervised clustering to automatically group together new classes of glitches and noise anomalies.
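A sketch of the transfer-learning recipe of [99], assuming a recent torchvision (the `weights=` argument; older versions use `pretrained=True`): start from an ImageNet-pre-trained ResNet-50 and fine-tune a new classification head on glitch spectrograms. The class count and the random tensors standing in for Q-transform images are placeholders.

```python
# Fine-tune an ImageNet-pre-trained ResNet-50 on glitch spectrograms.
import torch
import torch.nn as nn
from torchvision import models

n_glitch_classes = 22                       # hypothetical number of glitch classes
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, n_glitch_classes)

# Freeze the pre-trained backbone; train only the new head.
for name, p in model.named_parameters():
    p.requires_grad = name.startswith("fc")

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

spectrograms = torch.randn(8, 3, 224, 224)  # stand-in for spectrogram images
labels = torch.randint(0, n_glitch_classes, (8,))
opt.zero_grad()
loss = loss_fn(model(spectrograms), labels)
loss.backward()
opt.step()
```

Freezing the backbone is what makes training on a small, unbalanced dataset both fast and stable; unfreezing deeper layers is an optional second stage.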
Other techniques, such as multi-view convolutional neural networks [100] and similarity learning [101], have also been explored to automate glitch classification. Discriminative embedding functions have also been explored to cluster glitches according to their morphology [102]. The use of machine learning to identify glitches by gathering information from environmental and detector data channels has also been reported in [103].
The development of a framework to enable online identification of simulated glitches was introduced in [104]. The wealth of curated data from Gravity Spy may soon enable online data quality studies, providing timely and critical input for low-latency gravitational wave detection and parameter estimation analyses.
Deep Learning for the construction of Galaxy Catalogs in Large Scale Astronomy Surveys to enable gravitational wave standard-siren measurements of the Hubble constant
The previous sections have summarized the state-of-the-art in deep learning applications for gravitational wave astrophysics. Deep learning is also being investigated in earnest to address computational grand challenges for large-scale electromagnetic and neutrino surveys [105].
This section showcases how to combine deep learning, distributed training and scientific visualization to automate the classification of galaxy images collected by different surveys. This work is timely and relevant in preparation for the next generation of electromagnetic surveys, which will significantly increase survey area, field of view, and alert production, leading to unprecedented volumes of image data and catalog sizes. Furthermore, since gravitational wave observations enable a direct measurement of the luminosity distance to their source [106], they may be used in conjunction with a catalog of potential host galaxies to establish a redshift-distance relationship and measure the Hubble constant. This has already been demonstrated in practice with the neutron star merger GW170817, whose electromagnetic counterpart allowed an unambiguous identification of its host galaxy [107]. Compact binary mergers without electromagnetic counterparts, such as the black hole merger GW170814, have been combined with galaxy catalogs provided by the Dark Energy Survey Year 3 [108] to estimate the Hubble constant. Therefore, to enable this type of statistical analysis it is necessary to automate the construction of complete galaxy catalogs [109].
A method to automate galaxy classification was introduced in [109]. The basic idea consists of leveraging the human-labelled galaxy images from the Galaxy Zoo project to fine-tune a neural network model that was originally pre-trained for real-object recognition using ImageNet. As described in [109], this approach not only enabled state-of-the-art classification for galaxies observed by the Sloan Digital Sky Survey (SDSS), but also for Dark Energy Survey (DES) galaxies. A fully trained model can classify about 10,000 test images within 10 minutes using a single Tesla P100 GPU.
Interpretability
While the complexity of deep learning models is a major asset in processing large datasets and enabling data-driven discovery, it also poses major challenges to interpreting how these models acquire knowledge and use said knowledge to make predictions. This challenge has been widely recognized, and novel exploration techniques as well as visualization approaches are required to aid in deep learning model interpretability. Figure 10 presents the activation maps of the second-to-last layer of the model described above for automated galaxy classification, using t-SNE [110]. t-SNE is a nonlinear dimensionality reduction technique that is particularly apt for visualizing high-dimensional datasets by finding a faithful representation in a low-dimensional embedding, typically 2-D or 3-D. These 3-D projections are then visualized at different training iterations, and at the end of the training they neatly cluster into two groups, corresponding to spiral and elliptical galaxies. A scientific visualization of this clustering for the entire Dark Energy Survey test set is presented in [111].
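A sketch of the t-SNE visualization described above: embed penultimate-layer activations into 2-D and inspect the clustering. Random Gaussian features stand in here for the real galaxy-classifier activations of [109,110].

```python
# t-SNE embedding of (stand-in) penultimate-layer activation maps.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
spirals = rng.normal(loc=+1.0, size=(300, 512))      # stand-in "spiral" features
ellipticals = rng.normal(loc=-1.0, size=(300, 512))  # stand-in "elliptical" features
features = np.vstack([spirals, ellipticals])

embedding = TSNE(n_components=2, perplexity=30.0, init="pca").fit_transform(features)
# embedding[:300] and embedding[300:] should separate into two clusters,
# mirroring the spiral/elliptical grouping seen at the end of training.
```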
In summary, methods that have been explored elsewhere for automated image classification, as in the case of glitch classification [99], may be seamlessly applied to galaxy classification [109]. This is an active area in deep learning research, i.e., the development of commodity tools that may be used across disciplines.
Challenges and Open Problems
We have seen quite a few successful examples of how AI can improve the detection, parameter estimation, waveform production, and denoising of gravitational waves in the context of real advanced LIGO data. However, there are still important challenges to be addressed to turn AI into the preferred signal-processing tool for discovery at scale.
One major challenge is the huge computational cost of constructing and updating AI models, including exploration of the model architectures, hyper-parameter tuning, and training of AI models with streaming simulation/experimental data. To improve the convergence of training, we need to develop new initialization and optimization techniques for the network weights. Distributed training is also crucial for processing large simulation and observational data. In the section below, we detail our vision for combining AI and high performance computing for Multi-Messenger Astrophysics. We anticipate that in the future, we will have pre-trained AI models available in large-scale scientific projects, e.g., LIGO, LSST, SKA, etc., for production-scale classification and inference. It is important to incorporate computing at different levels, e.g., edge computing for real-time on-site prediction and cloud computing for updating the models with new training data to adapt to changes in the sensitivity of detectors and in the physical parameter space.
The AI models discussed in this chapter are trained with particular noise statistics. Previous works [71,81,84] have shown that the trained models are robust to small changes in noise statistics. As LIGO and other astronomical observatories continue to enhance their detection capabilities, researchers will uncover new types of noise anomalies that may contaminate or mimic transient astrophysical events. It is then important to develop new unsupervised and semi-supervised learning techniques to tell apart unexpected noise anomalies from real events. Since the noise statistics of observatories vary with time, it will be necessary to retrain the network models every few hours, a light-weight computational task that may be readily completed within a few minutes with cloud computing resources. This approach may also be useful to drive the convergence of cloud computing and centralized HPC platforms, the latter being essential to accelerate the training of AI models from the ground up. Once fully trained, these AI models may be deployed at the edge to enable real-time inference of massive, multi-modal and complex datasets generated by scientific facilities. Thereafter, these models may be fine-tuned with light-weight, burst-like re-training sessions with cloud computing.
Convergence of Deep Learning with High Performance Computing: An emergent framework for real-time Multi-Messenger Astrophysics discovery at scale
This section provides a vision for the future of deep learning in Multi-Messenger Astrophysics. First, deep learning has rapidly evolved from a series of disparate efforts into a worldwide endeavor [105]. As described in the previous sections, there has been impressive progress across all fronts in gravitational wave astrophysics including detection, parameter estimation, data cleaning and denoising, and glitch classification. While the vast majority of these approaches have used vanilla neural network models, there is an emergent trend in which deep learning is combined with domain expertise to create physics-inspired architectures and optimization schemes to further improve neural network predictions [95].
Another interesting trend in recent studies is the use of high-dimensional signal manifolds, one of the key considerations that led to the exploration of deep learning. Applying deep learning to create production-scale data analysis algorithms involves the combination of large datasets and distributed training on high performance computing platforms to reduce time-to-insight. This approach is in earnest development across disciplines, from plasma physics to genomics [112][113][114][115]. Figure 11 shows recent progress using the Summit supercomputer to accelerate by 600-fold the training of physics-inspired deep learning models for gravitational wave astrophysics [95]. Mirroring the successful approach of corporations that lead innovation in artificial intelligence research, projects such as the Data and Learning Hub for Science [116,117] have provided an open source platform to share artificial intelligence models and data with the broader community. This approach will accelerate the development of novel artificial intelligence tools to enable breakthroughs in science and technology in the big data era. In addition to combining artificial intelligence and extreme-scale computing to reduce time-to-insight, there is an ongoing effort to incorporate artificial intelligence into the software stacks used to numerically simulate multi-scale and multi-physics processes, such as neutron star mergers [30]. Through these approaches, it may be feasible to accurately capture the physics of subgrid-scale processes such as turbulence at a fraction of the time and computational resources currently needed for high-quality simulations. In essence, this is promoting artificial intelligence as a guiding tool to maximize the use and reach of advanced cyberinfrastructure facilities.
Finally, as artificial intelligence and innovative computing become widely adopted as the go-to signal processing and computing paradigms, it is essential to not become complacent in the quest for better signal processing tools. Since the development of artificial intelligence goes well beyond academic pursuits, it will be important to keep transferring and cross-pollinating innovation between academia, industry and technology. At the same time, it is essential to keep pursuing translational research, e.g., how to reuse algorithms for real-object recognition in the context of glitch classification [99] and galaxy classification [109], or how to adapt and combine algorithms for gravitational wave denoising [81] and earthquake detection [118] to other tasks, like the identification of heart conditions [119].
The future of artificial intelligence and innovative computing for Multi-Messenger Astrophysics is in the hands of bold innovators that will continue to expand the frontiers of discovery.
Cross-References
This chapter is related to the following entries in this book: • Introduction to gravitational wave astronomy by Nigel Bishop | 2021-05-17T01:16:13.210Z | 2021-05-13T00:00:00.000 | {
"year": 2021,
"sha1": "9c2f58bd781bd0ae5d329b10cb47bfbc81f8ab23",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2105.06479",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "9c2f58bd781bd0ae5d329b10cb47bfbc81f8ab23",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Physics",
"Computer Science"
]
} |
54613564 | pes2o/s2orc | v3-fos-license | Polarization Transfer in Proton Compton Scattering at High Momentum Transfer
Compton scattering from the proton was investigated at s=6.9 (GeV/c)^2 and t=-4.0 (GeV/c)^2 via polarization transfer from circularly polarized incident photons. The longitudinal and transverse components of the recoil proton polarization were measured. The results are in excellent agreement with a prediction based on a reaction mechanism in which the photon interacts with a single quark carrying the spin of the proton and in disagreement with a prediction of pQCD based on a two-gluon exchange mechanism.
PACS numbers: 13.60.Fz, 24.85.+p
Real Compton Scattering (RCS) from the nucleon with s, −t, and −u values large compared to Λ_QCD^2 is a hard exclusive process which provides access to information about nucleon structure complementary to high-Q^2 elastic form factors [1,2] and Deeply Virtual Compton Scattering [3]. A common feature of these reactions is a high energy scale, leading to factorization of the scattering amplitude into a hard perturbative amplitude, which describes the coupling of the external particles to the active quarks, and the overlap of soft nonperturbative wave functions.
Various theoretical approaches have been applied to RCS in the hard scattering regime, and these can be distinguished by the number of active quarks participating in the hard scattering subprocess, or equivalently, by the mechanism for sharing the transferred momentum among the constituents. Two extreme pictures have been proposed. In the perturbative QCD (pQCD) approach (Fig. 1a) [4,5,6,7], three active quarks share the transferred momentum by the exchange of two hard gluons. In the handbag approach (Fig. 1b) [8,9,10,11], there is only one active quark whose wave function has sufficient high-momentum components for the quark to absorb and re-emit the photon. In any given kinematic regime, both mechanisms will contribute, in principle, to the cross section. It is generally believed that at sufficiently high energies, the pQCD mechanism dominates. However, the question of how high is "sufficiently high" is still open, and it is not known with any certainty whether the pQCD mechanism dominates in the kinematic regime that is presently accessible experimentally. One prediction of the pQCD mechanism for RCS is the constituent scaling rule [12], whereby dσ/dt scales as s^-6 at fixed θ_CM. The only data in the few-GeV regime, from the pioneering experiment at Cornell [13], are approximately consistent with constituent scaling, albeit with modest statistical precision. Nevertheless, detailed calculations show that the pQCD cross section underpredicts the data by factors of at least ten [6], thereby calling into question the applicability of the pQCD mechanism in this energy range. On the other hand, more recent calculations using the handbag approach have reproduced the Cornell cross-section data to better than a factor of two [8,9]. The purpose of the present experiment [14] was to provide a more stringent test of the reaction mechanism by improving significantly on the statistical precision of the Cornell data, by extending those data over a broader kinematic range, and by measuring the polarization transfer observables K_LL and K_LS at a single kinematic point. The results of the latter measurements are reported in this Letter. As will be shown subsequently, these results are in unambiguous agreement with the handbag mechanism and in disagreement with the pQCD mechanism.
The present measurement, shown schematically in Fig. 2, was carried out in Hall A at Jefferson Lab, with basic instrumentation described in [15]. A longitudinally-polarized, 100% duty-factor electron beam with current up to 40 µA and energy of 3.48 GeV was incident on a Cu radiator of 0.81 mm thickness. The mixed beam of electrons and bremsstrahlung photons was incident on a 15-cm liquid H_2 target, located just downstream of the radiator, with a photon flux of up to 2 × 10^13 equivalent quanta/s. Quasi-real photons, which contribute 16% of total events with an average virtuality of 0.005 GeV^2, were treated as a part of the RCS event sample. For incident photons at a mean energy of 3.22 GeV, the scattered photon was detected at a mean scattering angle of 65° in a calorimeter consisting of an array of 704 lead-glass blocks subtending a solid angle of 30 msr and with angular resolution of 1.8 mrad and relative energy resolution of 7.7%. The associated recoil proton was detected in one of the Hall A High Resolution Spectrometers (HRS) at the corresponding central angle of 20° and central momentum of 2.94 GeV. The HRS had a solid angle of 6.5 msr, momentum acceptance of ±4.5%, relative momentum resolution of 2.5 × 10^-4, and angular resolution of 2.4 mrad, the latter limited principally by scattering in the target. The trigger was formed from a coincidence between a signal from a scintillator counter in the HRS focal plane and a signal above a 500 MeV threshold in the calorimeter. In total, 15 C and 3.5 C of beam charge were accumulated for RCS production and calibration runs, respectively.
Potential RCS events were selected based on the kinematic correlation between the scattered photon and the recoil proton. The excellent HRS optics was used to reconstruct the momentum, direction, and reaction vertex of the recoil proton and to calculate δx and δy, the difference in x and y coordinates, respectively, between the expected and measured location of the detected photon on the front face of the calorimeter. The distribution of events in δx with a coplanarity cut of |δy| ≤ 10 cm is shown in Fig. 3. The RCS events, which are in the peak at δx = 0, lie upon a continuum background primarily from the p(γ, π^0 p) reaction, with the subsequent decay π^0 → γγ. An additional background is due to electrons from ep elastic scattering, which is kinematically indistinguishable from RCS. A magnet between the target and the calorimeter (see Fig. 2) deflected these electrons horizontally by ∼20 cm relative to undeflected RCS photons.
The recoil proton polarization was measured by the focal plane polarimeter (FPP) located in the HRS. The FPP determines the two polarization components normal to the momentum of the proton by measuring the azimuthal asymmetries in the angular distribution after scattering the proton from an analyzer, then taking the difference of these asymmetries between plus and minus electron beam helicity states. To improve the efficiency, two analyzers were utilized in the experiment, a 44-cm block of CH_2 and a 50-cm block of carbon. Vertical drift chambers together with front and rear straw chambers tracked the protons before, between, and after the analyzers, effectively producing two independent polarimeters with a combined product of efficiency and square of analyzing power that was measured to be 4.5 × 10^-3. For each analyzer separately, Fourier analysis of the helicity difference leads to the product of the proton polarizations at the FPP (P^fpp_n or P^fpp_t), the circular polarization of the incident photon beam (P^c_γ), and the FPP analyzing power (A_y); in the fitted angular distribution, N_0 is the number of protons which scatter in the polarimeter, ϑ and ϕ are the polar and azimuthal scattering angles, and α, β are instrumental asymmetries. Determination of A_y(ϑ), α, and β for each analyzer was performed by measuring the polarization of the recoil proton from ep elastic scattering at approximately the same momentum and by using the previously determined ratio of the proton form factors [2]. The electron beam polarization was measured to be 0.766 ± 0.026 at the start of the experiment using a Møller polarimeter and continuously monitored throughout the production runs by observing the asymmetry due to the large p(γ, π^0 p) background. An upper limit of 2% for the change of the beam polarization during the experiment was obtained from the pion data. The bremsstrahlung photon has 99% of the initial electron polarization over the energy range used [16].
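The Fourier extraction of the azimuthal asymmetry can be sketched with synthetic data: fit the helicity-difference distribution to a*cos(ϕ) + b*sin(ϕ) by linear least squares. The amplitudes and noise level below are hypothetical; in the real analysis the fitted coefficients are proportional to the product of beam polarization, analyzing power, and the transverse polarization components.

```python
# Least-squares extraction of cos/sin Fourier amplitudes from an azimuthal asymmetry.
import numpy as np

rng = np.random.default_rng(0)
phi = rng.uniform(0, 2 * np.pi, 20000)           # azimuthal scattering angles
a_true, b_true = 0.05, 0.02                      # hypothetical asymmetry amplitudes
asym = a_true * np.cos(phi) + b_true * np.sin(phi) + 0.01 * rng.normal(size=phi.size)

design = np.column_stack([np.cos(phi), np.sin(phi)])
(a_fit, b_fit), *_ = np.linalg.lstsq(design, asym, rcond=None)
print(f"a = {a_fit:.4f}, b = {b_fit:.4f}")       # recovers the input amplitudes
```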
To relate the proton polarization components at the focal plane to their counterparts at the target, the precession of the proton spin in the HRS magnetic elements was taken into account using a COSY model [17] of the HRS optics for the spin transport matrix. The elements of this matrix depend on the total precession angle, which was near 270° in order to optimize the determination of K_LL. The proton spin vector was then transformed to the proton rest frame, with the longitudinal axis pointing in the direction of the recoil proton in the center of mass frame [9]. In that frame, the longitudinal and transverse components of the proton polarization are just the spin transfer parameters K_LL and K_LS, respectively.
The RCS events are selected from a small elliptical region at the origin of the δx−δy plane. For each spin component, the RCS recoil polarization is given by P_RCS = [P_all − (1 − R) P_bkg]/R, where P_all and P_bkg are the polarizations for all events and background events in that region, respectively, and R is the ratio of RCS to total events. The background polarization was measured by selecting events from regions of the δx−δy plane that contain neither RCS nor ep elastic events. It was determined that, within the statistical precision of the measurements, P_bkg was constant over broad regions of that plane. Results obtained with the two polarimeters were statistically consistent and were averaged. With the RCS region selected to obtain the best accuracy on P_RCS, one finds R = 0.383 ± 0.004, and the resulting polarizations are given in Table I. The result for K_LL is shown in Fig. 4 along with the results of relevant calculations.
Fig. 4 (caption fragment): pQCD [7], GPD [10], CQM [11], and extended Regge model [18]. The curve labeled KN is K^KN_LL, the Klein-Nishina asymmetry for a structureless proton.
In the handbag calculation using Generalized Parton Distributions (GPD), K_LL ≃ K^KN_LL (R_A/R_V), where R_A, R_V are axial and vector form factors, respectively, that are unique to the RCS process [9]. The experimental result implies the ratio R_A/R_V = 0.81 ± 0.11.
The excellent agreement between the experiment and the GPD-based calculation, shown with a range of uncertainties due to finite mass corrections [19], and the close proximity of each to K^KN_LL are consistent with a picture in which the photon scatters from a single quark whose spin is in the direction of the proton spin. The RCS form factors are certain moments of the GPDs H and H̃ [8,9], so our result provides a constraint on the relative values of these moments. An alternative handbag-type approach using constituent quarks (CQM) [11], with parameters adjusted to fit G^p_E data [2], is also in excellent agreement with the datum. Also in good agreement is a semi-phenomenological calculation using the extended Regge model [18], with parameters fixed by a fit to high-t photoproduction of vector mesons. On the other hand, the pQCD calculations [7], shown for both the asymptotic (ASY) and the COZ [20] distribution amplitude, disagree strongly with the experimental point, suggesting that the asymptotic regime has not yet been reached.
A non-zero value of K_LS implies a proton helicity-flip process, which is strictly forbidden in leading-twist pQCD. In the GPD-based approach [10], K_LS/K_LL ≃ (√(−t̂)/2M) R_T/R_V, where t̂ is the four-momentum transfer in the hard subprocess of the handbag diagram, M is the proton mass, and R_T is a tensor form factor of the RCS process. From the experimental result for K_LS, we estimate R_T/R_V = 0.21 ± 0.11 ± 0.03, where the first uncertainty is statistical and the second is systematic, due to the mass correction uncertainty in calculating t̂ [19]. A value of 0.33 was predicted for R_T/R_V [10] based on the hypothesis R_T/R_V = F_2/F_1, the ratio of the Pauli and Dirac electromagnetic form factors. Although the uncertainties are large, the present data suggest that R_T/R_V may fall more rapidly with −t than F_2/F_1. K_LS vanishes in the CQM-based handbag calculation [11].
In conclusion, the polarization transfer observables K_LL and K_LS were measured for proton Compton scattering in the wide-angle regime at s = 6.9 GeV^2 and t = -4.0 GeV^2 and shown to be in good agreement with calculations based on the handbag reaction mechanism [10,11]. | 2018-12-11T08:40:49.456Z | 2004-10-01T00:00:00.000 | {
"year": 2004,
"sha1": "277361f187c40b252c5d4a68650fc6a7bc61cb2f",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/nucl-ex/0410001",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "277361f187c40b252c5d4a68650fc6a7bc61cb2f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
16851116 | pes2o/s2orc | v3-fos-license | Risk portfolio management under Zipf-analysis-based strategies
A so-called Zipf analysis portfolio management technique is introduced in order to comprehend the risk and returns. Two portfolios are built, each from a well-known financial index. The portfolio management is based on two approaches: one called the "equally weighted portfolio", the other the "confidence parametrized portfolio". A discussion of the (yearly) expected return, variance, Sharpe ratio and $\beta$ follows. Optimization levels of high returns or low risks are found.
Introduction
Risk must be expected for any reasonable investment. A portfolio should be constructed so as to minimize the investment risk in the presence of somewhat unknown fluctuation distributions of the various asset prices [1,2], in view of obtaining the highest possible returns. The risk considered hereby is measured through the variances of returns, i.e. the β. Our previous approaches were based on the "time dependent" Hurst exponent [3]. In contrast, the Zipf method, which we previously developed as an investment strategy (on usual financial indices) [4,5], can be adapted to portfolio management. This is shown here through portfolios based on the DJIA30 and the NASDAQ100. Two strategies are examined, assigning different weights to the shares in the portfolio at buying or selling time. This is shown to have some interesting features.
A key parameter is the coefficient of confidence. Yearly expected levels of returns are discussed through the Sharpe ratio and the risk through the β.
Data
Recall that a time series signal can be interpreted as a series of words of m letters, made of characters taken from an alphabet having k letters. Here, k = 2 (the characters u and d), while the words have a systematic (constant) size ranging between 1 and 10 letters.
Prior to some strategy definition and implementation, let us introduce a few notations. Let the probability of finding a word of size m ending with a u in the i-th (asset) series be given by P_{m,i}(u) ≡ P_i([c_{t-m+2}, c_{t-m+1}, ..., c_{t+1}, c_t; u]), and correspondingly by P_{m,i}(d) when a d is the last letter of a word of size m. The character c_t is that seen at the end of day t.
In the following, we have downloaded the daily closing price data available from the web: (i) for the DJIA30, 3909 data points for the 30 available shares, i.e. about 16 years; (ii) for the NASDAQ100, 3599 data points for the 39 shares which have been maintained in the index, i.e. about 14.5 years. The first 2500 days are taken as the preliminary historical data necessary for calculating/setting the above probabilities at time t = 0. From these we have invented a strategy for the following 1408 and 1098 possible investment days, respectively, i.e. for ca. the latest 6 and 4.5 years, respectively. The relevant probabilities are recalculated at the end of each day in order to implement a buy or sell action on the following day. The daily strategy consists in buying or selling each share according to these probabilities; however, the weight of a given stock in the portfolio of n assets can be different according to the preferred strategy. In the equally weighted portfolio (EWP), each stock i has the same weight, i.e. we give w_{i∈B} = 2/n_u to the shares to be bought, and correspondingly for the shares to be sold. In the other strategy, called ZCPP, for the confidence parametrized portfolio (CPP), the weight of a share depends on a confidence parameter K_{m,i} ≡ P_{m,i}(u) - P_{m,i}(d). The shares i to be bought on a day belong to the set B when K_{m,i} > 0, and those to be sold belong to the set S when K_{m,i} < 0. The respective weights are then taken to be w_B = 2K_{m,i∈B}/ΣK_{m,i∈B} and w_S = -K_{m,i∈S}/ΣK_{m,i∈S}.
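A minimal sketch of the word-statistics bookkeeping described above: map daily closes to u/d characters, count words of length m ending in u or d given the current prefix, and form the confidence parameter K. The counting conventions (non-overlapping counts, ties mapped to u) are assumptions of this sketch, and the price series is synthetic.

```python
# Estimate K_{m,i} = P_{m,i}(u) - P_{m,i}(d) for the current (m-1)-letter prefix.
import numpy as np

def confidence(chars, m):
    prefix = chars[-(m - 1):] if m > 1 else ""
    n_u = chars.count(prefix + "u")   # non-overlapping counts; adequate for a sketch
    n_d = chars.count(prefix + "d")
    return (n_u - n_d) / (n_u + n_d) if (n_u + n_d) else 0.0

rng = np.random.default_rng(0)
prices = 100.0 + np.cumsum(rng.normal(size=2500))   # toy daily closing prices
chars = "".join("u" if x >= 0 else "d" for x in np.diff(prices))

K = confidence(chars, m=4)
action = "buy" if K > 0 else "sell"                 # ZCPP weights scale with |K|
print(f"K = {K:+.3f} -> {action}")
```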
Results
The yearly return, variance, Sharpe ratio, and β are given in Table 1 and Table 2 for the so-called DJIA30 and so-called NASDAQ39 shares, respectively, as a function of the word length m. The last line gives the corresponding results for the DJIA30 and the NASDAQ100, respectively. We have calculated the average (over 5 or 4 years for the DJIA30 and NASDAQ39, respectively) yearly returns, i.e. E(r_P) for the portfolio P. The yearly variances σ_P result from the 5- or 4-year root mean square deviations from the mean. The Sharpe ratio SR is given by SR = E(r_P)/σ_P and is considered to measure the portfolio performance. The β is given by cov(r_P, r_M)/σ_M^2, where the covariance cov(r_P, r_M) is measured with respect to the relevant financial index, so-called market (M), return. Of course, σ_M^2 measures the relevant market variance. The β is considered to measure the portfolio risk. For lack of space the data in the tables are not graphically displayed.
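The metrics just defined can be computed in a few lines; the yearly return series below are made-up numbers, not the values of Tables 1-2.

```python
# Expected return, volatility, Sharpe ratio SR = E(r_P)/sigma_P, and
# beta = cov(r_P, r_M)/var(r_M) from yearly return series.
import numpy as np

r_P = np.array([0.40, 0.15, 0.55, 0.10, 0.30])   # hypothetical portfolio returns
r_M = np.array([0.10, 0.05, 0.20, -0.02, 0.12])  # hypothetical market (index) returns

exp_return = r_P.mean()
sigma_P = r_P.std(ddof=1)                        # rms deviation from the mean
sharpe = exp_return / sigma_P
beta = np.cov(r_P, r_M, ddof=1)[0, 1] / r_M.var(ddof=1)
print(f"E(r_P)={exp_return:.3f} sigma={sigma_P:.3f} SR={sharpe:.3f} beta={beta:.3f}")
```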
It is remarkable that E(r_P) is rather low for the ZEWP, and so is σ_P, whereas E(r_P) can be very large, but so is σ_P, in the ZCPP case, for both portfolios based on the DJIA30. The same observation can be made for the NASDAQ39. In the former case, the highest E(r_P) is larger than 100% (on average) and occurs for m = 4, but it is the highest for m = 3 in the latter case. Yet the risk is large in such cases. The dependences of the Sharpe ratio and β are not smooth functions of m, even indicating some systematic dip near m = 6 in 3 cases; a peak occurs otherwise.
The expected yearly returns E(r_P) vs. σ are shown for both portfolios and for both strategies in Figs. 1-2, together with the equilibrium line, given by E(r_M)(σ/σ_M), where it is understood that σ is the appropriate value for the investigated case. Except for rare isolated points below the equilibrium line, data points fall above it. They are even very much above it in the ZCPP cases. In conclusion, we have interpreted the index signals as series of words of m letters, and investigated the occurrence of such words. We have invented two portfolios and maintained them for a few years, buying or selling shares according to different strategies. We have calculated the corresponding yearly expected return, variance, Sharpe ratio and β. The best returns and weakest risks have been determined depending on the word length. Even though some risks can be large, returns are sometimes very high.
"year": 2005,
"sha1": "05e7663a5c7a18850de69f0a61b6a427afa41d61",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/physics/0504131",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "05e7663a5c7a18850de69f0a61b6a427afa41d61",
"s2fieldsofstudy": [
"Economics",
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Economics",
"Mathematics"
]
} |
212695784 | pes2o/s2orc | v3-fos-license | Rademacher type and Enflo type coincide
A nonlinear analogue of the Rademacher type of a Banach space was introduced in classical work of Enflo. The key feature of Enflo type is that its definition uses only the metric structure of the Banach space, while the definition of Rademacher type relies on its linear structure. We prove that Rademacher type and Enflo type coincide, settling a long-standing open problem in Banach space theory. The proof is based on a novel dimension-free analogue of Pisier's inequality on the discrete cube.
Introduction and main results
Let $(X, \|\cdot\|)$ be a Banach space. We say that X has Rademacher type p ∈ [1, 2] if there exists C ∈ (0, ∞) so that for all n ≥ 1 and $x_1, \dots, x_n \in X$
$$\mathbb{E}\,\Big\|\sum_{j=1}^n \varepsilon_j x_j\Big\|^p \le C^p \sum_{j=1}^n \|x_j\|^p,$$
where $\varepsilon_1, \dots, \varepsilon_n$ are independent uniformly distributed random signs.
We denote by $T^R_p(X)$ the smallest possible constant C in this inequality. A nonlinear notion of type was introduced by Enflo [4]: a Banach space has Enflo type p if there exists C ∈ (0, ∞) so that for all n ≥ 1 and $f : \{-1,1\}^n \to X$
$$\mathbb{E}\,\|f(\varepsilon) - f(-\varepsilon)\|^p \le C^p \sum_{j=1}^n \mathbb{E}\,\|D_j f(\varepsilon)\|^p,$$
and we denote by $T^E_p(X)$ the smallest possible constant C in this inequality. Here we define the discrete partial derivatives on the cube $\{-1,1\}^n$ as
$$D_j f(\varepsilon) := \frac{f(\varepsilon_1, \dots, \varepsilon_j, \dots, \varepsilon_n) - f(\varepsilon_1, \dots, -\varepsilon_j, \dots, \varepsilon_n)}{2}.$$
The key feature of Enflo type is that its definition depends only on the metric structure of X, that is, it involves only distances between two points. This notion therefore extends naturally to the setting of general metric spaces. In contrast, the definition of Rademacher type relies on the linear structure of X.
The study of metric properties of Banach spaces, known as the "Ribe program", has been of central importance in Banach space theory in recent decades [12]. Understanding the relationship between Rademacher type and Enflo type is a fundamental question in this program. That Enflo type p implies Rademacher type p follows immediately by choosing the linear function $f(\varepsilon) = \sum_{j=1}^n \varepsilon_j x_j$ in the definition of Enflo type. Whether the converse is also true, that is, that Rademacher type p implies Enflo type p, is a long-standing problem that dates back to Enflo's original paper [4] from 1978. Despite a number of partial results in this direction [2,3,1,14,13,11,7,6], the question has remained open.
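To spell out this one-line argument (a short verification, using the definitions as displayed above):

```latex
% Substituting the linear function f(\varepsilon)=\sum_j \varepsilon_j x_j
% into the definition of Enflo type.
\[
  D_j f(\varepsilon) = \varepsilon_j x_j,
  \qquad
  f(\varepsilon) - f(-\varepsilon) = 2\sum_{j=1}^n \varepsilon_j x_j,
\]
so the Enflo type $p$ inequality yields
\[
  2^p\,\mathbb{E}\,\Big\|\sum_{j=1}^n \varepsilon_j x_j\Big\|^p
  = \mathbb{E}\,\|f(\varepsilon)-f(-\varepsilon)\|^p
  \le C^p \sum_{j=1}^n \mathbb{E}\,\|\varepsilon_j x_j\|^p
  = C^p \sum_{j=1}^n \|x_j\|^p,
\]
which is the Rademacher type $p$ inequality with constant $C/2 \le C$,
so that $T^R_p(X) \le T^E_p(X)$.
```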
Here we settle Enflo's question in the affirmative: Rademacher type p is equivalent to Enflo type p. In other words, Enflo type provides a characterization of Rademacher type using only the metric structure of X.
The key new ingredient in the proof of Theorem 1.1 is a novel dimension-free analogue of a classical inequality of Pisier.
1.1. Pisier's inequality. Let p ≥ 1, let $f : \{-1,1\}^n \to X$ and let ε, δ be independent random vectors that are uniformly distributed on the discrete cube $\{-1,1\}^n$. As part of his investigation of metric type, Pisier discovered the following class of Sobolev-type inequalities for vector-valued functions on the discrete cube:
$$\mathbb{E}\,\|f(\varepsilon) - f(\delta)\|^p \le C^p\, \mathbb{E}\,\Big\|\sum_{j=1}^n \delta_j D_j f(\varepsilon)\Big\|^p. \tag{1.1}$$
If such an inequality were to hold with a constant C that is independent of dimension n, then Enflo's problem would be solved: if X has Rademacher type p, then applying this property to the right-hand side of (1.1) conditionally on ε would yield immediately the definition of Enflo type p. Unfortunately, Pisier was able to prove (1.1) only with a dimension-dependent constant C ∼ log n [14, Lemma 7.3], and it was subsequently shown by Talagrand [15, section 6] that this order of growth is optimal: that is, there exist Banach spaces X for which the optimal constant in Pisier's inequality must grow logarithmically with dimension. In order to resolve Enflo's problem, however, it is not necessary to establish Pisier's inequality for an arbitrary Banach space: it suffices to show that (1.1) holds with a dimension-free constant under the additional assumption that X has nontrivial type. For this reason, subsequent work has focused on identifying conditions on the Banach space X under which (1.1) holds with a constant that depends only on the geometry of X (but not on n). Notably, Naor and Schechtman [13] proved that (1.1) holds with a dimension-free constant under the stronger assumption that X is an UMD Banach space (see also [7,5]). Very recently, Eskenazis and Naor [6] proved that for superreflexive Banach spaces X, the constant in Pisier's inequality can be improved to $\log^\alpha n$ for some α < 1.
Beside the inequality (1.1), Pisier also proved [14, Theorem 2.2] a more general counterpart of his inequality in Gauss space: if $f : \mathbb{R}^n \to X$ is locally Lipschitz, $G, G'$ are independent standard Gaussian vectors in $\mathbb{R}^n$, and $\Phi : X \to \mathbb{R}$ is convex and satisfies a mild regularity assumption, then
$$
\mathbf{E}\,\Phi\big(f(G) - \mathbf{E}f(G)\big) \;\le\; \mathbf{E}\,\Phi\Big(\frac{\pi}{2}\,\langle \nabla f(G),\, G'\rangle\Big). \tag{1.2}
$$
One obtains an inequality analogous to (1.1) by choosing $\Phi(x) = \|x\|^p$. Remarkably, the Gaussian inequality is dimension-free for an arbitrary Banach space $X$, in sharp contrast to the inequality on the cube. Unfortunately, its proof is very special to the Gaussian case: one defines $G(\theta) := G\sin\theta + G'\cos\theta$, and notes that $(G(\theta), \frac{d}{d\theta}G(\theta))$ has the same distribution as $(G, G')$ for each $\theta$ by rotation-invariance of the Gaussian measure. Then (1.2) follows by expressing $f(G) - f(G') = \int_0^{\pi/2} \frac{d}{d\theta} f(G(\theta))\, d\theta$ and applying Jensen's inequality. If one attempts to repeat this idea on the discrete cube, the absence of rotational symmetry makes the argument inherently inefficient, and one cannot do better than (1.1) with constant $C \sim \log n$.
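In detail, the rotation computation can be spelled out as follows (a routine rendering of the argument just sketched, not taken verbatim from [14]). Writing $\dot G(\theta) := \frac{d}{d\theta}G(\theta) = G\cos\theta - G'\sin\theta$, we have $\frac{d}{d\theta}f(G(\theta)) = \langle \nabla f(G(\theta)), \dot G(\theta)\rangle$, and since $\frac{2}{\pi}\,d\theta$ is a probability measure on $[0, \pi/2]$, convexity of $\Phi$ gives
$$
\Phi\big(f(G) - f(G')\big) = \Phi\Big(\frac{2}{\pi}\int_0^{\pi/2} \frac{\pi}{2}\,\big\langle \nabla f(G(\theta)),\, \dot G(\theta)\big\rangle\, d\theta\Big) \le \frac{2}{\pi}\int_0^{\pi/2} \Phi\Big(\frac{\pi}{2}\,\big\langle \nabla f(G(\theta)),\, \dot G(\theta)\big\rangle\Big)\, d\theta.
$$
Taking expectations and using that $(G(\theta), \dot G(\theta))$ has the same distribution as $(G, G')$ for each $\theta$ yields $\mathbf{E}\,\Phi(f(G) - f(G')) \le \mathbf{E}\,\Phi\big(\frac{\pi}{2}\langle \nabla f(G), G'\rangle\big)$; as $f(G) - \mathbf{E}f(G) = \mathbf{E}[f(G) - f(G')\,|\,G]$, the inequality (1.2) then follows from one more application of Jensen's inequality.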
Despite the apparent obstructions, we will prove in this paper a completely general dimension-free analogue of (1.2) on the discrete cube. The existence of such an inequality appears at first sight to be quite unexpected. It will turn out, however, that the dimension-dependence of (1.1) is not an intrinsic feature of the discrete cube, but is simply a reflection of the fact that (1.1) is not the "correct" analogue of the corresponding Gaussian inequality. To obtain a dimension-free inequality, we will replace δ by a vector of biased Rademacher variables δ(t) which arises naturally in our proof by differentiating the discrete heat kernel.
1.2. A dimension-free Pisier inequality. The following random variables will appear frequently in the sequel, so we fix them once and for all. Let $\varepsilon$ be a random vector that is uniformly distributed on the cube $\{-1,1\}^n$. Given $t > 0$, we let $\xi(t)$ be a random vector in the cube, independent of $\varepsilon$, whose coordinates $\xi_i(t)$ are independent and identically distributed with
$$
\mathbf{P}[\xi_i(t) = 1] = \frac{1 + e^{-t}}{2}, \qquad \mathbf{P}[\xi_i(t) = -1] = \frac{1 - e^{-t}}{2}.
$$
We also define the standardized vector $\delta(t)$ by
$$
\delta(t) := \frac{\xi(t) - e^{-t}\mathbf{1}}{\sqrt{1 - e^{-2t}}}.
$$
The following analogue of (1.2) lies at the heart of this paper.
Theorem 1.2. For every Banach space $(X, \|\cdot\|)$, every function $f : \{-1,1\}^n \to X$, and every convex function $\Phi : X \to \mathbb{R}$, we have
$$
\mathbf{E}\,\Phi\big(f(\varepsilon) - \mathbf{E}f(\varepsilon)\big) \;\le\; \int_0^\infty \mathbf{E}\,\Phi\Big(\frac{\pi}{2}\sum_{j=1}^n \delta_j(t)\, D_j f(\varepsilon)\Big)\, \mu(dt), \tag{1.3}
$$
where $\mu$ is the probability measure on $\mathbb{R}_+$ with density
$$
\mu(dt) = \frac{2}{\pi}\,\frac{dt}{\sqrt{e^{2t} - 1}}.
$$
Even though (1.3) is formulated in terms of the biased variables $\delta_j(t)$ as opposed to the Rademacher variables $\delta_j$ that appear in (1.1), the proof of Theorem 1.1 will follow readily by a routine symmetrization argument. For this purpose the precise distribution of the random variables $\delta_i(t)$ is in fact immaterial: it suffices that they are independent, centered, and have bounded variance. However, other applications (such as Theorem 1.5 below) do require more precise information on the distribution of $\delta_i(t)$, which can be read off from its definition.

Remark 1.3. It is interesting to note that (1.3) is not just an analogue of (1.2) on the cube: it is in fact a strictly stronger result, as the Gaussian inequality can be derived from Theorem 1.2 by the central limit theorem. To see why, assume $f : \mathbb{R}^n \to X$ is a sufficiently smooth function with compact support and let $\Phi : X \to \mathbb{R}$ be a sufficiently regular convex function. Define $f_N : \{-1,1\}^{nN} \to X$ by
$$
f_N(x) := f\Big(\frac{1}{\sqrt{N}}\sum_{i=1}^N x^{(i)}\Big), \qquad x = (x^{(1)}, \dots, x^{(N)}) \in (\{-1,1\}^n)^N,
$$
and apply Theorem 1.2 to $f_N$. Letting $N \to \infty$ now yields (1.2) by the multivariate central limit theorem, as the relevant normalized sums are centered random vectors with unit covariance matrix and vanishing cross-covariance, and hence converge jointly to independent standard Gaussian vectors. The requisite regularity assumptions on $f$ and $\Phi$ can subsequently be removed by routine approximation arguments. The above discussion also shows that the constant in Theorem 1.2 is optimal. Indeed, as (1.3) implies (1.2), it suffices to show that (1.2) is sharp. But this is already known to be the case when $X = \mathbb{R}$ and $\Phi(x) = |x|$ [10, Chapter 8].
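For later reference, note two elementary facts. First, $\delta(t)$ is indeed standardized:
$$
\mathbf{E}\,\xi_i(t) = \frac{1 + e^{-t}}{2} - \frac{1 - e^{-t}}{2} = e^{-t}, \qquad \operatorname{Var}\,\xi_i(t) = 1 - e^{-2t}, \qquad \text{so}\quad \mathbf{E}\,\delta_i(t) = 0, \quad \mathbf{E}\,\delta_i(t)^2 = 1.
$$
Second, $\mu$ is a probability measure: substituting $u = e^{-t}$, so that $\sqrt{e^{2t} - 1} = \sqrt{1 - u^2}/u$ and $dt = -du/u$,
$$
\int_0^\infty \frac{2}{\pi}\,\frac{dt}{\sqrt{e^{2t} - 1}} = \frac{2}{\pi}\int_0^1 \frac{du}{\sqrt{1 - u^2}} = \frac{2}{\pi}\cdot\frac{\pi}{2} = 1.
$$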
When $\Phi(x) = \|x\|^p$, the conclusion of Theorem 1.2 may be slightly improved. As the improvement will be needed in the sequel, we spell out this variant separately.

Theorem 1.4. Let $\mu$ be as in Theorem 1.2. Then for any Banach space $(X, \|\cdot\|)$, function $f : \{-1,1\}^n \to X$, and $1 \le p < \infty$, we have
$$
\big(\mathbf{E}\,\|f(\varepsilon) - \mathbf{E}f(\varepsilon)\|^p\big)^{1/p} \;\le\; \frac{\pi}{2}\int_0^\infty \Big(\mathbf{E}\,\Big\|\sum_{j=1}^n \delta_j(t)\, D_j f(\varepsilon)\Big\|^p\Big)^{1/p}\, \mu(dt). \tag{1.4}
$$
In this setting, the difference between (1.3) and (1.4) is that in the former the exponent $1/p$ appears outside the $\mu(dt)$ integral on the right-hand side.
We now briefly describe the idea behind the proofs of Theorems 1.2 and 1.4, which was inspired by the Gaussian semigroup methods of Ledoux [10, Chapter 8]. Instead of using rotational invariance as in the proof of (1.2) to interpolate between $f(G)$ and $f(G')$, we use the heat semigroup on the discrete cube to interpolate between $f$ and $\mathbf{E}f$. The resulting expressions involve derivatives of the form $D_j e^{t\Delta} f$. We now observe that rather than applying the derivative to $f$, we may differentiate the heat kernel instead. A short computation (Lemma 2.1) shows that the gradient of the heat kernel on the cube yields the biased Rademacher vector $\delta(t)$. This elementary observation, analogous to the classical smoothing property of diffusion semigroups, leads us to discover (1.3) in a completely natural manner.

1.3. Pisier's inequality and cotype. Recall that a Banach space $(X, \|\cdot\|)$ has (Rademacher) cotype $q \in [2, \infty)$ if there exists $C \in (0, \infty)$ so that for all $n \ge 1$ and $x_1, \dots, x_n \in X$
$$
\sum_{j=1}^n \|x_j\|^q \;\le\; C^q\, \mathbf{E}\,\Big\|\sum_{j=1}^n \varepsilon_j x_j\Big\|^q.
$$
We denote by $C_q(X)$ the smallest possible constant $C$ in this inequality. The significance of cotype in the present context is twofold: finite cotype ensures that Pisier's inequality holds with a dimension-free constant, while in its absence the constant must grow logarithmically with the dimension. In other words, cotype characterizes exactly when (1.1) is dimension-free.

Theorem 1.5. Fix $1 \le p < \infty$ and a Banach space $X$. Then (1.1) holds with a constant $C$ that depends only on $p$ and on the geometry of $X$ (but not on $n$) if and only if $X$ has finite cotype.

As any Banach space with nontrivial type has finite cotype [9, Theorem 7.1.14], we obtain in particular an affirmative answer to the question posed after (1.1): Pisier's inequality holds with a dimension-free constant in any Banach space with nontrivial type. However, one may argue that this fact is no longer of great importance in view of our main results; in practice Theorems 1.2 and 1.4 may be just as easily deployed directly in applications (as we do in Theorem 1.1), and give rise to much better constants than would be obtained from Theorem 1.5.
A quantitative formulation of Theorem 1.5 will be given in section 4.
1.4. Organization of this paper. The rest of this paper is organized as follows. Section 2 is devoted to the proofs of Theorems 1.2 and 1.4. We subsequently deduce Theorem 1.1 in Section 3. Finally, Theorem 1.5 is proved in Section 4.
2. Proof of Theorems 1.2 and 1.4
The Laplacian on the discrete cube is defined by
$$
\Delta f := -\sum_{j=1}^n D_j f.
$$
We denote by $P_t$ the standard heat semigroup on the cube, that is, $P_t := e^{t\Delta}$ for $t \ge 0$. Recall that $\Delta$ is self-adjoint on $L^2(\{-1,1\}^n)$ with quadratic form
$$
\langle f, -\Delta g\rangle_{L^2(\{-1,1\}^n)} = \sum_{j=1}^n \langle D_j f, D_j g\rangle_{L^2(\{-1,1\}^n)},
$$
as each $D_j$ is self-adjoint and satisfies $D_j^2 = D_j$. The basis for the proof of Theorem 1.2 is the following probabilistic representation of the heat semigroup and its discrete partial derivatives.
Lemma 2.1. We have
$$
P_t f(x) = \mathbf{E}[f(x_1\xi_1(t), \dots, x_n\xi_n(t))] \quad \text{for } t \ge 0,
$$
and
$$
D_j P_t f(x) = \frac{1}{\sqrt{e^{2t} - 1}}\, \mathbf{E}[\delta_j(t)\, f(x_1\xi_1(t), \dots, x_n\xi_n(t))].
$$

Proof. Define $Q_t f(x) := \mathbf{E}[f(x_1\xi_1(t), \dots, x_n\xi_n(t))]$. By the definition of $\xi_j(t)$, we have $\mathbf{E}[\xi_j(t)] = e^{-t}$ and $\operatorname{Var}[\xi_j(t)] = 1 - e^{-2t}$. Note also that $\mathbf{E}[\delta_j(t)] = 0$ and $\mathbf{E}[\delta_j(t)\,\xi_j(t)] = \sqrt{1 - e^{-2t}}$. We now observe that for a Walsh character $w_S(x) := \prod_{i \in S} x_i$ these identities give $Q_t w_S = e^{-t|S|}\, w_S$ and $\mathbf{E}[\delta_j(t)\, w_S(x\xi(t))] = \sqrt{1 - e^{-2t}}\, e^{-t(|S|-1)}\, w_S(x)$ when $j \in S$ (and zero otherwise). Expanding $f$ in the Walsh basis, we have therefore shown that
$$
D_j Q_t f(x) = \frac{e^{-t}}{\sqrt{1 - e^{-2t}}}\, \mathbf{E}[\delta_j(t)\, f(x\xi(t))] = \frac{1}{\sqrt{e^{2t} - 1}}\, \mathbf{E}[\delta_j(t)\, f(x\xi(t))].
$$
It remains to show that $Q_t f = P_t f$. To this end, note that $Q_0 f = f$ and $\frac{d}{dt} Q_t w_S = -|S|\, e^{-t|S|}\, w_S = \Delta Q_t w_S$, so that $\frac{d}{dt} Q_t f = \Delta Q_t f$. Thus $Q_t$ satisfies the Kolmogorov equation for the semigroup $P_t$.
We are now ready to prove Theorem 1.2.
Proof of Theorem 1.2. We may assume without loss of generality that $X$ is finite-dimensional (as $f(\{-1,1\}^n)$ spans a space of dimension at most $2^n$). Write, by convex duality,
$$
\mathbf{E}\,\Phi\big(f(\varepsilon) - \mathbf{E}f(\varepsilon)\big) = \sup_g \Big\{\mathbf{E}\,\langle g(\varepsilon), f(\varepsilon) - \mathbf{E}f(\varepsilon)\rangle - \mathbf{E}\,\Phi^*(g(\varepsilon))\Big\},
$$
where the supremum runs over functions $g : \{-1,1\}^n \to X^*$ and $\Phi^*$ denotes the Legendre transform of $\Phi$. As $P_0 f = f$ and $\lim_{t\to\infty} P_t f = \mathbf{E}f(\varepsilon)$ (this follows, e.g., from Lemma 2.1), we can write by the fundamental theorem of calculus
$$
\mathbf{E}\,\langle g(\varepsilon), f(\varepsilon) - \mathbf{E}f(\varepsilon)\rangle = -\int_0^\infty \frac{d}{dt}\,\mathbf{E}\,\langle g(\varepsilon), P_t f(\varepsilon)\rangle\, dt = \int_0^\infty \sum_{j=1}^n \mathbf{E}\,\langle g(\varepsilon), D_j P_t D_j f(\varepsilon)\rangle\, dt,
$$
where we used in the last line that $\Delta$ is self-adjoint and commutes with $P_t$, together with $-\Delta = \sum_j D_j$ and $D_j^2 = D_j$. To proceed, we note that by Lemma 2.1
$$
\sum_{j=1}^n \mathbf{E}\,\langle g(\varepsilon), D_j P_t D_j f(\varepsilon)\rangle = \frac{1}{\sqrt{e^{2t}-1}}\, \mathbf{E}\,\Big\langle g(\varepsilon\xi(t)), \sum_{j=1}^n \delta_j(t)\, D_j f(\varepsilon)\Big\rangle,
$$
where $\varepsilon\xi(t) := (\varepsilon_1\xi_1(t), \dots, \varepsilon_n\xi_n(t))$ and we used that the pairs $(\varepsilon, \varepsilon\xi(t))$ and $(\varepsilon\xi(t), \varepsilon)$ have the same distribution. Consequently,
$$
\mathbf{E}\,\langle g(\varepsilon), f(\varepsilon) - \mathbf{E}f(\varepsilon)\rangle = \frac{\pi}{2}\int_0^\infty \mathbf{E}\,\Big\langle g(\varepsilon\xi(t)), \sum_{j=1}^n \delta_j(t)\, D_j f(\varepsilon)\Big\rangle\, \mu(dt). \tag{2.1}
$$
By Young's inequality, $\langle g(\varepsilon\xi(t)), \frac{\pi}{2}\sum_j \delta_j(t) D_j f(\varepsilon)\rangle \le \Phi\big(\frac{\pi}{2}\sum_j \delta_j(t) D_j f(\varepsilon)\big) + \Phi^*(g(\varepsilon\xi(t)))$. Moreover, $\mathbf{E}[\Phi^*(g(\varepsilon))] = \mathbf{E}[\Phi^*(g(\varepsilon\xi(t)))]$, as the random vectors $\varepsilon\xi(t)$ and $\varepsilon$ have the same distribution. Thus
$$
\mathbf{E}\,\langle g(\varepsilon), f(\varepsilon) - \mathbf{E}f(\varepsilon)\rangle - \mathbf{E}\,\Phi^*(g(\varepsilon)) \le \int_0^\infty \mathbf{E}\,\Phi\Big(\frac{\pi}{2}\sum_{j=1}^n \delta_j(t)\, D_j f(\varepsilon)\Big)\, \mu(dt),
$$
and the conclusion follows.
The proof of Theorem 1.4 is almost identical.
Proof of Theorem 1.4. By duality,
$$
\big(\mathbf{E}\,\|f(\varepsilon) - \mathbf{E}f(\varepsilon)\|^p\big)^{1/p} = \sup\Big\{\mathbf{E}\,\langle g(\varepsilon), f(\varepsilon) - \mathbf{E}f(\varepsilon)\rangle : g : \{-1,1\}^n \to X^*,\ \mathbf{E}\,\|g(\varepsilon)\|^q \le 1\Big\}
$$
with $\frac{1}{p} + \frac{1}{q} = 1$. Proceeding exactly as in the proof of Theorem 1.2, we obtain
$$
\mathbf{E}\,\langle g(\varepsilon), f(\varepsilon) - \mathbf{E}f(\varepsilon)\rangle \le \frac{\pi}{2}\int_0^\infty \big(\mathbf{E}\,\|g(\varepsilon\xi(t))\|^q\big)^{1/q}\, \Big(\mathbf{E}\,\Big\|\sum_{j=1}^n \delta_j(t)\, D_j f(\varepsilon)\Big\|^p\Big)^{1/p}\, \mu(dt)
$$
using Hölder's inequality. Recalling that $\mathbf{E}\,\|g(\varepsilon\xi(t))\|^q = \mathbf{E}\,\|g(\varepsilon)\|^q \le 1$ as the random vectors $\varepsilon\xi(t)$ and $\varepsilon$ have the same distribution, the conclusion follows readily.
Remark 2.2 (Alternative approach to the proofs of Theorems 1.2 and 1.4). Using that $\varepsilon$ and $\varepsilon\xi(t)$ have the same distribution and that (2.1) holds for all $g$, it is readily seen that (2.1) implies the pointwise identity
$$
f(x) - \mathbf{E}f(\varepsilon) = \frac{\pi}{2}\int_0^\infty \mathbf{E}\Big[\sum_{j=1}^n \delta_j(t)\, D_j f(x\xi(t))\Big]\, \mu(dt) \tag{2.2}
$$
for $x \in \{-1,1\}^n$. By using this identity one can organize the proofs in a manner that is closer to the proof of (1.2). For example, to prove Theorem 1.2 we can upper bound $\Phi(f(x) - \mathbf{E}f(\varepsilon))$ pointwise by applying Jensen's inequality to the right-hand side of (2.2), and then (1.3) follows by taking the expectation of the resulting expression and using that $\varepsilon$ and $\varepsilon\xi(t)$ have the same distribution. The pointwise identity (2.2) can also be proved directly, which leads to proofs of Theorems 1.2 and 1.4 that avoid the use of duality. The following argument was communicated to us by Jingbo Liu. First, note two basic properties of the discrete cube: $D_j^2 = D_j$ and $D_j P_t = P_t D_j$ for every $j$. Thus we can write
$$
f(x) - \mathbf{E}f(\varepsilon) = -\int_0^\infty \Delta P_t f(x)\, dt = \int_0^\infty \sum_{j=1}^n D_j P_t D_j f(x)\, dt = \frac{\pi}{2}\int_0^\infty \mathbf{E}\Big[\sum_{j=1}^n \delta_j(t)\, D_j f(x\xi(t))\Big]\, \mu(dt),
$$
using Lemma 2.1 in the last step. While conceptually appealing, the disadvantage of this argument is that it relies on special properties of calculus on the discrete cube. In contrast, the proofs that are based on duality use nothing else than the quadratic form $\langle f, -\Delta g\rangle_{L^2(\{-1,1\}^n)} = \langle Df, Dg\rangle_{L^2(\{-1,1\}^n)}$ and the gradient formula of Lemma 2.1, providing a more direct route to extensions beyond the discrete cube.
3. Proof of Theorem 1.1

Theorem 1.1 follows from Theorem 1.2 by a routine symmetrization argument.
Proof of Theorem 1.1. The first inequality $T_p^R(X) \le T_p^E(X)$ follows readily by choosing $f(\varepsilon) = \sum_{j=1}^n \varepsilon_j x_j$ in the definition of Enflo type. In the converse direction, note first that as $\varepsilon$ and $-\varepsilon$ have the same distribution, and as $x \mapsto \|x\|^p$ is convex, we can estimate
$$
\mathbf{E}\,\|f(\varepsilon) - f(-\varepsilon)\|^p \le 2^p\, \mathbf{E}\,\|f(\varepsilon) - \mathbf{E}f(\varepsilon)\|^p.
$$
To estimate the right-hand side we use Theorem 1.4 together with a standard symmetrization argument. Let $\xi'(t)$ be an independent copy of $\xi(t)$ and $\varepsilon'$ be an independent copy of $\varepsilon$. Then
$$
\mathbf{E}\,\Big\|\sum_{j=1}^n \delta_j(t)\, D_j f(\varepsilon)\Big\|^p \le \frac{\mathbf{E}\,\big\|\sum_{j=1}^n \varepsilon_j'(\xi_j(t) - \xi_j'(t))\, D_j f(\varepsilon)\big\|^p}{(1 - e^{-2t})^{p/2}} \le \frac{T_p^R(X)^p}{(1 - e^{-2t})^{p/2}}\, \sum_{j=1}^n \mathbf{E}\big[|\xi_j(t) - \xi_j'(t)|^p\, \|D_j f(\varepsilon)\|^p\big],
$$
where we used Jensen's inequality in the first line; that $\xi(t) - \xi'(t)$ has the same distribution as $\varepsilon'(\xi(t) - \xi'(t))$ (by symmetry and independence) in the second line; and the definition of Rademacher type conditionally on $\xi(t), \xi'(t), \varepsilon$ and that $\xi(t), \xi'(t), \varepsilon, \varepsilon'$ are independent in the third line. But as $p \le 2$, we obtain
$$
\mathbf{E}\,|\xi_j(t) - \xi_j'(t)|^p \le \big(\mathbf{E}\,|\xi_j(t) - \xi_j'(t)|^2\big)^{p/2} = \big(2(1 - e^{-2t})\big)^{p/2}
$$
by Jensen's inequality. Thus we have shown
$$
\Big(\mathbf{E}\,\Big\|\sum_{j=1}^n \delta_j(t)\, D_j f(\varepsilon)\Big\|^p\Big)^{1/p} \le \sqrt{2}\, T_p^R(X)\, \Big(\sum_{j=1}^n \mathbf{E}\,\|D_j f(\varepsilon)\|^p\Big)^{1/p}
$$
uniformly in $t$, and combining this bound with Theorem 1.4 yields $T_p^E(X) \le \sqrt{2}\,\pi\, T_p^R(X)$, completing the proof.

4. Proof of Theorem 1.5

The following contraction principle is a classical result of Maurey and Pisier (see, e.g., [14, Proposition 3.2]). We spell out a version with explicit constants.
Theorem 4.1. Let $(X, \|\cdot\|)$ be a Banach space of cotype $q < \infty$, let $\eta_1, \dots, \eta_n$ be i.i.d. symmetric random variables, and let $\varepsilon$ be uniformly distributed on $\{-1,1\}^n$. Then for any $n \ge 1$, $x_1, \dots, x_n \in X$, and $1 \le p < \infty$, we have
$$
\Big(\mathbf{E}\,\Big\|\sum_{j=1}^n \eta_j x_j\Big\|^p\Big)^{1/p} \le c_{p,q}\, C_q(X)\, \|\eta_1\|_{L^{\max(p,q)}}\, \Big(\mathbf{E}\,\Big\|\sum_{j=1}^n \varepsilon_j x_j\Big\|^p\Big)^{1/p}, \qquad c_{p,q} := \max(q/p, 1)^{1/2}.
$$

Proof. As $\eta_i$ are symmetric random variables, they have the same distribution as $\varepsilon_i \eta_i$. The conclusion for the special case $p = q$ follows from [9, Theorem 7.2.6]. For the general case, we consider two distinct cases.
For the case $p > q$, recall that a Banach space with cotype $q$ also has cotype $r$ for all $r > q$, with $C_r(X) \le C_q(X)$ [9, p. 55]. Thus the conclusion follows readily from [9, Theorem 7.2.6] applied with exponent $p$ in place of $q$.
For the case $p < q$, we bound the $L^p$-norm on the left-hand side by the $L^q$-norm, and then apply the inequality for the case $p = q$. This yields
$$
\Big(\mathbf{E}\,\Big\|\sum_{j=1}^n \eta_j x_j\Big\|^p\Big)^{1/p} \le C_q(X)\, \|\eta_1\|_{L^q}\, \Big(\mathbf{E}\,\Big\|\sum_{j=1}^n \varepsilon_j x_j\Big\|^q\Big)^{1/q}.
$$
We conclude by using the Kahane-Khintchine inequality [9, Theorem 6.2.4] to bound the $L^q$-norm of the Rademacher sum on the right-hand side by its $L^p$-norm, which incurs the additional factor $(q/p)^{1/2}$. This completes the proof.
We are now ready to prove one direction of Theorem 1.5: if X has finite cotype, then (1.1) holds with a dimension-free constant.
Proposition 4.2. Let $(X, \|\cdot\|)$ be a Banach space of cotype $q$, and let $\varepsilon, \delta$ be independent uniformly distributed random vectors in $\{-1,1\}^n$. Then for any function $f : \{-1,1\}^n \to X$ and $1 \le p < \infty$, we have
$$
\big(\mathbf{E}\,\|f(\varepsilon) - \mathbf{E}f(\varepsilon)\|^p\big)^{1/p} \le c(p, q)\, C_q(X)\, \Big(\mathbf{E}\,\Big\|\sum_{j=1}^n \delta_j\, D_j f(\varepsilon)\Big\|^p\Big)^{1/p},
$$
where $c(p, q)$ depends on $p$ and $q$ only.

Proof. Let $\xi'(t)$ be an independent copy of $\xi(t)$. We first note that
$$
\mathbf{E}\,\Big\|\sum_{j=1}^n \delta_j(t)\, D_j f(\varepsilon)\Big\|^p \le \mathbf{E}\,\Big\|\sum_{j=1}^n (\delta_j(t) - \delta_j'(t))\, D_j f(\varepsilon)\Big\|^p \le \big(c_{p,q}\, C_q(X)\, \|\delta_1(t) - \delta_1'(t)\|_{L^{\max(p,q)}}\big)^p\, \mathbf{E}\,\Big\|\sum_{j=1}^n \delta_j\, D_j f(\varepsilon)\Big\|^p,
$$
where we used Jensen's inequality in the first line and we applied Theorem 4.1 conditionally on $\varepsilon$ in the second line (the variables $\delta_j(t) - \delta_j'(t)$ are i.i.d. and symmetric). Now note that $\|\delta_1(t) - \delta_1'(t)\|_{L^{\max(p,q)}}$ is $\mu$-integrable, so the conclusion follows by combining the above bound with Theorem 1.4.

The proof of Theorem 1.5 is completed by the following converse, which shows that (1.1) cannot hold with a dimension-free constant when $X$ fails to have finite cotype.

Theorem 4.4. Let $X$ be a Banach space that does not have finite cotype. Then for every $n \ge 1$ and $1 \le p < \infty$ there exists a function $f : \{-1,1\}^n \to X$ such that
$$
\mathbf{E}\,\|f(\varepsilon) - \mathbf{E}f(\varepsilon)\|^p \ge C_{n,p}^p\, \mathbf{E}\,\Big\|\sum_{j=1}^n \delta_j\, D_j f(\varepsilon)\Big\|^p \quad \text{with } C_{n,p} = C\log(n/9p),
$$
where $C$ is a universal constant.
Proof. It was shown by Talagrand [15, Section 6] that for every $n \ge 1$ and $1 \le p < \infty$, there is a function $f : \{-1,1\}^n \to \ell_\infty^{2^n}$ so that
$$
\mathbf{E}\,\|f(\varepsilon) - \mathbf{E}f(\varepsilon)\|^p \ge \big(c\log(n/9p)\big)^p\, \mathbf{E}\,\Big\|\sum_{j=1}^n \delta_j\, D_j f(\varepsilon)\Big\|^p
$$
for a universal constant $c$. But if $X$ does not have finite cotype, then by the Maurey-Pisier theorem [9, Theorem 7.3.8] it must contain a $2$-isomorphic copy of $\ell_\infty^N$ for every $N \ge 1$. Thus we can embed Talagrand's example in $X$ for every $n \ge 1$ and $1 \le p < \infty$, and the proof is readily concluded.
Remark 4.5. We emphasize that our characterization of when Pisier's inequality holds with a dimension-free constant assumes that the Banach space $X$ and $1 \le p < \infty$ are fixed. When this is not the case, other phenomena can arise. For example, it follows from a result of Wagner [16] that if one chooses $p \asymp n$, then (1.1) holds with a universal constant for any Banach space $X$. This is a purely combinatorial fact that does not capture any structure of the underlying space.
"year": 2020,
"sha1": "237847fa1704880b7b5b798b6c0305fbda397f6f",
"oa_license": null,
"oa_url": "https://arxiv.org/pdf/2003.06345",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "d4fe307ea642f520ae278136b8b034c302d115d0",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
A Concept Map based Teaching of Compiler Design for Undergraduate Students
In undergraduate engineering, most subjects do not have open visibility of Industry and Research requirements. Students are interested mostly in subjects that are useful for Industry placement, and they do not show interest in subjects without such visibility if an instructor teaches by simply following the textbook. Considering this, we present a concept map based teaching methodology with Research and Industry assignments and problems. The proposed methodology focuses on improving teaching quality and the students' level of understanding. In this paper, we take the Compiler Design subject and present its concept map. To understand the effectiveness of the proposed methodology, student feedback was collected and evaluated using the sign test, and the students' submitted problems and assignments were evaluated to understand their level. The analysis results show that most students studied Compiler Design with interest as a result of the proposed teaching methodology.
Introduction
Industry professionals and academicians claim that many undergraduate engineering students lack the potential required for hiring. A major reason behind such criticism is that, while studying the course, students do not show interest in the subjects, which affects their in-depth understanding of them. Also, M. J. De Vries and J. H. M. Stroeken [1] found that engineering students lack research skills (the abilities to define a research problem, comment upon research methodology, and reflect upon research outcomes). This varies from one subject to another. To understand the students' thoughts about different subjects, a survey was conducted with three questions.
• What are the subjects you are interested in?
• What are the subjects you feel are required for Industry placement?
• What are the subjects you feel are required for Research?
Third-year undergraduate engineering students (92 participants) took the survey and gave their opinions. The survey results in percentage are shown in Table 1 for a few subjects of undergraduate computer science and engineering. The same survey was repeated with the next batch of third-year engineering undergraduate students (99 participants) with the following additional question.
• If the listed subjects are not given in the core list but offered as electives, which subjects would you prefer to study?
The survey results in percentage are shown in Table 2. It is very clear from the survey results that students do not have enough interest in some subjects and are not aware of their need in Industry and Research. Hence, it is necessary to encourage and motivate students toward these subjects by changing the teaching methodology, which should be of high quality. Concept map [2] [3] based teaching is appropriate for this purpose, as C. Chiou et al. [4] demonstrated its effectiveness.
In this paper, the Compiler Design subject is the focus, since students feel it is complex and not very productive. The students' opinions in Tables 1 and 2 clearly show that students have a negative notion about the Compiler Design subject. Hence, the instructor needs to counter this notion by re-designing the teaching methodology of Compiler Design and motivating students towards the subject.
Research Questions
While teaching, an instructor plays a significant role in motivating students and making them understand the subject. This can be achieved if the instructor has solutions to the following questions.
• Question 1: What innovative technique should be introduced to counter the students' negative notions of various subjects? Most of the students in a class are not interested in subjects that are not directly visible as a requirement of industry and research demand, as witnessed in Tables 1 and 2. The instructor needs to follow an innovative strategy to counter this notion.
• Question 2: How does an instructor make students self-design problems and assignments for a concept of a subject? Students typically understand a concept when the instructor teaches it, or later in a few cases, and they can solve the problems given in the textbook; but if a problem is tricky, they are unable to solve it. The instructor can overcome this issue by asking the students to design problems and assignments independently. For example, Ana et al. [5] proposed a module using concept maps to support self-regulated learning by undergraduate engineering students: each student is linked to a concept map recording the topics that are covered and those pending, and the instructor examines the concept maps of all students for further improvement. However, the issue is: what will encourage the students to do it?
• Question 3: What is required to make students attentive in the classroom? In most cases, students are not attentive in the classroom due to various factors such as the teaching methodology, complex topics, smart gadgets, distraction by classmates, etc. In a classroom of more than 50 students, the instructor may not be in a position to monitor all students. Even if the instructor forces or threatens the students to listen, the result is physical presence but mental absence, which is unproductive. Hence a novel teaching technique is required to make the students attentive in the classroom.
The significance of the above questions lies in achieving long-lasting remembrance of the subjects, proper utilization of the learned concepts in real time on demand, and, most importantly, producing well-rounded graduates. Considering the significance of the questions, the benefits of concept maps, and the students' opinions, a novel teaching methodology is introduced along with an evaluation technique to provide solutions to the research questions. The methodology was applied while teaching the Compiler Design subject to undergraduate engineering students.
Contribution
The contributions of the paper to improve teaching quality and the students' level of understanding are:
1. A novel teaching methodology based on the concept map, directed concept graph, and concept relation weight.
2. Linking each concept with present and future Industry- and Research-related problems and assignments.
3. A broad and extended concept map of Compiler Design.
The rest of the paper is structured as follows: Section 2 discusses existing works, Section 3 presents the proposed teaching research methodology, and Section 4 presents the broad and extended concept map of the Compiler Design subject. Section 5 presents the evaluation of the proposed methodology, and Section 6 discusses its significance and limitations. Finally, Section 7 concludes the paper with directions for future work.
Related works
In the literature, different approaches have been proposed to encourage and motivate students to concentrate on the Compiler Design subject and its concepts. Here, we discuss the existing works in two categories: the teaching style of the compiler design course, and its concepts.
H. Liu [6] introduced software engineering practice in an undergraduate compiler course. A. Demaille et al. [7] introduced a set of compiler construction tools for educational projects. L. Xu and F. G. Martin [8] proposed the Chirp system, which provides a realistic and engaging environment for teaching compiler courses. T. R. Henry [9] proposed Game Programming Language (GPL) based teaching to motivate students towards the compiler project. E. White et al. [10] proposed an approach that enables students of compiler courses to examine and experiment with a real compiler without becoming overwhelmed by complexity. S. R. Vegdahl [11] proposed a visualization tool for teaching compiler design to raise students' interest in the subject. M. Mernik and V. Zumer [12] described a software tool called LISA (Language Implementation System Based on Attribute Grammars) for learning and conceptual understanding of compiler construction in an efficient, direct, and long-lasting way. H. D. Shapiro and M. D. Mickunas [13] replaced the term project on compiler design with several smaller, independent programming assignments to better motivate students and improve understanding. M. Ruckert [14] argues that teaching compilers with an unusual programming language is a good choice. D. Kundra and A. Sureka [15] [16] discussed case-based teaching and learning of compiler design and proposed case studies for different concepts to make learning easier and more interesting. N. Wang and L. Li [17] reformed the compiler theory course by combining theory and practice to improve the students' skill sets. J. Velásquez [18] developed a tool called bcc, a minilanguage, to cover the complete syllabus in a semester; bcc supports executing the different phases of the compiler. K. Abe [19] developed an integrated laboratory that deals with processor organization, compiler design, and computer networking, in which students are encouraged to integrate these components to construct a complete small computer system. J. Velásquez [20] also described exposing students to the practical application of compiler construction techniques and encouraging the development of programming and problem-solving skills, and discussed the implementation of a scheme for automatic assessment using the Virtual Programming Lab. S. B. Nolen and M. D. Koretsky [21] discussed physical projects to actively engage students in a subject. M. Frank et al. [22] analysed and recommended teaching technology and engineering by means of project-based learning, which can train students better for their future profession. J. Àngel Velázquez-Iturbide et al. [23] analyzed and identified that student motivation increases through software visualization. I. Estévez-Ayres et al. [24] proposed a methodology for improving active learning through regular feedback and iterative refinement.
F. Ortin et al. [25] show how object-oriented design patterns represented in the Unified Modeling Language (UML) can be used to teach type checking and develop the semantic analysis phase of a compiler. S. Sangal et al. [26] proposed a Parsing Algorithms Visualization Tool (PAVT) to teach parsing algorithms; learners can visualise the intermediate steps of a parsing algorithm through the tool. R. D. Resler and D. M. Deaver [27] introduced the Visible Compiler Compiler (VCOCO) to let students visualize a compiler's internal workings; VCOCO is a flexible and effective tool for generating user-specified LL(1) visible compilers. A. Karkare and N. Agrawal [28] developed the ParseIT tool for teaching parsing.
Game-based, experiment-based, project-based (such as teaching computer architecture by D. Cenk Erdil [29]), case-based, feedback-based, and tool-based teaching all make learning easy and interesting. However, the concepts and their relationships, along with assignments and problems related to Research and Industry requirements, should be discussed for motivation and long-lasting remembrance, thereby providing solutions to the research questions. Hence this paper proposes a novel teaching methodology based on a concept map, which simplifies teaching, helps students understand the subject, and motivates them.
Research Methodology
This section discusses the proposed novel teaching research methodology and its evaluation technique, which answer the teaching research questions and improve the quality of teaching and understanding.
Methodology
Concept map based teaching, relating the concepts to Research and Industry demand, is an appropriate method to address the above research questions, since students are interested in Industry, as witnessed through the survey outcome. Considering this, a systematic teaching methodology is introduced for teaching a subject.
Basics for Quality Teaching. For efficient and qualitative teaching, a teacher/instructor should follow the research questions and plan the subject's teaching material, assignments, and problems. This paper improves the quality of teaching by constructing the concept map and directed graph of the subject, which allows a teacher to systematically follow the relations among concepts and the order in which to teach them with examples. Also, to evaluate and improve the students' level of understanding, students need to self-design problems and assignments. Accordingly, the following two teaching experiments are designed.
Teaching quality improvement. The systematic approach to teaching is to identify, order, and deliver the concepts. While teaching, each concept should be discussed along with its practical Industry and Research usage. Also, the relevant assignments and problems of a concept should be discussed before taking up the next concept in the order or queue. It is necessary to build the queue and follow it so that students will not miss the significance of any concept and its related concepts. The Directed Concept Graph (G = (N, E), the set of N nodes and E edges) is one of the best techniques to order the concepts of a subject. The nodes N are the concepts, and the edges E are the concepts' relations. Figure 1 shows a Directed Concept Graph (DCG) with five concepts denoted C_a, C_b, C_c, C_d and C_e, which can be mapped to compiler concepts.
Lemma 1 proves the need for the DCG to order the concepts of a subject and discuss them in class, thus improving the teaching quality.
Postulate 3. A concept should be discussed only after its pre-requisite concepts have been discussed with relevant problems and assignments.

Lemma 1. The concept queue contains correctly ordered concepts if the DCG is followed.
Proof. Given Postulates 1, 2 and 3, the precedence of concept C_a is higher than that of concept C_b (C_a > C_b) if C_b has a directed edge from C_a. Therefore, C_a comes before C_b in the concept queue. Hence the queue will contain ordered concepts.
A directed edge alone is insufficient to order the concepts when a concept has more than one outdegree. For example, C_a has directed edges (two outdegrees) to C_b and C_e, so the next concept after C_a in the order is nondeterministic. To overcome the non-determinism, the relation weight among concepts is introduced. Each concept of a subject is given a weight based on its importance in three categories: teaching (use in other subjects), industry, and research. The qualitative weight under each category is classified into low, medium, and high with quantities 1, 2, and 3, respectively. A concept's overall or total weight is the summation of the weights of all three categories. The weights for some concepts of Compiler Design are given in Table 3. The weights are assigned based on the instructors' experience in teaching and in the preparation and evaluation of Industry and Research problems and assignments.
Using the concept weights, the relation weight can be assigned. For example, the relation weight of C_a → C_b is the sum of the weights of C_a and C_b. The relation weight can be used to decide the next concept to be placed in the order; a sketch of this ordering is given below.
What will be the loss if the instructor misses a concept? A concept missed while teaching a subject will greatly impact the understanding of the related concepts. What will be the loss if the instructor does not follow the order? A concept significantly influences the understanding of other concepts. If the concept C_a, which should be discussed before concept C_b, is missed or discussed later, the student will not be able to understand C_b. For example, let us map the DCG of Figure 1 to the lexical analyser phase concepts of Compiler Design: C_a is Tokenization, C_b is the Finite State Machine, C_e is the Regular Expression, and C_c is the Table Driven LA. Assume the instructor teaches C_a, C_b, C_c in order but C_e later. In this case, students will not be in a position to understand the source of C_c when it is discussed, thus degrading the teaching quality. This supports the need to order concepts using the DCG and relation weights to improve the teaching quality.
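To make the ordering concrete, the following minimal Python sketch (our illustration, not part of the original methodology) orders a hypothetical fragment of the lexical-analysis concepts. The weights are made-up values, and ties among ready concepts are broken here by the concept weight; the relation weight defined above (the sum of the endpoint weights) could be substituted in the same place.

```python
# Sketch: ordering concepts of a subject from a Directed Concept Graph (DCG).
from graphlib import TopologicalSorter

# concept -> total weight (teaching + industry + research); illustrative values
weight = {
    "Regular Expression": 9,
    "Finite State Machine": 8,
    "Tokenization": 7,
    "Table Driven LA": 6,
}

# prerequisite -> concepts that depend on it (the directed edges of the DCG)
edges = {
    "Tokenization": {"Finite State Machine", "Regular Expression"},
    "Finite State Machine": {"Table Driven LA"},
    "Regular Expression": {"Table Driven LA"},
}

def concept_queue(edges, weight):
    """Return concepts in teaching order: prerequisites first; ties among
    ready concepts are broken by weight (higher weight taught first)."""
    ts = TopologicalSorter()
    for pre, succs in edges.items():
        for s in succs:
            ts.add(s, pre)              # s depends on pre
    ts.prepare()
    order = []
    while ts.is_active():
        # process each batch of ready concepts in descending weight order
        for c in sorted(ts.get_ready(), key=lambda c: -weight.get(c, 0)):
            order.append(c)
            ts.done(c)
    return order

print(concept_queue(edges, weight))
# ['Tokenization', 'Regular Expression', 'Finite State Machine', 'Table Driven LA']
```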
Improving the students' level of understanding. In addition to concept map and order-based teaching, the pyramid structure shown in Figure 2 can be followed while teaching each concept. The idea is to give the definition, the practical working model, how it was and is used, and what its future would be. The structure shows the contents of a concept to be discussed and the time frame for each content; the time frame is proportional to the spread of the pyramid. The discussion of the lexical analyzer concept based on the pyramid in Figure 2 is as follows.
Firstly, the concept was defined, then its working model was discussed along with textbook-based examples. Later, the use of the concept in other concepts or subjects was discussed. At the end, research- and industry-oriented examples of the concept were discussed by the instructors. Finally, students were asked to self-design problems and assignments on the concept.

Need for pre-requisite concept understanding. If a student does not have a full understanding of C_c of Figure 1, then the student cannot solve a problem on C_d. For example, a student without a clear understanding of first and follow construction cannot solve the example problem or prepare a sample assignment or problem for the item set/parsing table construction. Hence, understanding the pre-requisite concepts is essential for the students, and the instructor needs to guide them.
To understand a concept very clearly, the students should be asked to self-design Industry and Research problems for each concept, based on the Industry and Research examples discussed by the instructor. A student with sufficient knowledge of a concept can prepare relevant industry and research assignments and problems. The instructors can gauge the students' level of understanding while evaluating their assignments and problems, and accordingly guide them to understand the concept in depth. This will make the students think innovatively and be attentive in the classroom. Also, this method will encourage the students towards the subject, since they visualize and realize the use of a concept in the Industry and Research domains. Hence, this method improves the students' level of understanding.
Building the Concept Map. The subject overview can be presented using a concept map to encourage students toward a subject. In the experiment, the Compiler Design concept map was built and discussed in class. The process of making the broad concept map is presented in Algorithm 1. As part of building the concept map, each concept must be linked with other concepts if there is an association/relation, as discussed for the Directed Concept Graph. The associations/relationships are has, includes, creates, etc.; a possible in-memory representation is sketched below.
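Algorithm 1 itself is not reproduced here; purely as an illustration, a concept map with labelled relations could be held in memory as (concept, relation, concept) triples, as in the following sketch. The triples shown are a tiny, hypothetical subset of Figure 3.

```python
# Sketch: a concept map as labelled triples (concept, relation, concept).
from collections import defaultdict

triples = [
    ("Compiler", "has", "Lexical Analysis"),
    ("Lexical Analysis", "creates", "Tokens"),
    ("Lexical Analysis", "includes", "Regular Expression"),
    ("Syntax Analysis", "creates", "Syntax Tree"),
]

outgoing = defaultdict(list)
for src, rel, dst in triples:
    outgoing[src].append((rel, dst))

def related(concept):
    """List the concepts linked from `concept`, with their relation labels."""
    return outgoing.get(concept, [])

print(related("Lexical Analysis"))
# [('creates', 'Tokens'), ('includes', 'Regular Expression')]
```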
Evaluation Technique
The proposed methodology can be evaluated in two ways: the first is the evaluation of the proposed teaching methodology, and the second is the evaluation of the students' self-designed assignments and problems.
Methodology Evaluation. The proposed teaching methodology can be evaluated using a students' feedback survey. The survey can be conducted at the beginning and at the end of the course with the same set of students and questions, with the course taught using the proposed teaching methodology. The survey questions should be prepared to understand the students' mindset about a subject, and the responses should be dichotomous, that is, yes or no. The necessary survey questions for understanding the students' mindset about a subject were presented in Section 1. The outcome of the survey can be evaluated using the nonparametric sign test [30] to prove the significance of the proposed methodology. The sign test uses the binomial distribution with the cumulative distribution function given in equation 1:

F(k; n, p) = Σ_{i=0}^{k} C(n, i) p^i (1 − p)^{n−i}.    (1)
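As an illustration of equation 1, the following Python sketch computes the two-tailed sign-test probability with the standard library only; the counts in the example are hypothetical, not the study's data.

```python
# Sketch of the two-tailed sign test built on equation (1). n is the number
# of '+' and '-' records (ties excluded), k the number of '-' records, and
# p = 0.5 under the null hypothesis.
from math import comb

def binom_cdf(k, n, p=0.5):
    """Cumulative binomial probability P(K <= k) as in equation (1)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def sign_test_two_tailed(n, k, p=0.5):
    """Two-tailed p-value: twice the smaller tail probability, capped at 1."""
    tail = min(binom_cdf(k, n, p), 1 - binom_cdf(k - 1, n, p))
    return min(1.0, 2 * tail)

# Hypothetical counts: 50 non-tied records, of which 15 are negative.
bp = sign_test_two_tailed(n=50, k=15)
print(f"two-tailed bp = {bp:.4f}:", "reject H0" if bp < 0.05 else "retain H0")
```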
The sign test is most useful if comparisons can be expressed as x > y, x = y and x < y [31], where x and y are the survey outcomes before and after the event, respectively. This fits the proposed methodology evaluation; here, the event refers to the proposed teaching methodology. The possible outcomes from the survey results x and y that can be used for the sign test are as follows:
• If the survey outcome y of a student has more yes answers, then record a '+' (positive for the proposed teaching methodology), where x < y.
• If the survey outcome x of a student has more yes answers, then record a '−' (negative for the proposed teaching methodology), where x > y.
• If a student's number of yes answers has not changed, then record a '0' (neutral), where x = y.
• If a student's number of no answers has not changed, then record a '−' (negative for the proposed teaching methodology), where ¬x = ¬y.
To compute the binomial probability (bp) for the sign test, we require n, the total count, i.e., the number of '+' and '−' records excluding '0'; k, the number of '−' records, i.e., those against the proposed methodology; and p, the initial unbiased probability value of 0.5. A two-tailed sign test is required, since it is not known in advance whether the yes answers would increase or decrease after the event. The hypotheses for the test are as follows.
• The null hypothesis H_0: The proposed teaching methodology does not have any effect, or has a negative effect, on the students.
• The alternate hypothesis H_a: The proposed methodology has a positive effect on the students, and they are motivated.

Student Assignment Evaluation. The students' submitted self-designed assignments and problems can be evaluated by the instructors according to the concepts in the concept map and categorized as relative/non-relative or strong/weak. This helps the instructors understand the students' levels of understanding and their involvement. Students who submitted non-relative or weak assignments can be asked to prepare a new set of assignments; instructors can support the preparation by giving more examples.
COMPILER DESIGN CONCEPT MAP
Students who have studied or are studying the Compiler Design course should know the overview and understand how a compiler is designed and how a high-level program is translated for execution. The compiler design concept map [32] in Figure 3 provides the complete overview and the relationships among the concepts, to encourage students to focus on the subject. The concept map includes the preprocessor and the pre-requisites for this subject. The pre-requisites are the instruction set, Context Free Grammar (CFG) and Context Sensitive Grammar (CSG), Regular Expression (RE), Finite State Machine (FSM), and Push Down Automata (PDA). This makes students understand the importance of the Theory of Computation (ToC) subject, since they studied RE, FSM, PDA and CFG/CSG there. Hence, the proposed concept map motivates students toward both Compiler Design and Theory of Computation. There is no direct advantage concerning ToC, since the students have already studied that subject, but it can impact junior students, because senior students will convey the importance of ToC/Automata to their juniors.
Phases of Compiler
In this paper, machine independent optimization is included in the code optimization phase and machine dependent optimization in the code generation and optimization phase.
Lexical Analysis. The important concepts to be discussed in the lexical analysis phase are Tokenization, Finite State Machine, and Regular Expression, in addition to buffering and the Symbol Table, and obviously how these concepts are interlinked to generate the tokens. Even though students studied regular expressions and finite state machines in the pre-requisite subject ToC, the concepts need to be revised with programming language pattern examples. In the classroom, the instructor may take the hello world C language program as input and recognize the tokens using single and double buffering to better explain the lexical analyser; a small tokenizer sketch is given below. The tools Flex and Lex need to be briefly introduced and practiced in the laboratory.
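As one possible classroom artifact, the following Python sketch tokenizes a hello world C program with regular expressions. The token classes are simplified illustrations; a production lexer (e.g., one generated by Flex) covers far more of the language.

```python
# Sketch: a tiny regular-expression tokenizer for a "hello world" C program.
import re

TOKEN_SPEC = [
    ("PREPROC", r"#\w+\s*<[^>]*>"),   # e.g. '#include <stdio.h>'
    ("STRING",  r'"[^"\n]*"'),
    ("NUMBER",  r"\d+"),
    ("ID",      r"[A-Za-z_]\w*"),     # keywords and identifiers alike
    ("PUNCT",   r"[(){};,]"),
    ("SKIP",    r"\s+"),              # whitespace, dropped below
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(source):
    """Yield (token_class, lexeme) pairs, skipping whitespace."""
    for m in MASTER.finditer(source):
        if m.lastgroup != "SKIP":
            yield (m.lastgroup, m.group())

program = '#include <stdio.h>\nint main() { printf("hello, world"); return 0; }'
for token in tokenize(program):
    print(token)
```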
Syntax Analysis. Syntax analysis takes tokens from the lexical phase as input and produces the syntax tree, which the semantic phase will use. The major task of the syntax analyzer is to check whether the syntax present in the program belongs to the programming language or not. To do this check, the syntax analyzer uses a parser built on the Finite State Machine, Push Down Automata, and Context Free or Context Sensitive Grammar. A parser can follow a top-down or bottom-up approach, chosen based on the developer's requirements. The LL (Left-to-right, Leftmost derivation) and LR (Left-to-right, Rightmost derivation) parsers work well only when a grammar is unambiguous, and they parse in linear time. For an LL parser, an unambiguous, deterministic, non-left-recursive grammar is given as input and the first and follow sets are computed (a sketch of the first-set computation is given below); using first and follow, the parsing table is constructed for parsing. In the case of an LR parser, the finite state machine based item sets and parsing table for an unambiguous grammar are constructed for parsing. The instructor also needs to briefly discuss universal parsers such as Earley's and CYK, which can take any type of grammar and parse it, though not in linear time. The tool Yacc can be briefly introduced. In teaching a Compiler Design course, at least 25% to 35% of the semester's time will be spent on the syntax analysis phase.
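The first-set computation mentioned above can be demonstrated with a short fixed-point program. The grammar below is the standard non-left-recursive expression grammar used in most textbooks, not one taken from this paper; "" denotes the empty string (epsilon).

```python
# Sketch: fixed-point computation of FIRST sets for LL(1) table construction.
GRAMMAR = {  # nonterminal -> list of productions (tuples of symbols)
    "E":  [("T", "E'")],
    "E'": [("+", "T", "E'"), ()],
    "T":  [("F", "T'")],
    "T'": [("*", "F", "T'"), ()],
    "F":  [("(", "E", ")"), ("id",)],
}

def first_sets(grammar):
    first = {nt: set() for nt in grammar}
    changed = True
    while changed:
        changed = False
        for nt, productions in grammar.items():
            for prod in productions:
                before = len(first[nt])
                nullable_prefix = True
                for sym in prod:
                    if sym in grammar:                 # nonterminal
                        first[nt] |= first[sym] - {""}
                        if "" not in first[sym]:
                            nullable_prefix = False
                            break
                    else:                              # terminal
                        first[nt].add(sym)
                        nullable_prefix = False
                        break
                if nullable_prefix:                    # all symbols nullable
                    first[nt].add("")
                changed |= len(first[nt]) != before
    return first

for nt, s in first_sets(GRAMMAR).items():
    print(nt, sorted(s))
```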
Semantic Analysis. It takes the parse tree as input from the syntax analyzer and checks the semantics of the program, such as type checking, parameter matching, label checks, etc. This phase uses attribute grammars or a direct method to verify the semantics. The main concepts in the semantic analyser are the Symbol Table and Attribute Grammar.

Intermediate Code Generation (ICG). It takes the syntax tree from the semantic phase as input and provides the Intermediate Representation (IR). The IR can be structural, linear, or hybrid. In most programming languages a linear representation is used; three address code is mostly preferred, with storage representations such as quadruples, triples, and indirect triples (a small quadruple-generation sketch is given below). It also uses Static Single Assignment (SSA) for better register allocation. This phase also uses SDT in some languages to convert the syntax tree representation into three address code.
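To illustrate the quadruple representation of three address code, the following sketch walks a small expression AST; the nested-tuple AST format is an assumption made for this example, not a format prescribed by the paper.

```python
# Sketch: generating three-address code as quadruples from an expression AST.
from itertools import count

_temps = count(1)

def gen_tac(node, quads):
    """Emit quadruples (op, arg1, arg2, result); return the result's name."""
    if isinstance(node, str):          # leaf: a variable name or constant
        return node
    op, left, right = node
    a = gen_tac(left, quads)
    b = gen_tac(right, quads)
    result = f"t{next(_temps)}"        # fresh temporary
    quads.append((op, a, b, result))
    return result

# a = b * c + b * c, written as a nested tuple AST
quads = []
root = gen_tac(("+", ("*", "b", "c"), ("*", "b", "c")), quads)
quads.append(("=", root, None, "a"))
for q in quads:
    print(q)
# ('*', 'b', 'c', 't1'), ('*', 'b', 'c', 't2'), ('+', 't1', 't2', 't3'), ('=', 't3', None, 'a')
```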
Figure 3. Basic concept map of Compiler Design
Code Optimization. This phase helps in optimizing the time and space complexity of the program during execution; however, the semantics of the program must be preserved. Code optimization can be done at the local, global, or inter-procedural level, and can be machine dependent or independent. In the classroom, the instructor has to show one example for each optimization concept to make the concepts clear to the students.
Code Generation. Code generation is the final phase of the compiler; it takes the optimized code and generates the target code. The target code may be code in another high-level language or assembly level code. To generate assembly level code, it is important to consider the instruction set, register allocation, and instruction scheduling. The instruction set will be based on a Reduced Instruction Set Computer (RISC), a Complex Instruction Set Computer (CISC), or micro-ops (a mix of CISC and RISC). Since CISC architecture is used in popular Intel processors, a CISC instruction set can be taken for solving the example in the classroom. However, students may be asked to generate the target code for different architectures.
Extended Concept Map
The instructor introduces the broad concept map at the first level; then, for each concept, the concept map is extended further until it enables the students to understand easily and solve the problems and assignments. The extended concept map also encourages enthusiastic students to design their own problems and assignments regarding Industry applications and Research. This section presents the extended concept maps of the Compiler Design phases.
Lexical Analysis. Figure 4 shows the extended concept map of the lexical analyser [33] [34]. The Industry and Research examples for the core concepts in the extended concept map are as follows (a sketch of the document-workflow state machine of example 2R follows the list).
• 1I: Use regular expressions in the medical industry to identify external analytic vendors that differ from the approved list.
• 1R: Use text manipulation (update, delete, search) in a text file database using regular expressions [35]. In this case, flat files may play a useful role as a database: the extensive facilities provided by databases such as MySQL, Oracle, SQL Server, etc. are not required, freeing users from the additional work of database software installation, knowing a query language, etc.
• 2I: A Finite State Machine (FSM) can be used to execute a sequence of tests for flow measurement testing. This solution can be applied to automate other serial and batch processes [36].
• 2R: Design a process flow for managing enterprise documents: identify all possible states that a document can be in, and the corresponding actions that allow documents to transition between states. Software usually allows auto-deleting documents once they have been in a final state for a given number of days, months, or years; this is useful for compliance scenarios in which some documents must be kept, archived, and later deleted following strict rules imposed by legal requirements [37].
• 3I & 6I: An object-oriented lexical analyzer for compiler construction can simplify the design effort, and it permits code re-use [38].
• 3R & 6R: An industry that handles big data and needs analysis can use an object-based scanner for recognition.
• 4I: A state-oriented lexical analyser can be designed to detect SQL injection attacks in a database [39].
• 4R: Use a state-oriented lexical analyser for sentiment analysis in the entertainment industry for movie prediction; identify the states and transitions for the prediction of the movie [40].
• 5I: Design a table-driven lexical analyser tool to identify the patterns of the Go language.
• 5R: Design a table-driven lexical analyser to analyse the log file of an online product-selling industry.
• 7I: Design a handwritten scanner using a state-oriented, table-driven, or object-oriented lexical analyser to extract useful information from email messages in order to categorize the emails.
• 7R: Design a handwritten tokenizer using a state-oriented, table-driven, or object-oriented lexical analyser to analyse natural language data.
• 8I: Design an automated tokenization tool for an online product-selling industry that can be reused in the cyber security industry, which needs to scan packet datagrams to identify attacks, if any.
• 8R: Design an automated lexical analysis tool for sentiment, social cognition, and social-order analysis [41].
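As a sketch of example 2R, the finite state machine below models a hypothetical document workflow; the states, actions, and retention rule are illustrative assumptions, not a prescription from any particular document-management product.

```python
# Sketch: a document-lifecycle finite state machine (example 2R).
TRANSITIONS = {
    ("draft",     "submit"):  "in_review",
    ("in_review", "approve"): "approved",
    ("in_review", "reject"):  "draft",
    ("approved",  "archive"): "archived",
    ("archived",  "expire"):  "deleted",   # e.g. after a retention period
}

class Document:
    def __init__(self):
        self.state = "draft"

    def act(self, action):
        """Apply an action; raise if it is not allowed in the current state."""
        key = (self.state, action)
        if key not in TRANSITIONS:
            raise ValueError(f"'{action}' not allowed in state '{self.state}'")
        self.state = TRANSITIONS[key]
        return self.state

doc = Document()
for action in ["submit", "approve", "archive", "expire"]:
    print(action, "->", doc.act(action))
```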
Other phases of the compiler. Figures 5 to 9 respectively show the extended concept maps of the Syntax Analyzer, Semantic Analyzer, Intermediate Code Generation, Code Optimization, and Code Generation phases. The Industry and Research examples for the core concepts need to be discussed in class when the respective concept is taught. This will motivate students to study the Compiler Design subject with interest.
Evaluation
The proposed concept map, directed concept graph, and concept relation weight based teaching methodology was implemented while teaching the Compiler Design course, and its significance was evaluated through the students' self-designed problems and assignments and their feedback.
Students Feedback
To validate the proposed teaching methodology, the students' opinions about the Compiler Design subject were collected at the end of the course through an anonymous survey using the same three questions discussed in Section 1. The number of students from Batch I (2019) who participated in the survey is 60. The outcome is shown in Figure 10, which also includes the survey outcome of the Batch I (2019) students at the beginning of the course. The difference in the survey outcomes indicates that the students' mindset about the Compiler Design subject changed positively and that the proposed teaching methodology influenced the students. It is very clear from the post-event outcome that students are largely interested in the subject and realized that the subject's relevance to research is high; however, they found the subject's relevance to Industry to be lower. To overcome this issue, instructors need to discuss more assignments and problems from the industry domain.
Students' feedback reliability justification. In general, students may reason that the course is about to be completed and grades will be generated, and therefore answer the survey questions in the hope of gaining higher grades. Such answering would give better results for the proposed teaching methodology, but it would not be authentic, and the whole experiment would be useless. In this survey, however, the results seem authentic: the students' rating of the subject's relevance to placement decreased, while interest and perceived research relevance increased massively. Hence, the students' feedback is not biased by grade considerations.
Figure 5. Concepts of Syntax Analyzer
Figure 6. Concepts of Semantic Analyzer

Sign Test. To test the significance of the proposed teaching methodology, the survey outcome is applied to the sign test. The survey taken was anonymous, so mapping the pre- and post-event outcomes to a participant is impossible; hence, random mapping was used to perform the test. Also, there were 92 participants at the beginning (pre-event) and 60 at the end (post-event), so 32 participants' feedback was randomly removed from the pre-event survey list. The surveys at the beginning (x) and the end (y) had the same three questions and equivalent students' answers. The sign test attributes discussed in Section 3.2, their values, and the computed cumulative binomial probability (bp) and two-tailed test value are given in Table 4.
It can be concluded from the results in Table 4 that the proposed teaching methodology had a positive effect on the students, since the two-tailed binomial probability value bp is less than the significance level of 0.05. Hence the null hypothesis H_0 is rejected, and the alternate hypothesis H_a is accepted. The proposed teaching methodology changed the students' negative notion about the subject towards the positive side; that is, students understand the need for the subject with respect to Industry and Research requirements, have shown interest, and are motivated.
Students' Assignments
As part of the evaluation and experiment, students were asked to form five-member teams and prepare assignments on each core concept of the compiler with respect to both Industry and Research. Marks were awarded while grading to motivate the students to actively participate in the assignment preparation. The instructors evaluated the students' submitted assignments according to their relevance to the concepts. Table 5 shows the Batch I (2019) assignment evaluation results in percentage with respect to the relevance of assignments to the concepts. The results show that for a few concepts the student teams prepared relevant assignments, while for other concepts the assignments were weak. For example, only 45% and 50% of the student teams were able to self-design Industry and Research assignments, respectively, for the Lexical Analyzer phase concepts. Hence, the instructor has to identify a few more example assignments on these concepts and discuss them in class to improve the students' level of understanding. These results help the instructor identify the students' levels and their understanding of the concepts, and accordingly guide them to understand the concepts clearly and self-design relevant industry and research assignments.
Proof and Limitation
The following Lemmas are proposed to prove the impact of the proposed teaching methodology among students. The notations considered to prove the Lemmas are given in Table 6.
In any subject, a class can have N students in different categories. The first category of students studies the subject just to pass and get a degree. The second category concentrates only on the subjects that help with industry placement, and considers other subjects merely requirements to complete the degree. The third category, students interested in doing research, concentrates on subjects they feel are required for research. The fourth category focuses on all subjects and studies them thoroughly. The fourth category of students are the outliers (P), who do not need any motivation, but the other three categories need motivation and a different teaching methodology. Hence, only two categories of students (one needing motivation and one not) are considered for the proof. The set of students S in equation 2 consists of the two categories of students, i.e., M and P = N − M, but for further proof S is considered with one category of students, i.e., M. In an undergraduate engineering program, there are four years of students.

Postulate 4. A student is influenced by the senior students and the placement record. (s ← SS)
Postulate 5. A student believes in seniors as well as the instructor, but has more belief in senior students than in the instructor.
Postulate 6. In most cases, a student's mindset about a subject is fixed before the beginning of the class.
Postulate 7. Linking each concept with present Industry and Research requirements can make students know a subject's need.
Lemma 2. If the instructor follows only textbook-based teaching, encouraging a student towards a subject is challenging.
Proof. Given Postulates 4, 5 and 6, encouraging a student with textbook teaching is a challenging task for the instructor if a subject is not directly visible as a requirement for placement and research.

Lemma 3. If the instructor follows the proposed teaching methodology, the students' mindset about the subject will be positive and they will be motivated.
Proof. Given Postulate 7, the students' mindset about a subject will be positive and they will be motivated towards the subject.
Lemma 4. If the instructor changes the notion about a subject once, it will have consequences for the junior students.

The survey results in Figure 10 support Lemmas 2 and 3: the pre-event outcome x is much lower than the post-event outcome y. The survey results in Table 2 do not yet support Lemma 4; however, after two or three batches, the instructor can expect to observe the impact of Lemma 4.
Limitation
Deciding on the number of teaching hours, including lecture, tutorial, and practical hours, is important for a subject, since all topics need to be taught to the students within a semester. The academic committee of an institute decides credit hours according to the content weight of the subject while creating the curriculum, thus making it possible to complete all topics in a semester. The credit hours vary from subject to subject and with the nature of the subject, such as core, elective, add-on, practice-oriented, etc. The proposed teaching methodology covers various aspects, including student assignment and problem evaluation, retraining of students and re-evaluation, and discussing each concept of a subject following the proposed pyramid model. This teaching methodology needs more time due to the multiple learning levels (below average, average, and above average) of students in the class. In some cases, the time or total duration of the semester may not be sufficient to complete all the subject topics following the proposed methodology. Hence, the proposed methodology needs to be optimized further to make it effective in the allotted time without affecting the semester's regular exercises such as exams, result declaration, etc. The proposed teaching methodology can possibly be applied entirely to subjects that do not have open visibility of Industry and Research needs, and in a reduced form for others. For example, the Machine Learning course has open visibility, because many research papers like [42] [43] can be found easily and more work is available on it; thus the proposed teaching methodology can be optimized for it. However, the effectiveness should not be affected.
Conclusion
This paper analysed various teaching methodologies and identified three research questions that need to be addressed to improve the quality of teaching and the students' level of understanding. The proposed teaching methodology, using the concept map, concept order queue, pyramid structure, and Industry and Research assignments and problems, addresses these questions. The methodology advances teaching practice compared to the existing techniques. The Compiler Design subject was chosen for the experiment with the proposed teaching methodology, since many students had pre-decided that this subject has little scope in research and development. Teaching Compiler Design through the proposed teaching methodology made students understand the importance of the Compiler Design subject in the Industry and Research domains, thus motivating them to self-design problems and assignments and to be attentive in the classroom. The sign test performed on the students' feedback, collected through surveys at the beginning and the end, proves the significance and impact of the proposed teaching methodology. This methodology can also be used for other subjects; however, the discussed limitations need to be addressed. The future work of this paper is to optimize the proposed teaching methodology, possibly by doing parallel activities, to make it effective in the available time, and also to experiment with other batches and with subjects that students consider less required for their Research or Industry placement, or both.
"year": 2022,
"sha1": "1ad9d40e0a28422fab43ab3fa3819dae6c135cc9",
"oa_license": "CCBY",
"oa_url": "https://publications.eai.eu/index.php/el/article/download/2550/2169",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "44772db9a087894337d72f9587cb16216a5c3ee8",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Multiple venous variations at the abdominopelvic region: a case report
Knowledge of vascular variations of the abdominopelvic junction is of importance to surgeons, radiologists, orthopaedic surgeons and other medical disciplines. We report a rare combination of venous variations observed at the abdominopelvic junction of an adult male cadaver. The right common iliac vein was absent. The inferior vena cava was formed by the union of the right external iliac vein and the left common iliac vein. The right internal iliac vein was a tributary of the left common iliac vein. The left common iliac vein was larger than usual in size and its wall was adherent to the right common iliac artery. We discuss the functional, developmental and clinical issues related to the case.
Introduction
The inferior vena cava is formed by the union of the right and left common iliac veins. The common iliac veins are formed by the union of the external and internal iliac veins of the corresponding side. Some of the variations of the iliac veins that have been reported include absence of the common iliac vein or its doubling, absence of the external iliac vein, opening of the renal veins into the iliac veins, and formation of the common iliac vein by the confluence of four veins [1]. Rarely, a venous ladder is formed between the external and internal iliac veins [2]. In the case of a pelvic kidney, the renal vein might drain into the internal iliac vein or into the junction of the two common iliac veins forming the inferior vena cava. We report a rare combination of venous variations at the abdominopelvic junction and discuss the developmental, functional and clinical importance of these concurrent variations.
Case Report
During routine dissection classes, we observed a rare combination of venous variations at the abdominopelvic junction of an adult male cadaver aged about 75 years. The right common iliac vein was absent. The inferior vena cava was formed by the union of the right external iliac vein and the left common iliac vein at the upper border of the fifth lumbar vertebra. The left common iliac vein was formed by the union of the left external and internal iliac veins. The right internal iliac vein had a variant course and termination. It ran upwards and to the left, and joined the left common iliac vein 2.5 cm below the level of formation of the inferior vena cava (at the lower border of the fifth lumbar vertebra). The left common iliac vein was larger than usual in size and its wall was adherent to the right common iliac artery. The anterior wall of the left common iliac vein was very thin and it opened up easily while the right common iliac artery was being separated from it. The variations are shown in Figs. 1 and 2. A simplified schematic diagram of the variant veins is also given (Fig. 3).
Discussion
Although morbidity and mortality rates in pelvic malignancies have reduced drastically due to advances in imaging technologies, surgeons often encounter significant bleeding during lateral pelvic surgeries. This bleeding is largely due to the presence of variations in the pelvic vasculature. Variations of the common iliac vein are less common than variations of the external and internal iliac veins. In the current case, the inferior vena cava was formed by the union of the right external iliac vein and the left common iliac vein. We can also interpret the formation of the inferior vena cava in another way, i.e., the inferior vena cava was formed by the union of the left common iliac vein and the right internal iliac vein at the level of the lower border of the fifth lumbar vertebra, and the right external iliac vein joined the inferior vena cava at the upper border of the fifth lumbar vertebra. Other congenital anomalies of the inferior vena cava include left inferior vena cava, double inferior vena cava and azygos continuation of the inferior vena cava [3]. These anomalies were not found in the current case.
The iliac veins develop from the posterior cardinal veins during the embryonic period. If there is any error in this process, anomalous veins are seen in the pelvis, as in the current case. Here, the vein forming the right internal iliac vein had failed to communicate with the external iliac vein of the right side. Instead, it had joined the left common iliac vein in the embryonic period. From a functional viewpoint, the union of the right internal iliac vein with the left common iliac vein results in increased venous return through the left common iliac vein. All the blood from the pelvic organs and the left lower limb passes through the left common iliac vein in this case. This could result in enlargement of the vein. May-Thurner syndrome, or Cockett syndrome, is a syndrome in which the left common iliac vein is compressed between the right common iliac artery and the lower part of the vertebral column. In this syndrome, the veins of the pelvis and lower limbs enlarge and, in some cases, this leads to deep vein thrombosis of the lower limb. In the current case, there was no enlargement of the pelvic veins or the veins of the lower limb. However, there was a fusion between the walls of the right common iliac artery and the left common iliac vein. The wall of the vein adjacent to the artery was extremely thin, possibly due to friction between the vein and the artery owing to the large size of the vein. This type of fusion can lead to spontaneous rupture of the vein at any time of life.
Clinically, knowledge of iliac vein variations is important in phlebography and retroperitoneal surgery, as retroperitoneal pelvic surgeries have been on the rise lately [4,5]. Attention must be paid to the iliac veins and their tributaries during superior hypogastric neurectomy and hysterectomy. Orthopaedic procedures involving screw placement and obstetric surgeries can also endanger the anomalous course of the right internal iliac vein observed in the current case [6][7][8]. The anomalous veins in the pelvis could also be injured in aortic and iliac artery reconstruction procedures. Preoperative magnetic resonance imaging examination could minimise iatrogenic injuries to some extent.
In conclusion, absence of the right common iliac vein with opening of the right internal iliac vein into the left common iliac vein is one of the rare venous variations of the abdominopelvic area [9,10]. It was accompanied by adhesion of the left common iliac vein to the right common iliac artery. This combination of variations may remain asymptomatic or could lead to May-Thurner syndrome. Knowledge of this variation could minimise bleeding in retroperitoneal surgeries of the abdominopelvic junction. | 2022-08-31T06:17:35.614Z | 2022-08-30T00:00:00.000 | {
"year": 2022,
"sha1": "4a45d27c10ab5c6a298d1d0cdb2757aeb4d8d751",
"oa_license": "CCBYNC",
"oa_url": "https://acbjournal.org/journal/download_pdf.php?doi=10.5115/acb.22.066",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "761804c626097ed8187a3836828589fde81330de",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
259192831 | pes2o/s2orc | v3-fos-license | Effects of blood pressure and tranexamic acid in spontaneous intracerebral haemorrhage: a secondary analysis of a large randomised controlled trial
Background Tranexamic acid reduced haematoma expansion and early death, but did not improve functional outcome in the tranexamic acid for hyperacute spontaneous intracerebral haemorrhage-2 (TICH-2) trial. In a predefined subgroup, there was a statistically significant interaction between prerandomisation baseline systolic blood pressure (SBP) and the effect of tranexamic acid on functional outcome (p=0.019). Methods TICH-2 was an international prospective double-blind placebo-controlled randomised trial evaluating intravenous tranexamic acid in patients with acute spontaneous intracerebral haemorrhage (ICH). Prerandomisation baseline SBP was split into predefined ≤170 and >170 mm Hg groups. The primary outcome at day 90 was the modified Rankin Scale (mRS), a measure of dependency, analysed using ordinal logistic regression. Haematoma expansion was defined as an increase in haematoma volume of >33% or >6 mL from baseline to 24 hours. Data are OR or common OR (cOR) with 95% CIs, with significance at p<0.05. Results Of 2325 participants in TICH-2, 1152 had baseline SBP≤170 mm Hg and were older, had larger lobar haematomas and were randomised later than 1173 with baseline SBP>170 mm Hg. Tranexamic acid was associated with a favourable shift in mRS at day 90 in those with baseline SBP≤170 mm Hg (cOR 0.73, 95% CI 0.59 to 0.91, p=0.005), but not in those with baseline SBP>170 mm Hg (cOR 1.05, 95% CI 0.85 to 1.30, p=0.63). In those with baseline SBP≤170 mm Hg, tranexamic acid reduced haematoma expansion (OR 0.62, 95% CI 0.47 to 0.82, p=0.001), but not in those with baseline SBP>170 mm Hg (OR 1.02, 95% CI 0.77 to 1.35, p=0.90). Conclusions Tranexamic acid was associated with improved clinical and radiological outcomes in ICH patients with baseline SBP≤170 mm Hg. Further research is needed to establish whether certain subgroups may benefit from tranexamic acid in acute ICH. Trial registration number ISRCTN93732214.
WHAT IS ALREADY KNOWN ON THIS TOPIC
⇒ Tranexamic acid did not improve functional outcome despite reducing haematoma expansion and early death in patients with acute spontaneous intracerebral haemorrhage in the tranexamic acid for hyperacute spontaneous intracerebral haemorrhage-2 trial, but there was a statistically significant interaction between baseline systolic blood pressure (SBP) and treatment on functional outcome, which we sought to explore further.
WHAT THIS STUDY ADDS
⇒ In this prespecified secondary analysis, randomisation to tranexamic acid in the presence of baseline SBP ≤170 mm Hg was associated with less haematoma expansion and improved clinical outcomes with fewer deaths and serious adverse events (SAEs), less death and dependency, and improved quality of life scores compared with placebo. A >15% reduction in SBP from baseline to day 2 was associated with fewer deaths by day 7 and 90 in those randomised to tranexamic acid but not placebo, whilst a >5% increase in SBP was associated with increased death and SAEs by day 7 overall, and increased SAEs in those randomised to placebo by days 7 and 90.
HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY
⇒ Tranexamic acid may improve clinical and radiological outcomes in participants with baseline SBP≤170 mm Hg. Future research should aim to establish which subgroups of patients may benefit from tranexamic acid and whether BP lowering is additive or synergistic in the presence of tranexamic acid in acute intracerebral haemorrhage.
INTRODUCTION
Elevated blood pressure (BP) in acute intracerebral haemorrhage (ICH) is associated with haematoma expansion and increased death and dependency. [1][2][3] Large trials of intensive BP lowering in ICH patients with elevated BP at presentation have produced mixed results: in INTERACT-2, intensive BP lowering was associated with less death and dependency in a shift analysis of the modified Rankin Scale (mRS) 4 ; ATACH-2, which assessed a more aggressive intensive BP lowering strategy, did not influence mRS at day 90 but did lead to more renal adverse events. 5 Current guidelines recommend considering reduction of elevated BP in acute ICH in line with the INTERACT-2 protocol. [6][7][8] In the tranexamic acid for hyperacute spontaneous intracerebral haemorrhage-2 (TICH-2) trial, 9 tranexamic acid reduced haematoma expansion and early death but did not influence the mRS at day 90. In predefined subgroups, there was a statistically significant interaction between baseline systolic BP (SBP) and tranexamic acid on the primary outcome of mRS at day 90. Those with baseline SBP ≤170 mm Hg randomised to tranexamic acid had a favourable shift in the mRS to less death and dependency compared with placebo, while those with baseline SBP >170 mm Hg randomised to tranexamic acid had no change in the mRS compared with those randomised to placebo. 9 We sought to investigate this interaction in this predefined subgroup in more detail, and in particular to assess the association of baseline SBP with the potential treatment effect of tranexamic acid. We hypothesised that patients with lower baseline SBP were more likely to have non-hypertension-related ICH aetiologies with more lobar ICH, present later, have milder clinical phenotypes, may not have undergone haematoma expansion, and therefore might benefit from tranexamic acid.
METHODS
TICH-2 was an international prospective double-blind randomised placebo-controlled clinical trial that tested the safety and efficacy of intravenous tranexamic acid in people with acute spontaneous ICH within 8 hours of symptom onset. Details pertaining to the trial protocol and main results are published. 9 10 Written consent was obtained from patients or their representatives before starting trial procedures.
Blood pressure
Baseline BP was measured immediately prior to randomisation and recorded on the randomisation form; a further two BP measurements were taken and recorded on day 2. We used the predefined baseline SBP cut-point used in the statistical analysis plan and main results paper of TICH-2: ≤170 mm Hg and >170 mm Hg. 9 11 This cut-point was the median baseline SBP found in acute ICH patients previously. 12 13 We also assessed whether change in SBP from baseline to day 2, independent of baseline SBP, was associated with clinical outcome, using categories applied in a previously published secondary analysis of a large acute stroke trial, as follows: large decrease (>15% decrease), moderate decrease (5%-15% decrease), no change (5% decrease to 5% increase, reference group), increase (>5% increase). 14 The association between SBP on day 2 and clinical outcome by treatment group was assessed in those with SBP≤140 mm Hg and >140 mm Hg and in 20 mm Hg increments across the range of day 2 SBP. Data on the number, route and class of antihypertensive medications used between randomisation and day 2 were collected.
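As a concrete illustration of this categorisation, a minimal Python sketch follows (ours, not the trial's analysis code); the handling of values falling exactly on a boundary is our assumption.

```python
def sbp_change_category(baseline_sbp: float, day2_sbp: float) -> str:
    """Classify the relative change in systolic blood pressure (SBP)
    from baseline to day 2, using the categories described above.
    Negative percent change means SBP decreased. Boundary handling
    at exactly 5% and 15% is our assumption."""
    pct_change = 100.0 * (day2_sbp - baseline_sbp) / baseline_sbp
    if pct_change < -15:
        return "large decrease"       # >15% decrease
    if pct_change <= -5:
        return "moderate decrease"    # 5%-15% decrease
    if pct_change <= 5:
        return "no change"            # reference group
    return "increase"                 # >5% increase

# Example: baseline 180 mm Hg falling to 150 mm Hg is a ~16.7% decrease.
print(sbp_change_category(180, 150))  # -> "large decrease"
```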
Clinical outcomes
The primary outcome in TICH-2 was functional outcome measured using the mRS by trained assessors over telephone at day 90. Quality of life was recorded at day 90 using European Quality of Life 5-dimensions derived health utility status and European Quality of Life visual analogue scale. Safety outcomes included death and serious adverse events (SAEs) at day 2, day 7, discharge and day 90, and neurological status at day 7 using the National Institute of Health Stroke Scale (NIHSS). In addition, length of hospital stay was recorded.
Imaging outcomes
A baseline CT brain scan was performed prior to randomisation and a repeat CT scan at 24±12 hours. Haematoma volumes were assessed by three independent raters blinded to clinical data using the semiautomated segmentation tools of ITK-SNAP software V.3.6.0. 15 Haematoma expansion was defined as an increase in haematoma volume on the follow-up scan of >33% or >6 mL compared with the baseline scan. 16
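The expansion rule is a simple disjunction of a relative and an absolute threshold; a hedged sketch (ours, not the trial software):

```python
def haematoma_expansion(baseline_ml: float, followup_ml: float) -> bool:
    """Expansion = >33% relative increase OR >6 mL absolute increase
    from baseline to the 24-hour follow-up scan, as defined above."""
    increase = followup_ml - baseline_ml
    return increase > 0.33 * baseline_ml or increase > 6.0

print(haematoma_expansion(15.0, 22.0))  # 7 mL absolute increase -> True
```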
Statistics
Analyses followed the statistical analysis plan for the overall TICH-2 trial. 11 Data are number (%), mean (SD), median [IQR]. Baseline characteristics of participants by baseline SBP were compared using the χ² test, one-way analysis of variance or Kruskal-Wallis test as appropriate. Analyses between treatment groups were assessed by intention to treat. The primary outcome was assessed across all seven levels of the mRS using ordinal logistic regression with adjustment for baseline prognostic variables as in the main TICH-2 trial: age, sex, time since onset to randomisation, baseline SBP, baseline NIHSS, presence of intraventricular haemorrhage and antiplatelet therapy before stroke onset. Sensitivity analyses of the mRS were performed including unadjusted, mRS>3 and with additional adjustment for baseline haematoma volume and location. Other outcomes were analysed using binary logistic, multiple linear or Cox regression models as appropriate with adjustment as outlined above. A sensitivity analysis using >33% increase in haematoma volume to define haematoma expansion was performed, given that absolute volume increase may have a differential effect depending on haematoma location. To assess whether haematoma location influenced the effect of tranexamic acid on the primary outcome, an interaction term was added to an adjusted ordinal logistic regression model. Resultant common OR (cOR), OR, HR or mean difference with corresponding 95% CIs are given, with significance set at p<0.05. Results were not adjusted for multiple testing. Statistical analyses were performed using SPSS V.23.
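The trial's analyses were performed in SPSS; purely as an illustration of the model class (a proportional-odds ordinal logistic regression with the adjustment set described above), a Python sketch with hypothetical column names might look as follows.

```python
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical data frame: one row per participant.
# 'mrs90' is the 7-level modified Rankin Scale at day 90 (0-6);
# the covariates mirror the adjustment set described above.
df = pd.read_csv("tich2_like_data.csv")  # hypothetical file
covariates = ["treatment", "age", "sex", "onset_to_rand_hours",
              "baseline_sbp", "baseline_nihss", "ivh", "prior_antiplatelet"]

model = OrderedModel(df["mrs90"], df[covariates], distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())  # exponentiate the treatment coefficient for a cOR
```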
RESULTS
Of the 2325 participants in TICH-2, 1152 had baseline SBP≤170 mm Hg and 1173 baseline SBP>170 mm Hg. Baseline characteristics are depicted in table 1. Those with a baseline SBP≤170 mm Hg were more likely to be older, male, and to have a previous stroke or TIA or a previous ICH. CT angiography was performed more frequently in those with baseline SBP≤170 mm Hg, but a positive spot sign (although uncommon) was seen more often in those with baseline SBP>170 mm Hg. Investigator-reported final diagnosis differed by baseline SBP (online supplemental table 1). More participants with ICH secondary to cerebral amyloid angiopathy (CAA) had a baseline SBP≤170 mm Hg compared with SBP>170 mm Hg: 58 (5%) vs 22 (1.9%), p<0.001. Hypertensive arteriopathy was the most common cause of ICH in both baseline groups and more common in those with baseline SBP>170 mm Hg than SBP≤170 mm Hg: 672 (57.3%) vs 449 (39%), p<0.001 (online supplemental table 1).
Outcomes by baseline SBP
The interaction between baseline SBP and treatment with tranexamic acid on mRS at day 90 was statistically significant, p=0.019. In participants with baseline SBP≤170 mm Hg, randomisation to tranexamic acid was associated with a smaller increase in haematoma volume from baseline to 24 hours and less haematoma expansion than those randomised to placebo (table 2). At day 7, NIHSS scores were lower in those randomised to tranexamic acid compared with those allocated to placebo. There were fewer deaths by days 2, 7, 90 and discharge from hospital in the tranexamic acid group. Further, there were fewer SAEs throughout the trial in those randomised to tranexamic acid; a finding driven by a reduction in nervous system-related SAEs (online supplemental table 2).
At day 90, in those with baseline SBP≤170 mm Hg, the primary outcome demonstrated a favourable shift to less death and dependency in those randomised to tranexamic acid compared with placebo (cOR 0.73, 95% CI 0.59 to 0.91, figure 1A). This finding was not altered after also adjusting for baseline haematoma volume and haematoma location. In those with baseline SBP≤170 mm Hg, there was a significant interaction between haematoma location and treatment with tranexamic acid on the primary outcome (p=0.012). The interaction term for those with baseline SBP>170 mm Hg was non-significant. Improved quality of life scores were seen in participants randomised to tranexamic acid (table 2).
In contrast, there were no significant treatment effects of tranexamic acid in those participants with baseline SBP>170 mm Hg (figure 1B, table 2).
Outcomes by change in SBP
Those with >15% decrease in SBP from baseline to day 2 had a higher baseline SBP than the 5%-15% decrease, reference (5% decrease to 5% increase) and >5% increase groups. There was a trend to lower baseline SBP across the groups: p<0.001 (table 3).
Overall, compared with the reference group (5% decrease to 5% increase), a >5% increase in SBP from baseline to day 2 was associated with increased death and SAEs at day 7. When assessed within treatment groups, a >5% increase in SBP from baseline to day 2 in those randomised to placebo was associated with increased SAEs at days 7 and 90; an effect not seen in those randomised to tranexamic acid (table 3). Overall, a >15% decrease in SBP from baseline to day 2 was associated with fewer deaths by day 7 and day 90; the same effect was seen in those randomised to tranexamic acid, but not in participants randomised to placebo (table 3).
There were fewer deaths at day 7 in those participants with a 5%-15% decrease in SBP from baseline to day 2 compared with the reference group, both overall and in those randomised to tranexamic acid (table 3).
By day 2, 1541 (66.3%) participants' SBP was over 140 mm Hg. In an on-treatment analysis, there was no difference in clinical outcomes between treatment groups when assessed by day 2 SBP≤140 mm Hg vs >140 mm Hg or across the spectrum of day 2 SBP (online supplemental figure 1).
BP lowering treatments
Of 2325 participants in the trial, 1736 (74.9%) received any BP lowering therapy by day 2, with a greater proportion of those with baseline SBP>170 mm Hg receiving treatment compared with those with baseline SBP≤170 mm Hg: 89.2% vs 60.5%, p<0.001 (online supplemental table 3). The median number of agents used was 2 [1, 3] and 1 [0, 2] in those with baseline SBP>170 mm Hg and ≤170 mm Hg, respectively. Intravenous then oral routes were the most commonly used, with 74.8% of those with baseline SBP>170 mm Hg receiving intravenous treatment. Overall, the most popular classes of agents were as follows: β-blocker (including labetalol) 39.9%; calcium channel blocker 32.2%; nitrate 59.2%. The number, route and class of antihypertensive agents did not differ between tranexamic acid and placebo groups (data not shown). Overall, those participants treated with either a calcium channel blocker or ACE inhibitor (ACEi) had a shift to less death and dependency at day 90 compared with those who did not receive these medications: calcium channel blockers cOR 0.82, 95% CI 0.69 to 0.98; ACEi cOR 0.75, 95% CI 0.61 to 0.92. No associations were seen for other antihypertensive drug groups. Over the course of the trial, the use of BP lowering therapy by day 2 increased from 33.3% in quarter 2 of 2013 to 81.8% in quarter 4 of 2017 (χ² and Mantel-Haenszel test for trend p<0.001, online supplemental figure 2).
DISCUSSION
In this prespecified secondary analysis of the TICH-2 trial, randomisation to tranexamic acid in the presence of baseline SBP≤170 mm Hg was associated with less haematoma expansion and improved clinical outcomes, with fewer deaths and SAEs, less death and dependency, and improved quality of life scores compared with placebo. This was despite patients being older, randomised later and having larger baseline haematoma volumes than those with SBP>170 mm Hg. A >15% reduction in SBP from baseline to day 2 was associated with fewer deaths by days 7 and 90 in those randomised to tranexamic acid but not placebo, while a >5% increase in SBP was associated with increased death and SAEs by day 7 overall, and increased SAEs in those randomised to placebo by days 7 and 90. BP lowering treatment by day 2 was more frequently used over the time course of the trial, in line with changes in international clinical guidelines. Despite this, 66% of the trial population remained hypertensive on day 2.
Baseline characteristics differed by baseline SBP group; participants with SBP≤170 mm Hg had larger haematomas in lobar locations on average, while participants with SBP>170 mm Hg had deep haematomas on a background of hypertension. This may, in part, reflect the distribution of ICH due to underlying aetiology, that is, lobar haematomas in CAA and deep haematomas secondary to hypertensive arteriolopathy. A multicentre cohort study in China involving 5656 patients with ICH found that admission BP differed by ICH aetiology, with patients with CAA having a lower BP and larger haematoma volumes than patients with presumed hypertensive arteriolopathy, who had a higher BP and smaller haematoma volumes at baseline. 17 Dichotomising ICH location as either lobar or non-lobar may risk over-simplifying the underlying aetiology, given that lobar ICH comprises CAA-related ICH, hypertensive arteriolopathy and mixed cerebral small vessel disease. 18 Recently, a detailed secondary imaging analysis of TICH-2 demonstrated that in participants with lobar CAA-related ICH, there was an increased risk of haematoma expansion with increasing time from randomisation, while the risk of haematoma expansion was constant irrespective of baseline haematoma volume. 19 These effects were not seen in those with non-CAA lobar or non-lobar ICH and may suggest a difference in haematoma dynamics between these ICH groups. 19 CAA-related bleeding may originate from leptomeningeal vessels and have more space to expand into (including the subarachnoid space), resulting in prolonged, slower, lower pressure bleeding over several hours. 19 This may provide a longer treatment window for haemostatic agents, including tranexamic acid, to exert their effects. These hypotheses require further testing and should be considered in ongoing and future studies of tranexamic acid in ICH. Although trials to date have yet to demonstrate a positive effect of haemostatic therapies on clinical outcome in ICH, 20 there are several possible explanations for tranexamic acid being associated with improved clinical outcomes in those with baseline SBP≤170 mm Hg. First, a lower baseline SBP is associated with less haematoma expansion than a higher baseline SBP, 1-3 therefore any potential treatment effect of tranexamic acid on haematoma expansion may be larger, as there is no separate pathological mechanism in the form of elevated BP to overcome. However, in the present analysis, there was no difference in the rate of haematoma expansion between baseline SBP groups: 28.2% in the ≤170 mm Hg group vs 25.7% in the >170 mm Hg group, p=0.21. Second, by dichotomising baseline SBP, we may have unintentionally selected a group of participants more likely to benefit from tranexamic acid independent of BP, such as people with moderate-sized lobar haematomas as opposed to smaller deep haematomas. Although sensitivity analyses adjusting for baseline haematoma characteristics did not alter the treatment effect of tranexamic acid, there was a significant interaction between haematoma location and tranexamic acid on the primary outcome in those with baseline SBP≤170 mm Hg, suggesting that there may be a differential effect of tranexamic acid depending on haematoma location.
Those participants with a >15% decrease in SBP from baseline to day 2 were less likely to die by days 7 and 90 when randomised to tranexamic acid; a finding not seen in those randomised to placebo. In contrast, a >5% increase in SBP from baseline to day 2 was associated with increased death and SAEs at day 7 overall, and increased SAEs at days 7 and 90 in those randomised to placebo. Therefore, a reduction in BP from baseline to day 2 combined with treatment with tranexamic acid may be beneficial, while an increase in BP in those randomised to placebo may be harmful. These results should be considered preliminary given their observational nature within a randomised controlled trial. However, one plausible explanation for why BP lowering and tranexamic acid may be additive or even synergistic in ICH is by attenuating haematoma expansion and improving clinical outcome.
The strengths of this study include the prespecified nature of the analyses within the context of the largest trial of haemostatic therapies in ICH with almost complete follow-up data. However, there are several limitations. First, we do not have data on what BP lowering medications patients were taking prior to their ICH. Therefore, we were unable to establish any association between the use of pre-ICH antihypertensives and prerandomisation baseline BP. This includes any BP lowering medication given after symptom onset of their ICH but before randomisation, which may have influenced the baseline BP recorded prior to randomisation. Second, there were few BP readings recorded in the trial; two measurements prerandomisation and two measurements on day 2. Therefore, we were unable to assess any effect of BP variability, which has been shown to be a stronger predictor of clinical outcome in acute ICH 21 22 and ischaemic stroke 23 than absolute BP. Nor could we adjust for early BP change within the first hours after ICH. Instead, we used the change in BP from baseline to day 2 to look for any associations with outcome. These analyses were within randomised treatment groups, are therefore observational, may represent chance and should be considered as hypothesis generating. Third, ICH aetiology was investigator-reported and largely determined by CT imaging. This may have led to less precise characterisation of ICH aetiology, particularly in those with mixed pathology. Last, given this is a subgroup analysis, our findings may represent chance. Although we did not adjust for multiplicity of testing, this subgroup analysis was prespecified in the statistical analysis plan of the main TICH-2 trial (including the SBP cut-point), has biological plausibility, had a positive interaction with treatment on outcome and followed the statistical analysis plan of the main TICH-2 trial.
In summary, this prespecified subgroup analysis of the TICH-2 trial demonstrated that randomisation to tranexamic acid in participants with baseline SBP≤170 mm Hg was associated with less haematoma expansion and improved clinical outcomes across multiple domains both early and late after ICH. Future research should seek to establish which subgroups of patients may benefit from tranexamic acid in acute ICH, including by haemorrhage location, size and underlying aetiology including CAA. Whether BP lowering is additive or synergistic in the presence of tranexamic acid in acute ICH is unclear and future clinical trials may help to provide a clearer treatment paradigm for clinicians.
| 2023-06-20T05:06:35.742Z | 2023-06-01T00:00:00.000 | {
"year": 2023,
"sha1": "23ee9eee99f996d50a8a8dbf429ca490bf8cedcf",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1136/bmjno-2023-000423",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "23ee9eee99f996d50a8a8dbf429ca490bf8cedcf",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
237386110 | pes2o/s2orc | v3-fos-license | NerfingMVS: Guided Optimization of Neural Radiance Fields for Indoor Multi-view Stereo
In this work, we present a new multi-view depth estimation method that utilizes both conventional reconstruction and learning-based priors over the recently proposed neural radiance fields (NeRF). Unlike the existing neural network based optimization method that relies on estimated correspondences, our method directly optimizes over implicit volumes, eliminating the challenging step of matching pixels in indoor scenes. The key to our approach is to utilize the learning-based priors to guide the optimization process of NeRF. Our system first adapts a monocular depth network to the target scene by finetuning on its sparse SfM+MVS reconstruction from COLMAP. Then, we show that the shape-radiance ambiguity of NeRF still exists in indoor environments and propose to address the issue by employing the adapted depth priors to monitor the sampling process of volume rendering. Finally, a per-pixel confidence map acquired by error computation on the rendered image can be used to further improve the depth quality. Experiments show that our proposed framework significantly outperforms state-of-the-art methods on indoor scenes, with surprising findings presented on the effectiveness of correspondence-based optimization and NeRF-based optimization over the adapted depth priors. In addition, we show that the guided optimization scheme does not sacrifice the original synthesis capability of neural radiance fields, improving the rendering quality on both seen and novel views. Code is available at https://github.com/weiyithu/NerfingMVS.
Figure 1: Qualitative results for multi-view depth estimation on ScanNet [4]. Our method clearly surpasses leading multi-view estimation methods [29,34] by building on top of neural radiance fields [33]. While also using test-time optimization, CVD [29] suffers from inaccurate estimation of flow correspondences. NeRF [33] fails to produce accurate geometry due to the inherent shape-radiance ambiguity [61] (see Figure 3) in indoor scenes. With guided optimization, our method successfully integrates the learning-based depth priors into NeRF, significantly improving the geometry of the radiance fields.
Introduction
Reconstructing 3D scenes from multi-view posed images, also known as multi-view stereo (MVS), has been a fundamental topic in computer vision for decades. Applications range from robotics and 3D modeling to virtual reality. Conventional multi-view stereo approaches [2,9,13,60] densely match pixels across views by comparing the similarity of cross-view image patches. While producing impressive results, those methods often suffer from poorly textured regions, thin structures and non-Lambertian surfaces, especially in real-world indoor environments. Recently, with the success of deep neural networks, several learning-based methods [17,20,24,53] have been proposed to tackle the multi-view stereo problem, often by employing a cost volume based architecture. Those methods perform direct neural network inference at test time for multi-view depth estimation and achieve remarkable performance on benchmarks. However, due to the lack of constraints at inference, the predicted depth maps across views are often not consistent and the photometric consistency is often violated. To address this issue, [29] proposed a test-time optimization framework that optimizes over learning-based priors acquired from single-image depth estimation. While computationally inefficient, the method produces accurate and consistent depth maps that can be used for various visual effects. However, the optimization formulation of this method relies heavily on an optical flow network [16] to establish correspondences, which becomes problematic when the estimated correspondences are unreliable.
In this paper, we present a new neural network based optimization framework for multi-view depth estimation based on the recently proposed neural radiance fields [33]. Instead of relying on estimated correspondences and cross-view depth reprojection for optimization [29], our method directly optimizes over volumes. However, we show that the shape-radiance ambiguity [61] of NeRF becomes the bottleneck in estimating accurate per-view depths in indoor scenes. To address the issue, we propose a guided optimization scheme that helps train NeRF with learning-based depth priors. Specifically, our system first adapts a monocular depth network to the test scene by finetuning on its conventional SfM+MVS reconstruction. Then, we employ the adapted depth priors to guide the sampling process of volume rendering for NeRF. Finally, we acquire a confidence map from the rendered RGB image of NeRF and improve the depth map with a post-filtering step.
Our findings indicate that the scene-specific depth prior adaptation significantly improves the depth quality. However, performing existing correspondence-based optimization on the adapted depth priors will surprisingly degrade the performance. On the contrary, with direct optimization over neural radiance fields, our method consistently improves the depth quality over adapted depth priors. This phenomenon demonstrates the potential of exploiting neural radiance fields for accurate depth estimation.
Experiments show that our proposed framework significantly improves upon state-of-the-art multi-view depth estimation methods on the tested indoor scenes. In addition, the guided optimization from learning-based priors can help improve the rendering quality of NeRF on both seen and novel views, achieving comparable or better quality than state-of-the-art novel view synthesis methods. This indicates that conventional non-learning reconstruction methods, while demonstrated to be effective in helping image-based view synthesis in [39,40], can also help improve the synthesis quality of neural implicit representations.
Related Work
Multi-view Reconstruction: Recently, 3D vision [14,48,[54][55][56]62] has attracted increasing attention. Early multi-view reconstruction approaches include volumetric optimization [7,21,52], which performs global optimization with photo-consistency based assumptions. However, those methods suffer from large computational complexity. Another direction [2,9] is to estimate per-view depth maps. Compared to volumetric approaches, these methods can produce finer geometry. However, they rely on accurately matched pixels obtained by comparing the similarity of cross-view patches at different depth hypotheses, which is problematic over poorly textured regions in indoor scenes. Recently, a number of learning-based methods have been proposed. While some of them predict on voxelized grids [19,49], they suffer from limited resolution. An exception to this is Atlas [34], which predicts TSDF values via back-projection of the image features. Most learning-based methods [15,17,20,24,28,53] follow the spirit of conventional approaches [9] and generate per-view depth maps from a cost volume based architecture. Most related to us, [29] performs test-time optimization over per-view depth maps with learning-based priors. While our work also utilizes learning-based priors, we build on top of the recently proposed neural radiance fields [30] and introduce a new way to accurately estimate multi-view depths by directly optimizing over implicit volumes with the guidance of learning-based priors. Our method neither suffers from the resolution problem nor relies on accurately estimated correspondences.
Neural Implicit Representation: Recently, several seminal works [3,31,36] demonstrated the potential of representing implicit surfaces with a neural network, which enables memory-efficient geometric representation with infinite resolution. Variations include applying neural implicit representations to part hierarchies [11,18], human reconstruction [41,42], view synthesis [27,46], differentiable rendering [26,35], etc. Neural radiance fields (NeRF) [33] represent scenes as a continuous implicit function of positions and orientations for high quality view synthesis, which has led to several follow-up works [1,38,61] improving its performance. Several extensions of NeRF have also been proposed.
Figure 2: An overview of our method. We first adopt conventional SfM and MVS from COLMAP to get sparse depth (after fusion), which is used to train a monocular depth network to get scene-specific depth priors. Then, we utilize the depth priors to guide volume sampling in the optimization of NeRF [33]. Finally, by computing the errors between the rendered images and the original input images we acquire confidence scores, which enable us to employ a confidence-based filter to improve the rendered depths.
View Synthesis: View synthesis is conventionally often referred as view interpolation [12,22], where the goal is to interpolate views within the convex hull of the initial camera positions. With the success of deep learning, learningbased methods [8,32,47,64] have been proposed to address the problem and have achieved remarkable improvements. Recently, neural radiance fields [30] demonstrates impressive results of view synthesis by representing scenes as continuous implicit radiance fields. It is further extended to operate on dynamic scenes [37,57]. [25] employs a sparse voxel octree and achieves great improvement over [33]. [39] employs image-based encoder-decoder architecture to process the proxy generated from the conventional sparse reconstruction, and is later improved by [40]. While view synthesis is not the major focus of this work, we show that our guided optimization scheme consistently improves the synthesis quality of NeRF [33] on both seen and novel views, which shows the potential of using conventional sparse reconstructions to help improve the synthesis quality of NeRF-like methods.
Overview
We introduce a multi-view depth estimation method that utilizes conventional sparse reconstruction and learning-based priors. Our proposed system builds on top of the recently proposed neural radiance fields (NeRF) [33] and performs test-time optimization at inference. Compared to the existing test-time optimization method [29] that relies on estimated correspondences, directly optimizing over volumes eliminates the necessity of accurately matching cross-view pixels. This idea is also exploited by direct methods in the context of simultaneous localization and mapping (SLAM) [6].
The key to our approach is to effectively integrate the additional information from the learning-based priors into the NeRF training pipeline. Figure 2 shows an overview of our proposed system. Section 3.2 shows how we adapt the depth priors to specific scenes at test time. In Section 3.3, we analyze the reason why NeRF fails to produce accurate geometry in indoor scenes and describe our learning-based-prior guided optimization scheme. In Section 3.4, we discuss how to infer depth and synthesize views from the neural radiance fields trained with guided optimization.
Scene-specific Adaptation of the Depth Priors
Similar to CVD [29], our method also aims to utilize learning-based depth priors to help optimize the geometry at test time. However, unlike [29] that employs the same monocular depth network for all test scenes, we propose to adapt the network onto each scene to get scene-specific depth priors. Empirically this test-time adaptation method largely improves the quality of the final depth output.
Our proposal for adapting scene-specific depth priors is to finetune a monocular depth network on the scene's conventional sparse reconstruction. Specifically, we run COLMAP [43,44] on the test scene and acquire per-view sparse depth maps by projecting the fused 3D point clouds after multi-view stereo. Since a geometric consistency check is adopted in the fusion step, the acquired depth map is sparse but robust and can be used as a supervision source for training the scene-specific depth priors.
Due to the scale ambiguity of the acquired depth maps, we employ the scale-invariant loss [5] to train the depth network, which is written as follows:

$$L_{depth} = \frac{1}{n} \sum_{x} \left( \log D^i_p(x) - \log D^i_{Sparse}(x) + \alpha(D^i_p, D^i_{Sparse}) \right)^2, \quad (1)$$

where $D^i_p$ is the predicted depth map, $D^i_{Sparse}$ is the sparse depth acquired from COLMAP [43,44], and the sum runs over the $n$ valid pixels $x$. We align the scale of the predicted depth map with the sparse depth supervision by employing the scale factor $\alpha(D^i_p, D^i_{Sparse})$ in the loss formulation, which can be computed by averaging the difference over all valid pixels:

$$\alpha(D^i_p, D^i_{Sparse}) = \frac{1}{n} \sum_{x} \left( \log D^i_{Sparse}(x) - \log D^i_p(x) \right). \quad (2)$$

The finetuned monocular depth network is a stronger prior that fits the specific target scene. The quality of the adapted priors can be further improved with our guided optimization over NeRF, while Table 2 shows that applying existing correspondence-based neural optimization will surprisingly degrade the quality of the adapted depth priors.
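A minimal PyTorch sketch (ours, not the authors' code) of Eqs. (1)-(2) as reconstructed above, assuming positive depths and a validity mask derived from the sparse reconstruction:

```python
import torch

def scale_invariant_loss(pred_depth: torch.Tensor,
                         sparse_depth: torch.Tensor,
                         valid: torch.Tensor) -> torch.Tensor:
    """Log-space scale-invariant depth loss in the style of [5].
    `valid` marks pixels where COLMAP produced a sparse depth."""
    log_pred = torch.log(pred_depth[valid])
    log_gt = torch.log(sparse_depth[valid])
    alpha = (log_gt - log_pred).mean()                 # Eq. (2): mean log difference
    return ((log_pred - log_gt + alpha) ** 2).mean()   # Eq. (1)

# Usage sketch: loss = scale_invariant_loss(d_pred, d_sparse, d_sparse > 0)
```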
Guided Optimization of NeRF
Neural radiance fields were initially proposed in [33], which achieves impressive results on view synthesis. Our system exploits its potential for accurate depth estimation. By integrating the aforementioned adapted depth priors, we directly optimize on implicit volumes. The key to the success of NeRF is to employ a fully connected network parameterized by $\theta$ to represent implicit radiance fields with $F_\theta: (\mathbf{x}, \mathbf{d}) \rightarrow (\mathbf{c}, \sigma)$, where $\mathbf{x}$ and $\mathbf{d}$ denote the location and direction, and $\mathbf{c}$ and $\sigma$ denote the color and density as the network outputs. View synthesis can be easily achieved over NeRF with volume rendering, which enables NeRF to train itself directly over multi-view RGB images. During volume rendering, NeRF adopts the near bound $t_n$ and the far bound $t_f$ computed from the sparse 3D reconstruction to monitor the sampling space along each ray. Specifically, it partitions $[t_n, t_f]$ into $M$ bins and one query point is randomly sampled from each bin with a uniform distribution:

$$t_i \sim \mathcal{U}\left[t_n + \frac{i-1}{M}(t_f - t_n),\ t_n + \frac{i}{M}(t_f - t_n)\right].$$

The rendered RGB value $C(\mathbf{r})$ for each ray can be calculated from the finite samples with volume rendering, $C(\mathbf{r}) = \sum_{i=1}^{M} T_i \left(1 - \exp(-\sigma_i \delta_i)\right) \mathbf{c}_i$. Moreover, per-view depth $D(\mathbf{r})$ can also be approximated by calculating the expectation of the samples along the ray:

$$D(\mathbf{r}) = \sum_{i=1}^{M} T_i \left(1 - \exp(-\sigma_i \delta_i)\right) t_i, \quad (3)$$

where $T_i = \exp\left(-\sum_{j=1}^{i-1} \sigma_j \delta_j\right)$ indicates the accumulated transmittance from $t_n$ to $t_i$ and $\delta_i = t_{i+1} - t_i$ is the distance between adjacent samples.
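To make Eq. (3) concrete, a small NumPy sketch (ours, not the authors' code) of the rendering weights and expected depth along a single ray:

```python
import numpy as np

def render_depth(sigma: np.ndarray, t: np.ndarray) -> float:
    """Expected depth along one ray, Eq. (3).
    `sigma` are densities at the M sample depths `t` (both shape [M])."""
    delta = np.diff(t, append=t[-1] + 1e10)       # delta_i = t_{i+1} - t_i
    alpha = 1.0 - np.exp(-sigma * delta)          # opacity of each segment
    # T_i: transmittance accumulated before sample i.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha
    return float((weights * t).sum())

# A ray with a density spike near t = 2.0 yields a depth close to 2.0.
t = np.linspace(1.0, 3.0, 64)
sigma = np.where(np.abs(t - 2.0) < 0.05, 50.0, 0.0)
print(render_depth(sigma, t))
```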
While simply satisfying the radiance field over the input images does not guarantee a correct geometry, the shape-radiance ambiguity between the 3D geometry and radiance has been studied in [61]. That paper argues that because incorrect geometry leads to high intrinsic complexity, the correct shape, with a smoother surface light field, is more favored by the learned neural radiance fields with limited network capacity. This assumption generally holds for richly textured outdoor scenes. However, we empirically observe that NeRF struggles on poorly textured areas (e.g. walls), which are common in indoor environments. Figure 3 shows one failure case of NeRF that suffers from shape-radiance ambiguity in texture-less areas, where NeRF perfectly synthesizes the input image with a geometry largely deviated from the groundtruth. The failure comes from the fact that while extremely implausible shapes are ignored in favor of a smoothed surface light field [61], there still exists a family of smoothed radiance fields that perfectly explains the training images. Further, the blurred images and large-motion real-world indoor scenes reduce the capacity of NeRF and aggravate the shape-radiance ambiguity issue. We find that this is a common issue in all tested indoor scenes.
In Figure 3(b), we show that all the sampled points along the camera ray that corresponds to a poorly textured pixel predict roughly the same RGB values, with the confidence distribution concentrated only in a limited range. Motivated by this observation, we consider guiding the NeRF sampling process with our adapted depth priors from the monocular depth network. By explicitly limiting the sampling range to be distributed around the depth priors, we avoid most degenerate cases for NeRF in indoor scenes. This enables accurate depth estimation by directly optimizing over RGB images.
Specifically, we first acquire error maps of the adapted depth priors with a geometric consistency check. Denote the adapted depth priors as $\{D_i\}_{i=1}^N$ for the $N$ input views. We project the depth map of each view to all the other views:

$$D_{i \to j}\, p_{i \to j} = K\, T_{i \to j}\, D_i(p)\, K^{-1} p, \quad (4)$$

where $K$ is the camera intrinsics and $T_{i \to j}$ is the relative pose. $p_{i \to j}$ and $D_{i \to j}$ are the 2D coordinates and depth of the projection in the $j$th view. Then we calculate the depth reprojection error as the relative error between $D_j'$, the depth of the $j$th view sampled at $p_{i \to j}$, and $D_{i \to j}$:

$$e_{i \to j} = \frac{\left|D_{i \to j} - D_j'\right|}{D_{i \to j}}. \quad (5)$$

Note that there are pixels that do not overlap across some view pairs. Thus, we define the error map of the depth priors for each view, $e_i$, as the average value of the top $K$ minimum cross-view depth reprojection errors.
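A per-pixel sketch (ours) of the consistency check in Eqs. (4)-(5), assuming pinhole intrinsics and a 4x4 relative pose; occlusion handling and image-bounds checks are omitted for brevity:

```python
import numpy as np

def reprojection_error(p: np.ndarray, d_i: float, depth_j: np.ndarray,
                       K: np.ndarray, T_ij: np.ndarray) -> float:
    """Relative depth reprojection error for pixel p = (u, v) of view i.
    `depth_j` is view j's depth map; K is 3x3 intrinsics; T_ij is the
    4x4 pose mapping view-i camera coordinates into view j."""
    # Back-project p with depth d_i, then transform into view j (Eq. 4).
    x_i = d_i * (np.linalg.inv(K) @ np.array([p[0], p[1], 1.0]))
    x_j = (T_ij @ np.append(x_i, 1.0))[:3]
    d_ij = x_j[2]                        # projected depth in view j
    uv = (K @ x_j)[:2] / d_ij            # projected pixel coordinates
    d_j = depth_j[int(round(uv[1])), int(round(uv[0]))]  # nearest-pixel lookup
    return abs(d_ij - d_j) / d_ij        # relative error (Eq. 5)
```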
We use the error maps $\{e_i\}_{i=1}^N$ to calculate adaptive sample ranges $[t_n, t_f]$ for each camera ray:

$$[t_n, t_f] = \left[D_i(p)\left(1 - \hat{e}_i(p)\right),\ D_i(p)\left(1 + \hat{e}_i(p)\right)\right], \quad \hat{e}_i(p) = \min\left(\max\left(e_i(p), \alpha_l\right), \alpha_h\right), \quad (6)$$

where $\alpha_l$ and $\alpha_h$ define the relative lower and higher bounds of the ranges. With the adaptive ranges we achieve a balance between diversity and precision of the confidence distribution along camera rays. As illustrated in Figure 4, the sampling over pixels with relatively low error is more concentrated around the adapted depth priors, while the sampling over pixels with large error is close to the original NeRF formulation.
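Under the clamped form of Eq. (6) reconstructed above, the per-ray bounds reduce to a few lines; the following sketch (ours) vectorises the computation over a full error map:

```python
import numpy as np

def adaptive_ranges(depth_prior: np.ndarray, error_map: np.ndarray,
                    alpha_l: float = 0.05, alpha_h: float = 0.15):
    """Per-pixel near/far sampling bounds around the adapted depth prior.
    Low-error pixels sample tightly (+/- alpha_l); high-error pixels fall
    back to a wider range capped at +/- alpha_h, following Eq. (6)."""
    e = np.clip(error_map, alpha_l, alpha_h)
    t_near = depth_prior * (1.0 - e)
    t_far = depth_prior * (1.0 + e)
    return t_near, t_far
```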
Inference and View Synthesis
For inference, we can directly predict the depth map for each input view by resampling within the sampling range defined in Eq. (6) and applying Eq. (3) to compute the expectation. This gives an accurate output depth for the NeRF equipped with our proposed guided optimization scheme.

Figure 4: Guided optimization of NeRF [33]. We adopt a multi-view consistency check on the adapted depth priors to get error maps, which help calculate adaptive depth ranges for each camera ray to sample points for NeRF optimization.
To further improve depth quality, we exploit the potential of using the view synthesis results of NeRF to compute per-pixel confidence for the predicted geometry. If the rendered RGB at a specific pixel does not match the input training image well, we attach a relatively low confidence to the depth prediction of this pixel. The confidence $S^i_j$ for the $j$th pixel in the $i$th view is specifically defined as:

$$S^i_j = 1 - \left|C^i_{gt}(j) - C^i_{render}(j)\right|, \quad (7)$$

where $C^i_{gt}$ and $C^i_{render}$ are the groundtruth and rendered images for each seen view, with all values divided by 255. The absolute difference is employed. This confidence map can be further used to refine the predicted depth map with off-the-shelf post-filtering techniques. We employ the plane bilateral filtering introduced in [51] over the depth to get the final output, which improves depth quality especially for regions where the rendered RGB images are not accurate.
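Eq. (7) is an absolute photometric difference on normalised images; a minimal sketch (ours) is given below, where averaging over the three colour channels is our assumption:

```python
import numpy as np

def confidence_map(gt_rgb: np.ndarray, rendered_rgb: np.ndarray) -> np.ndarray:
    """Per-pixel confidence S = 1 - |C_gt - C_render| with images in [0, 255].
    Averaging over the three colour channels is our assumption."""
    diff = np.abs(gt_rgb / 255.0 - rendered_rgb / 255.0)
    return 1.0 - diff.mean(axis=-1)
```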
While the proposed guided optimization strategy needs the adapted depth priors as input to guide point sampling along the camera ray, we can still perform novel view synthesis by directly using the adapted depth priors from the nearest seen view. Empirically this is sufficient to produce accurate depth maps and significantly outperforms the original NeRF in terms of view synthesis quality (see Table 5).

Experiments
We randomly selected 8 scenes from ScanNet [4] to evaluate our method. For each scene, we picked 40 images covering a local region and held out 1/8 of these as the test set for novel view synthesis. All images are resized to 484 × 648 resolution. Due to the scale ambiguity issue, we adopted the median groundtruth scaling strategy [63] for depth evaluation.
Implementation Details: For the adapted depth priors, following CVD [29], we used the network architecture introduced in Mannequin Challenge [23] with its pretrained weights as our monocular depth network. 15 finetuning epochs were used in the scene-specific adaptation. We set $K = 4$ for the multi-view consistency check and $\alpha_l = 0.05$, $\alpha_h = 0.15$ as the bounds of the sample ranges. Please refer to our supplementary material for more details.

Results on Multi-view Depth Estimation
Table 1 shows the results for the depth estimation task on ScanNet [4]. For all methods, we used their released implementations in the experiments. We also report results without applying the filtering step. Our method outperforms state-of-the-art depth estimation methods in all metrics. Note that DeepV2D [50], DELTAS [45] and Atlas [34] are all trained on ScanNet with groundtruth depth supervision. With the proposed guided optimization scheme, our method mitigates the problem of the shape-radiance ambiguity and demonstrates the potential of exploiting NeRF for accurate depth estimation. Figure 6 shows some qualitative results. While the original NeRF [33] fails to predict reasonable geometry, our method generates visually appealing depth maps. The confidence-based filter can further refine the predicted depth by smoothing the per-pixel estimation of NeRF [33].

Figure 5 (columns: RGB, adapted depth priors, CVD optimization, our optimization): The optimization of CVD [29] surprisingly degrades the quality of the depth priors due to unreliable flow correspondences, while our method achieves improvement with guided optimization of NeRF [33].
Table 3: Ablation studies on each component of our system. For the experiments 'NeRF + filter' and 'depth priors + filter', we compute the confidence scores by using the relative errors between the predicted depths and the groundtruth. The experiment was conducted on scene0521.

To further study the advantages of optimizing over implicit volumes, we also applied the optimization of CVD [29] on our adapted depth priors. Results are shown in Table 2 and one example is exhibited in Figure 5. We surprisingly find that the optimization of CVD degrades the depth quality of the initial depth priors. This is mainly due to wrongly estimated correspondences from the flow network employed in [29]. Flow estimation is particularly challenging over poorly textured regions, which are ubiquitous in indoor scenes. The proposed guided optimization enables us to integrate depth priors on top of NeRF [33], which directly optimizes on raw RGB images, avoiding the challenging step of correspondence estimation in indoor scenes.

Figure 6 (columns: RGB, GT depths, COLMAP [44], Atlas [34], CVD [29], NeRF [33], Ours w/o filter, Ours): Qualitative comparisons on the ScanNet [4] dataset. Our method, without the post-filtering step, outperforms all compared methods in terms of depth quality. The filter further smooths the per-pixel estimated depth maps. Better viewed when zoomed in.
Ablation Studies
To better understand the working mechanism of our method, we performed ablation studies over each component of the proposed system. Results in Table 3 show that each component is beneficial to the final depth quality. This verifies the advantages of integrating depth priors into the optimization of NeRF [33].
We further study the design of the adaptive ranges used in the guided optimization. It is shown that both the adaptive strategy and the use of bounds contribute to the performance gain. With the computed error maps, $\alpha_l$ and $\alpha_h$ prevent the samples from being over-concentrated or overly random, respectively, which enables the sampling to reach a balance between diversity and precision of the sampled points.
Table 4: Ablation studies on the design of the proposed guided optimization with adaptive ranges. 'bound' denotes the use of $\alpha_l$ and $\alpha_h$ in Eq. (6). For the experiment without adaptive depth ranges for each camera ray, we set a fixed relative depth range of [0.9, 1.1]. The experiment was conducted on scene0521.
Results on View Synthesis
We also observe that the proposed guided optimization scheme is beneficial to the view synthesis quality of NeRF. Figure 7 illustrates some visualizations. Table 5 shows results on novel view synthesis, where our method consistently improves NeRF on all 8 scenes. Although view synthesis is not the main focus of our work, we achieve comparable or even better results compared to state-of-the-art novel view synthesis methods [1,40]. Note that SVS [40] employs image-based novel view synthesis methods over the information extracted from sparse reconstruction. Our method, with the guided optimization scheme, opens a new way to employ the robust conventional sparse reconstruction to improve the synthesis quality directly over implicit 3D volumes. In addition, results in Table 6 show that our method can improve the view synthesis quality of NeRF on seen views. The guided optimization helps NeRF to focus on more informative regions and improves its capacity for rendering RGB images.
Conclusion and Future Work
In this work, we present a multi-view depth estimation method that integrates learning-based depth priors into the optimization of NeRF. Contrary to existing studies, we show that the shape-radiance ambiguity of NeRF becomes a bottleneck for NeRF-based depth estimation in indoor scenes. To address the issue, we propose a guided optimization framework that regularizes the sampling process of NeRF during volume rendering with the adapted depth priors. Our proposed system demonstrates a significant improvement over prior works for indoor multi-view depth estimation, with the surprising finding that correspondence-based optimization can degrade the quality of depth priors in indoor scenes due to wrongly estimated flow correspondences. In addition, we also observe that the guided optimization improves the view synthesis quality of NeRF.
While our optimization is 3x faster than NeRF due to the advantages of guided optimization, the current method is still not efficient and is thus hard to scale up to large datasets. Nonetheless, our work demonstrates the potential of using neural radiance fields for accurate depth estimation.
A. Implementation Details
To train the proposed system, we mostly followed NeRF [33]. Specifically, we sampled 64 points on each ray and used a batch of 1024 rays. Since we did not adopt a coarse-to-fine strategy in the sampling process, we only need one network (with the same architecture as [33]) to optimize the neural radiance fields. We added random Gaussian noise with zero mean and unit variance to the density σ to regularize the network. In addition, following [33], positional encoding was also employed. Adam was adopted as our optimizer, with an initial learning rate of 5 × 10⁻⁴ decayed exponentially to 5 × 10⁻⁵. We utilized PyTorch in our implementation. Each scene was trained for 200K iterations on a single RTX 2080 Ti.

Error metrics. We follow the metrics in [20,29,34,45,50,63] to evaluate depth estimation results:
• δ < t: percentage of pixels y ∈ T such that max(y/y*, y*/y) = δ < t, where y and y* denote the predicted and groundtruth depths respectively, and T denotes all pixels in the depth image.
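As a direct transcription of the δ metric above (our own sketch; thresholds t of 1.25, 1.25² and 1.25³ are the common choices in the cited works):

```python
import numpy as np

def delta_accuracy(pred, gt, t=1.25):
    """delta < t: fraction of pixels whose depth ratio
    max(y/y*, y*/y) falls below the threshold t."""
    ratio = np.maximum(pred / gt, gt / pred)
    return float(np.mean(ratio < t))
```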
B. Baseline Method Details
We compared our results with several state-of-the-art depth estimation methods, which can be roughly classified into four categories:

Conventional multi-view stereo: COLMAP [43,44], ACMP [58]. COLMAP is a non-learning MVS method for 3D reconstruction built upon PatchMatch stereo [2]. Based on COLMAP, ACMP introduces planar models to handle low-textured areas in complex indoor environments.

Learning-based multi-view stereo: DELTAS [45], Atlas [34]. These two methods are trained on ScanNet with groundtruth depth supervision. For DELTAS, we used the two neighboring frames as the reference frames.

Monocular depth estimation: Mannequin Challenge [23]. Mannequin Challenge is a state-of-the-art monocular depth estimation method. We directly used their pretrained weights for evaluation.

Video-based depth estimation: CVD [29], DeepV2D [50]. For video-based methods, we sorted the images in a scene according to the timeline. DeepV2D is trained on ScanNet with groundtruth depth supervision.
C. Hyperparameter Analysis
To further demonstrate the effectiveness of our method, we performed a hyperparameter analysis for the number of minimum errors K and the bounds α_l, α_h used in the guided sampling process. The experiments were conducted on scene0521. Table 7 shows the experimental results. We find that using a K that is too small or too large degrades performance. On the one hand, it is possible to satisfy the multi-view consistency check even though the depths are not correct; a small K increases the probability of this happening. On the other hand, some pixels do not overlap across certain view pairs, so the projection errors on those views are invalid, and a large K may include these invalid views. In addition, a large upper bound α_h or a small lower bound α_l for the sampling range leads to worse results, which indicates the necessity of setting bounds in the sampling process. | 2021-09-03T01:16:23.049Z | 2021-09-02T00:00:00.000 | {
"year": 2021,
"sha1": "149343c70c8c506c2e91112746549eea4300817b",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "149343c70c8c506c2e91112746549eea4300817b",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
211078462 | pes2o/s2orc | v3-fos-license | Successful care transitions for older people: a systematic review and meta-analysis of the effects of interventions that support medication continuity
Abstract Background medication-related problems occur frequently when older patients are discharged from hospital. Interventions to support medication use have been developed; however, their effectiveness in older populations is unknown. This review evaluates interventions that support successful transitions of care through enhanced medication continuity. Methods a database search for randomised controlled trials was conducted. Selection criteria included a mean participant age of 65 years or older, an intervention delivered during the hospital stay or following recent discharge, and inclusion of activities that support medication continuity. The primary outcome of interest was hospital readmission. Secondary outcomes related to the safe use of medication and quality of life. Outcomes were pooled by random-effects meta-analysis where possible. Results twenty-four studies (total participants = 17,664) describing activities delivered at multiple time points were included. Interventions that bridged the transition for up to 90 days were more likely to support successful transitions. The meta-analysis, stratified by intervention component, demonstrated that self-management activities (RR 0.81 [0.74, 0.89]), telephone follow-up (RR 0.84 [0.73, 0.97]) and medication reconciliation (RR 0.88 [0.81, 0.96]) were statistically associated with reduced hospital readmissions. Conclusion our results suggest that interventions that best support older patients' medication continuity are those that bridge transitions; these also have the greatest impact on reducing hospital readmission. Interventions that included self-management, telephone follow-up and medication reconciliation activities were most likely to be effective; however, further research needs to identify how to meaningfully engage with patients and caregivers to best support post-discharge medication continuity. Limitations included high subjectivity of intervention coding, study heterogeneity and resource restrictions.
Introduction
Medication management processes and behaviours support safe and effective medication use. These involve healthcare professionals, caregivers, organisations and the patient themselves. Medication-related problems (MRPs) and interruptions to, or discontinuity of, medication management occur frequently when older patients are discharged from hospital [1-4]. MRPs can lead to hospital readmission and poorer quality of life (QoL), resulting in higher healthcare utilisation [5,6]. Specific problems include reconciliation errors [7], patient confusion [3], inappropriate continuation of short-term medication [8] and inadequate monitoring [9].
Better and safer care transitions, especially hospital discharge, are an international priority [10-12]. Burke et al.'s ideal transition-of-care framework [12] recognises medication safety as a crucial element for successful transitions. Evaluation of interventions to support medication continuity indicated that patient education at discharge reduced the risk of adverse medication-related events, although evidence remains limited [13]. An American study further highlighted the value of pharmacy-supported interventions in reducing hospital readmissions [14]. However, neither of these studies evaluated the effectiveness of interventions delivered specifically to older populations.
Other systematic reviews have identified discharge interventions that reduce negative patient outcomes; however, their focus was broader than medications [15,16]. Evaluation of complex interventions, defined as those involving multiple components, outcomes, target behaviours or flexibility [17], is notoriously difficult [18,19]. To address this, Leppin and colleagues (building on the work of Hansen et al. [16]) developed a taxonomy of interventional components allowing in-depth comparison and meta-analysis [15]. Guidance published in 2000 by the UK's Medical Research Council (MRC), which funds health research, also established an influential good-practice framework [17] to help overcome evaluation challenges.
This review aims to build on this previous knowledge by evaluating interventions, aimed at supporting successful transitions of care for older patients through enhanced medication continuity, using a taxonomy of components.
Methods
To promote rigour and transparency, the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) checklist is presented (Supplementary Material A1) and the review is registered [PROSPERO (CRD42018086873)].
Search strategy
Published studies from 1st January 2003 to 1st September 2019 were sought from electronic databases (MEDLINE, EMBASE, CINAHL, PsycINFO, clinicaltrials.gov and the Cochrane Database of Systematic Reviews). The start date of 2003 was chosen to coincide with predicted uptake of MRC guidance by researchers, as demonstrated by Datta and Petticrew [18], and therefore its subsequent implementation within trials.
Medical Subject Headings chosen in collaboration with a subject librarian, including key search terms related to care transitions (e.g. transitional care, patient handoff and discharge), were combined with those related to medication continuity (e.g. pharmacy services, medication systems and safety) (see Supplementary Material A2). Additional citations were identified through hand-searching reference lists and forward citation search. English language restrictions were imposed due to time and resource limitations.
Inclusion and exclusion criteria
Eligible studies included participants with a mean age of 65 years or older, who were being prepared for hospital discharge or who had a recent discharge (intervention provided within 1 month of discharge or on first post-discharge primary care visit). Study interventions had to describe activities relating to medication that supported continuity. Outcomes of successful transitions were of interest; primarily a reduction in hospital readmission rates. Secondary outcomes relating to the safe use of medication (e.g. MRPs and discrepancies) and QoL were also included as these factors contribute to successful transitions and can be mediated through medication continuity. The search was limited to randomised controlled trials (RCT) and cluster RCT (cRCT) as these are considered the gold standard in the hierarchy of evidence [20].
Selection process
One reviewer (JT) independently screened titles and abstracts against the selection criteria, removing duplicates. Those rejected were reviewed by a second author (VC) to reduce the exclusion of potentially relevant publications [21]. Disagreements were discussed and final inclusion was determined after full-text review.
Data extraction and quality assessment
Data extraction was performed independently by two reviewers (JT and VC) using a predefined template. Abstracted data included demographics, intervention details, outcome measures and findings. Protocols or further detail from the study authors were sought wherever possible.
The methodological risk of bias was independently assessed in accordance with the Cochrane Handbook [22] and the guidelines of the Cochrane Consumers and Communication Review Group [23]. Five domains were rated: random sequence generation, allocation sequence concealment, blinding (outcome assessment), completeness of outcome data and selective reporting. Performance bias was not assessed because blinding of participants and intervention personnel would be impossible.

Table 1. Adapted taxonomy of medication-related intervention components:
• Patient education: patient-directed education related to medication but not focused on encouraging self-management and not occurring in the control arm.
• Self-management (education or coaching): patient-directed education or coaching directly focused on improving the patient's ability to self-manage their medication needs that does not happen in the control arm.
• Medication intervention, reconciliation: creating the most accurate list possible of all medications a patient is taking and comparing it to the current order, with the goal of providing correct medications at all transition points, when this does not happen or is performed by usual care staff in the control arm.
• Medication intervention, review: critical examination of a patient's medication with the objective of reaching an agreement with the patient about treatment optimisation, when this does not happen in the control arm.
• Patient-centred discharge document: some difference in the format or usability of discharge materials to make them more relevant or accessible when compared to the control arm.
• Collaboration within care team: healthcare professionals cooperatively working together, sharing responsibility for problem-solving and making decisions to carry out medication-related plans for patient care.
• Timely cross-sector communication: engagement with the other sector provider in communication about patient medication status when this does not occur or occurs at a later date in the control arm.
• Patient hotline: presence of an open line for patient-initiated communication when this either does not exist in the control arm or is more restricted in availability or usefulness.
Data synthesis and analysis
Information was used to form a description of the intervention components each patient received (when, how often and for how long). These activities were coded independently (JT and VC), guided by an adapted version of Leppin et al.'s taxonomy of interventional activities [15], modified by the reviewers for medication-related activities (see Table 1). Disagreements were resolved through discussion. Meta-analysis of all-cause readmission data was performed (where the risk ratio (RR) and 95% confidence intervals (CIs) could be calculated) using the longest reported follow-up period. Outcome effects were pooled using a Mantel-Haenszel random-effects model in Cochrane Review Manager (RevMan) V5.3 software. The I² statistic was calculated to describe the percentage of variation due to heterogeneity rather than chance, and publication bias was assessed. No other outcome data could be pooled due to variance in reporting measures.
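To make the pooling step concrete, here is a rough sketch of an inverse-variance DerSimonian-Laird random-effects model, a close relative of (not identical to) RevMan's Mantel-Haenszel random-effects procedure used in the review. It assumes per-study 2×2 counts with no zero cells (continuity corrections are omitted), and all names are ours:

```python
import numpy as np

def random_effects_rr(events_i, n_i, events_c, n_c, z=1.96):
    """Pool per-study risk ratios with a DerSimonian-Laird
    random-effects model. Assumes no zero cells. Returns the pooled
    RR, its 95% CI bounds and the I^2 heterogeneity statistic (%)."""
    a, n1 = np.asarray(events_i, float), np.asarray(n_i, float)
    c, n2 = np.asarray(events_c, float), np.asarray(n_c, float)
    y = np.log((a / n1) / (c / n2))         # per-study log risk ratio
    v = 1 / a - 1 / n1 + 1 / c - 1 / n2     # approximate variance of y
    w = 1 / v                               # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)      # Cochran's Q
    df = len(y) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_re = 1 / (v + tau2)                   # random-effects weights
    y_pool = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    i2 = 100 * max(0.0, (q - df) / q) if q > 0 else 0.0
    return np.exp(y_pool), np.exp(y_pool - z * se), np.exp(y_pool + z * se), i2
```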
Study inclusion
The search identified 2394 unique citations. A total of 2278 were excluded following title and abstract review. Full-text publications were assessed for 116 studies, resulting in 24 that met the selection criteria (see Figure 1). Consensus between reviewers was 94% with no studies excluded.
Study characteristics
Studies were conducted in 12 countries covering a range of public and privately funded healthcare systems (see Figure 2 for a summary of characteristics). A total of 17,664 participants were enrolled (range, 25 [24]-4656 participants [25]) and the samples' mean ages ranged from 66 [26,27] to 86 years [28] (Supplementary Material A3 provides full study characteristics). Nine studies described intervention bundles provided during hospital admission [25,27,29-35], seven of which were delivered by the inpatient pharmacy team and one by geriatricians [34]. One involved an electronic intervention [25]. Intervention components were most often delivered once during the inpatient stay. Nine interventions were commenced during admission and continued post-discharge, bridging the transition [26,28,36-42]. Five of these involved nurse-delivered interventions, sometimes acting as 'transition coaches', to facilitate the patient's role in self-care. Three were pharmacist-led [28,38,42] and one was multidisciplinary [41]. A further six studies evaluated interventions that commenced post-discharge [24,43-47], of which five were delivered by pharmacists. One study [45] involved automatic electronic transfer of patient information to the primary care provider. Overall, intervention delivery ranged from a single time point to 12 months post-discharge. The most intensive activity period was between discharge and 3 months post-discharge.
Intervention component characteristics
Supplementary Material A5 summarises the medication-related activity components coded within each study using the adapted taxonomy. Inter-rater agreement was high (κ = 0.77).
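Inter-rater agreement of this kind is typically Cohen's kappa; a minimal sketch (our own illustration, treating each study-component judgement as one item):

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for two raters assigning one code per item
    (e.g. 'present'/'absent' for each study-component pair).
    Assumes the raters are not in perfect chance agreement."""
    n = len(codes_a)
    p_obs = sum(x == y for x, y in zip(codes_a, codes_b)) / n
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    p_exp = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (p_obs - p_exp) / (1 - p_exp)
```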
Studies used varying numbers and combinations of activities within intervention bundles. Most studies utilised three or more activities (mean = 4.6; range 1-8). Three studies involved single-component interventions [27,30,45]. The range of time to first post-discharge activity was 2 days to 2 months. Table 2 shows that the most commonly reported activities were patient education (n = 5; 56%), reconciliation (n = 6; 67%), provision of patient-centred documentation (n = 4; 66%) and timely cross-sector communication (n = 7; 78%). Two studies showed a reduction in hospital readmissions [33,34]. One study [25] was considered to be high quality, with the other studies having selection or detection biases.
Interventions commenced during hospital admission and include continuing support post-discharge
The most widely used activity was patient education (n = 9; 100%) (see Table 2). Three studies provided education once: Casas et al. provided a two-hour educational programme at discharge [37], Huang et al. a medication safety information brochure [40] and Ravn-Nielsen et al. a 30-min motivational interview [42]. Two studies utilised 'transition coaches' to deliver education throughout follow-up [36,39]. Three studies provided education at admission and discharge using pharmacists (to advise on medication changes) or nurses (to advise on chronic conditions) [28,38,41]. One study [26] provided disease-specific education in the participant's native language. Medication reconciliation (n = 7; 78%) and patient-centred discharge documentation (n = 5; 63%), such as a 'personal health record' containing medication information [39], were also used. Post-discharge telephone calls (n = 5) to reinforce self-management [37,39], provide further education [26,38,40,42] and assess adherence [28,41] were conducted more frequently than home visits (n = 1). Three studies [39-41] used both methods, conducting a home visit within the first week post-discharge and subsequent weekly telephone calls.
Five of these studies demonstrated a statistically significant reduction in all-cause hospital readmissions [37-40,42]. All five interventions included follow-up (telephone, home visit or both) and education, continuing from 7 [38] to 180 days post-discharge [42]. Four of these studies [26,37,39,40] were considered to be at the highest risk of bias, however, as allocation was not concealed or outcome assessors were not blinded. Chan et al. [26] did not find any difference between arms on the Care Transitions Measure-3 score, which assesses the quality of the transitional care experience (P = 0.18); however, Huang et al. [40] found a greater improvement in QoL score within their intervention arm (I: +18.6 versus C: +15.3; P < 0.001).
Interventions commenced post-discharge
The majority of post-discharge interventions were provided by pharmacy staff (n = 5): community pharmacists [43]; outpatient polyclinic pharmacists [46]; and trained intervention pharmacists [24,44,47]. Table 2 shows that medication reconciliation and medication review were provided in most of the intervention bundles (n = 4; 66%). Home visits (n = 4) were conducted more frequently than telephone calls (n = 1). Of the six interventions, none showed a statistically significant reduction in hospital readmission and all were considered to be high quality. Holland et al. demonstrated a 30% increase in readmission rates (P = 0.009) in their intervention arm [47], involving review and education, and a decrease in visual analogue QoL scores (I: −7.36 versus C: −3.24; P = 0.042). Other studies reported a reduction in MRPs [48] (not statistically significant) and improvement in medication discrepancies (P < 0.001) [46] by using pharmacists for post-discharge review or reconciliation.
Meta-analysis
Nineteen studies reported hospital readmission data and were therefore combined by meta-analysis (Figure 2) (see Supplementary Material A6 for the full forest plot). One could not be included [31] as the results were reported in a way that did not allow calculation of the RR. Significant variability across studies was observed (I² = 70%).
Discussion
This systematic review aimed to evaluate the evidence for interventions that support successful transitions of care for older people through enhanced medication continuity. We found interventions that bridged the transition for up to 90 days were more likely to support successful transitions and reduce adverse outcomes. These interventions used on average more components than those focusing solely on hospital admission or post-discharge time periods (6.2 versus 3.6 versus 3.8 respectively), reflecting their higher intensity and longitudinal nature. Other reviews of discharge interventions have shown that multiple components are significantly more effective than a single activity [16, 49-51] and that their effects are sustained [49]. Actual time taken to deliver the intervention components was rarely reported, but is important to consider in the context of busy healthcare settings. For example, Ravn-Nielsen et al. [42] reported an average of 114 min spent per patient. The longer term sustainability of resource intensive interventions such as these and how they can be integrated into 'usual care' should be deliberated.
In this review, patient education, reconciliation and timely cross-sector communication were the most widely used activities. Reconciliation, performed manually or via electronic intervention, was shown to significantly reduce hospital readmission (RR 0.88 [0.81, 0.96]) and was linked to fewer medication errors [25,29,31,48]. The benefits of reconciliation appear highly contested in the literature. When provided after hospital discharge, reconciliation has not been shown to effectively reduce post-discharge harm or improve health outcomes [52]. However, reconciliation provided during admission has demonstrated a reduction in healthcare utilisation and improved patient safety [53,54].
Interventions in this review were delivered by a range of healthcare professionals, with no professional appearing more effective than the other. Ten studies [33, 34, 37, 40-44, 46, 47] also involved caregivers; mostly as an information source during reconciliation activities. Caregivers often support older patients during their day-to-day health management and can effectively promote self-management [55]. They could, therefore, be engaged in wider activities amongst these interventions and further work should identify opportunities for caregiver involvement within medication continuity.
The most effective component within these intervention bundles was self-management coaching or education. Promoting self-management in older patients has received global attention as it is thought to improve a patient's ability to manage their long-term conditions. Despite this, selfmanagement activities were used in less than half of included studies (n = 8). It is known that older people with low levels of social, cognitive, and physical functioning are generally poorer self-managers [56]. Therefore, how such individuals are supported to self-manage their medication through interventions such as these requires further attention.
Telephone follow-up (RR 0.84 [0.73, 0.97]) also reached statistical significance within our meta-analysis. Other reviews of telephone follow-up interventions [57-59] have been unable to demonstrate a reduction in readmission rates; however Crocker et al. [57] highlighted that patient engagement with post-discharge clinical contact was improved. This contact may, therefore, provide opportunities for reinforcement of educational messages and resolution of MRPs; however, barriers to implementation (e.g. time, cost and personnel resourcing) may limit its use. Patient-centred health documentation has practical and psychological benefits for patients, such as bolstering memory, as a tool for sharing information or feeling more empowered to ask health-related questions [60]. Within our review, it is unclear how patients made use of their personalised documentation; however, all examples included an up-to-date list of their medications presented in an acceptable format.
There is consensus that timely cross-sector communication supports medication continuity at transitions [50,61]. Although much emphasis has been given to improving communication at transitions [62], our meta-analysis did not find a significant effect on readmission rates (RR 0.90 [0.79, 1.02]). There have been technological advances to support timely communication, and many of the included studies transferred information to the primary care provider, community pharmacy or outpatient services at discharge. Specific methods included fax [25,27,31-33,38], telephone [34,42], email [26] and a secure electronic platform [24,37,45]. We found no interventions describing a method allowing primary care providers to readily communicate back to hospital providers. This is a barrier to medication continuity within the UK primary care sector when clarification or further information is required [61]. Further studies are needed to test interventions supporting this aspect of cross-sector communication.
Limitations
Studies were highly heterogeneous, drawn from varying populations, care settings and included different combinations of components and delivery time points. It is difficult to attribute success to individual components within bundles and our meta-analysis illustrates a modest overall effect size. Therefore, these results cannot demonstrate causality and we cannot draw firm conclusions. There is currently no validated medication continuity-related measure, which would have allowed us to better combine results. Three potential studies were also excluded due to English language restrictions and unavailability of full-texts.
Coding intervention components can be a highly subjective process [63]. We used our best judgement, especially when intervention descriptions were lacking detail. To reduce bias, two reviewers independently coded the components. Interventions were only coded if the activity was explicitly stated.
Most of the included studies contained methodological flaws, which affected their risk of bias assessment. It was unclear whether appropriate methods were in fact utilised and not reported or simply not performed at all. To improve future trials, studies must ensure absolute blinding of outcome assessors and that allocation concealment and randomisation are appropriately performed and documented.
Conclusion
Overall, our results suggest that interventions that bridge the care transition best support older patients' medication continuity and have the greatest impact on reducing hospital readmission. Interventions that included self-management, telephone follow-up and medication reconciliation activities were most likely to be effective. Further work needs to identify how best to engage with patients and their caregivers in order to better support post-discharge medication continuity.
Supplementary data

Supplementary data mentioned in the text are available to subscribers in Age and Ageing online. | 2020-01-09T09:14:37.575Z | 2020-02-11T00:00:00.000 | {
"year": 2020,
"sha1": "5609aae1a90b75e440a8738745b6a2697863a976",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1093/ageing/afaa002",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "9de4cf3631c8f3574a05422c947d4ee15428809d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
255148260 | pes2o/s2orc | v3-fos-license | North-to-South diversity of lipomycetaceous yeasts in soils evaluated with a cultivation-based approach from 11 locations in Japan
To understand the species distribution, diversity, and density of lipomycetaceous yeasts in soil based on their north-to-south location in Japan, 1146 strains were isolated from soil samples at 11 locations from Hokkaido to Okinawa Prefecture and taxonomically characterized. Lipomycetaceous yeast strains were isolated efficiently from soil by selecting watery mucoid-like colonies on agar plates with nitrogen-depleted medium. Twenty-four (80%) of the 30 known species of the genus Lipomyces were isolated from the soil samples collected in Japan, including recently proposed species. Among the species isolated, L. starkeyi was the most predominant in Japan, except on Iriomote Island, Okinawa, and accounted for 60-98% of the isolated strains. Lipomyces yarrowii was the dominant species on Iriomote Island (64%). The second most dominant species were L. chichibuensis in Saitama Prefecture and L. doorenjongii from Yamaguchi to Okinawa Prefecture. The species diversity of lipomycetaceous yeasts in Japan was thus characterized, and a significant correlation with the latitude of the sampling sites was revealed.
However, the species distribution and ecology of lipomycetaceous yeasts have not been well studied using recent molecular techniques. The density of lipomycetaceous yeasts in soil is low. In addition, these yeasts are nitrogen-oligotrophic (Vishniac, 1983; Babjeva & Gorin, 1987; Kimura et al., 1998; Cornelissen, Botha, Conradie, & Wolfaardt, 2003; Yurkov, Kemler, & Begerow, 2011); a specific medium is therefore required to isolate them from the other yeast and mold species, which grow rapidly and form large colonies on nutrient-rich media (Thanh, 2006; Yurkov et al., 2016).
Prior to this study, we found that the isolation method of Thanh (2006) was applicable and useful not only for separating the species, but also for counting the colony-forming units (CFUs) of lipomycetaceous yeasts in soils. We also found that CFU counts combined with sequence-based identification enabled us to calculate yeast density at the species level, improving our understanding of yeast ecology.
Japan has diverse climatic conditions ranging from subarctic, through temperate, to subtropical, owing to its wide latitudinal range (approximately 45° N 148° E to 24° N 122° E). In addition, there is also marked variation in soil type related to the diverse climate, vegetation and temperatures. Hence, a wide variety of lipomycetaceous yeast species was expected for this country.
The aim of the present study was to examine the species distribution, diversity, and density of lipomycetaceous yeasts isolated from soil samples collected at various locations in Japan to understand the characteristics and ecological features corresponding to the locality.
Strain isolation and CFU counting
Isolation of yeast strains was carried out using 242 samples of moist soil collected at a depth of 5-10 cm at 11 locations in Japan from 2011 to 2016 (Table 1) (Phaff & Starmer, 1987; Yurkov, 2018). The collected samples were stored in sealed plastic bags and kept damp. Soil samples were classified according to the map shown on the Japan Soil Inventory website of the Institute for Agro-Environmental Sciences, NARO (https://soil-inventory.rad.naro.go.jp/) (Kanda et al., 2018), which was based on the soil classification of the World Reference Base (WRB) for Soil Resources 2006 (International Union of Soil Sciences [IUSS] Working Group WRB, 2006). The temperature measurements were based on data from the Japan Meteorological Agency (http://www.data.jma.go.jp/obd/stats/etrn/index.php).
Soil (1 g) was suspended in 10 mL of saline (0.85% NaCl), and portions (100 μL) of the original and serially diluted suspensions were spread on a nitrogen-depleted medium (NDM; 20 g/L glucose, 0.85 g/L KH2PO4, 0.15 g/L K2HPO4, 0.5 g/L MgSO4·7H2O, 0.1 g/L NaCl, 0.1 g/L CaCl2·6H2O, 0.5 mg/L H3BO3, 0.04 mg/L CuSO4·H2O, 0.1 mg/L KI, 0.2 mg/L FeCl3·6H2O, 0.4 mg/L MnSO4·H2O, 0.2 mg/L Na2MoO4·2H2O, 0.4 mg/L ZnSO4·7H2O, and 20 g/L agar or 15 g/L gellan gum) as described by Thanh (2006); 0.1 g/L of chloramphenicol was added to the NDM to prevent bacterial growth. Soil samples (0.25 g) were also sprinkled directly onto NDM plates after grinding granules of soil aggregates (approx. >5 mm) with a spatula; the plates were then incubated at 25 °C for 3 wk. Since NDM is a selective medium for lipomycetaceous yeasts, other yeast species did not grow sufficiently to be detected without a stereomicroscope. Lipomycetaceous yeast colonies, recognized by their watery mucoid appearance due to extracellular polysaccharide (EPS), were confirmed by examining the colonies for yeast cells under a stereomicroscope and were subsequently counted (Supplementary Fig. S1 A-C). CFUs were counted from the plates inoculated by spreading the suspension. If colonies did not appear on any of these plates, the plates directly inoculated with the soil samples were used for CFU counting. After counting the CFUs, 5-10 colonies per sample were randomly selected (the colony appearances of lipomycetaceous yeasts were similar and indistinguishable). When picking lipomycetaceous yeast colonies from the NDM plates, mold hyphae were present but thin and sparse owing to the nitrogen-free medium, and were therefore easy to avoid. The picked colonies were then purified on YM agar medium (10 g/L glucose, 5 g/L peptone, 3 g/L yeast extract, 3 g/L malt extract, and 15 g/L agar; pH 5.6). In total, 1146 isolated strains were identified by LSU rDNA D1/D2 and TEF1 sequencing, followed by the molecular phylogenetic analysis described below. CFUs of each lipomycetaceous yeast species were calculated from the ratio of identified strains on the plate media, in order to compute the incidence and/or frequency of occurrence of the species.
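For illustration, CFU/g figures could be derived from this protocol (1 g of soil in 10 mL of saline, 100 μL of a given dilution plated) roughly as follows; this is our own sketch with assumed names, not code from the study:

```python
def cfu_per_gram(colonies, dilution_factor, plated_ml=0.1,
                 soil_g=1.0, suspension_ml=10.0):
    """CFU per gram of soil from a spread-plate count: 1 g of soil in
    10 mL of saline, 100 uL (0.1 mL) of a given dilution plated."""
    cfu_per_ml = colonies * dilution_factor / plated_ml
    return cfu_per_ml * suspension_ml / soil_g

def species_cfu(total_cfu, identified_counts):
    """Apportion a sample's total CFU/g among species by the ratio of
    sequence-identified isolates (dict of species -> isolate count)."""
    n = sum(identified_counts.values())
    return {sp: total_cfu * k / n for sp, k in identified_counts.items()}
```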
DNA extraction, amplification, and sequencing
Genomic DNA was extracted from yeast cells cultivated on YM agar medium for 3-4 d at 25 °C and harvested using a loop. DNA was isolated using PrepMan Ultra (Applied Biosystems, Foster City, CA, USA) according to the manufacturer's instructions.
To sequence the LSU rDNA D1/D2 and TEF1 genes, these DNA fragments were amplified by polymerase chain reaction (PCR) using EX Taq kits (Takara Bio Inc., Kusatsu, Shiga, Japan) and a Mastercycler Ep Gradient S Thermal Cycler (Eppendorf, Hamburg, Germany). The standard primer pairs used for amplification and sequencing were NL1 and NL4 for LSU rDNA (O'Donnell, 1993), and EF1-983F and EF1-2218R for TEF1 (Rehner & Buckley, 2005;Kurtzman et al., 2007). The PCR products were purified using the Agencourt AMPure purification system (Beckman Coulter Inc., Brea, CA, USA), and sequencing was performed using BigDye Terminator v3.1 Cycle Sequencing Kits (Applied Biosystems). DNA fragments generated from the sequencing reactions were purified using the Agencourt CleanSEQ system (Beckman Coulter) and analyzed using either an ABI PRISM 3130 or 3730xl Genetic Analyzer (Applied Biosystems), according to the manufacturers' instructions.
Phylogenetic analysis and identification
The isolates were identified by phylogenetic analysis based on LSU rDNA D1/D2 and TEF1 sequences using the maximum likelihood (ML) method with the Hasegawa-Kishino-Yano model (Hasegawa, Kishino, & Yano, 1985) or the General Time Reversible model (Nei & Kumar, 2000), respectively, using MEGA X software (Kumar, Stecher, Li, Knyaz, & Tamura, 2018). Phylogenetic trees were constructed for a total of 91 isolates and 52 ex-type or authentic strains belonging to the family Lipomycetaceae, plus Saccharomyces cerevisiae and Schizosaccharomyces pombe, for their LSU rDNA D1/D2 and TEF1 sequences (Supplementary Figs. S2, S3), using the 1146 sequences of isolates determined in this study and reference sequences from the public database. The percentage of replicate trees in which the associated taxa clustered together in the bootstrap test (1000 replicates) is shown next to the branches (Felsenstein, 1985). The tree was drawn to scale, with branch lengths in the same units as the evolutionary distances used to infer the phylogenetic tree. The evolutionary distances were computed using the maximum composite likelihood method (Tamura, Nei, & Kumar, 2004) and are expressed in units of the number of base substitutions per site. All ambiguous positions were removed from each sequence pair (pairwise deletion). There were 497 positions in the LSU rDNA D1/D2 and 760 positions in the TEF1 in the final dataset.
The strains were considered identified if they formed a single clade containing an ex-type strain of a known species in the tree. If no clade was formed with any ex-type strain, the strain was tentatively identified as the most closely related species in the LSU rDNA and TEF1 phylograms (Supplementary Figs. S2, S3) and was treated as that species in this study.
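The identification itself relied on ML phylograms built in MEGA X, but the "most closely related species" fallback can be illustrated with a simple identity comparison against ex-type reference sequences (our own sketch; it assumes pre-aligned, equal-length sequences):

```python
def percent_identity(seq_a, seq_b):
    """Identity between two pre-aligned, equal-length sequences;
    gap positions count as mismatches."""
    matches = sum(a == b and a != '-' for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

def closest_species(query_seq, ex_type_seqs):
    """Return the name of the ex-type reference (dict of species name
    -> aligned sequence) most similar to the query strain."""
    return max(ex_type_seqs,
               key=lambda sp: percent_identity(query_seq, ex_type_seqs[sp]))
```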
Growth temperature test
Growth temperature was examined by cultivation on YM agar medium at 4, 10, 15, 30, 33, 35, or 37 °C. Growth was observed for two weeks. The colonies that formed on the plate media were judged using three levels according to the colony size: "+," good growth; "w," weak growth; "-," no growth; and "ND," not determined.
CFUs of lipomycetaceous yeasts in soil
To investigate the ecological features and characteristics of lipomycetaceous yeasts, the isolation method described above (2.1) was used. Supplementary Figure S1 A-C shows the colonies grown on NDM agar plates for three weeks. Only watery mucoid yeast colonies were detected under the stereomicroscope (Supplementary Fig. S1 B-C) on the agar plates. Five to ten colonies from each soil sample were randomly selected, purified, and established as yeast strains. In total, 1146 strains were isolated, all of which were identified as lipomycetaceous yeasts by LSU rDNA D1/D2 domain and TEF1 sequences (Fig. 1; Supplementary Figs. S2, S3; Table 2). Based on this identification, the ratio of the identified yeast strains was applied to the number of yeast CFUs in each soil sample (Supplementary Table S2). Thus, this isolation method was effective for the quantitative counting of lipomycetaceous yeast colonies from soils.

Table 2 footnotes: The numbers in the row of the scientific name indicate the isolated lipomycetaceous strains; the values in parentheses indicate the frequency of occurrence of species isolation at the location.
The average, median, mode, and maximum and minimum mucoid lipomycetaceous yeast CFUs were determined after culturing the soil samples on the isolation medium (Supplementary Fig. S1 D-E). The CFUs in 1 g of soil at each location ranged from 80 to 656 on average (Table 2), and the species composition was diverse, particularly in Okinawa Prefecture compared with the other locations. Although the average was 334 CFU/g soil, no lipomycetaceous yeasts were found in 25% of the soil samples (9 samples; Table 2). The median and mode values were both 100 CFU/g. These results were similar to those of Di Menna (1966), who reported 10²-10⁴ CFU/g in unimproved or forest soil samples. The density of lipomycetaceous yeasts was also lower than that of other yeast species in some soil samples (Di Menna, 1957, 1965; Sláviková & Vadkertiová, 2000; Yurkov, Kemler, & Begerow, 2012; Yurkov, 2017), which average 10⁴-10⁶ CFU/g at a depth of 0-10 cm (Glushakova, Kachalkin, Tiunov, & Chernov, 2017). Therefore, the method used in this study was effective for isolating and counting lipomycetaceous yeasts. It was also reliable for evaluating the density of the majority of lipomycetaceous yeast species because of its high recovery, detecting 24 (80%) of the 30 Lipomyces species (Figs. 1, 2; Table 2).
Distribution of the major lipomycetaceous yeast species in Japan
In total, 1146 lipomycetaceous yeast strains were isolated from soils at 11 locations: from Furano and Sapporo in Hokkaido in the north; through Yamagata, Niigata, Saitama, Chiba, Shizuoka, Hyogo, and Yamaguchi Prefectures in Honshu; and Kagoshima Prefecture to Okinawa Prefecture in the south (Table 2). Although TEF1 could be used for species identification in most cases, several isolates could not be confirmed as a specific species and were tentatively identified as the species of highest affinity (shown with "aff." before the specific epithet), such as for Lipomyces doorenjongii (van der Walt, Smith, & Roeijmans, 1999) (Supplementary Table S1; Supplementary Figs. S2, S3). Regarding species diversity, 2-5 species were generally isolated from each location, whereas 10 were isolated from Kamogawa, Chiba, and 13 from Iriomote Island, Okinawa Prefecture (Table 2).
These two locations showed high species diversity because they contained rare lipomycetaceous yeast species not found at the other locations. Four samples from Kamogawa, Chiba Prefecture yielded five species: Lipomyces melibiosirhaffinosiphilus, L. kiyosumicus, L. chibensis, L. kamogawensis, and L. amatsuensis. These were described as new species based on our isolates in our previous study (Yamazaki et al., 2020). The isolation sources for these species were mainly soils of the rhizospheres of fir trees (Abies firma Sieb. et Zucc.) or broad-leaved deciduous tree vegetation among the 20 soil samples (Supplementary Table S2). The association with Abies firma vegetation should be confirmed in a further study with more isolation attempts, because the species variety (2-5 species) was detected in Kamogawa, Chiba Prefecture, but was not detected in similar vegetation in Shizuoka, Shizuoka Prefecture (Supplementary Table S2).
Among the 13 species isolated from samples of Iriomote Island, Okinawa Prefecture, seven species, Lipomyces taketomicus, L. yaeyamensis, L. iriomotensis, L. haiminakanus, L. komiensis, L. nakamensis, and L. sakishimensis, were described as new species in our previous study (Yamazaki et al., 2020). These were isolated from five soil samples, including forests, thickets, bushes, and a sakisimasuounoki plant community vegetation among the 46 soil samples from this location (Supplementary Table S2).
We found four major lipomycetaceous yeast species, Lipomyces starkeyi (Lodder, J. & Kreger-van Rij, N. J. W., 1952), L. yarrowii (van der Walt et al., 1999), L. doorenjongii and L. chichibuensis (Yamazaki & Kawasaki, 2014) in this study ( Table 2). The isolates of these four species accounted for 1053 strains (92%) of the 1146 lipomycetaceous yeasts. Figure 2 shows the distribution of these species in Japan. The distribution of these species was clearly correlated with the latitude of the sampling site.
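A latitude-diversity association of this kind would typically be tested with a rank correlation. The sketch below uses SciPy's spearmanr; the numbers are purely illustrative placeholders, not the study's data:

```python
from scipy.stats import spearmanr

# Illustrative placeholders only: latitude (degrees N) and number of
# Lipomyces species recovered per location. Not the study's data.
latitudes = [43.3, 43.1, 38.3, 37.9, 36.0, 35.1, 34.7, 34.2, 31.6, 24.3]
richness = [3, 2, 2, 3, 4, 10, 3, 5, 4, 13]

rho, p_value = spearmanr(latitudes, richness)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```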
Lipomyces starkeyi
The isolated strains of this species were divided into several variant groups in the TEF1 gene phylogenetic tree (Supplementary Fig. S3). The growth temperature test indicated that two (Fr20GeDr4 and Fr23AgDr5) of the 10 (20%) L. starkeyi strains from Furano, Hokkaido, a subboreal zone, did not grow at 30 °C, whereas all strains from the other locations (27 of 27) grew at this temperature (Supplementary Table S3). This species thus tends to be sensitive to high temperature, which may be one of the reasons why L. starkeyi was not isolated from Iriomote Island, Okinawa Prefecture. These results indicate that this species comprises several variant groups in Japan with diverse genetic and phenotypic characteristics.
Lipomyces yarrowii M.T. Smith & Van der Walt 1999
Lipomyces yarrowii was the dominant species in the samples from Iriomote Island, Okinawa (a subtropical zone), where L. starkeyi was not isolated (Figs. 1, 2; Table 2), and this was the only location where this species was found (Table 2). Lipomyces yarrowii has previously been isolated from tropical zones, such as Mauritius and Brazil, according to strain information from the website of the Westerdijk Fungal Biodiversity Institute (CBS; CBS-KNAW culture collection). Strains of this species include CBS 7557 T (isolated near Curepipe, Mauritius), CBS 7558 (near Vacoas, Mauritius), CBS 7785 (Amazonas Province, near Manaus, Brazil), and CBS 7789 (Amazonas Province, near Manaus, Brazil). Therefore, this species is believed to inhabit tropical to subtropical climates. Four strains (Ir9AgDr1-1, Ir19-2ADl2, Ir26AgDr3, and Ir37AgDr1) out of five (80%) did not grow, or grew only weakly, at 10 °C in the growth temperature test (Supplementary Table S3). In addition, Table 1 shows that the lowest annual air temperature at each location is below freezing, except on Iriomote Island, where it is 11 °C. These results indicate that the poor growth of L. yarrowii at low temperatures strongly affects the distribution of this species. This is one possible reason why Iriomote Island was the only location where L. yarrowii strains were isolated in this study.
Lipomyces doorenjongii Van der Walt & M.T. Smith 1999
Lipomyces doorenjongii strains, including those tentatively identified in this species, were isolated from Niigata Prefecture to Iriomote Island, Okinawa Prefecture. The species was isolated at the highest frequency in Yamaguchi and Shimane Prefectures in the southwest region of Honshu Island (Table 2; Fig. 2), indicating that the species inhabits temperate to subtropical climates.
Lipomyces doorenjongii was first recognized as a species related to L. starkeyi, L. kockii, and L. mesembrius and was subsequently described as an independent species based on DNA-DNA reassociation analysis (DNA relatedness values of 59-69%) (Smith, Poot, Batenburg-van der Vegte, & Van Der Walt, 1995; van der Walt et al., 1999). Kurtzman et al. (2007) conducted a multigene phylogenetic analysis of the lipomycetaceous yeasts using four DNA sequences (LSU rDNA D1/D2, SSU rDNA, mitochondrial SSU rDNA and TEF1) and showed that the D1/D2 sequence could not clearly distinguish these four species (few nucleotide substitutions), especially L. mesembrius and L. doorenjongii (no nucleotide substitutions). Yamazaki & Kawasaki (2014) indicated that the TEF1 gene can effectively be used for distinguishing the related species.
There is little ecological information regarding this species. Four strains (Ir38AgDl1, Ir39GeDr1, Ir40AgDr1, and Ir44GeDl1) out of 12 (33%) of L. doorenjongii isolated from Iriomote Island did not grow or grew weakly at 10 °C, similar to L. yarrowii, whereas most of the other strains isolated from Yamagata, Niigata, Chiba, Shizuoka, Hyogo, Yamaguchi, and Kagoshima Prefectures were able to grow at this temperature (Supplementary Table S3). These results indicate that the distribution of L. doorenjongii could correlate with the lowest air temperature of the sampling site and that the isolation zone is limited to the subtropical to temperate zones of the southwest area of Japan.
Lipomyces chichibuensis A. Yamazaki & H. Kawasaki 2014
Lipomyces chichibuensis was isolated from Sapporo, Hokkaido to the Yamaguchi Prefecture, and showed a broad distribution similar to that of L. starkeyi. Lipomyces chichibuensis was first isolated in Japan using the same method as that used in the present study (Yamazaki & Kawasaki, 2014). Figure 1 and Supplementary Table S2 show the distribution of species and CFUs/g according to the local vegetation, and that L. chichibuensis was mainly isolated from soil samples around Cryptomeria japonica (Japanese cedar) and Chamaecyparis obtusa (Hinoki cypress) trees. However, the distribution of this species could depend not on vegetation and soil type (class) but on location. The species was rarely isolated from Japanese cedar and/or Hinoki cypress in regions other than Chichibu, Saitama Prefecture (Figs. 1, 2; Table 2).
Conclusion
In this study, we investigated the distribution, diversity, and density (CFU/g) of lipomycetaceous yeast species in Japanese soils using a cultivation-based approach. Methods used in this study were demonstrated to be an effective tool for evaluating the density of lipomycetaceous yeasts and separating them from the other large majority of yeast species existing in the soil samples.
Species belonging to the genus Lipomyces were successfully isolated from soils in Japan, covering 80% (24 out of 30) of the known species, although there were several differences from previous reports regarding the occurrence of L. lipofer and L. tetrasporus.
Regarding species diversity, 2-5 species were generally isolated from each location, whereas 10 and 13 species were isolated from Kamogawa, Chiba Prefecture, and Iriomote Island, Okinawa Prefecture, respectively ( Table 2). The species diversity of the two locations was composed of rare lipomycetaceous yeast species described as new species in our recent study (Yamazaki et al., 2020).
These were exclusively isolated from these locations: 5 species from 4 out of 20 (20%) soil samples from Kamogawa, Chiba Prefecture, and 7 species from 5 out of 46 (11%) soil samples from Iriomote Island, Okinawa Prefecture.
The results of the species coverage showed that the method used in this study enabled effective isolation of Lipomyces species. Rare lipomycetaceous yeast species were successfully isolated in addition to the majority of the dominant species using this method. The variation in soils, climates, and vegetation of Japan due to its broad range of latitudes may have contributed to the diversity of lipomycetaceous yeasts. This result is supported by Vishniac (2006), who reported a relationship between yeast species distribution and latitudinal gradient.
Our results indicated that L. starkeyi was dominant among lipomycetaceous yeast species in many locations, suggesting that this species may have the ability to adapt to the environment with rapid growth in temperate and subarctic climates and various vegetation and soil types from Hokkaido to Kagoshima Prefecture. The phylogenetic tree based on TEF1 showed that there were several variant groups within the species (Supplementary Fig. S3), and the growth temperature test indicated variations among the isolates of this species (Supplementary Table S3). Phylogenetic variation within L. starkeyi including Japanese wild strains was also found in carbon utilization, such as that of D-glucose, D-xylose, L-arabinose, D-galactose, D-mannose, and D-cellobiose for lipid production (Oguri et al., 2012).
This study showed that the isolation technique, together with sample collection that takes soil location and vegetation into account, provided efficient isolation of various lipomycetaceous yeast strains. As Lipomyces species have the potential to produce large amounts of lipids from various sugars as raw materials, new high-performance strains may be isolated from the environments of Japan using this isolation method. | 2022-12-27T16:02:53.229Z | 2022-12-26T00:00:00.000 | {
"year": 2022,
"sha1": "2cb8880a09f1c5820397814deeba621a380cf4bb",
"oa_license": "CCBYNCND",
"oa_url": "https://www.jstage.jst.go.jp/article/mycosci/64/1/64_MYC593/_pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "368614d44df7459f5fbe1498f9f4fd7051bf55c7",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
18803807 | pes2o/s2orc | v3-fos-license | Epidermal transient receptor potential vanilloid 1 in idiopathic small nerve fibre disease, diabetic neuropathy and healthy human subjects
Aims: The transient receptor potential vanilloid 1 (TRPV1) plays an important role in mediating pain and heat. In painful neuropathies, intraepidermal TRPV1 nerve fibre expression is low or absent, suggesting that the pain generated is not directly related to sensory nerve fibres. Recent evidence suggests that keratinocytes may act as thermal receptors via TRPV1. The aim was to investigate epidermal TRPV1 expression in patients with neuropathic conditions associated with pain. Methods and results: In a prospective study of distal small nerve fibre neuropathy (DISN; n = 13) and diabetic neuropathy (DN; n = 12), intraepidermal nerve fibre density was assessed using the pan-axonal marker PGP 9.5, and epidermal TRPV1 immunoreactivity was compared with controls (n = 9). Intraepidermal nerve fibres failed to show TRPV1 immunoreactivity across all groups. There was moderate and strong TRPV1 reactivity of epidermal keratinocytes in 41.8% and 6% for DISN, 32.9% and 2.9% for DN and 25.4% and 5.1% for controls, respectively. Moderate keratinocyte TRPV1 expression was significantly increased in DISN compared with controls (P = 0.01). Conclusion: Our study suggests that in human painful neuropathies, epidermal TRPV1 expression is mainly in keratinocytes.
Introduction
Capsaicin, the primary pungent compound in 'hot' chili peppers, produces pain and inflammation when placed on skin or mucous membranes. These responses are a consequence of capsaicin activating the transient receptor potential vanilloid 1 (TRPV1), a non-selective cation channel receptor within C and A d nociceptors. 1 Several reports have demonstrated the existence of TRPV1 in sensory neurons 2,3 and the cloning of rat TRPV1 was a breakthrough in understanding its properties and physiological functions. 4 TRPV1 has now been found to be widely distributed amongst different types of tissues and has been identified in the human brain, 5 kidney, 6 bronchial epithelial cells 7 and recently in human keratinocytes in the epidermis. 8 TRPV1, by its activation through noxious temperature, protons and cannabinoid chemicals, plays an increasingly recognized role in the biology of dermatological and neuropathic conditions, 8 with anatomical distribution and disease influencing expression. TRPV1 expression in keratinocytes has recently led researchers to propose that keratinocytes may act as thermal receptors. 9 TRPV1 has been shown to be essential for the modalities of pain sensation and thermal hyperalgesia. 10 However, its role in the pathogenesis of neuropathic pain, such as occurs in peripheral neuropathies, remains controversial. Although one study in a patient with postherpetic neuralgia has shown increased TRPV1+ intraepidermal nerve fibres (IENF), a recent study in patients with painful neuropathy found that TRPV1+ IENF were reduced and sometimes completely absent, suggesting that TRPV1 in epidermal axons is not primarily involved in the pathogenesis of neuropathic pain. 11 Rather, it has been suggested that the skin as a whole may act as a polymodal nociceptor which undergoes functional changes in painful conditions. 11 We set out to study the expression of epidermal TRPV1 in human skin by investigating two different types of neuropathy compared with healthy controls. Idiopathic small nerve fibre neuropathy was chosen because spontaneous heat pain is one of its primary characteristics, diabetic neuropathy (DN) because this is a common cause of sensory neuropathy.
Materials and methods
In a prospective study, hypothenar intraepidermal nerve fibre density (IENFD) from healthy subjects and from subjects referred for investigation of neuropathy was assessed. 12 In the subgroup reported here, the distribution of TRPV1 in the epidermis of human skin is described. We studied the epidermal TRPV1 staining intensity and distribution in patients with length-dependent DN and distal idiopathic small nerve fibre disease (DISN) and compared these with healthy subjects.
subjects

After informed consent, those patients were included who had length-dependent DN with glove-and-stocking sensory symptoms (hypoaesthesia and/or dysaesthesia) with reduced touch perception to cotton wool and vibration sensation (128 Hz tuning fork) occurring in the presence of diabetes, and who were without pathological levels of vitamin B12, thyroid-stimulating hormone and T4. Patients with sensory symptoms but normal sensory testing to superficial touch and vibration were excluded.
Distal idiopathic small nerve fibre disease (DISN) was diagnosed in the presence of paraesthesiae (abnormal sensory perception) with additional findings of distal predominant small nerve fibre dysfunction (increased thermal and pin-prick sensation) on neurological examination. 13 On sensory testing the paraesthesiae are typically painful. Sensory symptoms had to occur without neurophysiological and clinical evidence of large nerve fibre disease (normal ulnar and median nerve sensory motor nerve conduction, normal sural nerve conduction and normal peroneal motor nerve conduction and normal monofilament testing (200 mg) for all five digit tips bilaterally. Detailed history and tests excluded diabetes mellitus, amyloidosis, toxic substances (alcohol, lead) and inherited sensory and autonomic neuropathies.
Healthy subjects were recruited from hospital staff and students from the National University of Singapore. One neurologist (E.P.W.-S.) examined all healthy subjects to exclude neuropathy, based on the absence of sensory and motor symptoms and a normal sensory and motor examination. Inclusion was furthermore dependent upon a negative history of diabetes, alcoholism, radiculopathies, and ulnar or median nerve entrapments, and no previous exposure to chemotherapy or other neurotoxic medications. Subjects aged <21 years or with hand sepsis or ulceration were excluded.
Procedures
All subjects gave informed consent and the National University Hospital Institutional Review Board approved the study. Hospital ethical guidelines are in accordance with the 1964 Declaration of Helsinki.
skin biopsy specimens
Skin biopsy specimens were taken from the hypothenar region of the non-dominant hand using a 3-mm punch after local infiltration with 2% lignocaine, as previously described. 12 Quantification of epidermal TRPV1 expression was performed using computerized image analysis (Image-Pro® Plus software; Media Cybernetics®, Silver Spring, MD, USA) linked to an upright microscope (Olympus® BX60, DP70 digital camera). The whole of the epidermis, including basal and suprabasal areas, was evaluated.
immunocytochemistry

The skin biopsy specimens were cryoprotected by immersion in 15% sucrose and sectioned at 30 μm thickness using a cryostat. Sections were mounted on glass slides, washed in five to six 1-h changes of phosphate-buffered saline (PBS) and incubated for 1 h in a solution of serum (Vector, Burlingame, CA, USA) to block non-specific binding of the antibody. This was followed by incubation overnight in affinity-purified rabbit polyclonal antibodies to neuropeptide protein gene product (PGP) 9.5 or TRPV1. PGP 9.5 is found in vertebrate neurons and neuroendocrine cells and is commonly used to detect intraepidermal nerve fibres. 14 The antibody to TRPV1 was purchased from Affinity Bioreagents (Golden, CO, USA). The sections were rinsed with PBS and incubated for 1 h at room temperature in a 1:200 dilution of biotinylated goat antirabbit IgG (Vector), followed by three changes of PBS to remove non-reacting secondary antibody. The sections were then reacted for 1 h at room temperature with an avidin-biotinylated horseradish peroxidase complex. The bound antibodies were visualized by treatment for 5 min in 0.05% 3,3′-diaminobenzidine tetrahydrochloride and 0.2% nickel ammonium sulphate in Tris buffer with 0.05% hydrogen peroxide. The colour reaction was stopped with several washes of Tris buffer, followed by PBS. Sections were mounted on gelatin-coated glass slides, dehydrated and lightly counterstained with methyl green before cover-slipping. Control experiments replaced the primary antibody with PBS or non-immune rabbit IgG serum fraction; these were performed on hypothenar skin and showed a lack of immunoreactivity.
intraepidermal nerve fibre density assessment

IENFD assessment was achieved by staining skin obtained from the hypothenar region with PGP 9.5, as previously described. 12

quantification of trpv1 receptor staining in keratinocytes

Assessment of immunoreactivity with TRPV1 was performed using a previously described and standardized semiquantitative assessment of photographs taken from antibody-labelled sections. 15 Using Image-Pro® Plus software, the colour image was converted into a black-and-white image with eight levels of monochrome grey tones (Figure 1). This picture was termed the 'filter'. The operator then selected the cut-off level of the perceived limit of TRPV1 staining, aided in this selection by a diagrammatic scale of the range of staining intensities seen in the 'filter'. The software program arbitrarily attaches a number ranging from 0 to 225 across all intensities of immunoreactivity, with absent (visually perceived as white) reactivity being ranked 225. The chosen numerical values used to assign intensity of reactivity with the Image-Pro® Plus software were 155-225 for absent staining (equivalent to immunonegativity), 124-154 for moderate staining and 0-123 for the highest level of immunoreactivity. Moderate and intense reactivity was selected by the operator at levels thought to represent moderate and intense immunoreactivity of cells. The 155-225 range (immunonegativity) was assigned a red colour, 124-154 (moderate reactivity) yellow and 0-123 (highest immunoreactivity) green (Figure 1). The Image-Pro® Plus software was then used to calculate the areas of the different intensities of TRPV1 reactivity of the epidermal tissue. The data were formulated as the percentage of epidermal tissue without, with moderate and with strong immunoreactivity for TRPV1. Measurements were performed for five fields per patient, chosen at random from two sections oriented longitudinally (425 μm). The operator delineated the field of epidermal skin to be assessed by the software (Figure 1). The percentage area of TRPV1 keratinocyte immunoreactivity was then automatically derived for the whole epidermal region as well as separately for the suprabasal and basal compartments. This subdivision was designed to assess separately the more actively dividing basal cell layers from the more differentiated keratinocyte cells located higher in the epidermis. 16 The suprabasal epidermal compartment was taken as the top 2/3 region in the absence of anchoring papillae extending into the dermis; for anchoring papillae, two cell layers superior to the basal laminae were included. The mean suprabasal and basal epidermal regions of absent, moderate and intense TRPV1 reactivity were obtained for each subject and subsequently summarized as means for each of the three groups.

Figure 1. A, standard skin transient receptor potential vanilloid 1 (TRPV1) immunoreactivity in a patient with distal idiopathic small nerve fibre disease. B, the same image converted to black and white with eight levels of monochrome grey tones. C, the same image further converted into colour coding: green represents the most intense immunoreactivity, yellow moderate reactivity and red immunonegativity. (TRPV1 peroxidase stain.)
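To make the binning step concrete, the sketch below reimplements the intensity classification in Python/NumPy. It is an illustrative reconstruction rather than the authors' Image-Pro Plus workflow: the function name and array inputs are assumptions, and only the 0-225 scale and the three cut-off ranges come from the description above.

```python
import numpy as np

def trpv1_reactivity_fractions(grey: np.ndarray, epidermis_mask: np.ndarray) -> dict:
    """Percentage of the delineated epidermal area in each staining class.

    grey: 2-D array of grey intensities on the paper's 0-225 scale,
          where 225 corresponds to white (immunonegative) pixels.
    epidermis_mask: boolean array marking the region drawn by the operator.
    """
    pixels = grey[epidermis_mask]
    total = pixels.size
    absent = np.count_nonzero(pixels >= 155)                       # 155-225: immunonegative
    moderate = np.count_nonzero((pixels >= 124) & (pixels < 155))  # 124-154: moderate
    intense = np.count_nonzero(pixels < 124)                       # 0-123: strongest staining
    return {
        "absent_pct": 100.0 * absent / total,
        "moderate_pct": 100.0 * moderate / total,
        "intense_pct": 100.0 * intense / total,
    }
```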
analysis

One-way ANOVA was used to calculate differences in intensity of reactivity between the three groups for the epidermis as a whole and subdivided into suprabasal and basal compartments. P < 0.05 was considered to be significant.
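For readers who wish to replicate this step, the equivalent one-way ANOVA can be run with SciPy as below; the per-subject mean reactivity values are made-up placeholders, not study data.

```python
from scipy import stats

# Hypothetical per-subject mean reactivity values (% area) for the three groups.
controls = [28.1, 31.4, 35.2, 29.8, 33.6]
disn = [44.6, 51.2, 47.9, 55.3, 49.0]
dn = [33.0, 38.7, 41.5, 36.9, 35.2]

# One-way ANOVA across the three groups, as in the analysis section above.
f_stat, p_value = stats.f_oneway(controls, disn, dn)
print(f"F = {f_stat:.2f}, P = {p_value:.4f}")  # P < 0.05 considered significant
```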
Results

clinical data
The mean glycated haemoglobin (HbA1c) of the diabetic patients was 9.5% (range 5.9-13.8%); in our laboratory, non-diabetic levels are <6.4%, optimal diabetic control is 6.5-7.0% and poor diabetic control is >8.0%. Six of the patients with idiopathic small nerve fibre disease and three of those with diabetes complained of a spontaneous or intermittent burning sensation in the hands or feet at the time of the biopsy. It was not possible to estimate accurately the length of time the burning sensation had been present. Normal numbers of epidermal nerves were observed in healthy controls, and significantly (P < 0.05) fewer nerve fibres were seen in idiopathic small nerve fibre disease and DN (Table 1).
trpv1 immunohistochemistry
The epidermis showed heterogeneous immunoreactivity, with increased reactivity occurring more in keratinocytes of the suprabasal layer of the epidermis (Figure 2). Tables 1-3 show the intensity of reactivity of the three different clinical groups according to analysis of the whole epidermal region and subdivision of the epidermis into suprabasal and basal regions. (Table abbreviations: DISN, distal idiopathic small nerve fibre disease; DN, diabetic neuropathy; IENFD, intraepidermal nerve fibre density. Numbers in brackets represent the standard deviation. *Significant difference of P = 0.015 between the small nerve fibre and healthy groups.) A statistically significant increase in TRPV1 immunoreactivity was observed in keratinocytes of patients with idiopathic small nerve fibre disease. TRPV1 expression was significantly increased in the whole epidermal region in DISN when compared with controls (P = 0.01). Further analysis showed that the suprabasal areas were consistently more immunoreactive than basal regions. Keratinocyte reactivity (moderate) for TRPV1 in the suprabasal region was significantly increased (P = 0.005) in DISN compared with the healthy group, but failed to reach significance for the diabetic group and when extending the analysis to the basal region. Keratinocyte reactivity was mostly cytoplasmic, with little or no reactivity of the nucleus (Figure 2).
Discussion
This study reports on the epidermal distribution of TRPV1 in the glabrous hand skin of healthy controls and of patients with DISN and DN. The most important finding was the increased keratinocyte immunoreactivity to TRPV1 in patients with DISN in the absence of IENF immunoreactivity with TRPV1. For many years it was thought that TRPV1 was expressed primarily in peripheral sensory neurons. 17 Studies performed in the skin of both humans and rodents have carefully investigated the distribution of TRPV receptors in nerve tissue and across different tissues. 8,18 Stander's study investigating the distribution of TRPV1 in cutaneous sensory nerves of human skin pointed out that in healthy humans, IENF do not immunoreact, or only poorly immunoreact, with TRPV1. 8 Skin areas with strong TRPV1 reactivity were the dermal sensory axons, hair follicles, blood vessels and keratinocytes. In contrast, a recent study has found TRPV1 expression throughout the peripheral nervous system, including intraepidermal fibres. 11 The IENF immunonegativity in both our study and that of Stander et al. is consistent with the report of reduced or absent TRPV1+ IENF density in the epidermis of patients with neuropathic pain. 11 The finding of this study that keratinocyte TRPV1 is significantly increased in patients with DISN suggests that one of the primary characteristics of DISN, spontaneous heat pain, may be related to increased keratinocyte TRPV1. The robust evidence of strong TRPV1 expression in keratinocytes 9 has led to the belief that keratinocytes are capable of functioning as one of the main thermosensory receptors. 19 Studies using cultured human epidermal keratinocytes have confirmed that expressed TRPV1 responds to heat and noxious stimuli 20 and that activation of epidermal keratinocyte TRPV1 results directly in release of cyclooxygenase-2, one of the main mediators of inflammation. 21 The finding of reduced IENFD in our patients with DISN further supports the notion that the IENF themselves do not play a primary role in the production of heat sensation. To our knowledge, only one study has demonstrated increased TRPV1 immunoreactivity in a patient with painful neuropathy (postherpetic neuralgia). 22 It is interesting to note that although the authors report a somewhat patchy increase of IENF with increased TRPV1, the immunofluorescence also seems clearly to show increased keratinocyte expression in the skin afflicted with neuropathy. The authors propose that pain is maintained by a peripheral nociceptive maladaptation located within the skin, a theory apparently supported by the neuropathic pain responding directly and drastically to excision of the painful skin.
The association of increased keratinocyte TRPV1 in human conditions with pain has recently been demonstrated in women with breast pain, in patients with rectal hypersensitivity and faecal urgency, as well as in pruritic skin of patients with prurigo nodularis. 8,23,24 We recently described a patient with steroid-responsive small nerve fibre symptoms of burning feet and hands, in whom keratinocytes showed strong TRPV1 expression. 25 On follow-up skin biopsy after steroid administration, keratinocyte TRPV1 expression was drastically reduced in parallel with remission of clinical symptoms (manuscript in preparation).
The semiquantitative technique employed in this study has shown that in the epidermis of the healthy human glabrous skin about one-third of the keratinocytes show moderate TRPV1 expression, around 5% intensely so. The suprabasal epidermal area showed highest expression across all three groups, suggesting that expression may be linked to greater differentiation of keratinocytes, which is least in the basal epidermal region. 16 This contrasts with the study by Stander et al., which found the highest expression in the basal keratinocytes. 8 This may be due to differences in TRPV1 antibody specificity, in addition to the different skin type (glabrous) examined in the present study.
Our study has several shortcomings. Because of small numbers, it was not possible to perform statistical analysis to analyse the effect of burning compared with no burning across the groups or within groups. This would be of considerable interest in future studies. Furthermore, neuropathic pain or thermal hypersensitivity was also seen in some of the patients with DN. This overlap of clinical symptoms may help to explain why the mean keratinocyte TRPV1 expression in the DN group was also higher compared with controls.
In conclusion, our data show that increased keratinocyte TRPV1 expression in patients with small nerve fibre neuropathy and, to a lesser extent, in DN may play a role in explaining some of the typical clinical features of increased sensitivity to noxious stimuli.
Conflict of interest
None declared.
Clinicopathological correlations of mesenteric fibrosis and evaluation of a novel biomarker for fibrosis detection in small bowel neuroendocrine neoplasms
Purpose: Mesenteric fibrosis (MF) in small intestinal neuroendocrine neoplasms (SINENs) is often associated with significant morbidity and mortality. The detection of MF is usually based on radiological criteria, but no previous studies have attempted a prospective, multidimensional assessment of mesenteric desmoplasia to determine the accuracy of radiological measurements. There is also a lack of non-invasive biomarkers for the detection of image-negative MF.
Methods: A multidimensional assessment of MF incorporating radiological, surgical and histological parameters was performed in a prospective cohort of 34 patients with SINENs who underwent primary resection. Pre-operative blood samples were collected in 20 cases to evaluate a set of five profibrotic circulating transcripts, the "fibrosome", that is included as an "omic" component of the NETest.
Results: There was a significant correlation between radiological and surgical assessments of MF (p < 0.05). However, there were several cases of image-negative MF. The NETest-fibrosome demonstrated an accuracy of 100% for the detection of microscopic MF.
Conclusions: The detection of MF by radiological criteria has limitations. The NETest-fibrosome is a promising biomarker for fibrosis detection, and further validation of these results would be needed in larger, multicentre studies.
Introduction
The development of mesenteric fibrosis (MF) in small intestinal neuroendocrine tumours (SI NETs) is associated with significant morbidity [1,2] and may also adversely affect patient prognosis [3-5]. Despite its sinister and substantial clinical ramifications, MF remains an under-researched area of neuroendocrine neoplasia and its pathophysiology is poorly understood [1].
Typically, the presence of mesenteric desmoplasia is determined radiologically. However, the assessment of MF is a problematic area, because there is very limited literature on the multidimensional evaluation of fibrosis using a triangulation of different methodologies. To the best of our knowledge, the present study is the first report of a prospective evaluation of mesenteric desmoplasia using different methods of assessment. Our hypothesis was that conventional imaging may have limitations particularly for the detection of small amounts of fibrosis that can be revealed by histological examination of the mesenteric mass. Although histological measures of MF are not routinely used, we decided to evaluate two parameters: (1) the width of fibrous bands, which was used in an older study of MF [6] and (2) the Collagen Proportionate Area (CPA), which has been previously used in the field of hepatology as an index of liver fibrosis severity [7].
Based on our hypothesis that clinical assessments of MF may not necessarily detect minimal degrees of fibrosis evident at histological level, we decided to also evaluate a non-invasive biomarker with potential utility for the detection of 'image-negative' desmoplasia. Currently, there is a lack of clinically useful biomarkers for fibrosis in SI NETs. Although several non-invasive biomarkers have been investigated in the context of carcinoid heart disease [1], only a few studies have assessed the utility of non-invasive biomarkers (serum CTGF [Connective Tissue Growth Factor], urinary 5-HIAA [5-hydroxyindoleacetic acid]) in MF [3,8,9]. However, these biomarkers have modest performance metrics, require further validation and have not gained acceptance.
The NETest is a PCR-based tool that measures a panel of 51 circulating transcripts and has an excellent sensitivity and specificity for the diagnosis of neuroendocrine neoplasia [10,11]. We hypothesised that a subset of genes within this 51-gene panel with defined roles in fibrosis development-the fibrosome-may be predictive of mesenteric desmoplasia. Therefore, the main aims of this prospective study were firstly to evaluate MF using a multidimensional approach and assess the accuracy of radiological criteria, and secondly to evaluate the performance of the NETest-fibrosome panel as a blood-based biomarker for the detection of MF.
Materials and methods
A total of 34 patients with SI NETs, who underwent primary resection at the Royal Free Hospital, ENETS Centre of Excellence between 2016 and 2018, were prospectively recruited into this study. Informed consent was obtained from each patient included in the study. The study protocol conforms to the ethical guidelines of the 1975 Declaration of Helsinki (6th revision, 2008) as reflected in a priori approval by the institution's human research committee (UCL Biobank Ethical Review Committee approval [reference number NC2017.003]). A summary of patient characteristics is provided in Table 1.
A multidimensional assessment of MF was used, and the following components were assessed: (i) The radiological severity of mesenteric desmoplasia was based on the scoring system originally proposed by Pantongrag-Brown et al. [6] using the following categories: (a) No radiological evidence of mesenteric desmoplasia (Absence of radiating strands), (b) Mild desmoplasia (≤10 thin radiating strands), (c) Moderate desmoplasia (>10 thin strands or <10 thick strands) and (d) Severe desmoplasia (≥10 thick strands).
(ii) The histological assessment of MF was based on the histological slide with the maximum amount of fibrous tissue. Surgical resection specimens (rather than biopsy material) were used for this purpose to minimise the risk of sampling error. In fibrotic patients the mesenteric mass and surrounding tissue were examined for fibrous tissue using Sirius Red staining, while in non-fibrotic patients (who did not have a mesenteric mass), the non-fibrotic mesentery adjacent to the resected primary tumour was examined. The histological slide was stained with a connective tissue stain (Sirius Red) and two parameters were measured: (a) the width of the thickest fibrous band surrounding the tumour. This technique was used previously by Pantongrag-Brown et al. and showed a correlation with the radiological assessment of MF [6]. In their publication, Pantongrag-Brown et al. also introduced a new histological parameter, the so-called 'fibrosis grade', which is based on the maximum width (grade 1: width <1 mm; grade 2: 1-2 mm; grade 3: >2 mm) [6]. (b) The Collagen Proportionate Area (CPA), which represents the percentage of collagen in the stroma surrounding the tumour. This is a quantitative method of measuring fibrous tissue using digital image analysis and has been validated in liver cirrhosis [7,12]. (Code sketches of the radiological grading and the CPA calculation follow this list.)
Optimisation/characterisation of the inter-observer variability
The cross-sectional imaging (CT/MRI scan) was assessed independently by two assessors (CS and JB) with good inter-observer agreement. In a small number of cases (n = 3) a minor discrepancy was observed between the two assessments and consensus was reached between the assessors after a final review of the imaging studies.
The histological slides were assessed independently by two assessors (AH and SA) with good inter-observer agreement. In the case of minor discrepancies (<20% difference between the two measurements), the mean value of the two assessments was calculated and used for our analysis. In the small number of cases with more significant discrepancies (>20% difference between the two measurements), consensus was reached between the two assessors after a final review of the slides.
(iii) A surgical assessment of the extent of MF in relation to the entire small bowel mesentery was also provided. This assessment was made by the operating surgeon (the same surgeon [OO] performed the macroscopic assessment of mesenteric desmoplasia in all the cases).
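As flagged above, the following sketch illustrates how the radiological severity grade of point (i) and the CPA of point (ii)(b) could be computed. Both functions are hypothetical helpers written from the definitions in the text: the strand-count thresholds follow the scoring system verbatim, while the Sirius Red colour rule and its threshold values are assumptions, since the study used dedicated image-analysis software.

```python
import numpy as np

def desmoplasia_grade(n_thin: int, n_thick: int) -> str:
    """Map counts of thin/thick radiating strands to the severity grade."""
    if n_thick >= 10:
        return "severe"      # >= 10 thick strands
    if n_thick > 0 or n_thin > 10:
        return "moderate"    # > 10 thin strands or < 10 thick strands
    if n_thin > 0:
        return "mild"        # <= 10 thin radiating strands
    return "absent"          # no radiating strands

def collagen_proportionate_area(rgb: np.ndarray, stroma_mask: np.ndarray,
                                red_min: int = 120, green_max: int = 90) -> float:
    """Return CPA (%): share of stromal pixels flagged as Sirius Red-positive.

    The red-dominance rule and the threshold defaults are illustrative only.
    """
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    collagen = (r >= red_min) & (g <= green_max) & stroma_mask
    return 100.0 * collagen.sum() / stroma_mask.sum()
```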
A total of 20 patients were included in the biomarker assessment study (a subset of 19 patients from the mesenteric desmoplasia evaluation study and an additional patient who had extensive MF and unresectable disease, who was not included in the desmoplasia assessment study, since no histology was available). The characteristics of this patient cohort are summarised in Online Resource 1. No patients had carcinoid heart disease or other fibrotic conditions.
The presence of MF was assessed using a multidimensional approach, incorporating radiological, surgical and histological parameters. We then evaluated the utility of the Fibrosome in the detection of both macroscopic and microscopic fibrosis. Two patients were classified as 'nonfibrotic' and eighteen as 'fibrotic'.
A total of 31 blood samples were collected preoperatively (within 24 h of surgery) in 5 ml EDTA tubes and stored at −80°C within 2 h of collection (samples were immediately stored on ice/4°C after sampling). De-identified samples were shipped on dry ice to Wren Laboratories, USA, for analysis. The methodology of NETest measurements has been previously described [13-15].
For this study, we assessed a subset of five circulating transcripts (from the entire 51-gene molecular signature) with known roles in fibrosis, namely: CTGF, CD59, APLP2 (amyloid precursor-like protein 2), FZD7 (frizzled homologue 7) and BNIP3L. These five genes (with the exception of CTGF) have not been investigated in the context of carcinoid-driven fibrosis but have been linked to fibrosis in other conditions. FZD (Frizzled) receptors are seven-transmembrane receptors that bind Wnt proteins and mediate the canonical and non-canonical Wnt signalling pathways. Wnt signalling plays important roles in tissue development and repair, as well as carcinogenesis, but more recently it has also been implicated in fibrogenesis [16,17]. FZD7 in particular has been shown to mediate TGFβ-induced pulmonary fibrosis via the non-canonical Wnt signalling pathway and to lead to the expression of collagen I, fibronectin, CTGF and α-SMA in lung fibroblasts [18]. CTGF is a known mediator of fibrosis, which acts downstream of TGFβ, and has been previously investigated in carcinoid-related desmoplasia and other fibrotic conditions [1,9,19]. BNIP3L is also implicated in cardiac fibrosis, where it is known to promote TGFβ expression in cardiac fibroblasts [20]. Moreover, CD59 is a regulator of complement activation and inhibits the formation of the membrane attack complex. The complement system is involved not only in innate immunity and adaptive responses but also in tissue repair and fibrosis [21,22]. Thus, CD59 may be viewed as a regulator of fibrosis. Finally, APLP2 is widely expressed in human cells and has been implicated in cancer progression. A recent study in Drosophila demonstrated that APLP2 expression promotes cell migration by inducing matrix metalloproteinase MMP1 expression, which in turn leads to basement membrane degradation [23]. Therefore, this protein may play a role in extracellular matrix remodelling, and its precise role in carcinoid-related fibrosis needs to be further investigated.
Statistical analysis was performed using GraphPad Prism® version 8 and SPSS version 25 statistical software. A p-value < 0.05 was considered statistically significant.
Inter-observer variability
There was a 91% agreement in the radiological assessment of mesenteric desmoplasia between the two assessors. In addition, the inter-observer variability in the histological measurements was very small for both the CPA (Spearman's correlation r = 0.86998 [95% CI 0.7487, 0.9347], p < 0.0001) and width of fibrous band measurements (Spearman's correlation r = 0.9174 [95% CI 0.8366, 0.9591], p < 0.0001) between the two assessors.
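The agreement statistics quoted above can be reproduced with a few lines of SciPy; the paired measurements below are placeholders rather than the study's raw data.

```python
from scipy.stats import spearmanr

# Hypothetical paired histological measurements (e.g. fibrous band width, mm)
# from two independent assessors.
observer_a = [0.8, 1.4, 2.1, 0.6, 1.9, 2.6, 1.1, 0.9]
observer_b = [0.9, 1.3, 2.3, 0.6, 1.8, 2.8, 1.0, 1.0]

# Spearman's rank correlation as a measure of inter-observer agreement.
rho, p = spearmanr(observer_a, observer_b)
print(f"Spearman r = {rho:.3f}, P = {p:.5f}")
```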
Correlation of surgical and radiological assessments of MF
There was a statistically significant correlation between the surgical and radiological methods of assessment of MF (Fisher's exact test, p = 0.014) (Table 2). Of those patients without evidence of MF on cross-sectional imaging (n = 15), no fibrous tissue was detected intra-operatively in nine cases (60%), while the remaining patients (40%) had fibrosis macroscopically. On the other hand, when fibrosis was detected radiologically (n = 19), this was also seen intra-operatively in most cases (n = 16; 84%) (Table 2).
Correlation of surgical/radiological and histological assessments of MF
In several cases there was histological evidence of fibrosis around the mesenteric mass which was not seen radiologically, indicating the presence of image-negative mesenteric desmoplasia (Figs 1-3). Similarly, MF was often present histologically but not detected intra-operatively by macroscopic inspection (Figs 1 and 3).
Evaluation of the NETest-fibrosome as a biomarker for MF
In this small cohort of 20 patients there was one patient who did not appear to have obvious mesenteric desmoplasia at macroscopic assessments of fibrosis, although some minimal fibrosis was detected histologically (Fig. 3). In this case, a thin fibrous capsule was seen around a small mesenteric lymph node. Although the natural history of mesenteric mass formation is not well documented in the literature, this small fibrotic lymph node would conceivably have developed into a larger fibrotic mesenteric mass, if it had been left in situ. Therefore, the ability of a biomarker to detect both macroscopic and microscopic fibrosis may be of clinical utility in anticipating the development of fibrosis, when this is not evident using solely macroscopic assessments.
Patients with macroscopic and microscopic MF were included in the fibrotic group and the ability of the five circulating transcripts from the NETest (APLP2, BNIP3L, CD59, CTGF and FZD7) to define a fibrotic phenotype was assessed.
The mathematical combination of the five circulating transcripts achieved an AUC of 1.000 (95% CI 1.000, 1.000, p < 0.001) and a predictive model based on the combination of these transcripts exhibited an accuracy of 100% for predicting the presence of MF (sensitivity 100%, specificity 100%) ( Table 3). This demonstrated the ability of these five circulating transcripts to determine the presence of desmoplasia, not only when it was macroscopically evident but also when it was detected only histologically.
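To make the modelling step concrete, the sketch below shows one generic way to combine five transcript values into a single score and compute an AUC. It is not the NETest-fibrosome algorithm, whose exact mathematical combination is not specified here; logistic regression and the randomly generated data merely stand in for the real normalised APLP2, BNIP3L, CD59, CTGF and FZD7 inputs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic stand-in data: 20 patients x 5 transcripts, with a fabricated
# fibrosis label; none of this reflects the study's measurements.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
y = (X[:, 3] + X[:, 4] > 0).astype(int)   # synthetic "fibrotic" label

# Fit a simple combined score and evaluate its discrimination by AUC.
model = LogisticRegression().fit(X, y)
scores = model.predict_proba(X)[:, 1]
print("AUC =", roc_auc_score(y, scores))
```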
Discussion
The present study is the first report of a prospective correlation of surgical, radiological and histopathological findings of MF associated with SI NETs and also the first study evaluating a circulating transcriptomic signature as a biomarker for the prediction of MF.
It is quite surprising that we could identify only one old, small, retrospective study, published by Pantongrag-Brown et al. nearly 25 years ago, that evaluated the severity of MF by both radiological and histological criteria in midgut NETs with a mesenteric mass. Interestingly, in this study 21 cases with an associated mesenteric mass were evaluated by both methods (computed tomography and histology) and fibrotic tissue was detected histologically in all those cases [6]. This is in keeping with our observations, which suggest that the presence of a mesenteric mass was invariably associated with the development of fibrosis, although sometimes this was not detected on imaging and was only seen histologically as a 'fibrous capsule' (Figs 1 and 3). This is a quite unusual pattern which has not been reported previously and is distinct from the typical 'stellate' or 'spoke-wheel' appearance, which is described in the literature as a pathognomonic sign of a midgut carcinoid with associated mesenteric desmoplasia. Thus, our data demonstrate clearly that cross-sectional imaging often underestimates the presence of fibrosis. Although the clinical significance of image-negative MF is currently unknown (since this is a new concept and no previous studies have assessed the evolution of this entity), our better understanding and recognition of image-negative desmoplasia may be important not only for clinical purposes but also because it will hopefully allow the investigation of the natural history of this entity in future studies.

Fig. 1 Correlation of the presence of MF by radiological/surgical criteria with histological measurements of fibrosis. In several cases, there was histological evidence of fibrosis which was not seen on imaging studies or intra-operatively. (Panel c: on histology, a fibrotic capsule is seen surrounding the mesenteric lymph node, indicating the presence of image-negative mesenteric desmoplasia.)

Fig. 3 Review of surgical (a), radiological (b) and histological (c) assessments in a patient with a SI NET. (a) The primary tumour and mesenteric lymph node were removed laparoscopically; a small, soft palpable lymph node was seen intra-operatively with no obvious surrounding fibrosis. (b) Similarly, the CT scan showed a small lymph node with some subtle spiculation, but no evident desmoplasia with the typical 'stellate pattern'. (c) The histological slide of the lymph node with Sirius Red staining showed a fibrotic capsule around the small (~14 mm) metastatic lymph node. This minimal amount of fibrous tissue was not obvious at macroscopic assessment.
Although our study showed that macroscopic assessments (radiological or surgical) of MF were often inaccurate and therefore histological measurements should be the gold standard for the determination of MF, the most significant limitation of histological measures is that a surgical resection specimen is required. Therefore, the development of circulating biomarkers with a high sensitivity and specificity for the pre-operative detection of image-negative mesenteric desmoplasia may have important clinical utility.
In the present study we evaluated a subset of five genes from the NETest that are related to fibrosis and assessed their performance metrics in the detection of macroscopic and microscopic fibrosis. The NETest is a PCR-based 51 transcript signature that has an excellent (>90%) sensitivity and specificity for the diagnosis of gastroenteropancreatic NETs, and has been shown to outperform conventional secretory biomarkers, such as chromogranin A [11,13,14,24,25]. In addition, this molecular signature correlates with disease status [26,27] and captures the hallmarks of neuroendocrine neoplasia [28]. The NETest has also been shown to predict response to somatostatin analogue therapy [15], peptide receptor radionuclide therapy [29,30], operative resection and ablation strategies [31].
Given the ability of this multianalyte liquid biopsy to capture the multidimensionality of neuroendocrine neoplasia, we hypothesised that a subset of five genes from the NETest (APLP2, BNIP3L, CTGF, CD59 and FZD7) that are involved in fibrosis, the fibrosome, may be a clinically useful and accurate biomarker of MF. In this small cohort of 20 patients, who did not have carcinoid heart disease or other fibrotic disorders, the fibrosome could accurately predict the presence of microscopic (image-negative) fibrosis (100%). This mirrors the ability of circulating transcripts (NETest) to detect microscopic tumour burden when conventional imaging modalities (CT/MRI and 68Ga PET/CT) are negative (image-negative liver disease) [32], although the clinical implication of such micrometastatic disease and its impact on medical management strategies remain unclear.
There are several limitations to the present study. Firstly, the number of patients is relatively small and therefore validation of our findings in larger, ideally multicentre, prospective studies would be needed. Secondly, the radiological evaluation of MF included CT imaging in some cases and MRI in others, and although no studies have compared the sensitivity and specificity of these different techniques in fibrosis detection, this may have led to some discrepancies in these evaluations. Also, recent advances in imaging modalities mean that a direct comparison with the older study of Pantongrag-Brown et al. published in 1995 may not be entirely valid. However, this is the only study in the literature where such clinico-pathological evaluations of MF were performed. Thirdly, the surgical assessments of MF were rather subjective and based on a macroscopic evaluation during surgery when an accurate assessment can sometimes be difficult (for example, in the context of bleeding). Finally, we assessed a circulating molecular signature as a biomarker for fibrosis. Conceivably the levels of circulating transcripts in the blood might be affected not only by the levels of gene expression in the tissue, but also the size of the primary tumour and fibrotic mesenteric mass, as well as treatments (e.g. somatostatin analogues), and although this is currently not known, it should be mentioned as a potential limitation.
In conclusion, this study has utilised a triangulation of different methodologies to assess MF in SI NETs and has introduced the concept of image-negative mesenteric desmoplasia. It has also investigated the role of a novel circulating biomarker in the detection of MF. In future studies, these findings would need to be externally validated in additional and larger patient cohorts. Furthermore, the clinical role of this circulating molecular signature in other fibrotic complications of neuroendocrine tumours (such as carcinoid heart disease) would need to be explored, as well as its specificity for carcinoid-related fibrosis in patients with other fibrotic conditions. These studies will define the role of this promising novel biomarker and delineate its clinical utility in a variety of clinical applications.
Compliance with ethical standards
Conflict of interest The authors declare that they have no conflict of interest.
Ethical approval All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional research committee (UCL Biobank Ethical Review Committee approval [reference number NC2017.003]) and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.
Informed consent Informed consent was obtained from all individual participants included in the study.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Dietary Nucleotides Supplementation Improves the Intestinal Development and Immune Function of Neonates with Intra-Uterine Growth Restriction in a Pig Model
The current study aimed to determine whether dietary nucleotide supplementation could improve the growth performance, intestinal development and immune function of intra-uterine growth restricted (IUGR) neonates, using the pig as an animal model. A total of 14 pairs of normal birth weight (NBW) and IUGR piglets (7 days old) were randomly assigned to receive a milk-based control diet (CON diet) or a diet supplemented with nucleotides (NT diet) for a period of 21 days. Blood samples, intestinal tissues and digesta were collected at necropsy and analyzed for morphology, digestive enzyme activities, microbial populations, peripheral immune cells, and expression of intestinal innate immunity and barrier-related genes and proteins. Compared with NBW piglets, IUGR piglets had significantly lower average daily dry matter intake and body weight gain (P<0.05). Moreover, IUGR markedly decreased the villous height and villi:crypt ratio in the duodenum (P<0.05), as well as the maltase activity in the jejunum (P<0.05). In addition, IUGR significantly decreased the serum concentrations of IgA, IL-1β and IL-10 (P<0.05), as well as the percentage of peripheral lymphocytes (P<0.05). Meanwhile, down-regulation of innate immunity-related genes such as TOLLIP (P<0.05), TLR-9 (P = 0.08) and TLR-2 (P = 0.07) was observed in the ileum of IUGR relative to NBW piglets. Regardless of birth weight, however, feeding the NT diet markedly decreased the feed conversion ratio (P<0.05) and increased the villous height in the duodenum (P<0.05), the activities of lactase and maltase in the jejunum (P<0.05), the count of peripheral leukocytes (P<0.05), the serum concentrations of IgA and IL-1β, and the gene expression of TLR-9, TLR-4 and TOLLIP in the ileum (P<0.05). In addition, the expression of tight junction proteins (Claudin-1 and ZO-1) in the ileum was markedly increased by feeding the NT diet relative to the CON diet (P<0.05). These results indicated that IUGR impaired growth performance and intestinal and immune function, but dietary nucleotide supplementation improved nutrient utilization, intestinal function and immunity.
Introduction
Intra-uterine growth restriction (IUGR) refers to the impaired growth and development of a mammalian embryo/fetus or its organs during pregnancy [1,2]. Approximately 5-10% of human neonates suffer from IUGR [3]. Neonates with IUGR have increased morbidity and mortality during the early life period, including delayed postnatal growth and development, as well as increased susceptibility to infection [4]. Studies have shown that the functions of internal organs, nutrient metabolism and the immune system are impaired in IUGR human beings [5-8] and animal models [9-11].
Various nutritional interventions have been developed to optimize the growth and health of IUGR neonates [12-14]. Nucleotides are a group of bioactive agents playing important roles in nearly all biochemical processes, such as transferring chemical energy, biosynthetic pathways and coenzyme components [15]. Nucleotide requirements can be met from three sources: de novo synthesis, salvage pathways and food. Generally, the milk of mammals has a higher nucleotide content than any other food source [16]. Under certain conditions, such as stress, immunological challenges and disease states, exogenous nucleotides become essential nutrients to optimize intestinal and immunological function [15,17,18]. It has been demonstrated that nucleotide supplementation could increase the weight gain and antibody responses to tetanus toxoid of infants [19,20]. To our knowledge, however, little is known about the effects of nucleotide supplementation in formula on the growth and health parameters of IUGR neonates, who are often fed artificial formula to ensure catch-up growth and development [21,22]. Therefore, this study was designed to determine whether dietary supplementation of nucleotides could improve the growth performance, intestinal development and immune function of IUGR neonates. Piglets are generally accepted as an animal model for infant nutrition because of the structural and physiological similarities of the gastrointestinal tract between pigs and human beings [23,24]. As a multi-fetal domestic animal, moreover, pigs naturally exhibit IUGR due to utero-placental insufficiency [23].
Milk replacer diets
Milk replacer powder was formulated according to the previous study [25]. The nucleotide-supplemented diet was prepared by adding to the milk replacer powder a mixture of pure nucleotides (29.6 g 5′-adenosine monophosphate, 14.2 g 5′-cytidine monophosphate, 40.8 g 5′-guanosine monophosphate, 5.8 g 5′-inosine monophosphate, and 650.5 g 5′-uridine monophosphate), a total of 740.9 g nucleotides per 100 kg milk replacer powder. Pure nucleotides were donated by Zhen-AO Group Co. Ltd. (Dalian, China) and had purities of 97%, according to the analysis of the manufacturer. The content of each individual nucleotide in the final solution was designed according to the average content of nucleotides in sow milk from day 7 to day 28 after birth [26]. The formula milk was prepared by mixing 1 kg of milk replacer powder (dry matter, DM 87.5%) with 4 liters of water, giving nutrient composition and levels similar to those of sow milk [9].
Animal and treatment
All of the procedures were approved by the Institutional Animal Care and Use Committee of Sichuan Agricultural University.
According to the previous studies [25,27], fourteen healthy pregnant sows at parity 3 gave birth at full term (115±2 d gestation). Newborn male piglets (Pig Improvement Company 327 × 1050) with a birth weight near the mean birth weight (±0.5 standard deviation, SD) were identified as normal birth weight (NBW), whereas piglets with a birth weight at least 1.5 SD below the mean were defined as IUGR. Following these criteria, fourteen pairs of NBW piglets at 1.56 (SD 0.05) kg and IUGR littermates at 0.91 (SD 0.03) kg were selected from the 14 sows, which had the same litter size (10 live piglets per litter). All piglets were weaned at 7 days of age and moved to nursing cages (0.8 m × 0.7 m × 0.4 m), where they were individually bottle-fed the milk-based diet every 3 hours between 06.00 and 24.00 hours. For the nutritional treatments, seven pairs of NBW and IUGR piglets were assigned to receive the control diet (CON), while the other 7 pairs were allocated to receive the nucleotide-supplemented diet (NT). Therefore, four groups of piglets were created and studied: NBW-CON, IUGR-CON, NBW-NT and IUGR-NT (n = 7). As in the previous study [25], the milk-based diet was prepared by mixing 1 kg of formula powder with 4 litres of water to form a milk solution; one hundred milliliters of milk-based diet contained 5.06 g protein, 4.64 g lactose and 5.20 g lipids, which were similar to those in the same volume of sow milk (5.00 g protein, 5.06 g lactose and 7.90 g lipids) [28]. All piglets had free access to drinking water. Room temperature was maintained at approximately 30°C and the humidity was controlled between 50% and 60%. The body weight (BW) and formula milk intake of the piglets were recorded daily. The average daily dry matter intake (ADMI) was calculated by multiplying the average daily intake of formula milk by its DM content (%), while the daily intake of formula milk was calculated as the difference between the offered amounts and the refusals.
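The performance variables used throughout the Results reduce to three simple calculations. The helper below is a minimal sketch of those definitions; the function and the example values are hypothetical, and the feed conversion ratio (FCR) is assumed here to be DM intake per unit of BW gain (ADMI/ADG), consistent with a lower FCR indicating better conversion.

```python
def growth_metrics(initial_bw_kg: float, final_bw_kg: float,
                   mean_daily_milk_kg: float, milk_dm_fraction: float,
                   days: int) -> tuple[float, float, float]:
    """Return (ADG, ADMI, FCR) from the definitions in the text.

    ADG  = total BW gain / days on trial (kg/day)
    ADMI = mean daily formula intake x its dry-matter content (kg DM/day)
    FCR  = ADMI / ADG (assumption: DM intake per kg of gain)
    """
    adg = (final_bw_kg - initial_bw_kg) / days
    admi = mean_daily_milk_kg * milk_dm_fraction
    return adg, admi, admi / adg

# Illustrative values only (not study data): a piglet gaining 5.2 kg over
# 21 days while consuming 0.95 kg/day of milk solution at ~17.5% DM.
adg, admi, fcr = growth_metrics(2.1, 7.3, 0.95, 0.175, 21)
print(f"ADG = {adg:.3f} kg/d, ADMI = {admi:.3f} kg DM/d, FCR = {fcr:.2f}")
```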
Blood sample collection
On day 28, D-xylose was orally administrated to piglets at the dose of 500 mg/kg BW after an overnight fast [29]. D-xylose solution was prepared by dissolving D-xylose powder (Sigma-Aldrich, St. Louis, MO, USA) at 50 mg/ml of deionized water. One hour after administration of the D-xylose solution, a 10-mL blood sample was collected by venipuncture of jugular vein. A part of the blood sample was injected into vacuum tubes containing sodium heparin for the examination of leucocytes and lymphocyte subtype, another part of the sample was allowed to coagulate for 40 min before centrifugation (10 min, 2,375 × g at 4°C), the plasma samples were stored at -80°C until analysis.
Tissue sample collection
After blood sampling, all piglets were anaesthetized with an intravenous injection of pentobarbital sodium (50 mg/kg BW) and slaughtered. Piglets were weighed and crown-rump length (CRL) was taken (the supine length of the piglet from the crown of its head to the base of its tail) at d 28. Body mass index (BMI; BW/CRL 2 ) was calculated for each piglet. The liver, spleen, kidney, heart and pancreas of each piglet were weighed immediately. The length and weight of the small intestine were measured after the removal of luminal contents. Duodenal, jejunal and ileal samples of approximately 2 cm in length were stored in 4% paraformaldehyde solution for histological analyses. The rest of the jejunum and ileum were snap frozen and stored at fridge with −80°C until further analysis. Finally, colonic digesta were collected immediately after removal of the colon and frozen at −80°C.
Peripheral leucocytes and lymphocyte subtype detection
The examination of leucocytes (neutrophil, lymphocyte and monocyte counts) was conducted through an automatic blood analyser (Bayer HealthCare, Tarrytown, NY). Total peripheral blood lymphocytes were separated from heparinised peripheral blood, then stained with mouse anti-porcine CD3e-SPRD (PE-Cy5) (catalogue no. 4510-13), CD4a-FITC (catalogue no. 4515-02) and CD8a-PE (catalogue no. 4520-09), which were purchased from Southern Biotechnology Associates (Birmingham, AL, UK). PBS and 1.0% BSA (MP Biomedicals, Aurora, OH, USA) were used as diluents and washing buffer. Flow cytometry analysis was performed on a BD FACSCalibur flow cytometer (Becton Dickinson, San Jose, CA, USA), repeated for the same sample and compared for repeatability.
Measurement of plasma Immunoglobulin Subset and Cytokines
Commercially available enzyme immunoassays were performed according to the instructions from the manufacturer for the following markers: IgA (Bethyl Lab. Inc., Montgomery, USA), IL-1β (R&D Systems, Oxford, UK), TNF-α (R&D Systems, Oxford, UK), IL-10 (Bio Source/ Med Probe, Camarillo, CA, USA). Absorbance at 450 nm was determined using a Bio-Tek synergy HT microplate reader (BioTek Instruments, Inc., Winooski, USA). The detection limits were 12.5 ng/ml for IgA, 7.0 pg/ml for TNF-α, 30.0 pg/ml for IL-1β and 8.0 pg/ml for IL-10, respectively. The inter-and intra-assay coefficients of variation were less than 10%.
Determination of D-xylose in plasma
The D-xylose absorption test was carried out according to the method described by Mansoori et al. (2009) [29]. Briefly, D-xylose standard solutions were prepared by dissolving D-xylose in saturated benzoic acid at concentrations of 0, 0.7, 1.3 and 2.6 mmol/L; then the D-xylose standard solutions and 50 μL of plasma were added to 5 mL of phloroglucinol color reagent solution (Sigma Chemical Inc., St. Louis, MO, USA) and heated at 100°C for 4 min. The samples were allowed to cool to room temperature in a water bath. The absorbance of all samples and standard solutions was measured using the spectrophotometer at 554 nm (Model 6100, Jenway LTD., Felsted, Dunmow, CM6 3LB, Essex, England, UK). The standard solution of 0 mmol/L D-xylose was used as the blank.
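Since the assay reduces to a blank-corrected linear standard curve, a minimal sketch of the read-out step is given below; the absorbance values are illustrative placeholders, not study measurements.

```python
import numpy as np

# Fit a linear standard curve: absorbance at 554 nm vs D-xylose (mmol/L).
standards_mmol = np.array([0.0, 0.7, 1.3, 2.6])
standards_abs = np.array([0.00, 0.20, 0.37, 0.73])  # blank-corrected A554

slope, intercept = np.polyfit(standards_mmol, standards_abs, 1)

# Interpolate an unknown plasma sample from its absorbance.
sample_abs = 0.31
sample_mmol = (sample_abs - intercept) / slope
print(f"Plasma D-xylose = {sample_mmol:.2f} mmol/L")
```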
Small-intestinal morphology
The duodenal, jejunal and ileal samples were preserved in 4% paraformaldehyde solution, then embedded in paraffin. Each sample (duodenum, jejunum and ileum) was used to prepare 5 slides, and each slide had 3 sections (5 μm thickness), which were stained with eosin and hematoxylin. For each section, 20 well-oriented villi and crypts were measured for intestinal morphology (Optimus software version 6.5; Media Cybergenetics, North Reading, MA), and the villi:crypt ratio (VCR) was then calculated.
Digestive enzyme activities
After thawing, the frozen jejunal sample (approximately 2 grams) was weighed and homogenized for 5 min in 9 volumes of 50 mM Tris-HCl buffer (pH 7.0), then centrifuged at 3000 g for 10 min. The supernatant was collected and stored at −20°C for the enzyme assay. Total proteins were extracted and the concentration was determined according to the procedure of the bicinchoninic acid assay (Solarbio, Inc., Beijing, China). Activities of the disaccharidases maltase, sucrase and lactase were measured using commercial kits according to the manufacturer's instructions (Nanjing Jiancheng Bioengineering Institute, Nanjing, China). The absorbance at 505 nm was determined with a spectrophotometer (Beckman Coulter DU-800, CA, USA). The activities of the disaccharidases were expressed as U/mg protein, with one unit (U) defined as the hydrolysis of 1 nmol of the corresponding substrate (maltose, sucrose or lactose) in the enzymatic reaction.
Cell Cycle of Spleen by Flow Cytometry Method
The percentages of cells entering the S and G2/M phases of the cell cycle were assessed by flow cytometric analysis. At day 28 of the experiment, the spleen was excised from each piglet to determine the cell cycle stages by flow cytometry. Splenic cell suspension was prepared by dissecting the spleen into small pieces and filtering them through 300-mesh nylon gauze. Then, the cells were washed and suspended in phosphate buffer at a concentration of 1 × 10^6 cells/mL. A total of 500 μL cell suspension was transferred into a 5-mL culture tube and centrifuged at 3000 g for 5 min. The cell suspension was permeabilized with 1 mL of 0.25% Triton X-100 for 20 min at 4°C, then the cells were washed with phosphate buffer. Propidium iodide (5 μL) was added to 100 μL cell suspension and incubated for 30 min at 4°C in the dark. Finally, 400 μL of phosphate buffer was added and the cell cycle stages were assayed by flow cytometry (Becton Dickinson, San Jose, CA, USA) within 45 min and analyzed by ModFit software (Verity Software House, Inc., USA). The proliferating index (PI) value was calculated as the proportion of cells in the S and G2/M phases relative to all cells: PI (%) = (S + G2/M)/(G0/G1 + S + G2/M) × 100.

Total RNA extraction and real-time PCR

Total RNA was extracted from the frozen intestinal tissues (approximately 100 mg) using Trizol Reagent (Invitrogen, Carlsbad, CA, USA), according to the manufacturer's instructions.
RNA integrity and quality were determined by agarose gel electrophoresis (1%) and spectrophotometry (A260/A280). RNA concentration was confirmed by a nucleic-acid/protein analyzer (Beckman DU-800, CA, USA). A commercial reverse transcription (RT) kit (TaKaRa, Japan) was used for the synthesis of cDNA. The RT products (cDNA) were stored at −20°C for relative quantification by polymerase chain reaction (PCR). Primers were designed with Primer Express 3.0 (Applied Biosystems, Foster City, CA, USA) and are shown in Table 1. cDNA was amplified using a real-time PCR system (ABI 7900HT, Applied Biosystems, USA). The mixture (10 μL) contained 5 μL of SYBR Green Supermix (TaKaRa, Japan), 1 μL of cDNA, 0.4 μL of each primer (10 μM), 0.2 μL of ROX Reference Dye and 3 μL of ddH2O. The cycling conditions were as follows: denaturation at 95°C for 15 s, followed by 40 cycles of denaturation at 95°C for 5 s, annealing at 60°C for 30 s, and an extension step at 72°C for 15 s. Product size was determined by agarose gel electrophoresis. The standard curve of each gene was run in duplicate and three times to obtain reliable amplification efficiency values, as described previously [30]. The correlation coefficients (r) of all the standard curves were >0.99 and amplification efficiency values were between 90 and 110%. The most stable housekeeping genes (β-actin and GAPDH) were chosen for normalization. Relative mRNA abundance was determined using the Δ cycle threshold (ΔCt) method, as outlined in the protocol of Applied Biosystems. In brief, a ΔCt value is the Ct difference between the target gene and the reference gene (ΔCt = Ct_target − Ct_reference). For each of the target genes, the ΔΔCt values of all the samples were calculated by subtracting the average ΔCt value of the corresponding IUGR-CON group. The ΔΔCt values were converted to fold differences by raising 2 to the power −ΔΔCt (2^−ΔΔCt), according to Livak and Schmittgen (2001) [31].
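A worked numerical sketch of the 2^−ΔΔCt step is shown below; the Ct values are invented for illustration and do not come from the study.

```python
# Illustrative Ct values for one sample and the IUGR-CON calibrator group.
ct_target_sample, ct_ref_sample = 24.8, 18.2
ct_target_calib, ct_ref_calib = 26.5, 18.0   # group-mean Cts of IUGR-CON

# dCt = Ct(target) - Ct(reference), computed for sample and calibrator.
d_ct_sample = ct_target_sample - ct_ref_sample
d_ct_calib = ct_target_calib - ct_ref_calib

# ddCt = dCt(sample) - dCt(calibrator); fold change = 2^(-ddCt).
dd_ct = d_ct_sample - d_ct_calib
fold_change = 2 ** (-dd_ct)
print(f"Relative mRNA abundance = {fold_change:.2f}-fold")  # ~3.73-fold here
```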
Gut Microbial Population Determination
Bacterial DNA was extracted from the intestinal digesta using the Stool DNA Kit (Omega Bio-tek, Inc., GA, USA) according to the manufacturer's instructions. Quantitative RT-PCR for total bacteria was performed with SYBR Green PCR reagents (Takara, Kyoto, Japan), whereas quantitative RT-PCR for Bifidobacterium, Lactobacillus, Bacillus and Escherichia coli was performed with Taq primers; fluorescent oligonucleotide probes were commercially synthesized (Life Technologies Ltd., Beijing, China). The RT-PCR primer and probe combinations are presented in Table 2. A 10-fold serial dilution of the copies, ranging from 1 × 10^1 to 1 × 10^12 copies/μL, was used to construct the standard curves. The copy numbers (copies/μL) were calculated by measuring the concentration of the plasmid using the spectrophotometer (Beckman Coulter DU-800, CA, USA) according to the equation: DNA copy number = (DNA concentration in μg/μL × 6.0233 × 10^23 copies/mol)/(DNA size (bp) × 660 × 10^6).
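The copy-number equation above translates directly into a small helper function; the plasmid size and concentration in the example are arbitrary illustrations.

```python
AVOGADRO = 6.0233e23  # copies/mol, as used in the equation above

def plasmid_copies_per_ul(conc_ug_per_ul: float, size_bp: int) -> float:
    """Copies/uL from plasmid concentration (ug/uL) and plasmid size (bp).

    Direct transcription of the equation above: 660 g/mol is the average
    molecular weight of one DNA base pair, and the 1e6 factor converts
    micrograms to grams.
    """
    return (conc_ug_per_ul * AVOGADRO) / (size_bp * 660 * 1e6)

# Example: a 3,500-bp plasmid measured at 0.05 ug/uL (~1.3e10 copies/uL).
print(f"{plasmid_copies_per_ul(0.05, 3500):.3e} copies/uL")
```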
Western blotting
Protein extracts were obtained by homogenizing ileal tissues using a protein extraction kit (Beyotime Biotechnology, Jiangsu, China) according to the manufacturer's guide. The protein content was measured using the bicinchoninic acid protein assay kit (Pierce, Rockford, IL, USA). The following antibodies were used in our experiment: goat polyclonal anti-ZO-1 (sc-8146, Santa Cruz Biotechnology, Santa Cruz, CA, USA), goat polyclonal anti-claudin-1 (sc-17658, Santa Cruz Biotechnology, Santa Cruz, CA, USA) and mouse monoclonal anti-β-actin (sc-47778, Santa Cruz Biotechnology, Santa Cruz, CA, USA). Western blot analysis was performed as previously described [32]. Chemiluminescence detection was performed using the ECL Plus™ Western Blotting Detection System (Amersham, Arlington Heights, IL, USA) according to the manufacturer's instructions. The relative expression of each target protein was normalized using β-actin as the internal protein, and the normalized values were used for comparison between groups.
Statistical analysis
The data were analyzed by Duncan's multiple comparisons for the 2 × 2 factorial experimental design using the General Linear Model (GLM) procedure of SPSS statistical software (Ver. 20.0 for Windows, SPSS, Chicago, IL, USA) with the following model: y_ijk = μ + a_i + b_j + (ab)_ij + e_ijk (i = 1, 2; j = 1, 2; k = 1, 2, ..., 14), where y_ijk represents the dependent variable, μ is the mean, a_i is the effect of BW (IUGR, NBW), b_j is the effect of Diet (CON, NT), (ab)_ij is the interaction between BW and Diet, and e_ijk is the error term. Results are presented as means with their standard errors (SEM). Differences were considered significant when P < 0.05, and a tendency was recognized when P < 0.10.
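For reference, the same 2 × 2 factorial model can be fitted outside SPSS; the sketch below uses Python's statsmodels with a randomly generated placeholder response, so the printed ANOVA table is illustrative only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Placeholder data frame mirroring the design: 2 BW levels x 2 diets x 7 pairs.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "bw":   ["IUGR"] * 14 + ["NBW"] * 14,     # birth-weight factor (a_i)
    "diet": (["CON"] * 7 + ["NT"] * 7) * 2,   # diet factor (b_j)
    "adg":  rng.normal(200, 20, 28),          # e.g. average daily gain, g/day
})

# Fit y ~ BW + Diet + BW:Diet and print main effects plus the interaction.
model = smf.ols("adg ~ C(bw) * C(diet)", data=df).fit()
print(anova_lm(model, typ=2))
```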
Growth performance
For the whole experimental period, IUGR piglets had lower average daily gain (ADG) and ADMI (P<0.05) than NBW piglets in both the CON and NT diet groups. Moreover, regardless of the diet, IUGR piglets had lower ADG (−24%, P<0.05) and ADMI (−25%, P<0.05) compared with NBW piglets. Meanwhile, the initial BW, final BW and BW gain of IUGR piglets were lower (−25 to −37%, P<0.05) than those of NBW neonates. In addition, for either IUGR or NBW piglets, ADMI was similar between the CON and NT diets. Regardless of BW, however, ADG tended to be higher in piglets receiving the NT relative to the CON diet; consequently, the feed conversion ratio (FCR) was markedly decreased (−17%, P<0.05) (Table 3).
Organ indices
As shown in Table 4, regardless of the diet, the weights of internal organs such as the heart, liver, spleen, kidney, pancreas and intestine were markedly decreased (−27 to −34%, P<0.05) in IUGR relative to NBW piglets. Moreover, the crown-rump length (CRL) and body mass index (BMI) at d 28 were lower (−9 to −11%, P<0.05) in IUGR than in NBW piglets. However, the intestinal length relative to BW was significantly higher (+15%, P<0.05) in IUGR relative to NBW piglets. Regardless of BW, BMI tended to be increased (+8%, P = 0.08) in piglets fed the NT diet compared with piglets fed the CON diet.

Intestinal morphology

IUGR significantly decreased the villous height and VCR (−11 to −14%, P<0.05) of the duodenum compared with NBW piglets. Irrespective of BW, the villous height in the duodenum was markedly increased (+9%, P<0.05) in piglets fed the NT diet relative to the CON diet (Table 5; mean values within a row with different superscript letters were significantly different, P < 0.05).
Plasma cytokines and immunoglobulin A
As shown in Table 7
Gut Microbial Population
IUGR piglets had a markedly lower (−4%, P < 0.05) population of Bacillus in the colonic digesta compared with NBW piglets, but no significant differences were observed for the populations of Escherichia coli, Bifidobacterium, Lactobacillus and total bacteria among the groups (Table 10).
Discussion
Previous studies have shown that IUGR delays postnatal growth [9,25]. It has been proposed that impaired intestinal function [33], endocrine status [34] and nutrient metabolism [35,36] contribute to the growth check of IUGR neonates. In this study, formula milk intake and growth rate were lower in IUGR relative to NBW piglets; however, supplementing nucleotides in the formula markedly improved nutrient utilization, as indicated by the decreased FCR. Accordingly, the growth rate was faster in piglets fed the NT diet relative to the CON diet. Consistently, Singhal et al. (2010) demonstrated that feeding nucleotide-supplemented formula increased the body weight gain of infants [19]; however, some other studies in pig models showed that an NT diet did not markedly affect body weight gain or FCR [37,38]. The growth response of neonates to an NT diet could be related to the physiological stage and to the supplemental contents and types of nucleotides [20,37,39]. In this study, the nucleotides supplemented in the formula had a similar pattern and content to the 5′-monophosphate nucleotides in sow milk, which is higher than the amount included in weaning diets for piglets [37,38]. To clarify the mechanism of the growth-promoting effect of nucleotide supplementation in neonates, the intestinal responses of IUGR piglets to the NT diet were further investigated. Consistent with previous results [27,40], IUGR impaired intestinal morphology and digestive enzyme activities, which are important factors delaying postnatal growth. However, feeding the NT diet increased the villous height and the activities of lactase and maltase, resulting in a better feed conversion ratio relative to piglets fed the CON diet. The improvements in intestinal morphology and enzyme activity could be related to nucleotides, which are required to support the proliferation and maturation of enterocytes [41,42]. In addition, as reported before [43,44], the decreased plasma concentration of D-xylose in IUGR piglets suggests a poor absorptive capability of the IUGR intestine. In contrast, feeding the NT diet markedly increased the plasma concentration of D-xylose and the intestinal gene expression of PEPT1 in both IUGR and NBW piglets, suggesting that nutrient absorption may be increased by the NT diet.
Moreover, the immunological response of IUGR piglets to the NT diet was determined in this study. The plasma concentration of IgA was lower in IUGR relative to NBW piglets. Consistently, the cord blood levels of IgG, IgA and IgM were markedly lower in IUGR relative to normal infants [45]. However, a one-year longitudinal study showed no significant differences in plasma levels of IgG, IgA and IgM between IUGR and NBW infants [46]. Regardless of BW, however, feeding the NT diet markedly increased the plasma concentration of IgA, which was also demonstrated to be higher in weaned pigs receiving nucleotide-containing ingredients [37,47]. Similarly, infants receiving nucleotide-enriched formula had increased IgG antibody concentrations in response to tetanus and diphtheria toxoid vaccines [20]. In addition, cellular immunity may be impaired in IUGR piglets, as indicated by the lower lymphocyte percentage and the decreased concentrations of IL-1β and IL-10 in IUGR relative to NBW piglets. Furthermore, IUGR may negatively affect spleen function, according to the lower percentage of G2/M phase splenocytes, which would inhibit the proliferation of immune cells in the spleen [48,49]. Accordingly, the counts of lymphocytes and T-cells were significantly lower in IUGR infants [46]. In this study, however, cellular immunity-related leukocyte counts and the IL-1β concentration, as well as the IL-1β:IL-10 ratio, were markedly increased in IUGR piglets fed the NT diet, suggesting that IUGR neonates may preferentially utilize nucleotides for cellular immunity. It has been reported that nucleotide-derived pyrimidine and purine bases are highly required for leukocyte proliferation [50].
Although a previous study indicated that NT supplementation is able to improve the intestinal microbiota of infants [51], no effect of the NT diet on microbial populations was observed in the present study, apart from the lower population of Bacillus in IUGR relative to NBW piglets. Toll-like receptors are typical pattern recognition receptors mediating mucosal innate host defense to maintain mucosal and commensal homeostasis [52]. MyD88, TRAF-6 and NF-κB are downstream signaling molecules shared by TLR-2, 4 and 9 [53], while SIGIRR and TOLLIP are crucial negative regulators of the NF-κB signaling response [54,55]. It has been demonstrated that the TLR-4-MyD88-NF-κB signaling pathway is involved in inflammation [56]. In this study, the lower gene expression of TOLLIP in the ileum suggests that the IUGR intestine may have immature innate immunity. Lower expression of TOLLIP has been observed in the necrotizing enterocolitis intestine, with an excessive inflammatory response to colonizing bacteria [57]. Intriguingly, feeding the NT diet markedly increased the gene expression of TLR-9, TLR-4 and TOLLIP, indicating a positive effect of nucleotide supplementation on intestinal innate immunity. The signaling extent of the TLR-4-MyD88-NF-κB pathway is closely related to the intestinal barrier [58], which may be improved by supplementing nucleotides, as shown by the increased protein expression of claudin-1 and ZO-1 in the ileum of piglets fed the NT diet. Claudin-1, ZO-1 and occludin are typical structural proteins of the epithelial tight junction [59], and higher expression of these proteins indicates a decreased risk of inflammatory bowel diseases [60].
In conclusion, IUGR piglets had impaired growth, intestinal function and immune function relative to NBW piglets. However, dietary nucleotide supplementation improved feed efficiency, associated with better digestive and absorptive capability as well as improved immune function in IUGR piglets. | 2018-04-03T02:38:47.985Z | 2016-06-15T00:00:00.000 | {
"year": 2016,
"sha1": "f7a4576c41a6f0f8b43faf56966a2d00e53019ae",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0157314&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f7a4576c41a6f0f8b43faf56966a2d00e53019ae",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
236976389 | pes2o/s2orc | v3-fos-license | Circulant Shift-based Beamforming for Secure Communication with Low-resolution Phased Arrays
Millimeter wave (mmWave) technology can achieve high-speed communication due to the large available spectrum. Furthermore, the use of directional beams in mmWave systems provides a natural defense against physical layer security attacks. In practice, however, the beams are imperfect due to mmWave hardware limitations such as the low resolution of the phase shifters. These imperfections in the beam pattern introduce an energy leakage that can be exploited by an eavesdropper. To defend against such eavesdropping attacks, we propose a directional modulation-based defense technique where the transmitter applies random circulant shifts of a beamformer. We show that the use of random circulant shifts together with appropriate phase adjustment induces artificial phase noise (APN) in directions different from that of the target receiver. Our method corrupts the phase at the eavesdropper without affecting the communication link of the target receiver. We also experimentally verify the APN induced by circulant shifts, using channel measurements from a 2-bit mmWave phased array testbed. Using simulations, we study the performance of the proposed defense technique against a greedy eavesdropping strategy in a vehicle-to-infrastructure scenario. The proposed technique achieves a better defense than antenna subset modulation, without compromising the communication link with the target receiver.
resilience against eavesdropping attacks as it concentrates the transmitted radio frequency (RF) signals along the direction of the intended user and reduces the signal transmitted along unintended directions, i.e. directions other than the direction of the intended user [2].
The directional beam patterns, in practice, are not perfect due to the design constraints in mmWave radios. Due to the high power consumption of fully digital arrays in a wideband setting, commodity mmWave radios are usually based on hybrid or analog antenna arrays that use RF phase shifters [1]. Moreover, the resolution of the RF phase shifters in these arrays is limited to a few bits to reduce the hardware complexity [3]. The low resolution of the phase shifters results in imperfections in the directed beam patterns, which leak RF signal along unintended directions. In this paper, we study the RF signals leaked by such low-resolution phased arrays and show that this leakage can be exploited by a mobile eavesdropper, such as an unmanned aerial vehicle (UAV) in a vehicle-to-infrastructure (V2I) scenario.
A standard approach to improve physical layer security (PLS) in an mmWave system is to reduce the energy leakage by appropriately designing a beamformer using channel state information (CSI) or the position of the eavesdropper [4], [5]. In [4], a precoding technique was proposed to reduce the energy leaked along the direction of the eavesdropper. In [5], defense mechanisms that exploit partial CSI to design precoders were developed to minimize the energy leakage. In this work, we claim that an eavesdropper can still breach such defenses that only focus on minimizing the energy leakage along potential eavesdropping directions. This is because a mobile eavesdropper can still achieve good received power by moving to a different direction, or by shifting closer to the transmitter (TX). The defense techniques in [4] and [5] also require fully digital antenna arrays and partial information about the eavesdropper, neither of which may be available in a practical system with analog or hybrid phased arrays.
Defense mechanisms that do not require fully digital arrays and are unaware of the eavesdropping location were proposed in [6], [7]. In [6], [7], hybrid beamformers were designed to transmit artificial noise (AN) along the unintended directions. Such AN-based defense techniques, however, degrade the performance at the intended receiver (RX). This is because either AN is induced at the RX or the power allocated for data transmission is reduced. An alternative approach that induces spatially selective AN requires partial CSI or position information of the eavesdropper which may not be available at the TX [8]- [10].
Directional modulation (DM)-based physical layer defense techniques are also promising for secure mmWave communication. These methods modify the beamformer at every symbol such that the constellation is maintained along the intended direction and distorted along other directions [11]-[21]. Various algorithms to design DM-based symbol-level precoding have been proposed for secure multiple-input multiple-output (MIMO) communication with a digital antenna array [11]-[16]. In the context of mmWave systems with hybrid or analog antenna arrays, DM-based methods have been proposed in [17]-[21]. For instance, the Antenna Subset Modulation (ASM) technique proposed in [17] switches off a subset of antennas at every symbol.
Switching at random changes the beamformer which affects the amplitude and phase of the transmitted symbol in all directions. By adjusting the phase of the transmitted symbol, the intended symbol is received at the RX while the symbol at the eavesdropper is distorted. A similar technique in [18] selects a random subset of antennas to destructively combine the RF signals at the unintended directions. Unfortunately, the methods in [17], [18] reduce the mainlobe gain under the per-antenna power constraint. As a result, the RX observes a lower power when compared to the use of an ideal directional beam. In [19], a time-modulated DM-based technique was proposed for secure mmWave communication. Another DM-based technique for actively driven phased arrays, where an amplifier is cascaded after each low-resolution phase shifter, was developed in [20]. Our defense technique, in contrast, is designed for low-resolution phased arrays with passive phase shifters under the per-antenna power constraint. Our method also does not require CSI of the eavesdropper.
In this paper, we propose a novel DM-based approach to defend against an eavesdropper without impacting the communication performance at the RX. Our method, called Circulant Shift-based Beamforming (CSB), applies a random circulant shift of the standard beamformer in every symbol duration. These random circulant shifts induce random phase changes in the symbols received along different directions. As the TX knows the phase change induced along the intended direction, it adjusts the transmitted symbol such that the RX receives the symbol without any phase distortion. The symbol observed along any other direction, however, is corrupted by APN. We characterize the statistical properties of the APN induced by CSB along the on-grid directions and show that the equivalent channel between the TX and the eavesdropper suffers from an ambiguity in the phase of the received symbol. As a result, coherent modulation techniques such as M-PSK cannot be decoded by an eavesdropper located along the on-grid directions even if the eavesdropper observes a high received power.
The proposed CSB has three key advantages over the techniques designed for mmWave systems. First, there is a smaller power loss at the RX compared to the ASM-based approach, as CSB activates all the antennas. Furthermore, circulantly shifting a beamformer does not change the beamforming gain at the discrete angles defined by the common DFT codebook. Second, our method is designed for low-resolution phased arrays without the assumption of active antenna elements, as opposed to the prior work in [20]. Third, CSB has lower complexity than other DM-based beamforming methods, as CSB does not require any real-time optimization to compute the beamformer to achieve secure communication.
We would like to mention that our technique is different from recent PLS methods based on spatial modulation (SM) [22] and index modulation (IM) [23]. In the SM-based defense techniques [22], the TX selects a subset of antennas based on the CSI of the channel between itself and the RX. Then, the RX uses the CSI to decode the data symbols. An IM-based defense technique such as the one discussed in [23] uses rule-based mapping for index modulation in OFDM-IM. In contrast, our proposed CSB defense does not focus on antenna selection or IM.
Our method only applies circulant shifts of the beamformer to corrupt the phase of the received symbols at the eavesdropper. The contributions of this paper can be summarized as follows:
• We propose CSB for secure communication under RF energy leakage due to low-resolution phase shifters. Our technique applies random circulant shifts of the beamformer together with appropriate phase correction in the transmitted symbol, to introduce APN in the unintended directions. The phase correction ensures that the RX obtains the correct transmitted symbol. We also theoretically analyze the secrecy mutual information (SMI) of the proposed defense technique.
• We validate the key idea underlying the proposed defense mechanism using an mmWave phased array testbed. Considering the phase noise limitation of our phased arrays, we design an experiment suitable to measure the phase change induced due to circulant shifts. We, then, experimentally show that circulant shifts induce different phase shifts along different directions.
• We design a first of its kind mobile eavesdropping attack in a V2I mmWave system with low-resolution phased arrays. For this attack, we formulate a 2D trajectory optimization problem to track the directions of the RF energy leakage over time. We numerically show how standard beamforming is vulnerable to such an attack, and discuss the use of CSB technique to defend this attack.
Organization: Section II contains the geometrical channel model and the definitions used in the paper. In Section III, we describe the proposed CSB for secure communication. Our experiment design to validate the proposed CSB is explained in Section IV. In Section V, we discuss our trajectory optimization-based mobile eavesdropping attack on the low-resolution phased array. Finally, we give simulation results in Section VI.
Notations: We denote the unit imaginary number by j = √(−1). We use a boldface capital letter A to denote a matrix, a boldface small letter a to denote a vector, and a, A to denote scalars.
A^T, Ā, and A^* represent the transpose, conjugate and conjugate transpose of A. We denote the (i, j)-th element of the matrix A by [A]_{i,j}. The inner product of matrices A and B is defined as ⟨A, B⟩ = tr(A^*B).
II. SYSTEM MODEL
In this section, we describe the channel and the system model used in this paper. We also discuss the imperfections in the beams generated with low-resolution phased arrays.
A. Coordinate system
We consider the geometrical setup depicted in Fig. 1, where the TX is equipped with a planar antenna array centered at (0, 0, 0). The plane of the TX array is perpendicular to the XZ-plane, and the array is tilted at an angle θ_tilt towards the ground. For ease of analysis, we convert the rectangular coordinate system into a modified spherical coordinate system, shown in Fig. 1.
The origin of the modified spherical coordinate system is defined as the center of the TX array.
The modified spherical coordinate system defines the elevation angle as the angle between the projections of (x, y, z) and the perpendicular to the TX array on XZ-plane. In contrast, the conventional spherical coordinate system defines the elevation angle as the angle between the (x, y, z) and its projection on XY-plane. This modification in coordinate system simplifies the definition of the array response matrix by decoupling the phase variations across two dimensions of the TX array.
We denote the RX coordinates in the rectangular and modified spherical systems by (x_R, y_R, z_R) and (r_R, θ_R, φ_R). These coordinates are defined under the assumption that the center of the TX is (0, 0, 0). Similarly, we use (x_E, y_E, z_E) and (r_E, θ_E, φ_E) to represent the coordinates of the eavesdropper in the rectangular and modified spherical systems. We also define the angular coordinates of the RX and the eavesdropper, relative to the TX, as (θ_R, φ_R) and (θ_E, φ_E).
B. Channel model
In this paper, we model the mmWave channel between the TX and the RX as a narrowband line-of-sight (LoS) channel. The TX is equipped with a half-wavelength spaced uniform planar array (UPA) with N_T × N_T antenna elements. Although we assume an equal number of antennas along the azimuth and elevation dimensions for notational convenience, our design can also be generalized to other rectangular array geometries. The RX and the eavesdropper are assumed to be in the far field of the TX. For simplicity, we assume that the RX and the eavesdropper are each equipped with a single mmWave antenna. The techniques discussed in this paper also apply to a multi-antenna RX and a multi-antenna eavesdropper under the far-field assumption.
We now describe the array response matrices at the TX for the links associated with the RX and the eavesdropper. We define the Vandermonde vector a(θ) = [1, e^{−jπ sin θ}, ..., e^{−j(N_T−1)π sin θ}]^T.
As the angular coordinate of the RX relative to the TX is (θ_R, φ_R), the array response matrix between the TX and the RX can be expressed as V(θ_R, φ_R) = a(θ_R) a^T(φ_R). The definition of the elevation angle φ_R in the modified spherical system allows the use of the same array response function a(·) along both dimensions of the antenna array. Similar to the RX, we define the array response matrix associated with the eavesdropper as V(θ_E, φ_E) = a(θ_E) a^T(φ_E). Under the LoS assumption, the TX-RX and TX-eavesdropper channels are just scaled versions of the corresponding array response matrices.
C. Signal model
We derive the signal model at a time instant t when the RX and the eavesdropper are located at (r_{R,t}, θ_{R,t}, φ_{R,t}) and (r_{E,t}, θ_{E,t}, φ_{E,t}). The TX array response matrices associated with the RX and the eavesdropper are denoted by V(θ_{R,t}, φ_{R,t}) and V(θ_{E,t}, φ_{E,t}). The TX applies a beamformer F_t to direct its signals towards the RX. We use x_t to denote the symbol transmitted by the TX. We assume that both the beamformer and the transmitted symbols are normalized, i.e., ||F_t||²_F = 1 and E[|x_t|²] = 1. We denote the phase offset due to the propagation delay between the TX and the RX by ν_R, the power received at the RX by P^r_{R,t}, and the independent and identically distributed (IID) complex Gaussian noise by n_{R,t} ∼ CN(0, σ²). Then, the signal received by the RX at time t is
y_{R,t} = √(P^r_{R,t}) e^{jν_R} ⟨V(θ_{R,t}, φ_{R,t}), F_t⟩ x_t + n_{R,t}.
Similarly, let ν_E be the phase offset due to the propagation delay between the TX and the eavesdropper, P^r_{E,t} be the power received by the eavesdropper, and n_{E,t} ∼ CN(0, σ²) be the IID complex Gaussian noise of the channel between the TX and the eavesdropper. Then, the signal received by an eavesdropper at (r_{E,t}, θ_{E,t}, φ_{E,t}) is
y_{E,t} = √(P^r_{E,t}) e^{jν_E} ⟨V(θ_{E,t}, φ_{E,t}), F_t⟩ x_t + n_{E,t}.
Conventional beamforming methods that are agnostic to the eavesdropper maximize the signal power at the RX. For example, F_t = V(θ_{R,t}, φ_{R,t})/N_T results in the maximum signal-to-noise ratio (SNR) of ρ_{R,t} = P^r_{R,t} N²_T/σ² at the RX. Such a beamformer, however, cannot be applied in low-resolution phased arrays due to the limited resolution of the phase shifters. This is because the phases of the entries in V(θ_{R,t}, φ_{R,t}) do not necessarily take quantized values.
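To make the received-symbol scaling concrete, the following numpy sketch builds the array response and evaluates ⟨V, F⟩ under the model reconstructed above; the array size, the example angles, and the Frobenius inner-product convention are our assumptions for illustration.

```python
import numpy as np

N_T = 16  # antennas per dimension of the UPA

def a(theta):
    """Vandermonde array response along one array dimension."""
    n = np.arange(N_T)
    return np.exp(-1j * np.pi * n * np.sin(theta))

def V(theta, phi):
    """Planar array response, decoupled across the two dimensions."""
    return np.outer(a(theta), a(phi))

def inner(A, B):
    """Frobenius inner product <A, B> = tr(A^* B)."""
    return np.vdot(A, B)

theta_R, phi_R = np.deg2rad(-30), np.deg2rad(-42)  # example RX direction
F = V(theta_R, phi_R) / N_T    # unquantized maximum-SNR beamformer
print(abs(inner(V(theta_R, phi_R), F)))  # mainlobe gain N_T = 16
```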
D. Practical beamformer design
We assume that the resolution of the phase shifters is q bits. In practice, q is a small number to limit the hardware complexity, e.g., 1 ≤ q ≤ 3 [24], [25]. In this case, the entries of the beamformer F_t can only take finite phase values within the set B_q = {2πi/2^q : i = 0, 1, ..., 2^q − 1}. Under this constraint, the phase of every element in the desired unquantized beamforming matrix is usually quantized to 2^q levels for hardware compatibility. In this section, we describe the phase quantization procedure and its impact on the generated beam pattern.
The q-bit phase quantization function rounds the phase to the nearest element in B_q, i.e., Q_q(x) = argmin_{β∈B_q} |β − x|. We denote the phase of a complex number x by ∠x. Thus, we can write the q-bit quantized beamformer corresponding to F_t as F̂_t, where [F̂_t]_{k,ℓ} = (1/N_T) e^{jQ_q(∠[F_t]_{k,ℓ})}. We would like to mention that this approach of rounding off the phase to the nearest element in B_q is one of many ways to calculate a limited-resolution beamformer. Other methods to find a feasible beamformer are presented in [24]-[26].
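A minimal numpy sketch of this quantizer follows, reusing F from the previous snippet; the entry magnitude 1/N_T is our reading of the normalization ||F||_F = 1, not a formula quoted from the paper.

```python
import numpy as np

def quantize_beamformer(F, q):
    """Round each entry's phase to the nearest level in
    B_q = {2*pi*i / 2**q : i = 0, ..., 2**q - 1}."""
    step = 2 * np.pi / 2**q
    phases = np.round(np.angle(F) / step) * step  # nearest q-bit phase
    return np.exp(1j * phases) / F.shape[0]       # constant-modulus entries

F_hat = quantize_beamformer(F, q=2)  # 2-bit phased-array beamformer
```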
The quantization of the phase shifts introduces imperfections in the generated beam pattern.
These imperfections cause energy leakage along the unintended directions, as shown in Fig. 2.
We observe from Fig. 2 that the energy leakage is significant with low-resolution phased arrays using q = 1. Specifically, the beam patterns generated by one-bit phased arrays with a rectangular array geometry are mirror symmetric about the boresight direction (see Appendix A for proof).
An eavesdropper such as a mobile adversary can exploit the energy leakage by moving to the directions where the leakage is large, to eavesdrop on the TX. Furthermore, the eavesdropper can shift closer to the TX along this direction to receive a higher SNR. As a result, defense mechanisms that just minimize the energy leakage are not well suited to a mobile setting where the eavesdropper can re-position itself. Therefore, in this work, we propose a DM-based defense mechanism that corrupts the phase of the received symbols at the eavesdropper. Furthermore, the phase corruption due to our method is independent of the energy received by the eavesdropper.
(Fig. 2 caption: The array is tilted at 15° towards the ground. The TX beamforms towards an RX whose angular coordinate is (−30°, −42°).)
III. CIRCULANT SHIFT-BASED BEAMFORMER DESIGN
In this section, we propose CSB as a defense against eavesdropping on a TX equipped with a low-resolution phased array.
A. Baseline 2D-DFT codebook
Our CSB technique is applied on top of the standard 2D-DFT codebook used in uniform planar phased arrays. Due to the use of q-bit phase shifters, we define F̂ as the quantized version of the 2D-DFT codebook, obtained by applying the phase quantization described above to every beamformer in the codebook. When a beamformer is selected from the codebook F̂, the received signal at the RX and the eavesdropper can be computed from (5) and (6).
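The snippet below builds one plausible quantized 2D-DFT codebook consistent with this description, reusing V and quantize_beamformer from the earlier snippets; the grid mapping sin θ_i = 2i/N_T is an assumption, since the paper's exact grid definition is not reproduced in this excerpt.

```python
import numpy as np

# Assumed on-grid directions: pi*sin(theta_i) = 2*pi*i/N_T.
grid = 2 * np.arange(-N_T // 2, N_T // 2) / N_T
thetas = np.arcsin(grid)

# One q-bit quantized beamformer per 2D-DFT grid location (i, j).
codebook = {(i, j): quantize_beamformer(V(thetas[i], thetas[j]), q=2)
            for i in range(N_T) for j in range(N_T)}
```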
In the design of our defense mechanism, we assume that the RX and the eavesdropper are on-grid, i.e., located along the discrete directions defined by the 2D-DFT codebook. Although this assumption is required in the analysis of the proposed defense mechanism, we show in Section VI that our method works well even when the RX is off-grid, provided the angular coordinate of the RX is known.
B. Circulantly shifting a beamformer
We define a matrix operator P_{m,n} that circularly shifts the input matrix by m steps along each column, and by n steps along every row. Specifically, for an N × N matrix A, [P_{m,n}(A)]_{k,ℓ} = [A]_{(k−m)%N, (ℓ−n)%N}, where (·)%N denotes the modulo-N operation. The matrix P_{m,n}(A) is interpreted as an (m, n)
2D-circulant shifted version of A.
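In numpy, the operator P_{m,n} reduces to np.roll along both axes; the sketch below follows the index convention assumed in the definition above.

```python
import numpy as np

def P(A, m, n):
    """(m, n) 2D-circulant shift: m steps along each column and
    n steps along each row, both with wrap-around."""
    return np.roll(np.roll(A, m, axis=0), n, axis=1)
```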
Now, we study the impact of circulantly shifting a beamformer on the received signal. We observe from (5) and (6) that the scaling introduced by the beamformer in the received symbol is ⟨V(θ, φ), F⟩. We define F̂ as the set containing the q-bit quantized versions of the standard 2D-DFT beamformers. Our CSB technique is based on the key idea that circulantly shifting a beamformer at the TX affects the phase of the received signal differently in distinct directions.
We discuss this property in Lemma 1.
Lemma 1. Let the angular coordinate of an on-grid receiver (RX or eavesdropper) be (θ, φ), with corresponding 2D-DFT grid location (i, j). Then, for any beamformer F̂ in the codebook and any shift (m, n),
⟨V(θ, φ), P_{m,n}(F̂)⟩ = e^{−j2π(mi + nj)/N_T} ⟨V(θ, φ), F̂⟩.
Proof. For an on-grid receiver, the (k, ℓ)-th element of the array response matrix V(θ, φ) is a complex exponential whose phase is linear in k and ℓ. Similarly, expanding the inner product between the circulantly shifted beamformer P_{m,n}(F̂) and V(θ, φ), and re-indexing the sum, yields the phase factor e^{−j2π(mi + nj)/N_T} relative to ⟨V(θ, φ), F̂⟩; here, the re-indexing step (a) is based on the modulo-N_T property of the circulant shifts.
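The phase relation in Lemma 1 can be sanity-checked numerically with the helpers defined in the earlier snippets; this check only verifies that the gain is preserved and that the induced phase is a multiple of 2π/N_T, leaving the sign convention open.

```python
import numpy as np

i, j, m, n = 3, 7, 2, 5                    # arbitrary grid point and shift
F_ij = V(thetas[i], thetas[j]) / N_T       # unquantized on-grid beamformer

lhs = inner(V(thetas[i], thetas[j]), P(F_ij, m, n))
rhs = inner(V(thetas[i], thetas[j]), F_ij)

print(np.isclose(abs(lhs), abs(rhs)))      # beamforming gain unchanged
ratio = np.angle(lhs / rhs) / (2 * np.pi / N_T)
print(np.isclose(ratio, np.round(ratio)))  # phase is a multiple of 2*pi/N_T
```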
Our CSB-based defense technique determines the phase change induced at the RX a priori, and adjusts the phase of the transmitted symbol accordingly. Such an approach ensures that the RX receives the correct transmitted symbol while the eavesdropper observes a phase-perturbed symbol. We define x̃_t = e^{j2π(m i_{R,t} + n j_{R,t})/N_T} x_t as the phase-adjusted transmit symbol. The symbol x̃_t is sent over the beamformer P_{m,n}(F̂_t) to the RX at 2D-DFT grid location (i_{R,t}, j_{R,t}).
The signal received by the RX can be simplified using Lemma 1 as
y_{R,t} = √(P^r_{R,t}) e^{jν_R} ⟨V(θ_{R,t}, φ_{R,t}), F̂_t⟩ x_t + n_{R,t}.
Therefore, by using the circularly shifted beamformer P_{m,n}(F̂_t) and the phase-rotated symbol x̃_t, the received signal at the RX remains unchanged.
We now show that CSB perturbs the phase of the symbol received along directions different from that of the RX. We assume an on-grid eavesdropper and use (i_{E,t}, j_{E,t}) to denote its 2D-DFT grid location. With the circularly shifted beamformer and the phase-adjusted transmitted symbol, the signal received by the eavesdropper is
y_{E,t} = √(P^r_{E,t}) e^{jν_E} e^{j2π(m(i_{R,t}−i_{E,t}) + n(j_{R,t}−j_{E,t}))/N_T} ⟨V(θ_{E,t}, φ_{E,t}), F̂_t⟩ x_t + n_{E,t}.
As the eavesdropper and the RX are located along different directions, we have (i_{R,t}, j_{R,t}) ≠ (i_{E,t}, j_{E,t}) for any t. In this case, we observe from (18) that the symbol at the RX is preserved while the phase of the symbol at the eavesdropper is corrupted. An example of the received constellation at the eavesdropper with the CSB technique is shown in Fig. 3.
C. Achievable secrecy mutual information
In this section, we first characterize the phase errors induced at the eavesdropper and then calculate the SMI achieved by CSB.
We call the phase errors induced by CSB the APN. We define ∆i_t = i_{R,t} − i_{E,t} and ∆j_t = j_{R,t} − j_{E,t} as the differences in the DFT grid coordinates of the RX and the eavesdropper. The error in the phase of the received symbols at the eavesdropper, i.e., the APN, can be expressed using (18) as ∆Φ_t = 2π(m∆i_t + n∆j_t)/N_T (mod 2π). We also define g_t = gcd(∆i_t, ∆j_t). In Lemma 2, we derive the statistical properties of the APN.
We avoid the subscript t for simplicity of notation.
Lemma 2. Consider independent random variables M_0 and N_0 that are uniformly distributed over {0, 1, ..., N_T − 1}, and let ∆Φ = 2π(M_0∆i + N_0∆j)/N_T (mod 2π). Define Ω_Φg = {2π(gk)%N_T / N_T : k = 0, 1, ..., N_T − 1}. Then, (i) ∆Φ takes values in Ω_Φg, and (ii) ∆Φ is uniformly distributed over Ω_Φg.
Proof. The proof contains two steps. We prove the first step (i) by induction over the shift pairs (m, n). For the case (m, n) = (0, 0), ∆Φ = 0 ∈ Ω_Φg. We assume that for the pair (m, n), ∆Φ = 2π(gℓ)%N_T / N_T, where ℓ is some integer. Then, for the pair (m + 1, n), ∆Φ increases by 2π∆i/N_T, which is again a multiple of 2πg/N_T modulo 2π; here, the key equality (a) uses the fact that ∆i = gk for some integer k, since g = gcd(∆i, ∆j).
We now prove the second step (ii) in Lemma 2. To show that ∆Φ is uniformly distributed over Ω_Φg, we prove that there is the same number of (m, n) pairs attaining each value of ∆Φ in Ω_Φg. We denote by (m_0, n_0) the smallest values of m, n that satisfy (m∆i + n∆j)%N_T = g, i.e., m_0∆i + n_0∆j = g + kN_T for some integer k ≥ 0. We then consider the integer pairs (k_1, k_2) that index the remaining solutions; for each permissible pair (k_1, k_2), there exists a corresponding pair (m, n). Observe that the number of permissible pairs (k_1, k_2) depends only on ∆i, ∆j and N_T, and not on the particular value of ∆Φ. Therefore, for every value in Ω_Φg, there is the same number of (m, n) pairs, which proves the uniformity.
Lemma 3. With the CSB defense, an on-grid eavesdropper with g = gcd(∆i, ∆j) can distinguish at most M/gcd(|Ω_Φg|, M) symbols of an M-PSK constellation.
Proof. To prove this lemma, we first find a condition under which two symbols e^{j2πk_1/M} and e^{j2πk_2/M} in a constellation of size M cannot be distinguished due to the APN induced by CSB. For two symbols to be indistinguishable under APN, the difference in the phases of the two symbols must lie in Ω_Φg, i.e., 2π(k_1 − k_2)/M = 2π(gℓ)%N_T / N_T + 2πp_1, where p_1 is an integer and ℓ ∈ {0, 1, ..., N_T/gcd(N_T, g) − 1}. Observe that (gℓ)%N_T + p_2N_T = gℓ for some integer p_2. We define g′ = gcd(g, N_T). Then N_T = g′u_1 and g = g′u_2 for some co-prime integers u_1, u_2; additionally, note that u_1 = |Ω_Φg|. By re-arranging the indistinguishability condition, we find that it holds exactly when k_1 − k_2 is a multiple of M/gcd(u_1, M) = M/gcd(|Ω_Φg|, M). Hence the eavesdropper can distinguish at most M/gcd(|Ω_Φg|, M) symbols of the constellation.
Example 1. Consider a TX with N_T = 16 that uses a QPSK constellation. In the high-SNR regime at the eavesdropper, the mutual information transfer to the eavesdropper is log_2(4/gcd(|Ω_Φg_t|, 4)) bits/symbol. If g_t ∉ {0, 8}, the mutual information between the TX and the eavesdropper is 0 bits/symbol. Alternatively, if g_t = 8, the mutual information between the TX and the eavesdropper is 1 bit/symbol. Therefore, with the CSB defense, the eavesdropper can only receive meaningful information along the directions associated with g_t = 8 and g_t = 0. Combined with directional beam patterns, the performance of the eavesdropper is limited by either low energy leakage or high phase corruption.
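The counting argument above can be verified by brute force; the sketch below enumerates the APN over all (m, n) pairs and counts the distinguishable QPSK symbols, assuming the reconstructed forms of ∆Φ and Ω_Φg.

```python
import numpy as np
from math import gcd

N_T, M = 16, 4  # array dimension and QPSK

def apn_values(di, dj):
    """All APN values 2*pi*(m*di + n*dj)/N_T mod 2*pi over (m, n)."""
    m, n = np.meshgrid(np.arange(N_T), np.arange(N_T))
    return (2 * np.pi * (m * di + n * dj) / N_T) % (2 * np.pi)

for di, dj in [(1, 3), (8, 8), (4, 12)]:
    omega = np.unique(np.round(apn_values(di, dj), 9))
    eff = M // gcd(len(omega), M)  # distinguishable M-PSK symbols
    print(f"g={gcd(di, dj)}: |Omega|={len(omega)}, equivalent {eff}-PSK")
```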
We now use Lemma 3 to derive the SMI with the CSB defense by considering an M-PSK constellation. The SMI, measured in bits/symbol, is defined as the difference between the information transferred over the TX-RX channel and over the TX-eavesdropper channel. We denote the mutual information (MI) of the channel between the TX and the RX by I_R, and the MI of the channel between the TX and the eavesdropper by I_E. Thus, we can define the SMI C_S at time t as C_S(t) = [I_R(t) − I_E(t)]^+. We define I(ρ, M), measured in bits per symbol, as the spectral efficiency of the channel with SNR ρ and an input M-PSK constellation [27]. Additionally, if the eavesdropper is located at an on-grid position at time t such that gcd(∆i_t, ∆j_t) = g_t, then from Lemma 3, communication over the CSB-secured TX-eavesdropper channel using M-PSK modulation is equivalent to communication over the unsecured TX-eavesdropper channel using an M/gcd(|Ω_Φg_t|, M)-PSK constellation. Thus, if the angular coordinate of the RX at time t is (θ_{R,t}, φ_{R,t}), and that of the eavesdropper is (θ_{E,t}, φ_{E,t}), then using the beamformer F̂_t at time t, we can calculate the SMI with the CSB defense as
C_S(t) = [I(ρ_{R,t}, M) − I(ρ_{E,t}, M/gcd(|Ω_Φg_t|, M))]^+.
For an effective eavesdropping attack, the eavesdropper attempts to minimize C_S(t) by positioning itself at an appropriate (θ_{E,t}, φ_{E,t}). In the presence of the CSB defense, the position of the eavesdropper, however, affects not only the SNR at the eavesdropper but also |Ω_Φg_t|, i.e., the equivalent constellation observed by the eavesdropper. Thus, the CSB defense reduces information transfer to the eavesdropper by corrupting the constellation.
Remark: For the design of the CSB defense, we considered a narrowband single-path channel.
In a multi-path environment with different angle of departures, the RX receives a combination of desired constellation and a phase perturbed constellation. Due to the use of directional beams at the TX, however, the signals received from the non-dominant paths will have significantly less energy, thereby resulting in small perturbations in the constellation at the RX.
D. Implementing CSB -A packet level overview
In this part, we describe the details related to implementation of CSB. Fig. 4 describes a typical PHY layer packet structure in IEEE 802.11ad protocol [28]. The training sequences, mainly short training field (STF) and channel estimation field (CEF), are used for the frame synchronization, carrier frequency offset (CFO) and phase offset correction. Then, data symbols are transmitted by the TX, followed by another packet or a short beam training field.
We propose to use the CSB defense during the data symbol transmission. Specifically, the TX applies a new random circulant shift of the beamformer, together with the corresponding phase adjustment of the transmitted symbol, in every data symbol duration.
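A per-symbol sketch of this operation is given below, reusing the earlier helpers; the sign of the phase pre-correction follows our reconstruction of Lemma 1 and should be treated as an assumption.

```python
import numpy as np

rng = np.random.default_rng()

def csb_transmit(x, F_hat, i_R, j_R):
    """One CSB data symbol: draw a random 2D circulant shift and
    pre-rotate the symbol so that the RX sees no phase change."""
    m, n = rng.integers(0, N_T, size=2)
    x_tilde = np.exp(1j * 2 * np.pi * (m * i_R + n * j_R) / N_T) * x
    return P(F_hat, m, n), x_tilde  # beamformer and symbol for this slot
```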
IV. EXPERIMENTAL VALIDATION
In this section, we design an experiment to validate the premise of CSB defense. Specifically, our experiment estimates the phase change induced by circularly shifting a beamformer and shows that the estimated phase change is consistent with the result in Lemma 1.
A. Hardware setup
We use two N210 USRPs, one as the baseband processor at the transmitter and the other at the receiver. Each USRP is connected to a separate SiBEAM Sil6342 phased array operating at 60.48 GHz. These phased arrays are uniform linear arrays with 12 antenna elements. Each element is connected to a 2-bit phase shifter that can be configured independently of the others. A block diagram with the hardware connections is shown in Fig. 5. The combination of the phase states applied to the 12 × 1 phased array realizes a specific beamformer. For the experiment, we emulate a one-bit phased array by using only two of the four available phase states. Using a one-bit phased array allows us to analytically predict the leaked RF signal, which is mirror symmetric about the target direction, as proven in Appendix A. Unlike ideal phased arrays, the off-the-shelf phased array used in our experiment does not provide the precise phase shifts of {0, π} due to hardware imperfections. The phase offsets from 0 and π are estimated at each antenna using the calibration procedure described in [29]. With the knowledge of the phase offsets associated with the phase states, the phase of every entry in the beamformer is mapped to the nearest phase offset available at that antenna element.
B. Experimental procedure
Our measurement procedure is illustrated in Fig. 6(a). To estimate the phase change due to the change in the beamformer, we first correct the frequency offset of each STF, and calculate the phase offset of each Ga-sequence in an STF. As a result, any significant change in the phase offsets of consecutive Ga-sequences can be attributed to circulantly shifting the beamformer. The measured phase change is either due to (i) the transition from the test beamformer to its m-circulantly shifted beamformer or (ii) the transition from the m-circulantly shifted beamformer to the test beamformer. To distinguish between the two phase changes, we use different dwell durations for the test beamformer and its m-circulant shift. In particular, we implement the test beamformer for 1/3rd of the period duration and its m-circulant shift for 2/3rd of the period duration. Under such a setting, if two consecutive phase changes occur at a lag of 1/3rd of the period duration, we can conclude that the later phase change is due to the transition from the test beamformer to its m-circulant shift.
In Fig. 6(b), we show the difference between the phase offsets of consecutive Ga-sequences within a packet. The periodic pairs of spikes indicate sudden changes in the phase offset of consecutive Ga-sequences. These jumps are due to change in the beamformer. Furthermore, the long duration after the second, fourth and sixth spike is due to the transition from the beamformer to its m-circulant shift. By measuring the changes in the phase offsets and averaging them, we get the phase change due to the transition from the test beamformer to its m-circulant shift along a direction. Similarly, we measure the phase shift along different directions for every m ∈ {1, 2, ..., 11}.
C. Experimental results
The measurements collected using our experimental procedure are post-processed to verify Lemma 1. For the experiment, we use a one-bit quantized beamformer (q = 1) for directional beamforming along 10° relative to the boresight. Due to the one-bit quantization, the beam pattern is symmetric about the boresight, i.e., the beam has two main lobes at 10° and −10°. Different circulant shifts of this beamformer are applied at the TX. In each case, the phase change induced by the circulant shift is measured by placing a receiver at 10°. The experiment is then repeated by moving the receiver to −10°. From Fig. 7, we observe that the phase change is linear in the applied circulant shift m, as derived in Lemma 1. The slope of this linear variation is also consistent with the angle from the boresight, as shown in Fig. 7. As the phase change induced at the RX by circulantly shifting a transmit beamformer can be predicted, the phase of the transmitted symbols can be adjusted at the TX for correct decoding along the direction of the RX. Such an adjustment, however, does not correct the phase perturbation at the eavesdropper. This is because the phase change induced by circulantly shifting a beamformer is different along different directions.
V. AIRSPY: AN ATTACK ON V2I NETWORK
In this section, we describe an attack, called AirSpy, on a planar low-resolution phased array TX in a downlink V2I network. We assume a mobile UAV eavesdropper that is aware of the resolution of the RF phased array at the TX and the position of the RX. The attack is achieved by computing a UAV flight path that efficiently taps the leaked RF signals in a mechanically feasible manner. We first define the secrecy rate of the link between the TX and the RX. Then, we develop an attack by formulating a trajectory search problem under the mechanical constraints on the UAV. Finally, we discuss a dynamic programming-based algorithm for trajectory search.
A. Secrecy rate
To measure the severity of a physical layer attack, we define the secrecy rate corresponding to a beamformer F̂_t as C_t(F̂_t, (θ_E, φ_E)) = [log_2(1 + ρ_{R,t}) − log_2(1 + ρ_{E,t})]^+. A greedy attack strategy is one that finds an optimal eavesdropping position (θ_E, φ_E) ≠ (θ_{R,t}, φ_{R,t}) which minimizes the secrecy rate at every time instant. Such a greedy approach, however, may be mechanically infeasible under a finite velocity constraint. A good attack strategy is one that identifies and tracks multiple RF leakage signals over time for long-term exploitation under the velocity constraint.
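Under the reconstructed definition, the secrecy rate is straightforward to evaluate; the sketch below assumes the SNR expressions from the signal model and is illustrative only.

```python
import numpy as np

def secrecy_rate(snr_rx, snr_eve):
    """[log2(1 + SNR_RX) - log2(1 + SNR_E)]^+, per the definition above."""
    return max(np.log2(1 + snr_rx) - np.log2(1 + snr_eve), 0.0)
```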
B. Learning algorithm for eavesdropping trajectory design
In this section, we define a trajectory and the set of feasible trajectories that satisfies the mechanical constraints on the motion of the UAV. Then, we propose an efficient dynamic programming-based algorithm that finds a UAV trajectory to eavesdrop on the TX. Our design assumes perfect knowledge of the RX location over a time interval, and minimizes the sum secrecy rate in this interval.
We consider a TX equipped with a planar antenna array situated at a height h from the ground.
We assume that the RX is a vehicular receiver that travels on a linear ground trajectory defined by the line {x = ℓ, z = −h}. To incorporate the mechanical constraints on the eavesdropping UAV and design a numerically efficient algorithm, we limit the motion of the UAV to a virtual plane called the UAV plane. This plane is parallel to the plane of the TX antenna array at a distance d, as shown in Fig. 8. The azimuth and elevation angles subtended by the UAV plane at the center of the TX antenna array are both equal to β, where β ∈ (0, π). We use P_d to denote the set of points on the UAV plane. For any angular coordinate of the eavesdropper (θ_E, φ_E) ∈ [−β/2, β/2]², there is a unique 2D coordinate on the UAV plane. With the UAV plane constraint, the eavesdropper trajectory design problem is simplified from 3D to 2D.
We use a 2D coordinate system centered at the UAV plane to denote points on the UAV plane.
For notational convenience, we define a mapping S_2 : [−1, 1]² → P_d such that (x_u, y_u, z_u) = S_2(u, v). We discretize the time index t with a sampling period T_s, and minimize the sum secrecy rate over discrete time instances for computational tractability. For that, we define a trajectory in Definition 1.
Definition 1. A discrete trajectory τ_{N,d} is a sequence {(u_t, v_t)}_{t=0}^{N−1}, where (u_t, v_t) ∈ [−1, 1]² and t = 0, 1, ..., N − 1, such that the t-th element of the sequence represents the coordinate of the UAV with respect to the center of the UAV plane at time tT_s. We denote the t-th element of the trajectory τ_{N,d} by τ_{N,d}(t) = (u_t, v_t).
We would like to mention that only a subset of the trajectories in Definition 1 are permissible for the UAV. First, the trajectory must meet the maximum permissible velocity constraint on the UAV. Second, the UAV following this trajectory should not block the LoS path between the TX and the RX at any time instant. Based on these constraints, we define the set of permissible trajectories in Definition 2. Recall that the mapping S 1 converts rectangular coordinates to modified spherical coordinates, and S 2 changes the reference from the center of the UAV plane to the center of the TX antenna array.
Definition 2.
Let v_max be the maximum permissible velocity of the UAV, (θ_{R,t}, φ_{R,t}) be the angular coordinate of the RX with respect to the TX at time t, and (r_t, θ_t, φ_t) denote the coordinate of the UAV such that (r_t, θ_t, φ_t) = S_1(S_2(u_t, v_t)). Then, a discrete trajectory τ_{N,d} is a permissible trajectory if (i) the distance between successive UAV positions does not exceed v_max T_s, and (ii) the angular separation between the UAV and the RX, with respect to the TX, is at least ε at every time instant, as required by (35). We use T_{N,d,ε} to denote the set of all permissible trajectories.
The parameter ε in (35) characterizes the minimum permissible angular distance between the RX and the UAV, with respect to the TX. The constraint in (35) prevents the UAV from blocking the LoS path between the TX and the RX.
We now formulate the discrete trajectory optimization problem. The eavesdropper first computes the q-bit quantized beamformer F̂_t corresponding to the RX for all t. Then, the function C_t(F̂_t, τ(t)) is evaluated over a discrete time grid. Finally, the optimal trajectory τ*_{N,d,ε} can be defined as
τ*_{N,d,ε} = argmin_{τ ∈ T_{N,d,ε}} Σ_{t=0}^{N−1} C_t(F̂_t, τ(t)).
The problem in (36) finds an optimal trajectory from the set of permissible trajectories that minimizes the total secrecy rate over the time horizon.
We solve the optimization problem in (36) using a dynamic programming-based trajectory search. For that, we first define the state space, actions and reward as follows: 1) State: The state of the UAV at time index t is given by its 2D coordinate on the UAV plane together with the time index. We define the state at time t as s_t = (u, v, t). We use a discrete G × G spatial grid to represent the coordinates (u, v) ∈ {−1 + 2i/G : i ∈ [G]}².
2) Action: An action a_t = (s, s′) at time t is defined as the transition from state s = (u, v, t) to s′ = (u′, v′, t + 1). An action a_t = (s, s′) is a valid action if there exists a permissible trajectory τ ∈ T_{N,d,ε} that makes a transition from state s to s′. We denote the set of all valid actions by A.
3) Reward: As the goal of the eavesdropper is to minimize (32), we define the reward R associated with an action a_t = (s, s′) as the negative secrecy rate at the next state, R(a_t) = −C_{t+1}(F̂_{t+1}, (θ_{t+1}, φ_{t+1})), where (r_{t+1}, θ_{t+1}, φ_{t+1}) = S_1(S_2(s′)). Since the definition of the reward solely depends on the next state, we denote R(a_t) = R(s′) where a_t = (s, s′).
We now describe an adaptation of dynamic programming called value iteration to solve (36) [30].
An algorithm to estimate the value function is given in Algorithm 1. Then, the optimal sequence of states that maximizes the reward, equivalently the optimal trajectory, is found using Algorithm 1. We discuss the performance of the proposed trajectory search algorithm in Section VI.
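A compact sketch of such a finite-horizon value iteration is shown below; the grid layout, the reward callback (the negative secrecy rate), and the neighbor set encoding the velocity and LoS constraints are placeholders standing in for the paper's Algorithm 1, which is not reproduced here.

```python
def plan_trajectory(G, N, reward, neighbors):
    """Finite-horizon value iteration over a G x G grid of UAV-plane cells.

    reward(t, cell): negative secrecy rate at time t in `cell`.
    neighbors(cell): cells reachable in one step (velocity and LoS limits).
    Returns a reward-maximizing sequence of N cells.
    """
    cells = [(i, j) for i in range(G) for j in range(G)]
    # V[t][c] = best reward collectible over times t..N-1 starting at c.
    V = [dict() for _ in range(N)]
    for c in cells:
        V[N - 1][c] = reward(N - 1, c)
    for t in reversed(range(N - 1)):  # backward induction
        for c in cells:
            V[t][c] = reward(t, c) + max(V[t + 1][c2] for c2 in neighbors(c))
    # Greedy rollout of the optimal policy.
    c = max(cells, key=lambda c0: V[0][c0])
    path = [c]
    for t in range(N - 1):
        c = max(neighbors(c), key=lambda c2: V[t + 1][c2])
        path.append(c)
    return path
```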
We would like to highlight that our trajectory optimization algorithm requires the knowledge of the sequence of standard beamformers, i.e., {F̂_t}_{t=0}^{T}, which can be computed from the trajectory of the RX. Furthermore, in a V2I system, the trajectory of the RX can be estimated based on the traffic geometry and vehicle dynamics. Although the design of sophisticated real-time attacks that incorporate additional mechanical constraints such as the acceleration and power of the UAV is an interesting research direction, it is not within the scope of this work.
VI. NUMERICAL RESULTS
In this section, we show the severity of the proposed attack and the benefit of the proposed CSB defense. Specifically, we first discuss the SMI achieved by CSB defense compared to the benchmark DM-based technique, ASM [17]. We then show the severity of the AirSpy attack on a V2I TX, and explain the benefits of using CSB in terms of symbol error rate (SER) against such an attack.
A. Performance of the defense technique
In this part, we compare the CSB technique with ASM in terms of the SMI. To this end, we consider a 16 × 1 linear phased antenna array at the TX and the use of the QPSK modulation.
We consider an RX located at 25° with respect to the broadside angle of the TX array. We plot the SMI for different angular positions of the eavesdropper located at the same radial distance from the TX as the RX. We denote the ASM technique by ASM-c, where c denotes the fraction of active antennas at the TX.
In Fig. 9, we show the numerically estimated SMI of the CSB defense, and of the ASM defense with 0.3, 0.5 and 0.7 fractions of active antennas. We notice that ASM performs poorly along the directions of the energy leakage. This is due to the fact that the AN induced by ASM is small when compared to the RF signal leakage with low-resolution phased arrays. Furthermore, the ASM defense also suffers from lower received power at the RX under the common per-antenna power constraint. In contrast, the CSB defense achieves better SMI than ASM. We also plot the theoretical mutual information transfer at high SNR for on-grid positions of the eavesdropper, as characterized in Lemma 3. The residual information leakage corresponds to the position of the eavesdropper for which g_t = 8, as discussed in Example 1. Due to the lower energy leakage, however, the SMI along that direction is still higher than with ASM.
B. Severity of AirSpy attack
In this part, we numerically show the severity of the proposed attack. We first provide the trajectory of the UAV calculated with our trajectory design algorithm. Then, we study the secrecy rate of the system corresponding to the designed trajectory.
We consider a downlink V2I scenario, shown in Fig. 8, where the TX is equipped with a planar mmWave phased array with 16 × 16 elements. The TX array is located at h = 8 m above the ground and is tilted downward by 15°. A vehicular RX travels on a straight lane at a distance of ℓ = 3 m from the TX at a speed of 20 m/s. We assume that the RX is in a connected mode with this TX when the transceiver distance along the y-dimension is within 10 m, i.e., y_t ∈ [−10, 10]. As the vehicle moves at 20 m/s, the RX is connected to the TX for 1 second.
We call this 1 second duration as an episode.
We assume that the UAV eavesdropper traverses a plane at a distance d = 1 m from the TX array. For the simulation, we consider a bounded region of the plane such that the angle subtended by the region at the center of the TX antenna array is β = 160°. We limit the speed of the UAV to 17 m/s [31]. In this setting, we first plot the eavesdropping trajectory designed using our dynamic programming-based algorithm when the RX moves from point (3, −10, 8) to (3, 10, 8) in an episode. The trajectories derived for attacks on 1-bit and 2-bit phased arrays are shown in Fig. 10(a). We notice that the optimal trajectory for eavesdropping on a one-bit phased array TX is consistent with the analytical solution derived in Appendix A. The solution can be explained by the observation that the beams generated with a one-bit phased array are mirror symmetric about the boresight direction. In the case of 2-bit phased arrays, however, the optimal eavesdropping trajectory derived with our method exhibits an interesting phenomenon. The UAV diverges from the direction of the strongest side-lobe at about 0.8 seconds and 1.2 seconds. This divergence is important to minimize the sum secrecy rate over an episode. Such a change results in better eavesdropping than a feasible greedy trajectory that simply follows the strongest side-lobe. We illustrate this observation using a video that is available on our website [32].
In Fig. 10(b), we show the evolution of the secrecy rate as the eavesdropper follows the trajectory shown in Fig. 10(a) during one episode. The secrecy rate when using one-bit phased arrays at TX is consistently 0 because the energy received at the UAV eavesdropper is higher than the energy received at the RX. This is because the UAV eavesdropper is closer to the TX than the RX. The secrecy rate using the trajectory designed for 2-bit phased arrays at the TX is also below 0 for the same reason, except during the time when the eavesdropper deviates from the path traced out by the strongest side-lobe.
In both the one-bit and the two-bit scenarios, the rate at the eavesdropper is significantly higher than the rate at the RX. In such a case, any defense strategy that slightly reduces the leaked RF signals does not help in minimizing the secrecy rate. Furthermore, strategies that null the leaked RF signal in a particular direction are also not useful. This is because a mobile eavesdropper can optimize its trajectory in the new setup to track the other side-lobes. Therefore, any defense technique that reduces the energy leakage cannot tackle the issue of eavesdropping with a mobile eavesdropper. Our CSB defense corrupts the phase of the symbols along the directions other than the direction of the RX, instead of reducing the energy leakage.
Remark: Although the secrecy rate is a non-negative quantity, we plot negative values in Fig. 10 to show the large difference between the rates at the RX and the eavesdropper over an episode.
C. Defense against AirSpy
We describe the benefits of using the CSB defense over ASM in a low-resolution phased array under the AirSpy attack. We use a system setup similar to the one used to analyze the attack. For the simulation of the CSB and ASM defenses, we assume that both the RX and the eavesdropper perform perfect synchronization, and we only focus on the performance during the data transmission.
Additionally, we consider that the TX corrects the phase change as characterized in Lemma 1 when the RX is along an on-grid direction or an off-grid direction. Since the nearest on-grid direction associated with the RX is known to the TX in the form of the beam selected from the DFT codebook, our defense method does not require additional information to maintain the communication performance at the RX. Note that the phase change due to circulant shifts characterized in Lemma 1 is only valid along the on-grid directions. We will show using simulations that the phase correction based on nearest on-grid direction still maintains the performance at the RX along the off-grid directions. Furthermore, we assume a standard receiver to calculate the SER.
In Fig. 11(a), we show the average SER at the RX and the eavesdropper as a function of the SNR received at the RX. Note that the SER at the RX is higher than the SER at the eavesdropper when using ASM-0.6 for the defense. This is due to two reasons. First, the received SNR at the eavesdropper is higher than the received SNR at the RX, as the TX-eavesdropper distance is much smaller than the TX-RX distance. Second, the AN induced by ASM, which adds to the noise at the eavesdropper, is not sufficient to perturb the constellation at the eavesdropper.
Thus, the effective signal power received at the eavesdropper due to the signal leakage from the low-resolution phased arrays is higher than the AN induced by ASM. In contrast, CSB defense scrambles the phase of the signal along the directions other than that of the RX, thus, corrupting the signal irrespective of the signal power.
In Fig. 11(b), we show the average SER at the eavesdropper and the RX for different ASM parameter c. The SER at the eavesdropper when using CSB defense is higher than ASM defense for any parameter c. Additionally, the SER at the RX is also consistently lower when using CSB as compared to using ASM. It can also be observed from Fig. 11(c) that the use of CSB defense also provides an increased SNR at the RX when compared to ASM. From Fig. 11(b) and Fig. 11(c), we can conclude that CSB achieves a large SER at the eavesdropper, while the SER and the SNR at the RX is maintained without any significant degradation from the standard case.
VII. CONCLUSION
In this paper, we developed a directional modulation-based beamformer design technique called CSB, to defend against an eavesdropping attack on low-resolution phased arrays. The proposed CSB defense applies random circulant shifts of the low resolution beamformer to scramble the phase of the received symbol in the unintended directions. As a result, CSB blinds an eavesdropper that taps the leaked RF signals. We characterized the phase ambiguity introduced at the eavesdropper and derived the secrecy mutual information. We also designed an experiment on an mmWave testbed using 60 GHz phased arrays and showed that circulantly shifting a beamformer induces different but predictable phase shifts along different directions.
The predictability of the phase shifts allows the TX to adjust the phase of the transmitted symbol to maintain the communication between the TX and the RX. Finally, we developed an eavesdropping attack for low-resolution phased arrays in a V2I network and evaluated the performance of CSB under such an attack. Our results indicate that CSB achieves a better defense than similar state-of-the-art benchmark techniques. | 2021-08-12T01:16:25.041Z | 2021-08-10T00:00:00.000 | {
"year": 2021,
"sha1": "116bc38b769792eef4e0ebfc057858a427c0770c",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2108.04942",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "66e7f09ef08b2032b3facb3d83b4e9e30e89ba9c",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Engineering"
]
} |
234058907 | pes2o/s2orc | v3-fos-license | Characteristics of Porous Asphalt Mixture by Using a Bottom Ash Boiler as a Filler
Porous asphalt is an asphalt mixture with an open gradation dominated by coarse aggregate, producing a sufficiently large void content. Bottom ash is a waste product of the boilers used in extracting palm fruit into crude palm oil. Shells and pulps burned at high temperatures between 500°C and 700°C become boiler bottom ash. The aim of this study was to examine the characteristics of a porous asphalt mixture combining boiler bottom ash and cement as filler, using Retona Blend 55 as the binder. The specimens were designed according to the Australian Asphalt Pavement Association (AAPA) method and evaluated by the parameters of Cantabro Loss (CL), Asphalt Flow Down (AFD), Voids in Mix (VIM), stability and Marshall Quotient (MQ). The Optimum Asphalt Content (OAC) obtained was used to prepare specimens with a filler composed of 50% boiler bottom ash and 50% cement. The results showed that the OAC was 6%. Almost all parameter values met the required AAPA (2004) specification. The CL value was 9.25%, the AFD value was 0.19%, the stability was 573.27 kg, the flow was 4.7 mm and the VIM was 12.29%. The VIM value did not meet the required specification (18%-25%).
Introduction
Pavements are composed of various materials originating from nature. Materials are selected based on several factors, including the structural requirements of the pavement, economy, durability, workability, and local experience (Hardiyatmo 2019). Increased infrastructure development in Indonesia, especially of roads, has led to scarcity and high prices of materials in the market. Innovation is needed to find alternative materials that can overcome these problems.
Aceh Province has a fairly large area of oil palm plantations; oil palm is currently favored because of its very high economic value. In the southwest region of Aceh, oil palm plantations and palm oil processing factories are easy to find. These factories produce waste, including oil palm shells and oil palm fibres. At present, palm oil waste has a variety of uses, including as a road-hardening or asphalt-replacement material, especially within oil palm plantations. The waste is also used as fuel for the steam furnace, usually called a boiler, which is used in the process of extracting palm fruit into crude palm oil (CPO). Burning palm kernel shells and palm pulp in the boiler produces waste, one form of which is bottom ash, which accumulates at the bottom of the furnace. Bottom ash has many pores and is greyish white, and it is among the wastes produced in the largest quantities by crude palm oil mills. An alternative solution to material scarcity is to utilize boiler bottom ash as a road pavement material; utilizing it also reduces factory waste and prevents environmental pollution around the plant.
Flexible pavement is the type of road pavement most widely used in Indonesia. One type of mixture in flexible pavement is porous asphalt, which is designed to have higher porosity than other types of pavement. According to Diana (2004), porous asphalt is an open-graded hot asphalt mixture with a large percentage of coarse aggregate and a small percentage of fine aggregate, thus providing a large air-void content. These air voids are expected to let water drain through when it rains, so that water does not pond on the road surface.
The pavement layers consist of aggregate, asphalt, and filler. According to Hardiyatmo (2019), filler is a fine-grained material that passes sieve No. 200 (0.075 mm); it can consist of rock dust, limestone, Portland cement, or other non-plastic materials, and it must be dry and free of harmful substances.
Suparma (2014) conducted a study using palm oil ash as a filler in an HRS-Base mixture, with oil palm ash substituting for filler at 0%, 25%, 50%, 75%, and 100%. The mixture characteristic tests showed that the HRS-Base mixture using palm fibre ash and oil palm shell ash has the potential to resist deformation but is less resistant to tensile cracking.
Based on the above, it is necessary to conduct research on the use of boiler bottom ash as a filler in porous asphalt mixtures.
Experimental/Methods
The entire study was conducted at the Transportation Laboratory of the Faculty of Engineering, Syiah Kuala University, Banda Aceh (Figure 1). The data needed for the research process were primary and secondary. Primary data were obtained from Marshall test results on the asphalt mixture specimens, while secondary data were supporting data obtained from material production brochures and other literature.
Material Preparation and Procurement
Aggregate
According to Sukirman (2003), aggregate is the main component of the road pavement structure, comprising 90-95% of the mixture by weight, or 75-85% by volume. The quality of a road pavement is therefore largely determined by the properties of the aggregate and its interaction with the other materials used. The aggregate properties that determine its quality as a road pavement material are gradation, cleanliness, hardness and durability, grain shape, surface texture, porosity, water absorption, specific gravity, and adhesion to asphalt. The aggregate used in this research was crushed split aggregate from the Kaway Beton stone crusher, Aceh Barat.
Retona Blend 55 Asphalt
Retona Blend 55 asphalt is produced by PT. Olah Bumi Mandiri. It is a modified asphalt: a factory-blended mixture of penetration-grade 60 or 80 petroleum asphalt with refined Buton natural asphalt (asbuton).
Boiler Bottom Ash Filler
Palm kernel shells and palm pulp burned at high temperatures between 500°C and 700°C deform into palm oil clinker (POC), which has been widely treated as disposal waste in the industry. After burning at high temperature, the remnants considered boiler waste come in two forms: (1) ashes (bottom ash and fly ash), which accumulate under the furnace, are collected at an ash collecting point, and are relatively heavy; and (2) clinker, which is derived from palm kernel shells and attaches to the boiler wall. Bottom ash accumulates at the bottom of the furnace; it has many pores and is greyish white. The bottom ash was sieved to pass sieve No. 200 in accordance with the required specifications. The bottom ash used in this study came from a palm oil processing factory operating in Padang Sikabu, Aceh Barat.
Test Specimen
The test specimens comprise two groups: asphalt concrete wearing course (AC-WC) specimens with Retona Blend 55 asphalt for determining the optimum asphalt content, and, after the OAC was obtained, porous asphalt specimens made at the OAC with a filler variation of 50% boiler bottom ash and 50% cement, following the porous asphalt mix design quoted from AAPA (Anonim 2004).
Specimens to find the optimum asphalt content (OAC)
Test specimens were created from the asphalt concrete wearing course (AC-WC) mixture using the Bina Marga 2010 specification, containing boiler bottom ash as filler. Marshall evaluation was carried out on the specimens to determine their optimum asphalt content (OAC). The characteristics of the asphalt concrete mix can be checked using the Marshall test, which is intended to determine the resistance (stability) and the plastic deformation (flow) of the asphalt mixture. The specimens were made using the filler variations in the mix design plan, with 50% bottom ash filler and 50% cement filler (15 specimens). The design of the number of test specimens is given in Table 2.
Specimens at the optimum asphalt content (OAC)
After the OAC was obtained, test specimens were made at the OAC following the AAPA (Australian Asphalt Pavement Association) 2004 method. Nine specimens were prepared for Marshall testing, and further specimens were made for CL and AFD testing, bringing the total number of test specimens to 18. The design of the number of test specimens is given in Table 4. Table 5 shows the AC-WC specimens with varied Retona Blend 55 contents (4.5%, 5%, 5.5%, 6%, and 6.5%); these specimens contained bottom ash and cement as filler. Marshall evaluation was carried out on these specimens to determine their OAC. Table 6 shows the porous asphalt specimens with the 50:50% filler variation at 6% OAC; these specimens contained bottom ash and cement as filler.
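As a quick cross-check of the reported mixture parameters, the short sketch below (Python; not part of the original data analysis) recomputes the Marshall Quotient as MQ = stability / flow from the values in the abstract and tests the reported VIM against the 18-25% AAPA (2004) range quoted there.

```python
# Minimal sketch: Marshall Quotient and VIM spec check.
# Values are the ones reported for the 50 % bottom ash / 50 % cement
# mixture at 6 % OAC; the only spec bound used is the 18-25 % VIM range
# stated in the abstract.

stability_kg = 573.27   # Marshall stability
flow_mm = 4.7           # Marshall flow
vim_pct = 12.29         # Voids In Mix

mq = stability_kg / flow_mm
print(f"Marshall Quotient: {mq:.1f} kg/mm")   # ~122 kg/mm

vim_lo, vim_hi = 18.0, 25.0
print("VIM within spec" if vim_lo <= vim_pct <= vim_hi
      else f"VIM = {vim_pct} % is outside the {vim_lo}-{vim_hi} % range")
```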
"year": 2021,
"sha1": "1d7cfbc0f30203d854d704570464c55dd243751e",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1764/1/012167",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "75f67beae2c1f7c9af2b34c3897463ba61e59532",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
Steroid-responsive idiopathic calcitriol-induced hypercalcemia: a case report and review of the literature
Background Idiopathic Calcitriol Induced Hypercalcemia is a rare cause of the common condition of hypercalcemia. Hypercalcemia most commonly results from hyperparathyroidism, which together with hypercalcemia of malignancy accounts for over 95% of cases. Idiopathic Calcitriol Induced Hypercalcemia can mimic hypercalcemia secondary to granulomatous diseases such as sarcoidosis, but with an apparent absence of both imaging and physical exam findings consistent with those diseases. We report here a 51-year-old man who presented with recurrent nephrolithiasis, hypercalcemia, and acute kidney injury. Case presentation A 51-year-old man presented with severe back pain and mild hematuria. He had a history of recurrent nephrolithiasis over a 15-year period. On presentation his calcium was elevated at 13.4 mg/dL, creatinine was 3.1 mg/dL (from a baseline of 1.2), and PTH was reduced at 5 pg/mL. CT of the abdomen and pelvis showed acute nephrolithiasis, which was managed medically. Work-up for the hypercalcemia included an SPEP, which was normal; 1,25-dihydroxyvitamin D was elevated at 80.4 pg/mL, and CT chest showed no evidence of sarcoidosis. Management with 10 mg prednisone showed marked improvement in the hypercalcemia, and he no longer had any symptoms of hypercalcemia. Conclusion Idiopathic Calcitriol Induced Hypercalcemia is a rare cause of hypercalcemia. Reported cases have benefited from long-term immunosuppression. This report helps consolidate the diagnosis of Idiopathic Calcitriol Induced Hypercalcemia and encourages researchers to better investigate its underlying pathogenesis. Supplementary Information The online version contains supplementary material available at 10.1186/s12882-023-03203-4.
Background
Symptomatic hypercalcemia with subsequent nephrolithiasis is not an uncommon problem in the clinical arena. The vast majority of patients present in the context of primary hyperparathyroidism or lymphoma [1]. Alternatively, an important category for consideration is granulomatous disease such as sarcoidosis, tuberculosis, fungal infections, leprosy, and Crohn's disease [1,2]. The unifying mechanism, best exemplified by sarcoidosis, is increased extra-renal 1α-hydroxylase activation in macrophages, leading to increased 1,25(OH)2D (calcitriol) levels [2,3]. Less common causes, such as the injection of silicone for cosmetic purposes [4] and of paraffin oil in young male bodybuilders [5], can lead to granulomatous inflammation and hypercalcemia through a similar mechanism.
Other disorders that can present with symptomatic hypercalcemia include adrenal insufficiency, primary hyperparathyroidism, and vitamin D or A intoxication [1]. The differential is broad, and the usual work-up emphasizes common etiologies; yet idiopathic calcitriol induced hypercalcemia has emerged over the years as a rare diagnosis of exclusion. Only a few case reports [1,6,7] have described idiopathic calcitriol induced hypercalcemia with successful treatment using corticosteroids. Our aim is to add to this literature to reaffirm this possible diagnosis without elevation of angiotensin converting enzyme (ACE). We therefore present the case of a man with recurrent nephrolithiasis, hypercalcemia, and acute kidney injury.
Case presentation
A 51-year-old man originally from East Asia with a past medical history significant for hypertension was referred to nephrology for recurrent kidney stones and elevated creatinine. The patient reported having kidney stones in 1989 and 2006 that passed spontaneously. In 2017, he was found to have bilateral symptomatic calcium kidney stones requiring lithotripsy. His kidney function had fluctuated (Cr 0.9-3.1 mg/dL) over the preceding two years, with consistent hypercalcemia, hypercalciuria (637 mg/24 hours), and a low PTH level.
Seeing this patient for the first time in July 2019, the initial workup for possible causes of hypercalcemia included intact PTH (iPTH), suppressed at 5 pg/mL; calcitriol (1,25-dihydroxyvitamin D), elevated at 80.4 pg/mL; calcidiol (25-hydroxyvitamin D), low at 15.1 ng/mL; parathyroid hormone-related peptide (PTHrP), normal (< 2.0 pmol/L); serum protein electrophoresis, normal; ACE level, low at 2 U/L; complete blood count, normal; an elevated serum calcium (11.9 mg/dL) and creatinine (1.8 mg/dL); a whole-body bone scan, normal; and urinalysis, normal. Although not strictly necessary, an ultrasound of the neck was also obtained and was found to be normal.
Chest X-ray showed a calcified granuloma. CT scan showed an upper lobe solid nodule concerning for possible primary lung malignancy; independent workup through pulmonology was negative for tuberculosis, sarcoidosis, and lung malignancy.
During subsequent visits, he had persistently elevated serum calcium levels (11.3 mg/dL) and an elevated ionized calcium level (6.0 mg/dL). Given the lack of alternative diagnoses, idiopathic calcitriol induced hypercalcemia was pursued as the primary diagnosis, and the patient was started on prednisone 30 mg daily in December 2019.
Outcome and follow up
Upon follow-up a few weeks later, the patient's calcium had decreased (10.3 mg/dL) and iPTH had increased (13 pg/mL). Urine calcium also decreased gradually, from 681 mg to 581 mg and then 386 mg per 24 hours. Prednisone was gradually tapered to 20 mg daily and then 10 mg daily as we continued to monitor the patient's labs in 2020. Hypercalcemia continued to improve (9.9 mg/dL), and prednisone was reduced to 5 mg daily in July 2021. Unfortunately, calcium increased to 11.8 mg/dL within 3 months and iPTH was suppressed again, so prednisone was titrated up to 10 mg daily, with calcium controlled around 10 mg/dL. While taking prednisone 10 mg daily, the patient suddenly developed another kidney stone in July 2022. The patient's records were reviewed, and it was determined that another physician had placed him on ergocalciferol 50,000 units weekly for vitamin D deficiency in March 2022 without a full grasp of his clinical presentation. Serum calcium was 11.6 mg/dL and calcitriol was 104 pg/mL in July 2022, prompting the nephrology team to discontinue the ergocalciferol. Subsequently, calcium dropped to 10 mg/dL in January 2023. The patient has been on prednisone 10 mg daily for 3 years.
Discussion and conclusions
The patient described has a long history of symptomatic hypercalcemia and recurrent nephrolithiasis. A broad work-up was pursued, eliminating common etiologies of hypercalcemia, starting with primary hyperparathyroidism. Intact PTH was found to be suppressed and neck ultrasound was unremarkable, making this diagnosis unlikely. Vitamin D testing demonstrated elevated calcitriol levels, which are seen in the clinical context of lymphoma and chronic granulomatous disease [1,2,7]. These conditions were unlikely in our patient given the lack of constitutional and inflammatory systemic symptoms and an unremarkable bone scan, while chest CT and X-ray follow-up ruled out the possibility of primary lung malignancy and active granulomatous diseases.
While hypercalcemia is seen in 10-20% of patients with sarcoidosis [3,7], that diagnosis is unlikely in the absence of any physical exam, laboratory, or radiologic evidence of sarcoidosis [2,7]. Furthermore, low ACE levels make this diagnosis even less likely, seeing that ACE levels are elevated in 60% of patients with sarcoidosis [3]. Active tuberculosis infection is also associated with hypercalcemia [2] and was explored given our patient's history of tuberculosis, but the workup was negative. The patient denied taking vitamin D or A, silicone, or paraffin oil, ruling out external causes of hypercalcemia. One consideration is that, while this patient underwent a broad workup, an underlying pathology explaining the observed hypercalcemia may reveal itself with time; this is a possibility, so the authors will remain vigilant moving forward.
The first report discussing idiopathic calcitriol induced hypercalcemia emerged in 1994, when Kreisberg [1] explored the possible etiologies of hypercalcemia and appropriate workups through clinical problem-solving. He never found a definitive diagnosis, and his patient improved on low-dose prednisone that was continued long-term. Evron et al. [6] subsequently coined the term idiopathic calcitriol induced hypercalcemia, suggesting it as a possible diagnosis for three patients they treated. Their patients presented with low iPTH, high calcitriol, and high ACE levels. After finding no evidence of sarcoidosis and observing normalization of calcium levels after prednisone treatment, they proposed idiopathic calcitriol induced hypercalcemia as a separate disease entity. Rijckborst et al. [7] consolidated the diagnosis further, describing one patient with low iPTH, high calcitriol, and high ACE levels. Their patient also underwent a robust investigative workup demonstrating no clear etiology that could explain the observed hypercalcemia, and responded to prednisone but required a low maintenance dose to prevent disease recurrence. Our patient's presentation and treatment course differ from those previously reported with respect to ACE levels. Elevated ACE levels in previous case reports [6,7] led those authors to postulate alterations in macrophage function as a possible underlying mechanism for idiopathic calcitriol induced hypercalcemia. The low ACE levels in our patient may emphasize the need to look for alternative causes of the elevated calcitriol and calcium levels. Additionally, our report outlines recurrent nephrolithiasis from a relatively young age. Although idiopathic calcitriol induced hypercalcemia is not generally associated with nephrolithiasis, a previous history of kidney stones can explain the susceptibility to nephrolithiasis in the setting of hypercalcemia and hypercalciuria.
Calcitriol plays a key role in calcium homeostasis. Systemically, calcitriol binds to vitamin D receptors in the kidneys, parathyroid glands, intestines, and bones to raise serum calcium levels by facilitating intestinal absorption, renal tubular reabsorption, and bone release. Acting through the vitamin D receptor as a transcription factor, calcitriol induces a calcium-binding protein that carries both calcium and phosphate ions concurrently through intestinal epithelial cells [8]. By stimulating osteoclasts via the release of receptor activator of nuclear factor kappa-B ligand (RANKL) from osteoblasts, calcitriol promotes bone resorption. By causing apoptosis, calcitriol dramatically reduces the growth of T lymphocytes and normal human epidermal keratinocytes. Through this distinct array of regulatory mechanisms, calcitriol increases serum calcium levels. Therefore, idiopathically elevated calcitriol leads to hypercalcemia that is resistant to the feedback pathways that usually reduce parathyroid hormone production and lower calcium levels. Corticosteroids inhibit the production of calcitriol in macrophages, which reduces gut absorption of calcium and RANKL expression and would theoretically promote bone retention, reducing calcium levels overall [9]. In reality, however, long-term use of glucocorticoids can result in a variety of side effects, including bone weakening. Glucocorticoids initially activate osteoclasts and later inhibit osteoblasts, which can cause osteoporosis, fractures, and osteonecrosis [10]. Long-term medium- or high-dose glucocorticoid use can cause cushingoid facial features and a buffalo hump, induce refractory hyperglycemia or worsen existing diabetes, increase the risk of infection, and cause a variety of other harmful effects [11]. Titrating the steroid dosage to keep symptoms from returning while minimizing the overall steroid burden can be difficult, yet this down-titration is necessary to ensure appropriate treatment of Idiopathic Calcitriol Induced Hypercalcemia while minimizing unintended consequences. While the patient continues steroid therapy, it is necessary to monitor for long-term toxicity associated with this treatment. For instance, bone mineral density (BMD) will need to be monitored one year after steroid initiation, with follow-up screening every two years if BMD is stable [10]. In addition to BMD, other parameters that require surveillance with long-term steroid use include lipids, glucose parameters, and intraocular pressures. The authors are considering alternative management in the future; one such alternative is a trial of mycophenolate mofetil, an immunosuppressive agent that has been effective in the treatment of sarcoidosis [12].
In summary, this report describes a 51-year-old man with symptomatic hypercalcemia in the setting of Idiopathic Calcitriol Induced Hypercalcemia. Treatment with prednisone produced marked improvement of symptoms, and long-term management continues to be optimized. This report builds on previous literature supporting Idiopathic Calcitriol Induced Hypercalcemia as a distinct disease entity while simultaneously reinforcing the need to explore alternative hypotheses for disease pathogenesis.
While identifying information was not included in this report, written informed consent was obtained from the patient to share this report in an online open-access publication.
Current progress in the regulation of endogenous molecules for enhanced chemodynamic therapy
Chemodynamic therapy (CDT) is a potential cancer treatment strategy, which relies on Fenton chemistry to transform hydrogen peroxide (H2O2) into highly cytotoxic reactive oxygen species (ROS) for tumor growth suppression. Although overproduced H2O2 in cancerous tissues makes CDT a feasible and specific tumor therapeutic modality, the treatment outcomes of traditional chemodynamic agents still fall short of expectations. Reprogramming cellular metabolism is one of the hallmarks of tumors, which not only supports unrestricted proliferative demands in cancer cells, but also mediates the resistance of tumor cells against many antitumor modalities. Recent discoveries have revealed that various cellular metabolites including H2O2, iron, lactate, glutathione, and lipids have distinct effects on CDT efficiency. In this perspective, we intend to provide a comprehensive summary of how different endogenous molecules impact Fenton chemistry for a deep understanding of mechanisms underlying endogenous regulation-enhanced CDT. Moreover, we point out the current challenges and offer our outlook on the future research directions in this field. We anticipate that exploring CDT through manipulating metabolism will yield significant advancements in tumor treatment.
Introduction
Cancer has been a severe threat to human life.1 Currently, the main cancer treatments typically involve surgery, radiotherapy, and chemotherapy, yet they offer limited clinical benefits and are usually associated with serious side effects.2 Therefore, there is an urgent need to explore effective and tumor-specific treatment modalities aiming to completely eradicate tumors and prevent alterations in genetic and energy metabolism patterns.5 In contrast, excessive ROS can cause irreversible oxidative cell damage, ultimately resulting in cell death.6 Considering the vulnerability of cancer cells to oxidative stress, several ROS-based therapies have emerged,7 including photodynamic therapy (PDT),8 radiotherapy (RT),9 sonodynamic therapy (SDT),10 chemodynamic therapy (CDT),11,12 immunotherapy, etc.13 CDT, an emerging efficient ROS-mediated therapeutic modality, was first proposed by Shi and coworkers in 2016.14 Despite extensive attempts, the therapeutic outcome of CDT relying solely on Fenton reaction catalysts remains unsatisfactory.28-31 Mounting evidence indicates that, in specific biochemical contexts, the aberrant metabolism of tumor cells can cause differential susceptibility of cells to CDT.32,33 A detailed understanding of how endogenous metabolic pathways in tumor cells influence CDT processes is of great therapeutic interest. Herein, we first summarize the mechanisms by which H2O2, iron, lactate, glutathione (GSH), and lipid metabolism impact Fenton chemistry-mediated CDT. Then, the current state-of-the-art designs of CDT nanomedicines integrating metabolic regulation abilities for enhanced tumor therapeutic efficiency will be critically discussed. Last but not least, we discuss the representative challenges and future directions for developing CDT agents. We expect the emergent metabolic reprogramming strategy to push forward research in the CDT field, which will benefit the clinical treatment of cancers.
Pathways controlling CDT
Considering that the metabolic processes of various endogenous molecules interfere with the efficiency of the Fenton-type reactions involved in CDT, researchers are beginning to focus on identifying the metabolic pathways that can impact the therapeutic efficacy of CDT. The classifications of currently identified metabolic pathways in tumors, including H2O2, iron, lactate, GSH, and lipid metabolism, are summarized in Fig. 1.
Endogenous H2O2, as an important substrate of the Fenton reaction, can be produced by various metabolic processes in cancer cells. For instance, electrons leaked from endoplasmic reticulum stress or the mitochondrial respiratory chain can be captured by molecular oxygen (O2) to form superoxide (•O2−), which is further disproportionated into H2O2 by superoxide dismutase (SOD).34 In addition, H2O2 production can also be initiated by activated nicotinamide adenine dinucleotide phosphate (NADPH) oxidase (NOX) or NAD(P)H:quinone oxidoreductase 1 (NQO1).35 The ferrous ion (Fe2+) possesses a more powerful •OH-producing ability than other Fenton-reactive metal ions, making the strategy of regulating iron metabolism attractive for achieving efficient CDT.36,37 First, the extracellular transferrin (Tf)-ferric ion (Fe3+) complex (Tf-Fe3+) is carried into cells through the cell-surface Tf receptor protein 1 (Tfr1).38 In the endosome, the ferrireductase six-transmembrane epithelial antigen of the prostate 3 (STEAP3) mediates the transformation of Fe3+ into Fe2+, and divalent metal transporter 1 (DMT1) transports Fe2+ to the cytosolic labile iron pool (LIP).39 Excess Fe2+ is excreted from cells via ferroportin or is oxidized to Fe3+ by the ferritin heavy chain (FHC), whose ferroxidase activity initiates Fe sequestration in ferritin, which may limit the Fe-driven Fenton reaction,40 while iron deficiency can trigger nuclear receptor coactivator 4 (NCOA4)-dependent autophagic ferritin degradation, namely ferritinophagy, to release free iron on demand.41 Regulating lactate metabolism can interfere with the construction of the acidic tumor microenvironment (TME), which provides a feasible strategy to meet the strong acidity requirements of an effective Fenton reaction.42,43 Glucose is imported into cells by the glucose transporter (GLUT) on the cell membrane and subsequently transformed into pyruvate in the cytoplasm. Lactate dehydrogenase (LDH) then mediates the conversion between pyruvate and lactate, and the monocarboxylic acid transporter (MCT) mediates lactic acid efflux, inducing tumor acidification.44 The cellular antioxidative system plays an indispensable role in maintaining a reductive environment against excessive ROS damage and in detoxifying lipid ROS.47 The high GSH content in tumor cells can weaken the effect of CDT by directly scavenging ROS.48 For GSH synthesis, extracellular cystine is internalized by cells via the cystine/glutamate antiporter (system Xc−) and reduced into cysteine by thioredoxin reductase 1 (TXNRD1).49 Cysteine is then conjugated with glutamate in the presence of γ-glutamylcysteine synthetase (γ-GCS) to form γ-glutamylcysteine (γ-glu-cys). Finally, GSH synthetase (GS) catalyzes γ-glutamylcysteine and glycine to produce GSH. Lipid metabolism is also closely related to the effect of CDT.50 Typically, acetyl-CoA is first carboxylated to form malonyl-CoA by acetyl-CoA carboxylase (ACC). Free polyunsaturated fatty acid (PUFA) generated from malonyl-CoA can be esterified by activated acyl-CoA synthetase long-chain family member 4 (ACSL4) and inserted into the cell membrane with the assistance of lysophosphatidylcholine acyltransferase 3 (LPCAT3), eventually improving the sensitivity of cells to CDT.51
The above metabolic processes of endogenous molecules provide numerous practical targets for CDT treatment. Next, the mechanism of CDT regulation by each metabolic pathway will be elucidated and discussed in detail, using representative examples in the corresponding section, with the aim of providing references for designing more efficient chemodynamic agents.
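Before turning to the individual pathways, a minimal kinetic sketch illustrates why the H2O2 supply matters for Fenton-type CDT. The Python code below integrates the single step Fe2+ + H2O2 → Fe3+ + •OH + OH− with an assumed second-order rate constant and illustrative dilute concentrations; Fe3+ recycling, •OH scavenging, and pH effects are deliberately ignored.

```python
def fenton(fe2_0, h2o2_0, k=60.0, dt=1e-3, t_end=30.0):
    """Integrate d[H2O2]/dt = -k [Fe2+][H2O2] with explicit Euler.

    k ~ 60 M^-1 s^-1 is an illustrative order-of-magnitude value for
    Fe2+ + H2O2 -> Fe3+ + .OH + OH- under acidic conditions; real
    intracellular kinetics involve many competing reactions not
    modeled here.
    """
    fe2, h2o2, oh = fe2_0, h2o2_0, 0.0
    for _ in range(int(t_end / dt)):
        r = k * fe2 * h2o2          # reaction rate, mol L^-1 s^-1
        fe2 -= r * dt
        h2o2 -= r * dt
        oh += r * dt                # cumulative .OH produced
    return oh

# Doubling the H2O2 supply (e.g., via NOX activation or cholesterol
# oxidation) roughly doubles cumulative .OH at these dilute conditions.
for h2o2_0 in (50e-6, 100e-6):      # 50 and 100 micromolar, assumed
    print(f"[H2O2]0 = {h2o2_0*1e6:.0f} uM -> cumulative .OH = "
          f"{fenton(10e-6, h2o2_0)*1e6:.2f} uM")
```

This near-linear dependence on the H2O2 supply is the quantitative intuition behind the H2O2-replenishment strategies discussed in the next section.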
Elevation of H2O2 contents
The H2O2 produced by cell metabolism can be converted into lethal •OH by Fenton/Fenton-like reactions.52 Despite higher levels of H2O2 in cancer cells than in non-cancer cells, the endogenously produced H2O2 is still insufficient to assure a satisfactory tumor-killing effect.53 Developing strategies aimed at promoting the accumulation of intracellular H2O2 therefore remains a primary focus of research on CDT. For example, Yang et al. constructed H2O2-responsive supramolecular polymers (PCSNs) by self-assembly of β-cyclodextrin-ferrocene (CD-Fc) conjugates with platinum(IV) complex modification to overcome this shortcoming (Fig. 2A).54 The hydrophobic Fc in PCSNs was oxidized into water-soluble ferrocenium (Fc+) upon exposure to elevated H2O2 in the tumor. The disruption of host-guest interaction in the nanostructure resulted in dissociation. Meanwhile, the released platinum(IV) prodrug could be reduced into cisplatin(II) to activate NOX-mediated H2O2 replenishment, which further promoted the Fc-initiated Fenton reaction. In vivo experiments confirmed that such a positive feedback loop, enabling self-boosting ROS generation, dramatically inhibited tumor growth. Wang and co-workers developed an amphiphilic polypeptide self-assembling nanomedicine (PtkDOX-NM) for the encapsulation of Fe3+ and β- Additionally, natural enzymes or artificial nanoenzymes have been widely applied to catalyze the transformation of intracellular substances such as glucose,56,57 lactate,58 and cholesterol into H2O2, providing an adequate substrate to accelerate the Fenton reaction. In a recent study by Lin's group, multienzyme-like Co-PN3 SA/CHO nanoagents were designed and constructed by utilizing a well-dispersed phosphorus (P)-doped cobalt single-atom nanozyme loaded with cholesterol oxidase (CHO) to achieve efficient tumor catalytic therapy for tumor metastasis inhibition (Fig. 2C).59 The as-prepared Co-PN3 SA/CHO could concurrently mimic catalase (CAT), oxidase (OXD), and peroxidase (POD) to produce •O2− and highly cytotoxic •OH for cancer cell killing. Moreover, CHO effectively catalyzed the oxidation of intratumoral cholesterol to generate H2O2 and cholestenone, promoting the Co-PN3 SA-mediated Fenton reaction. Simultaneously, the CHO-catalyzed depletion of cholesterol disrupted lipid raft integrity and inhibited invasive lamellipodium formation, suppressing tumor progression and metastasis. This work provides a promising strategy to improve therapeutic efficiency through the combination of CHO-mediated cholesterol consumption and Co-PN3 SA-induced •OH generation. Notably, bacteria engineered to overexpress respiratory chain enzyme II (NDH-2 enzyme) have also been utilized to elevate intracellular H2O2 content. For example, Fan et al. designed an integrative bioreactor by assembling magnetic Fe3O4 nanoparticles on the surface of the NDH-2 enzyme-functionalized nonpathogenic bacterium Escherichia coli MG1655.60 During Escherichia coli MG1655 respiration, the NDH-2 enzyme accepted electrons from NADH and subsequently transferred them to O2 to generate H2O2. The Fe3O4 nanoparticles converted the sustainably synthesized H2O2 into •OH through the Fenton-like reaction, achieving H2O2 self-supplying CDT.
Disruption of iron homeostasis
The conversion of intracellular H2O2 into •OH via Fenton chemistry depends on Fenton catalysts.61 Fe2+ and its derivatives serve as excellent Fenton catalysts due to their high catalytic activity in •OH generation.62 Generally, tumor cells require more iron than non-tumor cells to maintain rapid proliferation, while coordinating the regulation of iron metabolism to avoid excess iron-induced toxicity.63 The regulation of intracellular iron homeostasis, encompassing iron storage/release and import/export, has therefore been regarded as a promising approach for reinforcing CDT. For instance, Bu's group prepared a mesoporous silicon nanocarrier loaded with amorphous elemental Fe0 and 2-amino-1,2,4-triazole (AT, a catalase inhibitor) (DMON@Fe0/AT) to integrate redox regulation and iron metabolism disruption for enduring CDT (Fig. 3A).64 Once DMON@Fe0/AT was endocytosed by tumor cells, acidic pH triggered Fe2+ and AT release from the nanocarrier. The released AT suppressed catalase activity to promote H2O2 accumulation, and the exposed DMON mediated GSH depletion, thus enhancing the Fe2+-driven Fenton reaction. The elevated intracellular •OH induced mitochondrial dysfunction, leading to downregulation of the cell membrane protein ferroportin 1 and further disruption of intracellular iron homeostasis. In short, the as-prepared intelligent biodegradable DMON@Fe0/AT demonstrates a new method for effective CDT by blocking the iron exporter and regulating redox homeostasis.
Our group recently proposed a Fe0-small interfering RNA (siRNA) composite system (Fe0-siRNA) to realize FHC downregulation-potentiated CDT (Fig. 3B).65 When the Fe0-siRNA nanoparticles (Fe0-siRNA NPs) were internalized into tumor cells and located in mildly acidic lysosomes, they disassembled in the presence of O2 and released Fe2+ and siRNA. The acidity amplification resulting from O2 depletion promoted further degradation of the Fe0-siRNA NPs. The Fe2+ from Fe0-siRNA NPs not only activated the conversion of intracellular H2O2 into highly toxic •OH for CDT, but also facilitated endosome escape of the FHC siRNA through •OH-triggered lipid peroxidation (LPO) of endo/lysosomal membranes. More importantly, FHC silencing with siRNA suppressed the transformation of Fe2+ into the less Fenton-active Fe3+ and the sequestration of Fe in ferritin, allowing the accumulation of more reactive Fe2+ and increasing the CDT efficiency. In another study, Du et al. decorated a thiamine pyrophosphate (TPP)-etched open-cavity metal-organic framework (MOF) with chitosan oligosaccharide (COS) through electrostatic interaction to develop a nanoplatform (termed COS@MOF) enabling hydrogen sulfide (H2S)-activated CDT and ferroptosis for colon cancer treatment (Fig. 3C).66 On the one hand, the upregulated H2S could boost the transformation of Fe3+ into Fe2+ to improve Fenton reaction efficiency. On the other hand, the autophagy inducer COS enabled the degradation of ferritin to promote the release of iron from ferritin and Fe2+ replenishment, further enhancing CDT and ROS-mediated ferroptosis. In another example, Li et al. developed a nanodrug (T10@cLAV) integrating the transferrin-homing peptide T10 and crosslinked lipoic acid vesicles for Fe2+ self-supplying tumor therapy.67 T10@cLAV bound to the overexpressed Tf receptor on cancer cells (forming Tf@T10@cLAV) and was subsequently imported into endosomes/lysosomes, whose acidic environment triggered the degradation of Tf, inducing the sustained release of iron ions and activating the Fe-driven Fenton reaction. More importantly, cLAV was disrupted with the assistance of GSH and TXNRD1 to release dihydrolipoic acid (DHLA) for DHLA-mediated reduction of Fe3+ to Fe2+, resulting in an increase in Fe2+ levels that further enhanced the nanocatalytic therapy. Completely unlike traditional treatment strategies involving the introduction of exogenous Fenton-type metal ions, our group first utilized the intracellular LIP to provide a continuous catalyst for the generation of toxic free radicals through Fenton-type chemistry (Fig. 3D).68 We developed a pH-sensitive polymer for the co-delivery of a model ROOH molecule and a LIP-increasing agent, which can sequentially respond to pH and labile iron in the TME to realize enhanced CDT.
Interference of lactate metabolism
In addition to Fenton catalyst activity, the reaction conditions, particularly acidity, also influence the kinetics of the Fenton reaction.69 It has been confirmed that acidic pH favors the Fenton reaction.70 Lactate is considered a metabolic byproduct of glycolysis in tumor cells, leading to elevated tumor acidity.71 Consequently, regulating lactate metabolism is an alternative strategy for enhancing CDT.72 Shi's group reported a pH-responsive and self-augmented nanoplatform (defined as FePt@FeOx@TAM-PEG) fabricated by incorporating a core-shell FePt@FeOx nanocatalyst and the pH-responsive drug tamoxifen (TAM) in a poly(styrene-co-maleic anhydride) (PSMA) polymeric matrix and further modifying the surface with PEG (Fig. 4A).73 Once internalized by cancer cells, the nanoplatform disintegrated due to acidic pH-triggered hydrophobic-hydrophilic transitions of TAM and released the FePt@FeOx nanocatalyst, which catalyzed the decomposition of endogenous H2O2 to produce lethal •OH to damage cancer cells. Meanwhile, the liberated TAM could suppress mitochondrial complex I, thus increasing the intracellular lactate level and elevating environmental acidity. This simultaneously amplified •OH production and accelerated disassembly of the FePt@FeOx@TAM-PEG nanostructure, inducing more therapeutic cargo release. As a result, this intelligent nanoplatform overcomes the limitation of weakly acidic conditions in tumors by introducing metabolism regulators to upregulate lactate content, thereby achieving an enhanced antitumor CDT effect. Another related study involving the modulation of lactate transport to remodel the reaction environment is worth noting as well. Wang et al. prepared a calcium phosphate-based biomineralized multifunctional nanosystem co-delivering a DOX-Fe2+ complex and MCT4-inhibiting siRNA (siMCT4) (CaP-DOX@Fe2+-siMCT4-PEG-HA) for interfering with lactate efflux and enhancing antitumor CDT (Fig. 4B).74 The nanosystem was specifically taken up by tumor cells with the aid of hyaluronic acid (HA)-mediated targeting and degraded in the acidic environment of lysosomes due to the pH-triggered hydrolysis of CaP, resulting in the dissociation of the DOX-Fe2+ complex and the release of Fe2+, DOX, and siMCT4. MCT4 silencing by siMCT4 could exacerbate intracellular acidification by blocking lactate efflux, which not only enhanced the Fe2+-driven Fenton reaction but also decreased adenosine triphosphate (ATP) production to inhibit DOX efflux, thus significantly improving the treatment efficiency. Typically, Wang's group developed an intelligent bioreactor (Sa@FeS) in which ferrous sulfide (FeS) nanoparticles were anchored on the surface of a Salmonella typhimurium strain (Sa) via biomineralization.75 The effective tumor penetration of Sa enabled the FeS nanoparticles to mediate photothermally enhanced Fenton catalytic reactions in deep tumor tissues upon 1064 nm laser irradiation. At the same time, the H2S produced by Sa metabolism can facilitate glucose uptake by tumor cells, elevating intracellular lactic acid levels and ultimately boosting FeS-mediated CDT. Given the limited therapeutic effect of lactate consumption alone, simultaneous regulation of multiple metabolic pathways may be a promising way to enhance CDT. Wu et al.
proposed a strategy with a dual effect on lactate metabolism based on lactate oxidase (LOX) and syrosingopine (Syr) co-loaded hollow Fe3O4 nanoparticles (denoted Syr/LOD@HFN), which combines LOX-mediated lactate consumption and lactate transport blockade (Fig. 4C).76 In the TME, LOX could catalyze lactate to generate H2O2 in situ, while Syr-induced MCT4 downregulation achieved lactate metabolism blockade, leading to continuous H2O2 production and pH reduction that boosted the Fe3O4-catalyzed Fenton reaction. Owing to the augmented ROS production and the reversal of the lactate-related immunosuppressive TME, this nanoplatform caused severe immunogenic cell death in tumor cells and promoted antitumor immunity. This work reveals that manipulating multiple lactate metabolic pathways enables a better therapeutic effect than a single-regulation strategy.
Depletion of GSH
It is well established that GSH is capable of potent ROS scavenging, endowing cancer cells with the ability to resist oxidative stress and weakening the effect of CDT. Generally, metals in high oxidation states can consume GSH by undergoing redox reactions with it.77 Our group first reported the synthesis of manganese dioxide (MnO2)-coated mesoporous silica nanoparticles for precise magnetic resonance imaging (MRI)-guided self-reinforcing CDT. Upon encountering high levels of GSH, the MnO2 NPs were degraded to release Mn2+, which not only initiated CDT by Mn2+-mediated Fenton-like chemistry but also acted as an MRI contrast agent to monitor the CDT process.78 Recently, redox-responsive nanosized frameworks coordinated with Cu2+ and 2,3,6,7,10,11-hexahydroxytriphenylene (HPT, a catechol ligand), namely CuHPT, were designed to eliminate drug-resistant cancers by disrupting cellular redox homeostasis (Fig. 5A).79 Results indicated that, upon activation by GSH, CuHPT nanoparticles displayed enhanced CDT with remarkable tumor growth inhibition and negligible side effects via a combination of auto-oxidation and Cu+-catalyzed reactions. In addition to introducing oxidizing agents to consume GSH, numerous electrophilic reagents that can directly react with the thiol group of GSH have been utilized to reduce GSH content.80,81 Guo et al. fabricated a hybrid mesoporous silica/organosilicate nanocomposite (MSN@MON), coated its surface with disulfide bond-containing poly(acrylic acid), and loaded it with N,N,N′,N′-tetrakis(2-pyridinylmethyl)-1,2-ethanediamine (TPEN, a metal ion chelator) to obtain MSN@MON-TPEN@PAASH (Fig. 5B).82 Because of GSH-induced disulfide bond cleavage in MSN@MON-TPEN@PAASH, intracellular GSH was partially consumed and the NPs collapsed to release TPEN. The liberated TPEN could chelate Cu2+ in Cu-Zn superoxide dismutase, deactivating the enzyme and forming a TPEN-Cu(II) complex. The TPEN-Cu(II) complex was then reduced into TPEN-Cu(I) by the remaining GSH, further weakening the GSH-based antioxidant defense and ultimately promoting TPEN-Cu(I)-driven Fenton-like reactions for self-reinforced CDT.
Normally, GSH biosynthesis occurs in the cytosol of almost all cells.83 Blocking GSH synthesis with pharmacological agents is therefore a good way to decrease GSH levels in cancer cells. Dong and co-workers reported a biocompatible liposomal nanoformulation containing L-buthionine sulfoximine (BSO) and gallic acid-ferrous ion (GA-Fe(II)) complexes to inhibit GSH synthesis for combined CDT-RT (Fig. 5C).84 Interestingly, GA boosted the transformation of Fe3+ into Fe2+, promoting the Fenton reaction to generate highly cytotoxic •OH. Besides, BSO limited the rate of GSH synthesis by inhibiting the activity of glutamate-cysteine ligase, thus intensifying the Fenton reaction-enabled cancer cell damage and CDT. Moreover, Huang and co-workers developed a multi-faceted GSH depletion-potentiated CDT system (TP/2-DG@HMnO2@HA, TDMH), in which 2-deoxy-D-glucose (2-DG) and triptolide (TP) were co-encapsulated in hollow mesoporous MnO2 (HMnO2), followed by HA surface modification for tumor targeting (Fig. 5D).85 When the nanomedicine reached the tumor, MnO2 underwent a redox reaction with GSH, yielding glutathione disulfide (GSSG) and Mn2+ that could drive a bicarbonate-assisted Fenton-like reaction, leading to intracellular GSH consumption as well as •OH production. Meanwhile, the HMnO2 structure disintegrated, triggering the release of TP and 2-DG. TP downregulated solute carrier family 7 member 11 (SLC7A11) expression and decreased cysteine uptake, and its combination with the glycolysis inhibitor 2-DG significantly reduced the intracellular ATP level and blocked the regeneration of GSH synthesis. Owing to the synergistic effect of GSH depletion and GSH synthesis inhibition, this nanoplatform efficiently strengthened CDT against tumors.
Regulation of lipid metabolism
The ROS accumulation induced by Fenton-type reactions can generally cause LPO. PUFAs, a family of membrane lipids possessing two or more carbon-carbon double bonds, are extremely susceptible to ROS-initiated LPO.86,87 Therefore, the CDT susceptibility of cancer cells can be raised by increasing the PUFA content of membrane lipids.88 Taking advantage of this strategy, our group constructed a chemodynamic nanoagent (OA@Fe-SAC@EM) capable of modulating PUFA metabolism by loading oleanolic acid (OA) onto a single-atom Fe-anchored hollow carbon nanosphere (Fe-SAC) and further coating it with an erythrocyte membrane (EM) for enhanced CDT (Fig. 6A).89 After internalization by tumor cells, Fe-SAC could catalyze intracellular H2O2 to generate •OH, which not only mediated CDT but also disrupted the surface EM for controlled release of OA. Notably, OA caused dramatically elevated expression of ACSL4, increasing membrane unsaturation via enriched PUFAs. As a result, this nanoagent exerted enhanced chemodynamic efficiency through LPO amplification. Another relevant study was developed by Liu and co-workers, who prepared a FeCo/Fe-Co dual-atom nanozyme (FeCo/Fe-Co DAzyme) possessing four enzyme-like activities and subsequently loaded lipoxygenase (LOX) and phospholipase A2 (PLA2) into these NPs to fabricate a six-enzyme co-expressed nanoplatform (FeCo/Fe-Co DAzyme/PL) for enhanced catalytic therapy and immunotherapy (Fig. 6B).90 The FeCo/Fe-Co DAzyme in this nanomedicine exhibited POD-like and OXD-like activities, catalyzing both O2 and H2O2 to generate •O2−, and activated immunotherapy. Additionally, Yang and co-workers co-loaded the Fenton catalyst hemin and lipoxidase into a pH-responsive self-assembled CaCO3-encapsulated poly(lactic-co-glycolic acid) (PLGA) nanoreactor for cascade catalytic amplification of tumor oxidative stress.91 Upon entering tumor cells, the CaCO3 could be degraded in the acidic microenvironment to release hemin and lipoxidase. Hemin catalyzed the Fenton reaction to convert H2O2 into toxic •OH for ROS-initiated LPO. Moreover, lipoxidase further oxidized PUFA to generate lipid hydroperoxides, leading to massive lipid hydroperoxide accumulation and aggravated membrane LPO, eventually causing tumor cell death.
Summary and outlook
In this perspective, we focused on the metabolic pathways of endogenous molecules within the TME and discussed their availability for modulating Fenton chemistry in antitumor CDT. We briefly outlined the fundamental principles of the regulatory network involving endogenous molecules. We then thoroughly discussed the relevance of Fenton chemistry to H2O2, iron, lactate, GSH, and lipid metabolism. In these sections, we described recent progress in developing chemodynamic agents that modulate endogenous metabolites to strengthen tumor CDT, aiming to attract the broad interest of researchers from various fields. While interfering with multiple endogenous metabolic regulations has shown promise in boosting the effectiveness of traditional CDT, addressing the following issues is crucial to advancing the development of CDT. Firstly, the endogenous metabolic pathways of cancer cells vary greatly owing to significant differences in tumor types, sizes, locations, and stages. Therefore, selecting appropriate experimental models and developing antitumor strategies with enhanced targeting ability are essential to achieve precise therapy. Secondly, the metabolic network of cancer cells is complex, and there can be crosstalk among different metabolic pathways involving various molecules. Inhibiting the metabolic pathway of a single endogenous molecule may lead to the development of compensatory metabolic pathways in tumor cells, conferring resistance to the single intervention and compromising therapeutic efficacy. Undoubtedly, gaining a comprehensive understanding of the interdependencies among different metabolic systems would provide new ideas for the design of endogenous regulation-based CDT. Additionally, we have explored only several endogenous metabolic pathways, while the abundance of other cellular metabolites has not been exploited for their roles in influencing CDT; research in this field is much needed. Meanwhile, it is important to consider that most of the metabolic pathways associated with these molecules are pervasively present in both normal and cancer cells. Thus, considerable effort should be devoted to enhancing the tumor specificity of chemodynamic nanomedicines that regulate endogenous metabolites, to mitigate their adverse effects on normal tissues and organs. Despite these challenges, growing knowledge of Fenton chemistry and its endogenous regulators foresees the emergence of novel avenues and significant potential for endogenous regulation-based strategies to enhance CDT in antitumor therapy.
Fig. 1 Schematic illustration of endogenous metabolic pathways regulating Fenton chemistry-mediated CDT in tumor cells.
Fig. 2 (A) The main synthesis and therapeutic process of PCSNs. Reproduced with permission from ref. 54. Copyright 2021, Wiley-VCH GmbH. (B) Schemes displaying the construction process of PtkDOX-NM and its therapeutic mechanisms. Reproduced with permission from ref. 55. Copyright 2019, Wiley-VCH GmbH. (C) Illustrative diagram of the antitumor mechanism of Co-PN3 SA/CHO. Reproduced with permission from ref. 59. Copyright 2023, Wiley-VCH GmbH.
Fig. 3 (A) Schematic presentation of the synthetic procedure of DMON@Fe0/AT and the corresponding anticancer process. Reproduced with permission from ref. 64. Copyright 2021, Wiley-VCH GmbH. (B) Schematic description of the sequential release of Fe2+ and siRNA from Fe0-siRNA NPs responding to tumor acidity for enhancement of CDT. Reproduced with permission from ref. 65. Copyright 2023, Wiley-VCH GmbH. (C) Illustration of an engineered MOF with amplified ferritinophagy to cause iron ion overload for improved antitumor therapy. Reproduced with permission from ref. 66. Copyright 2023, Wiley-VCH GmbH. (D) The scheme of factors affecting the endogenous labile iron-based CDT outcome of ROOH and the utilization of MLH-loading nanomedicines for CDT. Reproduced with permission from ref. 68. Copyright 2020, American Chemical Society.
Investigation of Cycle-to-Cycle Variability of NO in Homogeneous Combustion
Cyclic variability of spark ignition engines is recognized as scatter in recorded combustion parameters during actual operation at steady-state conditions. Combustion variability may occur due to fluctuations in both early flame kernel development and turbulent flame propagation, with an impact on fuel consumption and emissions. In this study, a detailed chemistry model for the prediction of NO formation under homogeneous engine conditions is presented. The Wiebe parameterization is used to predict the heat release; the calculated thermodynamic data are then fed into the chemistry model to predict NO evolution at each degree of crank angle. Experimental data obtained from literature studies were used to validate the calculated mean NO levels. The model was then applied to predict the impact of cyclic variability on mean NO and on the amplitude of its variation. Cyclic variability was simulated by introducing random perturbations, following a normal distribution, to the Wiebe function parameters. The results of this approach show that the proposed model predicts mean NO formation better than earlier methods. They also show that, due to the non-linear formation rate of NO with temperature, cycle-to-cycle variation leads to higher mean NO emission levels than one would predict without taking cyclic variation into account.
Résumé — Investigation of cycle-to-cycle variability of NO in homogeneous combustion — Cyclic variability of spark ignition engines is recognized as scatter in the recorded combustion parameters during actual operation at steady-state conditions. Combustion variability may occur due to fluctuations in early flame kernel development and in turbulent flame propagation, with an impact on fuel consumption and emissions.
This study presents a detailed chemistry model for predicting NO formation under homogeneous combustion conditions. The Wiebe parameterization is used to predict the heat release; the calculated thermodynamic data are then fed into the chemistry model to predict NO evolution at each degree of crank angle. Experimental data obtained from earlier publications were used to validate the calculated mean NO levels. The model was then applied to predict the impact of cyclic variability on the mean amount of NO formed and on the amplitude of its variation. Cyclic variability was simulated by introducing random perturbations, following a normal distribution, to the Wiebe function parameters. The results of this approach show that the proposed model predicts mean NO formation better than previous methods. They also show that the non-linear formation rate of NO with temperature, combined with cycle-to-cycle variation, leads to higher mean NO emission levels than predicted without taking cyclic variation into account.
INTRODUCTION
Combustion in engines evolves differently in each operating cycle, even at steady-state operating conditions. Experimentally, cycle-to-cycle variability (CCV) is best observed as the scatter of the measured cylinder pressure around the mean pressure curve. Such fluctuations of the cylinder pressure have an impact on engine performance [1], fuel consumption [2], and pollutant emissions [3,4], while in some extreme cases, such as highly diluted lean mixtures, they can result in misfiring or knocking [2]. The coefficient of variation of the indicated mean effective pressure (COV_imep) is used for the classification of CCV [5]. In general, COV_imep should be limited to up to about 10% in order to avoid vehicle drivability problems [5,6].
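For reference, COV_imep is conventionally computed as the standard deviation of imep over a set of recorded cycles divided by its mean, expressed as a percentage. A minimal Python sketch, with made-up imep samples, is:

```python
import numpy as np

def cov_imep(imep_cycles):
    """COV_imep = 100 * std(imep) / mean(imep), in percent."""
    imep = np.asarray(imep_cycles, dtype=float)
    return 100.0 * imep.std(ddof=1) / imep.mean()

# Illustrative steady-state recording (bar); the values are invented.
imep = [7.9, 8.2, 8.0, 7.6, 8.3, 8.1, 7.8, 8.0]
print(f"COV_imep = {cov_imep(imep):.1f} %  (drivability limit ~10 %)")
```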
There are several possible causes of CCV. These include variations in early flame kernel development due to cycle-to-cycle variance in the spark or in the turbulence conditions in the spark neighbourhood. Kernel development affects flame propagation, which in turn results in different macroscopic combustion parameters. The spark discharge characteristics [2], the local equivalence ratio of the mixture and its inhomogeneity close to the spark plug [2,7,8], the turbulence in the vicinity of the spark plug at the ignition time [8], and the mixture temperature and pressure at the time of ignition [8] are all related to the variations in early flame kernel development. On the other hand, the overall equivalence ratio [9], the extent of mixture homogeneity [10,11], the residual gas fraction of the mixture [10], and the averaged turbulence intensity [12-16] are factors that affect the main flame propagation.
Combustion CCV also leads to variability in the combustion products. NO_x formation in particular shows a strong dependence on combustion duration: NO_x emissions decrease as the combustion time decreases, and this dependence becomes stronger as the air-fuel ratio becomes leaner [17]. In other studies, the variance of NO_x was found to be higher than the variance of imep and of the maximum combustion pressure [3,18].
There have been several modeling approaches aiming at simulating combustion development and pollutant formation in SI engines. The Wiebe function [19] has been applied in most studies for the approximation of the heat release due to fuel consumption. However, this empirical function has no physical meaning and its predictive capability is not always satisfactory. Zero-dimensional phenomenological models may better approach the actual physics, taking into account different temperature zones and compositions. However, the turbulence conditions in the combustion chamber cannot be represented with this kind of model [20], hence they cannot be used to simulate CCV. As a result, CFD models (1D/3D) are mainly used for the simulation of CCV, because they are able to simulate precisely both the rate of the early flame development and the flame propagation [12,13,21,22]. Their disadvantages are their high computational cost and the difficulty of setting up a satisfactory combustion CFD model [20].
In SI modeling, NO emissions are usually simulated by applying the extended Zeldovich mechanism, also known as the thermal mechanism [23,24]. However, in stoichiometric and slightly rich mixtures, the prompt (also known as Fenimore) mechanism can be responsible for up to 15% of the total nitric oxide emissions [25].
The objective of this study is the investigation of the effect of combustion CCV on nitric oxide emissions, using a detailed chemical mechanism. The simple two-zone Wiebe model is used for the description of the mixture temperature and pressure during combustion. The thermodynamic parameters of each cycle are used as input to the detailed chemical mechanism for the prediction of NO formation as a function of crank angle. The model is then used to predict the impact of CCV on NO emission levels.
MODEL APPROACH
The model presented in this paper consists of a detailed chemical mechanism coupled to a two-zone Wiebe model [19,23]. For the purposes of this study, the three parameters of the Wiebe function were individually perturbed around central values to simulate CCV, thus affecting the burning rate and the NO formation.
Thermodynamic Model
The commercial engine simulation package AVL BOOST was used for the simulation of the heat release rate and the in-cylinder thermodynamic properties. The combustion submodel used for the prediction of heat release was a two-zone Wiebe model. The Wiebe function describes the burned gas mass fraction at a given crank angle:

x_b(φ) = 1 − exp[−a ((φ − φ_SOI) / Δφ_CD)^(m+1)]   (1)

In Equation (1), φ_SOI is the crank angle degree at which ignition starts, Δφ_CD is the duration of combustion in crank angle degrees, m is a shape parameter of the Wiebe function, and a is a combustion efficiency parameter.
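A minimal sketch of Equation (1), assuming the common choice a ≈ 6.9 (about 99.9% of the fuel burned at the end of combustion); the example parameter values are illustrative and not taken from the paper.

```python
import numpy as np

def wiebe_burned_fraction(phi, phi_soi, dphi_cd, m, a=6.9):
    """Burned gas mass fraction x_b(phi) from the Wiebe function; phi, phi_soi
    and dphi_cd are in crank angle degrees, m is the shape parameter."""
    z = np.clip((np.asarray(phi, dtype=float) - phi_soi) / dphi_cd, 0.0, None)
    return 1.0 - np.exp(-a * z ** (m + 1))

phi = np.linspace(-30.0, 60.0, 91)   # crank angle grid (TDC = 0 deg)
xb = wiebe_burned_fraction(phi, phi_soi=-15.0, dphi_cd=50.0, m=2.0)
```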
The two-zone approach consists of a burned zone, with one temperature for the combustion products, and an unburned zone, with a different temperature for the unburned mixture and any residuals from the previous combustion cycle. A uniform pressure in both zones is assumed. Although the Wiebe model is empirical and is not recommended for investigating the origins of CCV, varying its parameters provides a good approximation for simulating the variability of the combustion exothermy.
Emission Model
The chemistry model used to predict NO formation was based on SENKIN, a FORTRAN-based code developed at Sandia National Laboratories [26] that later evolved into the CHEMKIN software package. SENKIN calculates combustion evolution in homogeneous gas-phase mixtures. The code solves the chemical kinetics differential equations and predicts the formation rate of the products. The solution can be obtained for constant-pressure, constant-volume, or constant-temperature conditions. The default reaction scheme of SENKIN v1.8 used in this study consisted of 53 species and 325 chemical reactions [26,27]. The reaction scheme involves a number of carbon-nitrogen species and radicals relevant to NO formation chemistry, including HCN, H2CN, CN, HCNO and HOCN. Figure 1 illustrates the coupling between the thermodynamic and the chemical kinetic modeling developed in the current study. The thermodynamic data for the hot zone are imported into the converter at each crank angle. The burned mass fraction from the Wiebe function defines the newly burned moles which enter from the flame front into the burned zone. The newly burned moles are calculated from the oxidation rate of the fuel, according to the combustion stoichiometry shown in Table 1.
The newly burned moles and the composition from the previous step are imported as the initial input composition of the burned zone in SENKIN. SENKIN then calculates as output the new composition of the burned zone, which is imported again at the next crank angle.

Figure 1: Schematic of the emission modeling approach.
When the thermodynamic model reaches the end of combustion, no new moles are added to the SENKIN input. The loop therefore ends and the kinetics are thereafter considered frozen. In earlier typical two-zone models [18,22,23,28] only the thermal mechanism was considered, while the other species required by the thermal mechanism (H2, H, O2, O, OH, H2O) were calculated assuming equilibrium. The proposed emission model uses a detailed chemical mechanism which includes both the thermal and the prompt mechanisms, while the other necessary species are calculated from detailed kinetics. This improves the precision of the NO_x prediction, at a cost in computational time. A reduced chemical mechanism with explicit kinetics for the intermediate species could serve as a compromise between accuracy and computational time.
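The crank-angle coupling loop described above can be sketched as follows. This is a schematic only: `kinetics_step` is a placeholder stand-in for the actual SENKIN call, and the species names and numbers in the demo are invented for illustration.

```python
def kinetics_step(moles, T_burned, p, dt):
    # Stand-in for SENKIN: a detailed-chemistry solver would advance the
    # burned-zone composition here; this stub returns it unchanged.
    return moles

def burned_zone_evolution(thermo_table, newly_burned_moles, dt_per_deg):
    """thermo_table: per-crank-angle (T_burned, p) pairs from the Wiebe model;
    newly_burned_moles: dict of moles entering the burned zone at each degree,
    or None once combustion has ended."""
    moles = {}  # burned-zone composition: species -> moles
    for (T_b, p), fresh in zip(thermo_table, newly_burned_moles):
        if not fresh:      # end of combustion: kinetics considered frozen
            break
        for species, n in fresh.items():
            moles[species] = moles.get(species, 0.0) + n
        moles = kinetics_step(moles, T_b, p, dt_per_deg)
    return moles

final = burned_zone_evolution(
    thermo_table=[(2450.0, 40e5), (2400.0, 38e5), (2350.0, 36e5)],
    newly_burned_moles=[{"CO2": 1.0, "H2O": 2.0}, {"CO2": 0.5, "H2O": 1.0}, None],
    dt_per_deg=1e-4,
)
```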
Modeling of NO CCV
The modeling of NO CCV was performed by introducing perturbations into the Wiebe function parameters: the ignition timing (SOI), the Combustion Duration (CD) and the shape parameter m. Each of these three parameters was described by a normal distribution, characterized by a mean value and a standard deviation. The mean value of each distribution was the Wiebe value of the mean-cycle model, while the range of the perturbations was taken from experimental data, as discussed later. Finally, the CCV thermodynamic data were imported into the detailed chemical mechanism. This procedure was implemented in MATLAB.
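A minimal sketch of the perturbation step; the means and standard deviations below are illustrative stand-ins, since the actual ranges were taken from experimental data.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_wiebe_parameters(n_cycles, means, stds):
    """Draw (SOI, CD, m) triples for n_cycles simulated engine cycles, each
    normally distributed around its mean-cycle value."""
    return (rng.normal(means["soi"], stds["soi"], n_cycles),
            rng.normal(means["cd"], stds["cd"], n_cycles),
            rng.normal(means["m"], stds["m"], n_cycles))

soi, cd, m = sample_wiebe_parameters(
    100,
    means={"soi": -15.0, "cd": 50.0, "m": 2.0},   # mean-cycle Wiebe values
    stds={"soi": 1.0, "cd": 2.5, "m": 0.1},       # perturbation ranges
)
# Each (soi[i], cd[i], m[i]) defines one cycle whose thermodynamic trace is
# then fed to the detailed chemical mechanism.
```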
Modeling Assumptions
The following assumptions were made to simplify the emission modeling and the CCV analysis:
- uniform pressure in the cylinder (burned and unburned zones at the same pressure);
- complete combustion of the hydrocarbon fuel with air;
- uniform composition in the burned zone;
- NO_x emissions consisting solely of NO.
The validity and impact of these assumptions on the final results are examined in the results section.
EXPERIMENTAL DATA
Experimental data are necessary for the validation of the model developed in this study. In most CCV analyses, only thermodynamic data are measured, without emission data. Ball et al. (1998) [4] used experimental data from a Rover K4 optical engine to investigate cycle-to-cycle variation in combustion and NO emissions; the fuel used in those experiments was methane. That engine was simulated in the present study, as many of the engine specifications necessary for the modeling are contained in that publication and are summarized in Table 2. The model was applied to this engine and the simulation results were compared with the experimental data for validation. The optical engine was measured under partial load and Wide Open Throttle (WOT) conditions, for different ignition timings and lambda values. Information about the engine performance and the engine emissions (NO_x, HC) was also available for each measured engine point.
EFFECT OF ENGINE OPERATION PARAMETERS ON EMISSIONS
Based on the experimental data presented in the previous section, Figure 2 presents imep and NO_x concentration for stoichiometric combustion during Partial Load (PL) and WOT operation. It is observed that, as the Start Of Ignition (SOI) changed from 15° BTDC to 45° BTDC, the partial-load imep differed by only 18% and the WOT imep by 9.5%, while the corresponding NO_x concentrations changed by 183% and 46%, respectively. This shows how much more sensitive NO formation is to changes in the combustion parameters than the thermodynamic properties of the engine. The corresponding graph for lean operation (λ = 1.5) is shown in Figure 3. The impact of the variation of the combustion parameters on NO_x formation is even more magnified in this case compared with stoichiometric combustion.
By comparing the two cases, it is observed that in lean operating conditions the impact of the ignition timing on the indicated mean effective pressure is higher than for the stoichiometric mixture, an observation in agreement with other studies [2-9]. These data suggest that cycle-to-cycle combustion variability is more pronounced in lean and highly diluted mixtures, even for slight modifications of the combustion parameters. In addition, such conditions lead to high NO_x formation, hence the CCV effect is magnified in this case as well.
This nonlinearity of NO_x formation is not easy to capture in detail with a simplified mechanism. Hence, a detailed and more precise chemical mechanism is applied in this study, in order to simulate this nonlinearity and the high sensitivity of NO_x formation. The model presented in this study can be used to predict the amplitude of variation of NO_x emissions due to CCV and, in this way, to predict more accurately the compliance of an engine with a given emission limit.
RESULTS
For validation, the proposed emission model is first used to predict the measured Rover K4 NO_x emissions at both stoichiometric and lean conditions. First, the measured Rover K4 data on NO_x emissions and engine performance characteristics are used to relate the trends between performance and emissions. Second, the comparison between simulated and experimental cycle-averaged NO data is presented for the validation of the simulation. Measurements and simulations are discussed, and the importance of the prompt NO formation mechanism is justified. Finally, the NO CCV is investigated.
Mean Cycle NO Modeling
The Rover K4 was simulated with the AVL BOOST model using mean-cycle Wiebe parameters, and the results were compared with the experimental data [4]. The comparison between experimental and simulated data refers to the imep, the maximum pressure during the combustion phase, the crank angle degree where the maximum pressure occurs, and the crank angle degree where 10% of the fuel mass is burned (MFR). All these data are presented in Table 3. Each point in Table 3 is designated by the initial P or W, corresponding to partial load or wide open throttle operation respectively, followed by two digits corresponding to the lambda value (10 corresponding to λ = 1 and 15 to λ = 1.5), followed by two digits giving the crank angle degree, before top dead centre, where ignition starts.
The predicted thermodynamic data of the ten simulated operating points were used as input for the NO prediction. The simulated NO emissions are compared with the experimental NO emissions in Figure 4 for stoichiometric combustion and in Figure 5 for lean combustion. For stoichiometric combustion, NO emissions are presented with and without the effect of the prompt mechanism; the prompt mechanism was switched off by zeroing the HCN radicals in the chemical mechanism.
The model appears to have rather good accuracy over a wide NO_x range, from concentrations of less than 10 ppm (P1515) to more than 2 000 ppm (W1030). For the cases where large differences can be seen (e.g. W1015), one should also note related differences in the thermodynamic data and not only in the reaction modeling. Cases with lower thermodynamic error show better NO_x predictions (e.g. P1015). By using a more sophisticated combustion model [21,22], the burning rate prediction could be improved, with a significant improvement in the NO_x prediction as well.
As one might expect, the availability of oxygen is the key variable affecting the NO_x prediction. This may be an additional reason for the differences between measured and simulated data. Within the typical stoichiometric window (0.95 < λ < 1.05) that occurs in actual engines during stoichiometric operation, slight differences in lambda can affect the total amount of NO_x formed during combustion. The stoichiometric cases of the experimental data were therefore also simulated with a slightly rich (λ = 0.95) and a slightly lean (λ = 1.05) mixture. The results are presented in Figure 6. The measured NO_x concentration almost always lies between the slightly lean and the slightly rich simulated values. Hence, slight departures from the set lambda in the experiments may be a significant reason for the difference between experiment and simulation. Figure 4 also shows that the prompt mechanism increases the total NO_x concentration by 10%-15% in the case of stoichiometric combustion; including prompt formation therefore increases the accuracy of the chemical mechanism. Bachmaier et al. (1973) [25] used an experimental configuration to determine the equivalence ratio at which the prompt mechanism becomes significant for the total NO_x formed by various hydrocarbon mixtures. They found that, for methane, prompt NO formation starts to become significant as the mixture moves towards stoichiometry from λ = 1.33, while the prompt mechanism was negligible for leaner (λ ≥ 1.5) conditions. Our results confirm the significance of the prompt mechanism, in addition to the thermal one, even for stoichiometric combustion.
The thermodynamic input is also important in lean conditions; however, the oxygen availability does not affect the final results as much as in the stoichiometric case. In lean combustion, it seems that non-homogeneities in the burned zone can become important for accurately predicting the final NO emissions. Multi-zoning is commonly used in 0D engine models to account for mixture stratification; in multi-zone modeling, different lambda values and temperatures are assumed in each zone. Including multiple zones in our model is a development we are currently working on.
Another reason for differences between the simulated and experimental results could be the uncertainty associated with the high concentration of hydrocarbons (HC) that this engine emits (up to 9 000 ppm). When the measured HC concentration is assumed in the model, the prompt mechanism appears very significant, even in the lean case. As this engine is an optical and not a production one, these HC were assumed to originate from crevices at the piston/cylinder interface and from oil oxidation, rather than from fuel combustion itself. Although these HC do not participate in combustion, they could have an effect in a cold outer zone of a multi-zone model.

Figure 4: Comparison of measured and simulated NO molar fractions for stoichiometric combustion; results without the prompt mechanism are also included. Figure 5: Comparison of measured and simulated NO molar fractions for lean (λ = 1.5) operating engine conditions.
Cycle-to-Cycle NO Variability
The detailed chemical mechanism was then used for the investigation of NO CCV. From the various engine points in Figure 4, four were chosen for the CCV analysis: two at partial load (P1015, P1030) and two at wide open throttle (W1015, W1030). All points were selected at stoichiometric conditions, so as to also include the effect of the prompt mechanism on NO formation. NO variability was investigated using a statistical analysis. The Wiebe combustion parameters, namely the ignition timing (SOI), the CD and the Wiebe shape coefficient (m), were randomly varied within limits, assuming that these parameters follow normal distributions. The mean values of these distributions were equal to the values used in the mean-cycle modeling. The range of variation was taken from a relevant analysis in the framework of the FP6 LESS-CCV research project [29] and differed between partial load and WOT operation; full load points correspond to higher CCV than low load points [2]. One hundred engine cycles were simulated at each engine point and the resulting imep and NO_x concentrations are presented as distributions. Differences between mean-cycle indices and CCV values are discussed.
Results of Cycle-to-Cycle Variation
Cycle-to-cycle variations of pressure and temperature are illustrated in Figures 7 and 8, respectively, for the partial load engine point with ignition timing of 15° BTDC (P1015). The maximum pressure has a mean value of 16.6 bar and a standard deviation of 0.98 bar, while the peak temperature has a mean value of 2 172 K and a standard deviation of 20 K. The MC imep value and the mean imep value of the CCV analysis coincide perfectly, while the MC NO_x value and the CCV mean NO_x value show a slight deviation. The same approach was also followed for the partial load operating point with ignition timing of 30° BTDC (P1030). The peak pressure distribution has a mean value of 20.6 bar and a standard deviation of 1.05 bar, while the peak temperature has a mean value of 2 173 K and a standard deviation of 30 K. The imep distributions of this engine point show no difference between the MC imep and the mean CCV imep value. In the case of NO, a small difference between the MC NO_x and the mean CCV NO_x values is again observed.

Figure: CCV of imep (P1015 point).
In the case of WOT, the same approach was used for the CCV analysis, with a wider range of Wiebe parameters. Pressure and temperature plots for the engine point with ignition at 30° BTDC (W1030) are presented in Figures 11 and 12, respectively. The pressure and temperature peak values span a wider range as a result of the wider range of the combustion parameters. In the case of W1015, the maximum pressure has a mean value of 35.7 bar and a standard deviation of 3.1 bar, while the peak temperature varies from 2 121 K to 2 310 K, with a mean value and standard deviation of 2 200 K and 39 K, respectively. Differences of the same order of magnitude are observed for W1030, where the peak pressure varies from 45.5 bar to 61.7 bar (mean 55.5 bar, standard deviation 2.9 bar) and the peak temperature varies from 2 281 K to 2 503 K (mean 2 410 K, standard deviation 42 K).
The distributions of imep and NO are illustrated in Figures 13 and 14. Due to the higher CCV, the MC imep and the mean CCV imep differ slightly in both cases. Likewise, the MC NO_x value and the mean CCV NO_x value show a larger deviation than at partial load. This indicates that the deviation between MC and CCV NO values is affected by the range of variation of the combustion parameters.
Contribution of the Prompt Mechanism to the Cycle-to-Cycle NO Variation
The impact of the prompt mechanism on NO_x CCV has also been investigated. In the mean-cycle modeling, it was observed that the prompt mechanism accounts for an additional 10% to 15% of the final NO_x concentration. Accordingly, the mean values of the NO distributions computed without the prompt mechanism are lower than the mean CCV NO values obtained with the full mechanism. In addition, when the detailed chemical mechanism is used without the prompt one, a slight difference between the MC NO values and the mean CCV NO values is also observed. However, the prompt mechanism has an additional impact on the statistical characteristics of the NO distribution, which is described in the next section.

Figure: CCV of NO_x without the prompt mechanism (P1015 point). Figure: CCV of NO_x without the prompt mechanism (W1030 point).
DEVIATION BETWEEN MEAN CYCLE VALUES AND MEAN CCV VALUES
CCV does not only result in a range of values for NO_x emissions; due to the nonlinearity of NO_x formation with the combustion parameters, and primarily with temperature, it may also have an impact on the average NO_x emitted. Hence, the comparison between the mean-cycle values and the mean CCV values is important. Partial load and full load are two cases that exhibit different variability of the combustion parameters. At partial load, the mean value of the imep CCV distribution is almost the same as the mean-cycle imep value. At full load, on the other hand, the mean CCV imep values are always lower than the mean-cycle imep values (Tab. 4). This means that, on average, the impact of CCV is a degradation of engine performance.
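The mechanism behind this deviation is that averaging over cycles does not commute with evaluating the model at the mean cycle when the response is nonlinear. The toy sketch below makes this visible; the response shape and numbers are invented for illustration, and the sign and size of the deviation depend on the curvature of the actual NO response over the perturbed range.

```python
import numpy as np

rng = np.random.default_rng(2)

def toy_no_response(T):
    # Invented, strongly nonlinear response of NO to peak temperature.
    return np.exp((T - 2000.0) / 60.0)

T = rng.normal(2200.0, 40.0, 100_000)     # cycle-to-cycle peak temperatures
print(toy_no_response(T.mean()))          # "mean-cycle" (MC) prediction
print(toy_no_response(T).mean())          # CCV-averaged prediction: differs
```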
NO formation is also affected by the variability of the combustion parameters. At both full and partial load, the mean CCV NO_x values are always lower than the MC NO_x values, which reflects the CCV impact on NO formation (Tab. 4). In WOT operation, this impact is higher than at partial load. This result is related to the nonlinearity of NO formation and, for this reason, cannot be quantitatively correlated with the imep variation. As Table 4 shows, the larger the difference between CCV imep and MC imep, the larger the difference between CCV NO_x and MC NO_x.
The coefficient of variation is used in Table 5 as a metric of the intensity of the NO CCV. The impact of the prompt mechanism is also reported separately in this table. NO generally presents higher variability due to CCV than imep does. The results also show that it is not possible to establish a direct link between imep CCV and NO CCV; the latter depends on both the operating point and the CCV of imep. Finally, the impact of the prompt mechanism on CCV is also specific to the engine point considered. In one of the WOT conditions examined, the prompt mechanism led to a significant increase in NO CCV that is not apparent in the other cases. This means that the combination of heat release rate and reaction kinetics is unique to each engine point, resulting in a behaviour that cannot be generalized at this stage. Simulations with other engines and further refinements of the model may lead to a more consistent relation between CCV in NO and CCV in the other combustion parameters.
CONCLUSIONS
In this study, a detailed chemical mechanism was used for the prediction of homogeneous engine-out NO emissions. Experimental data from the literature were used for the validation of the simulated values. The model satisfactorily predicts NO emissions, ranging from a few ppm to a couple of thousand ppm of NO molar fraction, in both stoichiometric and lean conditions. The model was then used to simulate the NO variation due to combustion CCV. It was found that the CCV NO distributions exhibit a higher COV than the imep distributions. In addition, the mean CCV NO values are always lower than the mean-cycle NO values. The impact of the prompt mechanism on the NO results was also investigated.
For mean-cycle emissions, it was found that the prompt mechanism increases the accuracy of the prediction by up to 15%, especially in stoichiometric conditions. For CCV, the prompt mechanism has an impact on the COV and on the mean value of the NO distributions, although this impact depends on the engine operating point considered. | 2019-04-06T00:42:58.765Z | 2015-01-01T00:00:00.000 | {
"year": 2015,
"sha1": "89a75e6a7e129a49d50aab0dda0017590edbdc82",
"oa_license": "CCBY",
"oa_url": "https://ogst.ifpenergiesnouvelles.fr/articles/ogst/pdf/2015/01/ogst130136.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "f9a5ded1a40b767ae12a0c9e0e655e96eacb8ec3",
"s2fieldsofstudy": [
"Chemistry",
"Engineering"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
9265599 | pes2o/s2orc | v3-fos-license | Low Cost Electrode Assembly for EEG Recordings in Mice
Wireless electroencephalography (EEG) of small animal subjects typically utilizes miniaturized EEG devices, which require a robust recording and electrode assembly that remains in place while also being well tolerated by the animal, so as not to impair the animal's ability to perform normal living activities or experimental tasks. We developed a simple and fast electrode assembly and method of electrode implantation, using electrode wires and wire-wrap technology, that provides both higher survival rates and higher success rates in obtaining recordings from the electrodes than methods using screws as electrodes. The new wire method results in a 51% improvement in the number of electrodes that successfully record EEG signal, and the electrode assembly remains affixed and provides EEG signal for at least a month after implantation. Screws often serve as recording electrodes, which requires either drilling holes into the skull to insert the screws or affixing the screws to the surface of the skull with adhesive. Drilling holes large enough to insert screws can be invasive and damaging to brain tissue, using adhesives may interfere with conductance and result in a poor signal, and soldering screws to wire leads results in fragile connections. The methods presented in this article provide a robust implant that is minimally invasive and has a significantly higher success rate of electrode implantation. In addition, the implant remains affixed and produces good recordings for over a month, while using economical, easily obtained materials and skills readily available in most animal research laboratories.
INTRODUCTION
The use of animals to model human disease pathology has required the development of technology to investigate the effects of experimental interventions in subjects for which existing equipment, designed for imaging, recording, or measuring the physiology of humans, is inappropriate due to the differences between humans and animals in size, other physical attributes, and compliance with equipment requirements. One such physiological recording is electroencephalography (EEG), the recording of changes in electrical potentials at the surface of the brain through the scalp or skull, which result from ion flow across neural membranes and are a measure of neuronal activity (Petsche et al., 1984). EEG is commonly used in the clinical study and diagnosis of medical disorders and, with the advent of quantitative computer analysis, is also used to quantify the effects of pharmacological, dietary, or genetic alterations in research studies (Shipton, 1975; Bronzino, 1984).
The process for EEG in humans involves attaching electrodes directly to the scalp with an adhesive, or wearing a cap with the electrodes attached, with wires connecting the electrodes to the recording equipment. This process depends on a compliant subject and restricts mobility, so it is difficult to use in experiments requiring awake and mobile animal subjects. Advances in the miniaturization of recording equipment have resulted in wireless EEG recording devices that can be implanted in the animal or mounted on the animal's head, providing mobility for the animal during recording and allowing recording for hours and even days (Higashi et al., 1979). Such devices incorporate either telemetry or data saved to a microchip, and typically require inserting screws into holes drilled into the skull, affixing screws with cyanoacrylate glue to the surface of the skull, or attaching a pre-fabricated headmount with screws and dental cement, all of which require lengthy and invasive surgical procedures and costly materials. The screws or headmounts are frequently connected to wires by soldering or glue, which can form fragile connections or interfere with conductance of the electrical signal.
In our initial experiments using the Neurologger device (TSE Systems) to record EEG in mice, we used the protocol and electrode material provided by TSE Systems, which calls for soldering and screws, either inserted in drilled holes or glued onto the surface of the skull. The TSE protocol also calls for removal and re-insertion of the pins of the screw leads into the connecting block while the animal is under anesthesia, which increases the risk of damage to the soldered connections and results in a lengthy surgical procedure. Using this protocol, we experienced a significant failure rate, typically a failed connection between the neural tissue and the recording device at one or more electrodes, as well as decreased tolerability of the lengthy surgery in aged mice.
Consequently, we developed a new method that combines fabrication of a simple wiring harness with insulated silver-plated copper wire electrodes, which eliminated the screws, solder, and glue in the electrode assembly and eliminated the process of re-inserting the pins into the connector during surgery. This method results in electrodes with direct contact to brain tissue and a minimal headmount apparatus. The headmount is well tolerated and remains on the animal for a month or more and, most importantly, these procedures simplify the implantation process, resulting in a quick and efficient surgery that minimizes discomfort to the animal and promotes swift recovery. The procedures were designed to utilize laboratory and surgical equipment customarily found in animal research facilities, along with readily available tools and materials. This greatly facilitates the implementation of wireless EEG recording devices in animal research.
MATERIALS AND METHODS
All procedures were approved by the Institutional Animal Care and Use Committee of the University of California, Irvine. Materials for constructing and implanting the electrode assembly are readily available from electronic supply stores or online.
Preparing the Components for the Electrode Connector Assembly (Embedded Link to Video 1)
The electrode assembly has three main components: the 6-pin connector block, the posts that connect the block to the leads, and the leads with the recording electrodes at their tips (Figure 1). The first step is to make the 6-pin connector block from the supplied 25-pin connector block. To do this, count out 7 pins, giving you one sacrificial pin, and then cut between the 7th and 8th pins of the larger 25-pin connector block. Trim the excess pin and material so that 6 pins remain, comprising one 6-pin block.
Then prepare the posts, which are created from the legs of standard 3 mm LEDs. The diameter of each LED leg is 0.5 mm, which fits securely in the holes of the pin block. Use sandpaper or an abrasive sponge to clean the oxidation from the LED legs. An electronics designer breadboard holds the LEDs and components during construction. Cut off the LED body as scrap, leaving two 20 mm-long legs to be used as posts to connect the leads to the 6-pin connector block. After creating the posts, the next step is to secure the lead-to-post connection using the "wire-wrap" process. There are 2 types of leads: one lead serves as the ground and reference lead and connects to 2 pins in the connector block; the remaining 4 leads are ADC (signal) leads, each connecting individually to a single post and a single pin in the connector block.
Building the Ground-Reference Lead (Embedded Link to Video 2)
Making the ground and reference lead, which has 1 lead for 2 pin connections, requires only 1 post made from 1 LED leg. Fully insert 1 post into the designer's breadboard. To wire-wrap the lead wire to the post, strip a 25 mm length of black insulated wire and insert the stripped section into the small hole at the outer-edge of the wire wrap tool. With the stripped section of the wire fully inserted into the edge hole of the tool, place the center hole of the wire wrap tool completely over the post held in the breadboard until the tool rests on the breadboard. Holding the insulated section of the wire onto the breadboard, slowly twist the tool to wrap the stripped section of wire securely around the post. When done accurately, the wraps will be neat and close together and there will be no insulated wire wrapped around the post.
Unique to the ground and reference lead, the connection is to 2 pins of the 6-pin block. Thus, the single post is bent into a tight "U" with the wrap at the apex of the curve. To accomplish this, hold the post with needle-nose pliers at the wraps, then bend the post into a tight "U." After bending, the black lead is functionally connected to two parallel lengths of post which, when trimmed to pin length, will be inserted into holes 5 and 6 of the pin connector. Pinch the "U" close together so that it matches the spacing of the receiving holes in the 6-pin block. For the pins and block to connect properly, ∼3 mm of post is exposed beyond the wraps; this is approximately the same length as the pins extending from the opposite side of the pin block. Insert the 2 posts of the "U" into the last 2 holes of the 6-pin block, corresponding to pins 5 and 6. Make sure to insert the posts solidly into the pin block. Check for continuity between pins 5 and 6 using a digital multimeter to ensure a mechanical connection between the lead wire and both pins after insertion.

FIGURE 1 | The three main components of the 6-pin electrode connector assembly: the 6-pin block, the posts, and the recording leads/wires, 1 of which serves as the ground-reference electrode and 4 of which serve as the ADC electrodes.
Shrink tubing on the exposed lead at the wire wrap provides 2 things: (1) electrical insulation from neighboring wrapped posts, and (2) strain relief for the wire lead. Cut a small (∼5 mm) piece of shrink tubing, just enough to cover the exposed wraps and a small section of insulated wire. Slide the shrink tubing over the exposed lead and the wrapped section of the post going into the pin block. Use a micro-torch to shrink the tubing onto the inserted post's wrap and lead. Strip the insulation from the lead to leave ∼10 mm of insulated wire beyond the wrap. Just beyond the insulation at the opposite end of the lead, trim to ∼0.5 mm of uninsulated wire; this will be the recording electrode that is gently implanted into the hole drilled in the mouse's skull. Be careful to hold only the lead firmly when stripping insulation.
Building the Analog-Digital Channel (ADC) Leads (Embedded Link to Video 3)
The next step is to make the 4 ADC leads. Since 4 leads are required for each assembly, prepare 4 posts from the legs of 2 LEDs. Each ADC lead has only 1 wire connected to 1 post, which is inserted into 1 hole of the 6-pin block. It is helpful during the surgical process to differentiate the ADC leads by insulation color. Follow the same wire-wrapping procedure as before for each ADC lead; the wire wraps should be tight and evenly spaced. For ADC leads, trim and remove the post above the wire wrap, where the insulated section of wire begins, so that the wraps sit on the post where the lead's insulation begins. Trim the post below the wire wraps to leave ∼3 mm of exposed post. Insert the 3 mm of exposed post into hole 4 of the pin block, adjacent to the ground-reference lead in holes 5 and 6, until it is completely and securely inserted. Strip the trailing end of the lead to leave ∼10 mm of insulated wire extending from the wrapped post. Trim the stripped section of the lead to leave only 0.5 mm of uninsulated wire at the end, which serves as the recording electrode. Repeat this process for the remaining 3 ADC leads, placing each new lead into the next adjacent hole. Putting shrink tubing on every post and wrap is not possible due to size constraints; insulating alternating post-wraps can ensure insulation and strain relief between posts. Check continuity to make sure that the electrical connections between each pin of the block and the end of each lead are intact. Confirm that only the pins connected to leads have continuity and that there are no shorts between adjacent pins, wraps, or electrodes. Sterilize the completed assembly with a hard-surface disinfectant such as Cetylcide-II per the manufacturer's directions. Do not autoclave the assembly.
Preparing for Surgical Implantation of Electrodes (Embedded Link to Video 4)
Prepare and anesthetize the mouse for surgery, then place the mouse into a stereotaxic device. Lift a section of the scalp with forceps and make an incision through the raised area. Then cut a teardrop-shaped piece of scalp encompassing the electrode implantation sites, removing the scalp and any underlying tissue to expose bare skull. Swab the exposed skull with 70% ethanol to remove any remaining tissue and clean the skull. Locate bregma or any other reference point you will use to determine the stereotaxic locations. Insert a marking pen into the stereotaxic device, center the pen on bregma (or your point of choice), and use the micrometer function of the stereotaxic device and the pen to locate and mark the electrode implantation sites relative to your reference point. Using a sterilized drill bit of the same diameter as the electrodes, drill holes just penetrating the skull at each marked electrode location, taking care not to pass into the brain. The skull will discolor as the drill penetrates. Remove the drill if any blood appears or when there is a sudden drop in resistance, as this indicates penetration of the skull. Swab skull material off the drill bit with a sterile tissue or gauze with 70% ethanol after drilling each hole. Swab off any blood so it does not coagulate and plug the hole.
Implanting the Electrodes (Embedded Link to Video 5)
Place the electrode assembly in the desired location over the back of the mouse, keeping the pin block parallel to the mouse's back, and use forceps to bend the electrode wires to align with the drilled holes. Load a gel-loading tip on a 20 µl pipettor with Vetbond by depressing the pipettor plunger, inserting the gel tip into the Vetbond bottle, inverting the bottle and flicking it gently to get the adhesive to settle into the tip of the bottle and then drawing the Vetbond into the gel tip with the pipettor plunger. The Vetbond will stay fluid in the tip for at least 10-15 min. Use bent toothed forceps to insert the electrode wires into the drilled holes, then secure each wire to the skull with 2-3 µl of Vetbond. Take care not to allow Vetbond to get into the mouse's eyes or flow into empty drilled holes. Keep a sterile cotton swab handy to wipe up any excess Vetbond. Repeat the insertion and securing process for the remaining electrodes.
Affixing the Electrode Assembly to the Mouse Head (Embedded Link to Video 6)
Use dental cement to further secure the electrodes in the skull and to build a pedestal to support the recording device. Mix resin into the cement powder and apply the cement around all the electrodes, covering the entire exposed skull and taking care not to let the cement get into the subject's eyes. Keep a sterile cotton swab handy to catch any dripping cement. Position the pin connector block in the desired location and completely cover all leads between the skull and the connector block with subsequent layers of dental cement, creating a secure pedestal that supports and secures the pin connector. Allow the cement to set between layers and avoid the formation of holes or gaps, which could allow the mouse to snag and pull off the assembly. It is critical that the supporting cement places the pin connector centered on the mouse's midline and high enough that the recording apparatus will not rub the animal's back or become dislodged if the animal has a seizure as a result of anesthesia administration. Check that the dental cement has set, then seal the seam between the cement and the scalp with a thin line of Vetbond. Check that the cement and Vetbond form a seal all around the scalp incision. Test the continuity between the reference and ground electrodes to ensure that the connections of the electrode assembly were not compromised during implantation. This completes the surgical process.
Attaching the Recording Device
The recording device can be attached to and removed from the implanted electrode assembly without anesthesia on animals that can be held securely. Aggressive animals may need to be briefly anesthetized for attachment and removal. Subsequent recordings can be made during the course of a month or more.
RESULTS
Twenty-four mice were implanted with screw electrodes and 24 mice were implanted with wire electrodes. Each animal was implanted with 4 recording electrodes, placed at −1.34 A/P, +1.50 L; −1.34 A/P, −1.50 L; −3.50 A/P, +3.00 L; and −3.50 A/P, −3.00 L, and one ground/reference electrode, placed at −6.00 A/P, +0.50 L, relative to bregma. A recording electrode implantation was considered successful if the electrode recorded data consistent with the other electrodes, and unsuccessful if it showed periods of clipped signal at both minimum and maximum voltage, no change in potential, and/or a 60 Hz mains hum.
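A heuristic screen implementing these failure criteria (clipping at both rails, a flat trace, or dominant mains hum) might look like the sketch below; the threshold values are our assumptions for illustration, not values used in the study.

```python
import numpy as np

def electrode_ok(trace, fs, clip_frac=0.01, hum_ratio=10.0, mains_hz=60.0):
    """Return True if an EEG trace passes the screen. trace: 1-D samples
    (volts); fs: sampling rate (Hz)."""
    x = np.asarray(trace, dtype=float)
    lo, hi = x.min(), x.max()
    if hi - lo < 1e-12:                                   # no change in potential
        return False
    clipped = (np.mean(x <= lo + 1e-12) > clip_frac and   # stuck at both the
               np.mean(x >= hi - 1e-12) > clip_frac)      # minimum and maximum
    spec = np.abs(np.fft.rfft(x - x.mean())) ** 2
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    band = np.abs(freqs - mains_hz) < 1.0
    hum = band.any() and spec[band].sum() > hum_ratio * spec[~band].mean() * band.sum()
    return not (clipped or hum)
```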
The new wire method of electrode assembly and implantation resulted in a 51% increase in the number of successfully implanted electrodes, from an average of 2.5 of the 4 recording electrodes per mouse providing signal with the screw method to an average of 3.8 of the 4 recording electrodes providing signal with the wire method (Figure 2). An unsuccessful implantation typically results in a clipped signal (Figure 3A) with a 50 or 60 Hz mains hum from background electromagnetic emissions (Figure 3B).
This method also results in an implantation that is well tolerated by the animal for weeks and provides the opportunity for repeated recordings after experimental interventions. EEG traces recorded from the same animal 4 days after electrode implantation (Figure 4) and 1 month after the first recording (Figure 5) show no degradation of the signal over that period, demonstrating the robustness of the electrode implantation process, including the dental cement headmount. There were 4 separate 2-day recording sessions, 1 each week during the month, for a total recording time of over 200 h. The subject mouse ate, nested, and groomed normally during the course of the month and was caged individually to prevent cage mates from chewing on the assembly.
FIGURE 2 | The average number of successful electrode implantations increases 51% with the wire method. The new wire electrode method resulted in an average of 3.8 ± 0.08 of the 4 recording electrodes implanted per mouse returning analyzable signal, compared to an average of 2.5 ± 0.27 of the 4 recording electrodes implanted per mouse with the screw electrode method, an increase of 51% in successful implantations as measured by individual implanted electrodes. Screws, n = 24 mice; wires, n = 24 mice, ****p < 0.0001. Data were analyzed by two-tailed unpaired t-test. Error bars indicate the mean ± SEM.
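The group comparison reported above can be reproduced schematically as follows; the per-animal arrays are synthetic stand-ins that match the reported group means, not the actual data.

```python
import numpy as np
from scipy import stats

# Successful electrodes (out of 4) per mouse, n = 24 per group; synthetic
# values chosen to reproduce the reported means (~2.5 vs ~3.8).
screws = np.array([2, 3, 2, 3, 2, 3, 2, 3] * 3)
wires = np.array([4, 4, 4, 4, 3, 4, 4, 3] * 3)

t, p = stats.ttest_ind(screws, wires)     # two-tailed unpaired t-test
gain = 100 * (wires.mean() - screws.mean()) / screws.mean()
print(f"t = {t:.2f}, p = {p:.2g}, improvement = {gain:.0f}%")
```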
DISCUSSION
The use of animals in research was for many years limited to studies of diseases and pathologies of the particular animal species, or to testing pharmaceuticals for lethality or adverse effects, which limited translation to humans. However, the use of animals to study disease has increased significantly in the last 30 years due to the creation of genetically modified (transgenic) animals to study gene function and to model human diseases. While many animal species have been successfully genetically modified, mice are the most commonly used species because more than 80% of human genes have corresponding counterparts in the mouse genome (Emes et al., 2003), their short life span reduces the time needed to complete age-dependent studies, and colonies are easily maintained. It is now possible to mimic pathology characteristic of a human disease to which mice are not normally susceptible, such as the neurodegenerative conditions of Alzheimer's and Parkinson's. The pathologies can then be studied both in vivo and in vitro, and potential therapeutics can be administered to the animal subjects to find candidate treatments for human clinical trials. These genetic advances in research technology require the adaptation of equipment designed to quantify changes in the physiology of human subjects to suit the physical and behavioral characteristics of mice, which in general requires miniaturization while maintaining robustness, so that the equipment is not overly obstructive to the mouse's activities but can withstand damage from normal grooming.
The ability to record EEG in mice has led to significant research findings, including age-related sleep disturbances and changes in EEG profile (Jyoti et al., 2015) and the presence of seizure activity in mouse models of Alzheimer's Disease (AD) (Palop et al., 2007), a condition previously not often recognized in AD patients due to their cognitive deficits (Vossel et al., 2013). Just a few years ago, EEG studies of AD patients focused on sleep abnormalities (Jeong, 2004), altered regional connectivity, and rhythms (Knyazeva et al., 2013), while more recent research is also utilizing mouse models of AD to discover the underlying causes of seizure activity in AD pathology and investigate therapeutic treatments (Sanchez et al., 2012;DeVos et al., 2013;Bomben et al., 2014;Born et al., 2014).
Introducing the ability to record EEG in freely moving mice performing behavioral tasks makes it possible to observe and test interventions designed to affect learning and memory that would be hampered by tethering or other invasive equipment. Typically, screws are used as electrodes or as anchors for wires serving as electrodes, and there are two different approaches for affixing the screws. One features drilling holes in the skull and inserting screws, which requires careful precision to avoid drilling or inserting the screws too deep and causing brain damage (Lapray et al., 2008; Armstrong et al., 2013), while the other avoids drilling and instead affixes the screws to the surface of the skull with cyanoacrylate glue (Etholm et al., 2010), which can result in poor contact if there is excessive glue between the skull and screw. After using both approaches to record EEG in mice with a wireless recording system, without consistently successful implantation and adequate signal conduction, our lab decided to investigate alternative approaches and developed procedures that result in consistently successful surgeries and recordings, with a robust implant that is minimally invasive and produces recordings for a month or more after implantation. The use of screws appeared to be the main impediment to consistent results, due to their large size relative to a mouse. A flat screw base does not provide a secure conductance area on a curved mouse skull, while drilling several holes large enough for screws in the small and thin mouse skull resulted in a fragile setup that failed to maintain the apparatus for long periods of time. We also found that tissue growing around the wound with screw implants would degrade the signal within 2 weeks of electrode attachment. The other weak link in the process is the attachment of wires to the screws and pin connectors with solder or conductive adhesive; these connections are difficult to manufacture and fragile in use.

FIGURE 6 | Percentage of electrodes successfully implanted. The percentage of fully successful electrode implantations, as measured by implants recording signal on all 4 ADC channels, increased from 29% with the screw method to 83% with the wire method. The different areas represent the number of electrodes recording analyzable signal after implantation for the 2 methods.
Our electronics technician suggested inserting just a short length of highly conductive stripped wire through a small hole drilled in the skull and affixing the wire's insulation jacket to the skull with adhesive, with the wire wrapped around the pin connector instead of soldered to it, and proposed making a pre-assembled wiring harness that would be strong and simple to affix. A similar process without screws has been described for telemetry recording of EEG (Weiergräber et al., 2005), but specific details were not provided. Consequently, we completely revamped our electrodes and implantation procedures, finding that the percentage of successful recordings from all electrodes in an implanted electrode assembly increased from 29% with the TSE screw method to 83% with the new wire method (Figure 6). The success rate is calculated based on signal return from 0 through 4 electrodes, as failure of the ground/reference electrode typically results in loss of signal from all recording electrodes even if the recording electrodes are all properly implanted. Other benefits of the wire method are that the implants remain in place for extended periods with no adverse effects or reactions and that the assembly can be constructed with equipment readily available from typical electronics stores. On the other hand, limitations of this method include that it is not well suited for studies where more electrodes are needed to resolve different frequencies during sleep stages, or for studies targeting deep brain structures, which require implantation of depth electrodes.
Advances in science and technology over the past 30 years have made possible the development of transgenic animals to model human diseases and of electronic equipment that can wirelessly record 72 h of EEG data on a device weighing slightly more than the U.S. Mint specification for a dime (www.usmint.gov). However, it is still important to "keep it simple" when using such equipment, to reduce the opportunities for failure of components or steps in the process. Simplifying the electrodes and implantation process is also in accordance with the animal research principles of "replacement, reduction and refinement" (Russell and Burch, 1959), in that this is a refinement that minimizes the invasiveness of the experimental intervention and reduces the number of animal subjects needed to complete a study by increasing the reliability of data generation from each animal subject. Technology advances by both grand leaps and small adjustments; ironically, the wire-wrapping technology is well over a half-century old and relatively novel to the current generation of scientists, and has been documented to have the greatest reliability among various methods of electronic connections (Wagner, 1999). We have presented a protocol for improvements in the implementation of miniaturized EEG equipment that will facilitate successful use of the equipment in animal research.
AUTHOR CONTRIBUTIONS
EV and JB designed the project. EV wrote the manuscript. EV and MM acquired and analyzed the data. MM and JB revised the manuscript. DF, MM, RB, and AT refined the technique. FB and DF recorded and edited the videos.
FUNDING
This work was supported by grants from the Center for the Neurobiology of Learning and Memory at the University of California, The Institute for Memory Impairments and Neurological Disorders at the University of California-Irvine, and the California Department of Public Health.
ACKNOWLEDGMENTS
We would like to thank Dr. Maya Koike for her guidance and expertise in surgical procedures. | 2017-11-14T18:06:53.046Z | 2017-11-14T00:00:00.000 | {
"year": 2017,
"sha1": "a971949265fb9c18df75a49141f6ab0e7b517b58",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fnins.2017.00629/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a971949265fb9c18df75a49141f6ab0e7b517b58",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
235829444 | pes2o/s2orc | v3-fos-license | Path-dependent Hamilton-Jacobi-Bellman equation: Uniqueness of Crandall-Lions viscosity solutions
We formulate a path-dependent stochastic optimal control problem under general conditions, for which we prove rigorously the dynamic programming principle and that the value function is the unique Crandall-Lions viscosity solution of the corresponding Hamilton-Jacobi-Bellman equation. Compared to the literature, the proof of our core result, that is the comparison theorem, is based on the fact that the value function is bigger than any viscosity subsolution and smaller than any viscosity supersolution. It also relies on the approximation of the value function in terms of functions defined on finite-dimensional spaces, as well as on regularity results for parabolic partial differential equations.
Introduction
The optimal control of path-dependent stochastic differential equations (SDEs) arises frequently in applications (for instance in Economics and Finance) where the dynamics are non-Markovian. Such non-Markovianity makes it difficult to apply the dynamic programming approach to those problems. Indeed, the standard dynamic programming approach is designed for Markovian state equations, hence it cannot be applied to such problems as it is. More precisely, consider the following SDE on a complete probability space (Ω, F, P) on which an m-dimensional Brownian motion B = (B_t)_{t≥0} is defined. Let T > 0, t ∈ [0, T], x ∈ C([0, T]; R^d), and consider a progressively measurable process α : [0, T] × Ω → A (with A being a Polish space), where x is the initial path and α the control process. Let the state process X : [0, T] × Ω → R^d satisfy the following controlled path-dependent SDE:

dX_s = b(s, X, α_s) ds + σ(s, X, α_s) dB_s,  s ∈ (t, T],    X_s = x(s),  s ∈ [0, t].    (1.1)

The value function v(t, x) is then defined as the supremum of the expected gain associated with (1.1), where the supremum is taken over all progressively measurable control processes α. We see that the value function is defined on the infinite-dimensional space of continuous paths C([0, T]; R^d), hence it is related to some Hamilton-Jacobi-Bellman (HJB) equation in infinite dimension. The "standard" approach to study such problems consists in changing the state space, transforming the path-dependent SDE into a Markovian SDE formulated on an infinite-dimensional space H, typically C([0, T]; R^d) or R^d × L^2([0, T]; R^d). In this case the associated Hamilton-Jacobi-Bellman equation is a PDE in infinite dimension (see for instance [19,31]) which contains "standard" Fréchet derivatives in the space H. Some results on the viscosity solution approach are given for instance in [32,33,51]; however, uniqueness results seem not to be available up to now (see the discussion in [31, Section 3.14, pages 363-364]).
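To make the path dependence in (1.1) concrete, here is a minimal Euler-Maruyama sketch for d = m = 1; the coefficients b, σ and the control α below are illustrative stand-ins and not part of the paper.

```python
import numpy as np

def simulate_controlled_sde(x_init, t_idx, b, sigma, alpha, dt, rng):
    """Euler-Maruyama discretization of (1.1) with d = m = 1. x_init holds the
    initial path on the grid up to index t_idx; b, sigma, alpha are functions
    of (step, path-so-far, control value)."""
    X = np.array(x_init, dtype=float)
    for s in range(t_idx, len(X) - 1):
        a = alpha(s, X[: s + 1])                  # control may look at the past
        dB = rng.normal(0.0, np.sqrt(dt))
        X[s + 1] = X[s] + b(s, X[: s + 1], a) * dt + sigma(s, X[: s + 1], a) * dB
    return X

# Drift pulled toward the running maximum of the path: genuinely path-dependent.
rng = np.random.default_rng(1)
path = simulate_controlled_sde(
    np.zeros(101), 0,
    b=lambda s, x, a: a * (x.max() - x[-1]),
    sigma=lambda s, x, a: 0.3,
    alpha=lambda s, x: 0.5,
    dt=0.01, rng=rng,
)
```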
More recently, another approach has been developed after the seminal work of Dupire [23]. This is based on the introduction of a different notion of "finite-dimensional" derivatives (known as horizontal/vertical derivatives), which allows one to write the associated HJB equation without using the derivatives in the space H. We call such an equation a path-dependent Hamilton-Jacobi-Bellman equation (see equation (3.5) below), which belongs to the more general class of path-dependent partial differential equations, that is, PDEs where the unknown depends on the paths and the involved derivatives are the Dupire horizontal and vertical derivatives. The definitions of these derivatives are recalled in Appendix A. There are also other approaches, similar to that introduced by Dupire, but based on slightly different notions of derivatives, see in particular [1,44,36]. The theory of path-dependent PDEs is very recent, yet there are already many papers on this subject, see for instance [7,21,22,53,45,10,46,25,26,27,34,47,13,14,50,49,48,3,11,4,15]. One stream in the literature looks at such equations using modified definitions of viscosity solution. In particular, we mention the notion of viscosity solution introduced in [25] and by now well developed (see [26,27,47,50,49,11]), where maxima and minima are taken in expectation. We also mention the recent paper [7], where a notion of "approximate" viscosity solution is introduced, for which existence, comparison, and stability results are established under fairly general conditions. Another stream in the literature, to which this article belongs, looks at path-dependent PDEs using the "standard" definition of viscosity solution adapted to the new derivatives. We call such a definition the "Crandall-Lions" one, recalling for instance their papers [17,18]. In such a context there are only two papers, namely [15] and [54]. Paper [15] only addresses the path-dependent heat equation; however, it is the first work where the main tools used in [54] and in the present paper were introduced, namely the use of a smooth gauge-type function and a smooth variational principle on the space of continuous paths in order to generate maxima/minima on [0, T] × C([0, T]; R^d), therefore relying on the completeness of the underlying space in place of the missing local compactness. Concerning [54], a comparison theorem for path-dependent HJB equations is provided using the approach of doubling variables. The proof of such a result turns out to be technically more involved compared to our approach, even though we impose stronger assumptions on the diffusion coefficient.
In the present paper we prove existence and uniqueness of Crandall-Lions viscosity solutions of HJB equations associated to the optimal control of path-dependent SDEs. The proof of uniqueness (or, more precisely, of the comparison theorem, from which uniqueness is derived) builds on refinements of the original approach developed in [43] and is based on the existence of the candidate solution v (the value function), which is shown to be greater than any subsolution and smaller than any supersolution. Such a proof is traditionally based on regularity results which are missing in the present context. We overcome this non-trivial technical difficulty by relying on suitable approximation procedures as well as on regularity results for parabolic partial differential equations, see Lemmas B.3-B.4-B.5 and Theorem B.7 of Appendix B. Moreover, in order to generate maxima or minima for functions on [0, T ] × C([0, T ]; R d ), we use the idea introduced in [15], and also adopted in [54], of relying on a smooth variational principle of Borwein-Preiss type and on a smooth gauge-type function. Here, we exploit the smooth gauge-type function introduced in [54] (see Lemma 4.2), which turns out to be simpler than that built in [15].
Once the comparison theorem is proved, we deduce from our existence result (Theorem 3.4) that the value function v is the unique Crandall-Lions viscosity solution of the path-dependent HJB equation. The existence result is based, as usual, on the dynamic programming principle, which is proved rigorously in the present paper, see Theorem 2.9.
The rest of the paper is organized as follows. In Section 2 we formulate the stochastic optimal control problem of path-dependent SDEs and prove the dynamic programming principle. In Section 3 we introduce the notion of Crandall-Lions viscosity solution and prove that the value function v solves in the viscosity sense the path-dependent Hamilton-Jacobi-Bellman equation. In Section 4 we state the smooth variational principle on [0, T ]×C([0, T ]; R d ) and prove the comparison theorem, from which the uniqueness result follows. In Appendix A we recall the definitions of horizontal and vertical derivatives together with the functional Itô formula. Finally, in Appendix B we report all the results concerning the approximation of the value function needed in the proof of the comparison theorem.
2 Path-dependent stochastic optimal control problems

2.1 Notations and basic setting

Let (Ω, F , P) be a complete probability space on which an m-dimensional Brownian motion B = (B t ) t≥0 is defined. Let F = (F t ) t≥0 denote the P-completion of the filtration generated by B. Notice that F is right-continuous, so that it satisfies the usual conditions. Furthermore, let T > 0 and let A be a Polish space, with B(A) being its Borel σ-algebra. We denote by A the family of all F-progressively measurable processes α : [0, T ] × Ω → A. Finally, for every p ≥ 1, we denote by S p (F) the set of d-dimensional continuous F-progressively measurable processes X : [0, T ] × Ω → R d such that E[sup 0≤s≤T |X s | p ] < ∞. The state space of the stochastic optimal control problem is the set C([0, T ]; R d ) of continuous paths, endowed with the supremum norm ∥x∥ T = sup 0≤s≤T |x(s)|, where |x(s)| denotes the Euclidean norm of x(s) in R d . We remark that (C([0, T ]; R d ), ∥·∥ T ) is a Banach space and we denote by B its Borel σ-algebra. We also define, for every t ∈ [0, T ], the seminorm ∥·∥ t as ∥x∥ t = ∥x ·∧t ∥ T , x ∈ C([0, T ]; R d ).
On [0, T ] × C([0, T ]; R d ) we consider the pseudometric d ∞ (see the sketch below); on subsets of this space we consider the restriction of d ∞ , which we still denote by the same symbol.
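Since the displayed definition of d ∞ did not survive extraction, the following LaTeX sketch records the pseudometric commonly used in this framework (compare [15] and [54]); the precise form adopted in the paper is an assumption.

```latex
% Hedged sketch: the pseudometric on [0,T] x C([0,T];R^d) typically used in this literature.
\[
  d_\infty\big((t,x),(t',x')\big)
  \;=\; |t - t'| \;+\; \big\| x_{\cdot\wedge t} - x'_{\cdot\wedge t'} \big\|_T ,
  \qquad (t,x),\,(t',x') \in [0,T]\times C([0,T];\mathbb{R}^d),
\]
% where x_{.\wedge t} denotes the path x stopped at time t.
```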
A function w : [0, +∞) → [0, +∞) is a modulus of continuity if w is continuous, increasing, subadditive, and w(0) = 0.
We refer to [31,Appendix D] for more details on the notion of modulus of continuity.
Assumptions and state equation
We consider the coefficients b, σ, f and g, on which we impose the following assumptions.
(ii) There exists a constant K ≥ 0 such that b, σ, f, g are Lipschitz continuous in the path variable with constant K, with (tr(σσ ⊤ (t, x, a))) 1/2 denoting the Frobenius norm of σ(t, x, a).
Assumption (B). The maps b, σ, f are uniformly continuous in t, uniformly with respect to the other variables. In particular, there exists a modulus of continuity w controlling this time dependence.

Assumption (C). (i) The diffusion coefficient σ is "cylindrical" in the path variable, namely it depends on the path only through a finite number of deterministic forward integrals against some continuously differentiable maps ϕ 1 , . . . , ϕ d : [0, T ] → R, where the above deterministic forward integrals are defined as in Definition B.1 with T replaced by t (see also Remark 2.3).
(ii) There exists a constant K ≥ 0 such that for all (t, a) ∈ [0, T ] × A, y, y ′ ∈ R dd , with |y − y ′ | denoting the Euclidean norm of y − y ′ in R dd .
Remark 2.3. Since the functions ϕ 1 , . . . , ϕ d appearing in Assumption (C)-(i) are continuously differentiable, we can use the integration by parts formula (B.1) to rewrite the forward integrals accordingly.

Remark 2.4. By the Lipschitz continuity of b, σ, f, we deduce that they satisfy the following non-anticipativity condition: at time t they depend only on the path stopped at t.

For every t ∈ [0, T ], ξ ∈ S 2 (F), α ∈ A, the state process satisfies a system of controlled stochastic differential equations analogous to (1.1), with initial condition ξ on [0, t].
Value function
Given t ∈ [0, T ] and x ∈ C([0, T ]; R d ), the stochastic optimal control problem consists in finding α ∈ A maximizing the gain functional J(t, x, α). Finally, the value function v is defined in (2.5) as the supremum of J(t, x, ·) over A.

Proposition 2.6. The value function v is bounded and jointly continuous; in particular it satisfies estimate (2.6), which gives Lipschitz continuity in the path variable.

Proof. The boundedness of v follows directly from the boundedness of f and g. Moreover, estimate (2.6) follows from [53, Theorem 3.7] (notice however that in [53, Lemma 3.6] the right-hand side of estimate (29) should be replaced by the corresponding quantity in the right-hand side of (2.6) with x = x ′ ; this is indeed a consequence of the proof of that lemma when estimating the term denoted "Part1").
Dynamic programming principle
In Section 3, Theorem 3.4, we prove that the value function v is a viscosity solution of a suitable path-dependent Hamilton-Jacobi-Bellman equation. The proof of this property is standard and it is based, as usual, on the dynamic programming principle which is stated below. We prove it relying on [12, Theorem 3.4] and on the next two technical Lemmata 2.7 and 2.8. For other rigorous proofs of the dynamic programming principle in the path-dependent case we refer to [28,29]. We begin by introducing some notation. For every t ∈ [0, T ], let F t = (F t s ) s∈[0,T ] be the P-completion of the filtration generated by the Brownian increments after time t, and let P rog(F t ) denote the σ-algebra of F t -progressive sets. Finally, let A t be the subset of A of all F t -progressively measurable processes.
Lemma 2.7. Suppose that Assumption (A) holds. Then, the value function defined by (2.5) satisfies v(t, x) = sup α∈A t J(t, x, α); since A t ⊆ A, it suffices to prove the inequality sup α∈A J(t, x, α) ≤ sup α∈A t J(t, x, α). (2.8)

Proof. We split the proof of (2.8) into four steps.
Step I. Additional notations. We firstly fix some notations. Let (R m ) [0,t] be the set of functions from [0, t] to R m , endowed with the product σ-algebra B(R m ) [0,t] generated by the finite-dimensional cylindrical sets. Consider the map B t : Ω → (R m ) [0,t] defined as B t (ω) = (B s (ω)) s∈[0,t] . Such a map is measurable with respect to F t ; as a matter of fact, the preimage under B t of a finite-dimensional cylindrical set C t1,...,tn (H) clearly belongs to F t . In addition, the σ-algebra G t generated by B t satisfies G t ∨ N = F t , where N is the family of P-null sets. Finally, let (E t , E t ) be the measurable space given by E t = [t, T ] × Ω and E t = P rog(F t ). Then, we denote by I t : E t → E t the identity map.
Step II. Representation of α. Given α ∈ A, let us prove that there exists a map a t : [t, T ] × Ω × (R m ) [0,t] → A such that: 1) a t is measurable with respect to the product σ-algebra P rog(F t ) ⊗ B(R m ) [0,t] ; 2) the processes α |[t,T ] (denoting the restriction of α to [t, T ]) and (a t (s, ·, B t )) s∈[t,T ] are indistinguishable.
In order to prove the existence of such a map a t , we begin noticing that the following holds: where the second equality follows from the fact that N , the family of P-null sets, is contained in both F t and F t s . Recalling that α is F-progressively measurable, we have that α |[t,T ] is progressively measurable with respect to the filtration In other words, the map α Now, recall the definitions of I t and B t from Step I, and let denote still by the same symbol B t the canonical extension of B t to [t, T ] × Ω (or, equivalently, to E t ), defined as B t : Then, the σ-algebra generated by the pair Therefore, by Doob's measurability theorem (see for instance [38, Lemma 1.13]) it follows that the restriction of α to [t, T ] can be represented as follows: Step III. The stochastic process X t,x,B t . Given α ∈ A, let a t be as in Step II. For every y ∈ (R m ) [0,t] , let X t,x,y be the unique solution in S 2 (F) to the following equation: dX s = b(s, X, a t (s, ·, y)) ds + σ(s, X, a t (s, ·, y)) dB s , s ∈ (t, T ], (2. 9) By the standard Picard iteration argument for the existence of a solution to equation (2.9), together with Proposition 1 in [52], we can deduce that the random field X : As a consequence, we can consider the composition of X t,x,y and B t , denoted X t,x,B t . Using the independence of G t = σ(B t ) and F t T , we deduce that the process X t,x,B t satisfies the following equation: (2. 10) As a matter of fact, we have E sup where the last equality follows from the so-called freezing lemma, see for instance [ x,y , a t (r, ·, y))dr − s t σ(r, X t,x,y , a t (r, ·, y))dB r = 0.
Hence E sup This shows that X t,x,B t solves equation (2.10). Now, recalling from Step II that α |[t,T ] and (a t (s, ·, B t )) s∈[t,T ] are indistinguishable, and noticing that the solution to equation (2.11) below depends on α only through its values on [t, T ] (namely, it depends only on α |[t,T ] ), we conclude that X t,x,B t solves the same equation of X t,x,α , namely From pathwise uniqueness for equation (2.11), we get that X t,x,B t and X t,x,α are also indistinguishable.
Step IV. The stochastic process X t,x,B t . Given α ∈ A, let a t be as in Step II and X t,x,B t as in Step III. Then, we have Denoting by µ t the probability distribution of B t on ((R m ) [0,t] , B(R m ) [0,t] ), and recalling the independence of G t = σ(B t ) and F t T , by Fubini's theorem we obtain E T t f s, X t,x,y , a t (s, ·, y) ds + g X t,x,y µ t (dy).
Now, fix some a 0 ∈ A and, for every y ∈ (R m ) [0,t] , denote Notice that β y ∈ A t . Moreover, recalling that X t,x,y solves equation (2.9), we see that it solves the same equation of X t,x,β y . Then, by pathwise uniqueness, X t,x,y and X t,x,β y are indistinguishable. In conclusion, we obtain This proves that J(t, x, α) ≤ sup γ∈At J(t, x, γ), for every α ∈ A. Then, inequality (2.8) follows from the arbitrariness of α.
The next lemma expresses in terms of v the value of the optimal control problem formulated at time t, with random initial condition ξ ∈ S 2 (F). In order to state such a lemma, we introduce the function V : [0, T ] × S 2 (F) → R defined as follows: V (t, ξ) = sup α∈A E[ ∫ t T f (s, X t,ξ,α , α s ) ds + g(X t,ξ,α ) ], (2.12) for every t ∈ [0, T ], ξ ∈ S 2 (F). Clearly, when ξ is deterministic and equal to some x ∈ C([0, T ]; R d ), V (t, ξ) coincides with v(t, x). Lemma 2.8. Suppose that Assumption (A) holds. Let t ∈ [0, T ] and ξ ∈ S 2 (F), then V (t, ξ) = E[v(t, ξ)]. (2.13) Proof. We begin noting that for t = 0 it is clear that equality (2.13) holds true; as a matter of fact, F 0 is the family of P-null sets, therefore ξ is a.s. equal to a constant path and (2.13) reduces to the equality V (0, ξ) = v(0, ξ). For this reason, in the sequel we suppose that t > 0. We split the rest of the proof into four steps.
Step II. Preliminary remarks. We begin noting that X t,ξ,α = X t,ξ ·∧t ,α , so that it is enough to prove equality (2.13) with ξ ·∧t in place of ξ. More generally, we shall prove the validity of (2.13) in the case when ξ ∈ S 2 (F t ). Now, recall that v is Lipschitz in the variable x (see Proposition 2.6) and observe that, by the same arguments, V is also Lipschitz in its second argument. Furthermore, both v and V are bounded. Notice also that given ξ ∈ S 2 (F t ) there exists a sequence {ξ k } k ⊂ S 2 (F t ) converging to ξ, with ξ k taking only a finite number of values. As a consequence, from the continuity of v and V , it is enough to prove (2.13) with ξ ∈ S 2 (F t ) taking only a finite number of values. Then, from now on, let us suppose that ξ takes the form (2.14) for some n ∈ N, with (Ω i ) i=1,...,n being a partition of Ω.
Theorem 2.9. Suppose that Assumption (A) holds. Then the value function v satisfies the dynamic programming principle: for every t, s ∈ [0, T ], with t ≤ s, and every x ∈ C([0, T ]; R d ),

v(t, x) = sup α∈A E[ ∫ t s f (r, X t,x,α , α r ) dr + v(s, X t,x,α ) ].

Proof. This follows directly from [12, Theorem 3.4] and Lemma 2.8. As a matter of fact, let V be the function given by (2.12). From [12, Theorem 3.4] we get the corresponding dynamic programming principle for V . Moreover, by Lemma 2.8 we can express V in terms of v, from which the claim follows.
Definition of path-dependent viscosity solutions
In the present paper we adopt the standard definitions of pathwise (or functional) derivatives of a map u : [0, T ] × C([0, T ]; R d ) → R, as they were introduced in the seminal paper [23], and further developed by [8,9] and [15, Section 2]. We report in Appendix A a concise presentation of these tools. Just to fix notations, we recall here that the pathwise derivatives of a map u are the horizontal derivative ∂ H t u and the vertical derivatives of first and second order ∂ V x u, ∂ V xx u. We also refer to Definition A.4 (resp. Definition A.6) for the definition of the corresponding class of smooth functionals. The reason for which we consider this class is due to the definition of viscosity solution adopted; for more details see Remark 3.3. Finally, we recall the functional Itô formula available for maps u ∈ C 1,2 . Now, consider the following second-order path-dependent partial differential equation (3.1). Definition 3.1. We say that a function u : [0, T ] × C([0, T ]; R d ) → R is a classical solution of (3.1) if u is smooth in the above sense and satisfies (3.1). Definition 3.2. We say that an upper semicontinuous function u is a (path-dependent) viscosity subsolution of (3.1), and that a lower semicontinuous function u is a (path-dependent) viscosity supersolution of (3.1), if the corresponding one-sided conditions (3.2) and (3.3) hold. We say that a continuous map u is a (path-dependent) viscosity solution of (3.1) if u is both a (path-dependent) viscosity subsolution and a (path-dependent) viscosity supersolution of (3.1).
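The displayed equations (3.1)–(3.5) did not survive extraction; the following LaTeX sketch records the form these objects typically take in this setting (compare [25] and [54]). The sign convention and the exact expression of the Hamiltonian are assumptions, not a quotation of the paper's formulas.

```latex
% Hedged sketch of the generic path-dependent PDE and of its HJB specialization.
\[
  \partial^H_t u(t,x) + F\big(t, x, u(t,x), \partial^V_x u(t,x), \partial^V_{xx} u(t,x)\big) = 0,
  \qquad (t,x)\in[0,T)\times C([0,T];\mathbb{R}^d),
\]
% with a terminal condition at time T.  For the HJB equation associated with the control
% problem, F is the Hamiltonian (assumed form) and the terminal datum is g:
\[
  F(t,x,r,p,M) \;=\; \sup_{a\in A}\Big\{ \langle b(t,x,a),\,p\rangle
      + \tfrac12\,\mathrm{tr}\!\big(\sigma\sigma^{\!\top}(t,x,a)\,M\big) + f(t,x,a) \Big\},
  \qquad u(T,\cdot) = g.
\]
```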
Remark 3.3.
Differently from the standard definition of viscosity solution usually adopted in the non-path-dependent case (see for instance [16]), notice that in Definition 3.2 the maxima/minima are taken on [t, T ] in place of [0, T ] (i.e. the maxima/minima are "one-sided").
In the non-path-dependent case it is known that, even in infinite dimension, our "one-sided" definition is equivalent to the standard "two-sided" one (see e.g. [31,Lemma 3.39]). In addition, notice that the value function (say v) of our stochastic control problem is a viscosity solution of the HJB equation in both senses. As a matter of fact, the DPP, which is the main tool in order to prove the viscosity properties of the value function, only involves the values of v = v(s, y) for s ≥ t.
We observe that the fact of taking the maxima/minima on the right time interval is generally adopted in the literature on viscosity solutions of path-dependent PDEs, as for instance in [25,26,27,47,50,49,11], where the notion of viscosity solution introduced involves the maxima/minima of an expectation of future (that is, on [t, T ]) values of a suitable underlying process. In our case, the reason for considering such one-sided maxima/minima is due to the proof of the comparison Theorem 4.5. In particular, it is due to the gauge-type function implemented in that proof, which is introduced in Lemma 4.2 and denoted by κ ∞ . More precisely, given a fixed point (t 0 , x 0 ), the function κ ∞ behaves as a gauge-type function only at points (t, x) with t ≥ t 0 (accordingly, the related function ρ ∞ is set equal to +∞ for t < t 0 , see Section 4). However, if we were able to find another gauge-type function Ψ with analogous smoothness properties but without this one-sided restriction, then the standard two-sided definition of viscosity solution could be adopted.
The value function solves the path-dependent HJB equation
Now, we focus on the path-dependent Hamilton-Jacobi-Bellman equation, namely on equation (3.1) with F given by the Hamiltonian (3.4); we refer to the resulting equation as (3.5). We now prove that the value function v is a viscosity solution to equation (3.5). Proof. Recall from Proposition 2.6 that v is continuous; moreover v(T, ·) ≡ g(·). Then it remains to prove both the subsolution and the supersolution properties. Concerning the subsolution property, from Theorem 2.9 we know that, for every h > 0 sufficiently small, the dynamic programming inequality holds, where the test function ϕ can be substituted for v at time t + h thanks to the fact that v ≤ ϕ on [t, T ] × C([0, T ]; R d ) and v(t, x) = ϕ(t, x). By the functional Itô formula (A.1), we can expand ϕ(t + h, X t,x,α ). Recalling that b, σ, f are uniformly continuous in their first two arguments, uniformly with respect to a, and using (2.4), we obtain an estimate with a remainder ρ(h) → 0 as h → 0 + . Sending h → 0 + , we conclude that (3.2) holds (with F given by (3.4)).
Concerning the supersolution property, from Theorem 2.9 we have, for every h > 0 sufficiently small and for every constant control strategy α ≡ a ∈ A, the corresponding dynamic programming inequality, where the test function ϕ can be substituted for v at time t + h thanks to the fact that v ≥ ϕ on [t, T ] × C([0, T ]; R d ) and v(t, x) = ϕ(t, x). Now, by the functional Itô formula (A.1), we can expand ϕ(t + h, X t,x,a ).
Letting h → 0 + , exploiting the regularity of ϕ and the continuity of b, σ, f, and then using the arbitrariness of a, we conclude that (3.3) holds (with F given by (3.4)).
Smooth variational principle
This section is devoted to stating a smooth variational principle on [0, T ] × C([0, T ]; R d ), which will be an essential tool in the proof of the comparison theorem (Theorem 4.5). Notice that such a smooth variational principle is obtained from the Borwein-Preiss-type variational principle of [41]. In the proof of the comparison theorem we need a gauge-type function Ψ such that the map (t, x) → Ψ((t, x), (t 0 , x 0 )) is smooth for every fixed (t 0 , x 0 ). Notice that d ∞ is obviously a gauge-type function, however it is not smooth enough.
Then, κ ∞ is continuous and satisfies the inequalities (4.2), and its horizontal derivative is identically equal to zero. Its vertical derivatives of first and second order satisfy (4.3) and (4.4) for some constant c > 0, for every i, j = 1, . . . , d. Finally, the pseudo-triangle inequality (4.5) holds. Proof. The claim follows from [54, Lemma 3.1], apart from (4.5). More precisely, let Υ m,M be the function defined at the beginning of [54, Section 3]. Then, notice that κ ∞ corresponds to Υ 2,3 . As a consequence, (4.2) follows from inequalities (3.1) in [54]. In addition, the smoothness of κ ∞ ((·, ·), (t 0 , x 0 )) for every fixed (t 0 , x 0 ) follows as well. Moreover, the fact that its horizontal derivative is identically equal to zero is proved at the beginning of the proof of [54, Lemma 3.1]. Concerning estimate (4.3), this follows from the explicit expressions of the first-order vertical derivatives of κ ∞ reported in (3.8) of [54]. Finally, estimate (4.4) follows from the explicit expressions of the second-order vertical derivatives of κ ∞ given in (3.14) of [54]. Regarding (4.5), it follows from (4.2). The next result provides a gauge-type function with bounded derivatives, built starting from κ ∞ in (4.1).
Then, ρ ∞ is a gauge-type function. In addition, for every fixed (t 0 , x 0 ), the map (t, x) → ρ ∞ ((t, x), (t 0 , x 0 )) is smooth and has bounded derivatives.
Proof. The claim follows directly from Lemma 4.2. As a matter of fact, from the continuity of κ ∞ we deduce the property required in item a) of Definition 4.1 for every fixed (t 0 , x 0 ). Moreover, item b) is obvious, while item c) follows from inequalities (4.2). Finally, the fact that, for every fixed (t 0 , x 0 ), the map (t, x) → ρ ∞ ((t, x), (t 0 , x 0 )) is smooth with bounded derivatives follows from the regularity of κ ∞ and the estimates on its derivatives stated in Lemma 4.2.
We can finally state the smooth variational principle on [0, T ] × C([0, T ]; R d ). Theorem 4.4. Let λ > 0, δ > 0, and let G : [0, T ] × C([0, T ]; R d ) → R be an upper semicontinuous map, bounded from above. Then there exist a sequence of points (t i , x i ) i≥0 and a function ϕ fulfilling the following properties.
iv) It holds that t 0 ≤t and t i ≤t, for every i ≥ 1.
In addition, the restriction of ϕ to and its derivatives are bounded by cδ, for some constant c ≥ 0, independent of λ, δ.
Concerning item iv), this is a consequence of the fact that we set ρ ∞ ((t, x), (t ′ , x ′ )) equal to +∞ for t < t ′ . More precisely, item iv) can be deduced looking at the proof of [41, Theorem 1] (see in particular formula (18) in [41], where (t 1 , x 1 ) is introduced, and, more generally, formula (21) in [41]). Finally, the properties of ϕ follow from the properties of ρ ∞ stated in Corollary 4.3 and from item iv).
Comparison theorem and uniqueness
for every i = 1, 2, (t, for some constants C i ≥ 0, q i , p i ∈ (0, 1]. Suppose that u 1 (resp. u 2 ) is a (path-dependent) viscosity subsolution (resp. supersolution) of equation ( Step I. Proof of u 1 ≤ v. We proceed by contradiction and assume that sup(u 1 − v) > 0. Then, there exists 18). Moreover, for every n and any ε ∈ (0, 1), consider the functions v n,ε ∈ C 1,2 with v n,ε classical solution of the following equation: where y t,x n is given by (B. 9). Notice that the term 1 2 ε 2 tr[∂ yyvn,ε (t, y t,x n )] depends on the functionv n,ε rather than on v n,ε , see Remark B.6. We split the rest of the proof of Step I into four substeps.
(4. 15) Now, for each r ∈ (0,r), let C r > 0 be such that the map y → r 4 y − r p √ y is strictly positive for y ≥ C r . Then, given r ∈ (0,r) and k ≥ C r , we construct a map G n,ε,k,r : for some smooth functions h k,r and ψ k,r , such that G n,ε,k,r satisfies the following properties: x), (t 0 , x 0 )) ≤ r 4 }, for some constant D r > 0, possibly depending on r, but independent of n, ε, k.
Then, we define G n,ε,k,r as follows: (4. 18) Since we are assuming k ≥ C r , then r 4 k − r p √ k > 0. We begin noting that Property 1). Notice that property 1) above holds true: Property 2). This property follows from (4. 13), the fact that h k,r ≥ 0 and ψ k,r ≥ 0: Property 3). Regarding property 3), by (4. 13) and (4. 19), for all (t, 20) from which property 3) follows. As a matter of fact, (4. 20) can be written as (4. 21) On the other hand, using the concavity of the map y → y 1/k , we have where the last inequality follows from the fact that 2 1/k ≥ 1. Therefore, (4.21) holds true if This yields which is indeed an equality due to the expression of ℓ k,r in (4. 18).
Property 5).
We recall that the function h k,r takes the following form: Its derivatives are given by: The claim follows if we prove that the quantities multiplying 1 √ k are bounded on the set {|t − t 0 | 4 + κ ∞ ((t, x), (t 0 , x 0 )) ≤ r 4 } by some constant D r > 0, independent of k, but possibly depending on r. Consider for instance the first term appearing in the second-order derivative (the other terms can be treated in a similar way):
By (4.2) and (4.4), we have
We conclude that the quantity cℓ 1− 1 k k,r r 2 is bounded in k by some constant depending on r.
Recalling that t 0 < T , we see thatt < T for r small enough. Using again (4. 26) and (4.2), we obtain Substep I-d. We apply the definition of viscosity subsolution of (4.9) toũ 1 at the point (t,x) with test function (t, x) →ṽ n,ε (t, x) + h k,r (t, x) + ϕ(t, x), where (recall from (4. 19) that ψ k,r does not appear because of (4.26 )) Then, we obtaiñ Recalling thatṽ n,ε is a classical solution of equation (4.10), we find 28) where yt ,x n is given by (B.9) with t and x replaced respectively byt andx. By item ii) of Theorem 4.4 and the fact that ϕ ≥ 0, we have In addition, using the boundedness of b and σ, and recalling from Theorem 4.4 that the derivatives of ϕ are bounded by cδ, we deduce that there exists a constant Λ ≥ 0, independent of n, ε, k, r, δ, such that Similarly, recall from property 5) of Substep I-b that the derivatives of h k,r are bounded by Dr √ k on the set {|t − t 0 | 4 + κ ∞ ((t, x), (t 0 , x 0 )) ≤ r 4 }. Since by (4. 26) it holds that |t − t 0 | 4 + κ ∞ ((t,x), (t 0 , x 0 )) ≤ r 4 , there exists some constant Ξ r ≥ 0, independent of n, ε, k, δ, but possibly depending on r, such that Plugging the last three estimates into (4. 28) we get, using also estimates (B. 33) and (B. 34), Recall that yt ,x n is given by (B.9). Then, from (B.11)-(B. 12) we see that there exists a constant c ≥ 0, independent of n, ε, δ, such that Hence where the last inequality follows from (4. 27). Therefore, from (4. 29) we obtain
Now, notice that
where the last inequality follows from (4. 27). Then, using estimate (B. 16) with h and h n replaced respectively by b and b n or f and f n , we get where we recall that w is the modulus of continuity of b and f with respect to the time variable. Now we pass to the limit. First, we send ε → 0 + so the first term on the right-hand side of (4. 30) goes to zero. Then, we send n → +∞ so the second term on the second line in (4. 30) goes to zero, together with the two terms on the third line. Afterwards, we let k → ∞ and δ → 0 + . Finally, we send r → 0 + , obtaining which gives a contradiction to (4. 8).
Step II. Proof of v ≤ u 2 . It is enough to show that x,a , a dr + v s, X t,x,a , (4. 31) for x,a corresponds to the process X t,x,α with α ≡ a. As a matter of fact, it holds that where the validity of the above inequality can be shown simply taking s = t in the left-hand side. For every Notice that applying Proposition 2.6 with g, T, A replaced by v(s, ·), s, {a}, respectively, we deduce that v s,a is bounded, jointly continuous on [0, s] × C([0, T ]; R d ), and there exists a constantĉ ≥ 0 (depending only on T and K) such that In order to prove (4.31), we proceed by contradiction and suppose that there exist (t 0 , It holds that t 0 < T , otherwise t 0 = s 0 = T and u 2 (T, x 0 ) ≥ g(x 0 ) = v T,a0 (T, x 0 ). Moreover, we can suppose, without loss of generality, that t 0 < s 0 . As a matter of fact, by (4.32) and the fact that t 0 < T , there exists Notice thatũ 2 is a (path-dependent) viscosity supersolution of the following path-dependent partial differential equation: Similarly,ṽ s0,a0 and is a classical solution of the following equation: We begin noting that by (4.7) and item 4) of Theorem B.8, there exist constants M ≥ 0 and p ∈ (0, 1/2], independent of n, ε, such that To alleviate notation, we use the same symbols M and p as in Step I. Now, we proceed along the same lines as in Substep I-b. In particular, we consider the function As in Substep I-b we can prove that, for k ≥ C r , Recalling that t 0 < s 0 , by the first inequality we deduce thatt < s 0 for r small enough. Now, we can proceed along the same lines as in Substep I-d to get a contradiction and conclude the proof. We only notice that in order to use the viscosity supersolution property ofũ 2 at the point (t,x) we need to extend v s0,a0 in such a way that the extension is still smooth. We can do this extending by reflection (see [37]), namely defining the function v 1,s0,a0 Notice that v 1,s0,a0 n is non-anticipative. If 2s 0 ≥ T we have finished, otherwise we extend again v 1,s0,a0 Finally, we can state the following uniqueness result. Proof. If u is another (path-dependent) viscosity solution of equation (3.5) satisfying (4.7), then, by Theorem 4.5, we get the two following inequalities: from which the claim follows.
Appendix A Pathwise derivatives and functional Itô's formula
In the present appendix, we briefly recall the definitions of pathwise (or functional) derivatives following [15, Section 2], to which we refer for more details.
As we follow the standard approach (as it was introduced in the seminal paper [23]), in order to introduce the pathwise derivatives for a map u : [0, T ] × C([0, T ]; R d ) → R, we first consider maps û defined on the larger space of càdlàg paths; on [0, T ] × C([0, T ]; R d ) we consider the restriction of d ∞ , which we still denote by the same symbol.
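Before the formal statements recalled next, the following LaTeX sketch records the standard Dupire horizontal and vertical derivatives (cf. [23]) for a map û on the càdlàg space; the exact bump conventions used in the paper are assumptions.

```latex
% Hedged sketch of the standard horizontal and vertical (Dupire) derivatives of a map
% \hat u defined on [0,T] x D([0,T];R^d); conventions may differ slightly from the paper's.
\[
  \partial^H_t \hat u(t,x)
    \;=\; \lim_{\delta\to 0^+}\frac{\hat u\big(t+\delta,\, x_{\cdot\wedge t}\big)-\hat u\big(t,\, x_{\cdot\wedge t}\big)}{\delta},
\]
\[
  \partial^V_{x_i}\hat u(t,x)
    \;=\; \lim_{h\to 0}\frac{\hat u\big(t,\, x_{\cdot\wedge t}+h\,e_i\,\mathbf 1_{[t,T]}\big)-\hat u\big(t,\, x_{\cdot\wedge t}\big)}{h},
  \qquad
  \partial^V_{x_i x_j}\hat u \;=\; \partial^V_{x_i}\big(\partial^V_{x_j}\hat u\big).
\]
```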
With t < T , the horizontal derivative of û at (t, x) is defined as the limit (if it exists) of the incremental ratio obtained by moving the time variable forward while freezing the path; the vertical derivatives of first and second order of û at (t, x) are defined as the limits (if they exist) of the incremental ratios obtained by bumping the path at time t along the directions e 1 , . . . , e d , the standard orthonormal basis of R d . We also denote ∂ V x û = (∂ V x1 û, . . . , ∂ V x d û) and ∂ V xx û = (∂ V xixj û) i,j=1,...,d . We can now define the pathwise derivatives for a map u : [0, T ] × C([0, T ]; R d ) → R. To this end, the following consistency property plays a crucial role.
then the same holds for their pathwise derivatives at every (t, x) ∈ [0, T ] × C([0, T ]; R d ). Proof. See [15, Lemma 2.1].
We can now give the following definition.
We also define, for every (t, x) ∈ [0, T ] × C([0, T ]; R d ), the pathwise derivatives of u at (t, x) through those of û. Remark A.5. Thanks to the consistency property stated in Lemma A.3, the definition of pathwise derivatives of u does not depend on the map û appearing in Definition A.4.
In the present paper we also need to consider a subset of this class, which is introduced in Definition A.6. Finally, we state the so-called functional Itô formula.
Let (Ω, F , (F t ) t∈[t0,T ] , P) be a filtered probability space, with (F t ) t∈[t0,T ] satisfying the usual conditions, on which a d-dimensional continuous semimartingale X = (X t ) t∈[t0,T ] is defined, with X = (X 1 , . . . , X d ). Then, the functional Itô formula (A.1) holds.
B.1 The deterministic calculus via regularization
In the present appendix, we need to consider "cylindrical" maps defined on C([0, T ]; R d ), namely maps depending on a path x ∈ C([0, T ]; R d ) only through a finite number of integrals with respect to x. An integral with respect to x can be formally written as "∫ [0,T ] ϕ(s) dx(s)". In order to give a meaning to the latter notation, it is useful to notice that we look for a deterministic integral which coincides with the Itô integral when x is replaced by an Itô process (such a property will be exploited in the sequel). This is the case if we interpret "∫ [0,T ] ϕ(s) dx(s)" as the deterministic version of the forward integral, which we now introduce and denote by ∫ [0,T ] ϕ(s) d − x(s). For more details on such an integral and, more generally, on the deterministic calculus via regularization we refer to [20, Section 3.2] and [13, Section 2.2]. The only difference with respect to [20] and [13] is that here we consider d-dimensional paths (with d possibly greater than 1), even though, as usual, we work component by component, therefore relying on the one-dimensional theory. When ϕ is continuous and of bounded variation, an integration by parts formula provides an explicit representation of the forward integral of ϕ with respect to x.
Proposition B.2. Let x : [0, T ] → R d be a càdlàg function and let ϕ : [0, T ] → R be càdlàg and of bounded variation. The following integration by parts formula holds: where in the last equality we set ϕ(s) := 0 for s < 0. Hence Since ϕ is càdlàg, we have On the other hand, by Fubini's theorem, we have Since x is right-continuous we have that 1 ε r+ε r x(s) ds → x(r) as ε → 0 + . Moreover, since x is bounded (being a càdlàg function), by Lebesgue's dominated convergence theorem we conclude that More precisely, (B. 16) holds.
2) If h satisfies items a) and b) (resp. a) and c)) then h n also satisfies the same items. In particular, h n satisfies item a) with constant 2K and item b) with a constant Ǩ ≥ 0, depending only on K (resp. item c) with the same constant K).
Proof. We split the proof into six steps.
Step I. Definitions of x pol n,y and x t,pol n . For every n ∈ N, consider the n-th dyadic mesh of the time interval [0, T ], that is 0 = t n 0 < t n 1 < . . . < t n 2 n = T, with t n j := j 2 n T, for every j = 0, . . . , 2 n .
For every y = (y 0 , . . . , y 2 n ) ∈ R d·(2 n +1) , we consider the corresponding n-th polygonal, denoted x pol n,y , which is an element of C([0, T ]; R d ) and is characterized by the following properties: • x pol n,y (t j n ) = y j , for every j = 0, . . . , 2 n ; • x pol n,y is linear on every interval [t n j−1 , t n j ], for any j = 1, . . . , 2 n .
So, in particular, x pol n,y is given by the following formula: for every s ∈ [t n j−1 , t n j ] and any j = 1, . . . , 2 n . Now, given t ∈ [0, T ] and x ∈ C([0, T ]; R d ), we denote where the second equality follows from the integration by parts formula (B.1).
Step III. Definitions ofh n , h n and proof of item 3). For every n ∈ N, let η n : R → R be given by Notice that ∞ 0 η n (s)ds = 1. Moreover, for every n ∈ N, let ζ n : R d·(2 n +1) → R be the probability density function of the multivariate normal distribution N (0, (d(2 n + 1)) −2 I d(2 n +1) ), where I d(2 n +1) denotes the identity matrix of order d(2 n + 1): with φ n,j as in (B. 8). From the continuity of h, we see that both h n andh n are continuous.
Step V. Proof of items 2) and 4). It is clear that h n (resp.h n ) satisfies item c) (resp. 4)-iii)) with the same constant K. If h satisfies item b) then |h(t, x, a)| ≤ K(1 + x t ), therefore, by (B. 6), where the last equality follows from (B. 15). Since d(2 n + 1) ≥ 1, we get |h n (t, y, a)| ≤ K (1 + |y|) + 2 K, which proves item 4)-ii). Concerning h n , we have which proves that h n satisfies item b) with a constantǨ, depending only on K.
Let us now prove that h n andh n satisfy respectively item a) and item 4)-i). We have By the integration by parts formula (B.1), we get Hence Since t 0 |φ ′ n,j (s)| ds = − t 0 φ ′ n,j (s) ds = 1 − φ n,j (t) and 1 − φ n,j (t) ≤ 1, we conclude that which proves that h n satisfies item a) with constant 2K.
Remark B.6. In equation (B. 21) with unknown u, the term 1 2 ε 2 tr[∂ yyvε (t, y t,x )] is known as it does not depend on u but it involves the functionv ε . The reason for the presence of this term is due to the fact that we first derive the HJB equation for the functionv ε , then we use equalities (B.29)-(B.30)-(B.31) to derive equation (B. 21) for v ε . However, from those equalities we are not able to rewrite the term 1 2 ε 2 tr[∂ yyvε (t, y t,x )] in terms of v ε , therefore we have left it as it is since this is not relevant for the sequel (actually we could work with the HJB equation satisfied byv ε ).
From similar calculations as in (B. 27), we deduce that (recall that y t,x is given by (B. Then, from the Lipschitz property of f and g, we derive the following Lipschitz property ofv ε |v ε (t,x) −v ε (t,x ′ )| ≤L ′ x −x ′ t , for all t ∈ [0, T ],x,x ′ ∈ D([0, T ]; R d ), for some constantL ′ , depending only on T and K. As a consequence, from the definition of vertical derivative of v ε , we see that item 4) holds.
We can now state the following result, which plays a crucial role in the proof of the comparison theorem (Theorem 4.5), in order to show that u 1 ≤ v. 2) v n,ε is a classical solution of equation (B.21) with b, f, g, v ε , y t,x replaced respectively by b n , f n , g n , v n,ε , y t,x n , where y t,x n is given by (B.9). | 2021-07-15T01:15:53.085Z | 2021-07-13T00:00:00.000 | {
"year": 2021,
"sha1": "e8095680dc4f1df583b25d5028b30505ee227a09",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "e8095680dc4f1df583b25d5028b30505ee227a09",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
242869943 | pes2o/s2orc | v3-fos-license | CNT–TiO2 core–shell structure: synthesis and photoelectrochemical characterization
Porous composite coatings, made of a carbon nanotube (CNT)–TiO2 core–shell structure, were synthesized by a hybrid CVD-ALD process. The resulting TiO2 shell features an anatase crystalline structure that covers the surface of the CNTs uniformly. These composite coatings were investigated as photoanodes for the photo-electrochemical (PEC) water splitting reaction. The CNT–TiO2 core–shell configuration outperforms the bare TiO2 films obtained using the same process, regardless of the deposited anatase thickness. The improvement factor in photocurrent, exceeding 400% for the core–shell structure, was attributed to the enlarged interface area with the electrolyte and to the fast withdrawal of electrons. The estimation of the photo-electrochemically effective surface area reveals that the strong absorption properties of the CNTs severely limit the light penetration depth in the CNT–TiO2 system.
Introduction
Photo-electrochemical (PEC) water splitting is an appealing approach for clean hydrogen energy generation. 1 Hereby, the process is essentially limited by the water oxidation reaction, which drives intense research for the development of high-performance anode materials. In this context, non-oxide semiconductors feature convincing performances; 2 however, they are chemically unstable in acidic and alkaline environments. 3,4 In contrast, several metal oxides exhibit a better chemical stability in the dark and under illumination. 5 Furthermore, metal oxide semiconductors come with additional merits such as cost-effectiveness, non-toxicity and high abundance. In this context, TiO 2 , ZnO, WO 3 , Fe 2 O 3 and BiVO 4 have been intensively investigated in PEC water splitting. 6,7 Among the available metal oxides, TiO 2 has a high abundance and high chemical stability. 8 Titanium oxide features eight polymorphs, among which anatase and rutile have shown a significant photocatalytic activity towards water splitting. 9,10 An anatase-rutile composite forms a heterostructure where charge carrier separation is improved and the bandgap is decreased. As a result, the composite significantly outperforms the photocatalytic properties of the individual constituents. 11 TiO 2 has a suitable positioning of the conduction and valence band energies to drive the hydrogen evolution (HER) and oxygen evolution reactions (OER). This is associated with a band gap in the UV (3.0-3.2 eV), which limits the theoretical efficiency. Furthermore, the low charge carrier mobility with short diffusion length (10-100 nm) 12 imposes either a reduction of its thickness to match the diffusion length scale, or nano-structuring. 13 Further improvements were reported using several approaches such as doping, forming a heterojunction with other semiconductors, and applying a co-catalyst. [14][15][16] Nano-structured TiO 2 is synthesised by different processes such as hydrothermal, solvothermal, titanium-foil anodization and template-assisted processes. 14,15 The poor electrical conductivity of TiO 2 nano-structures and the fast recombination of photogenerated charges limit the PEC water splitting performance, 17 and the addition of CNTs has a beneficial effect. The electron transfer is energetically favourable from the TiO 2 conduction band to the CNT π-system. 18 So far, CNT-TiO 2 structures are synthesised by sol-gel 19,20 and hydrothermal processes, 21,22 which are affected by the challenging CNT dispersion in aqueous media as the unmodified CNTs are hydrophobic. Therefore, the process is difficult to control and heat treatments are needed to enhance the crystallization of TiO 2 . 23 The electronic structure of the CNT-oxide interface is degraded due to the chemical modification of the CNT surface, a step that is necessary to enable their appropriate dispersion. 20,24 The presence of TiO 2 as nanoparticles decorating the CNT surface leads to charge recombination upon interaction with the electrolyte. 25 The performance of TiO 2 -CNT coatings made by hydrolysis results in a photocurrent density of 0.05 mA cm−2 at 1.6 V RHE , 21 while the core-shell CNT-TiO 2 synthesised by a gas-phase process has shown a photocurrent density of 0.16 mA cm−2 at 1 V RHE . 26 In the latter case the CNTs were grown at 750 °C, detached and drawn on the PEC surface prior to the deposition of TiO 2 . 26 Here we propose a simple, one-pot and low-temperature gas-phase process for the synthesis of CNT-TiO 2 core-shell photoanodes.
Materials and methods
The synthesis of the CNT-TiO 2 core-shell film architecture involves a single-pot hybrid Chemical Vapor Deposition-Atomic Layer Deposition (CVD-ALD) process. The deposition of carbon nanotubes on silicon substrates was performed using thermal CVD. An equimolar ethanol solution, 0.65 × 10−3 mol L−1, of cobalt acetylacetonate (Co(acac) 2 ) and magnesium acetylacetonate (Mg(acac) 2 ) was implemented as a single precursor feedstock. This feedstock was introduced into the reactor via an evaporation cylinder at 220 °C, using a pulsed spray with a frequency of 4 Hz and an opening time of 4 ms. The deposition was run for 2 h at 10 mbar, using a substrate temperature of 485 °C. The thickness of the film was assessed via cross-section SEM inspection, and the density was assessed gravimetrically.
The ALD of the TiO 2 shell around the individual CNTs was achieved using the alternated surface exposure to titanium tetra-isopropoxide (TTIP) and water vapor. Here an ALD cycle consists of 4 steps: TTIP/purge/H 2 O/purge, and the growth rate is defined by the deposited thickness of TiO 2 per ALD cycle (growth per cycle, GPC). Both precursors were maintained at room temperature during the process, which conveniently limits possible condensation in the transport lines. The deposition pressure was adjusted at 0.5 mbar, while the temperature and the exposure times were the subject of a systematic study. The thickness of TiO 2 films on planar silicon was measured using a multi-wavelength ellipsometer (Film Sense) with the Cauchy model.
X-ray diffraction (Bruker D8), with Cu-Kα as the X-ray source, was used to identify the present crystalline phases. Here, the data were collected in the grazing incidence mode (incidence angle of 0.5°) while scanning the detector from 0° to 90° with a step size of 0.02°. Raman scattering was performed using an InVia Raman spectrometer from Renishaw with a 633 nm laser and a power density of 87 mW cm−2. The CNT-TiO 2 structure was characterized using transmission electron microscopy (S/TEM Themis Z G3, 300 kV, Thermo Fisher Scientific). The elemental mapping was performed using a combined EDX (energy dispersive X-ray spectrometer) analysis and high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM, 29.5 mrad, probe corrected). The coated CNTs were sampled, by scratching the surface, and deposited on lacey carbon grids. The morphology of the films was inspected using the FEI Helios Nanolab 650 scanning electron microscope (SEM) at a working distance of 5 mm and using an acceleration voltage of 5-10 kV.
A standard three-electrode setup was used for the photoelectrochemical measurements with the Si-CNT-TiO 2 or Si-TiO 2 as the working electrode. All voltages were measured versus an Ag/AgCl reference electrode, and platinum (Pt) was used as the counter electrode. The electrolyte was an aqueous solution of 0.1 M NaOH (pH = 12.7). All the potentials measured against the Ag/AgCl reference were converted to the RHE scale through eqn (1):

E(RHE) = E(Ag/AgCl) + 0.059 × pH + E°(Ag/AgCl) (1)

The electrode area, 2 cm 2 , was front-illuminated using a Xe lamp at 100 mW cm−2. Electrochemical measurements were conducted using an AUTOLAB potentiostat. Steady state current-voltage curves were used for assessing the electrochemical performance, whereas the AC impedance spectroscopy provided information on the contribution of various resistive losses (polarization and ohmic/ionic) to the performance of the photoanodes.
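As a quick numerical check of eqn (1), the short Python sketch below converts between the Ag/AgCl and RHE scales; the standard potential of 0.197 V assumed here corresponds to a saturated-KCl Ag/AgCl electrode, which is an assumption since the electrode filling is not specified in the text.

```python
# Hedged sketch: Ag/AgCl <-> RHE conversion (eqn (1)), assuming a saturated-KCl
# Ag/AgCl reference (E0 ~ 0.197 V at 25 C) and the Nernstian 0.059 V/pH term.
E0_AGCL = 0.197   # V, assumed value for a saturated-KCl Ag/AgCl electrode
NERNST = 0.059    # V per pH unit at ~25 C

def agcl_to_rhe(e_agcl: float, ph: float) -> float:
    """Convert a potential measured vs. Ag/AgCl to the RHE scale."""
    return e_agcl + NERNST * ph + E0_AGCL

def rhe_to_agcl(e_rhe: float, ph: float) -> float:
    """Convert a potential on the RHE scale back to the Ag/AgCl scale."""
    return e_rhe - NERNST * ph - E0_AGCL

if __name__ == "__main__":
    ph = 12.7  # 0.1 M NaOH electrolyte used in this work
    # Under these assumptions the 1.23 V_RHE bias used for chronoamperometry
    # corresponds to roughly 0.28 V vs. Ag/AgCl.
    print(f"1.23 V_RHE = {rhe_to_agcl(1.23, ph):.3f} V vs. Ag/AgCl")
```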
CVD of CNT films
The CNT growth is performed in a single step using a single-feedstock approach. In this process ethanol vapor is thermally converted to CNTs via the mediation of an in situ formed catalyst and promoter. The in situ reaction of ethanol with transition metal acetylacetonates at moderate temperatures yields metallic nanoparticles 27-29 that catalyze the CNT growth, whereas the thermal decomposition of magnesium acetylacetonate yields MgO, whose basicity promotes the CNT growth at temperatures exceeding 330 °C. 30 The resulting films are composed of randomly oriented multi-wall CNTs featuring an average outer diameter of (12 ± 0.6) nm as assessed by SEM inspection. 30 The inner/outer diameters of the CNTs were confirmed by TEM at 5 nm/12 nm, along with the existence of 8 graphene layers. 30 The grown 4 mm thick CNT film on interdigitated electrodes features an electrical resistance of 5 Ω. Such a low electrical resistance results from the strong crosslinking between the MWCNTs. The cross-section morphology of the grown film on silicon substrates, Fig. 1, displays a porous CNT structure for which the density is gravimetrically estimated at 0.4-0.6 mg cm−3. This density is at least three orders of magnitude lower relative to densely packed CNTs. 31 Although the geometric thickness of the film is homogeneous throughout the substrate, the CNTs occupy a marginal volume fraction.
A close inspection at the surface of the CNT film and at the interface with the silicon substrate shows a similar morphology, which is a consequence of the simultaneous introduction of the catalyst and promoter along the deposition process. Cobalt and magnesium were found to be distributed homogeneously across the thickness of the film and their content in the CNT film was estimated using EDX at 4 at% = Co/(C + Co + Mg) and 9 at% = Mg/(C + Co + Mg). It is worth mentioning that the presence of cobalt might contribute to the electrochemical behaviour of the non-coated CNTs.
The as-grown CNT films fail the Scotch-tape adhesion test, as the CNTs are easily detached from the substrate, and they partially detach from the surface when dipped in the electrolyte under sonication. This limitation was overcome via the conformal deposition of metal oxides around the CNTs to form a core/shell structure. 32 In this context, shells of aluminum oxide or silicon oxide were investigated. Here we investigate the ALD of TiO 2 around the CNTs to provide them with the ability to split water in a photoelectrochemical setup. It is worth mentioning that the CVD of CNTs and their coverage with an oxide shell layer can be performed in the same reactor, and the resulting films are mechanically robust and remain unaffected when ultrasonicated or dipped in the electrolytes.
ALD of TiO 2
A systematic study was performed on silicon substrates to establish conditions where the thermal ALD of TiO 2 can be performed. For the investigation of the temperature effect, Fig. 2a, the deposition recipe involved a surface exposure time of 15 s to TTIP and 8 s of exposure to water vapor, both separated by 15 s of purge using 50 sccm of argon. The impact of the surface temperature on the growth per cycle (GPC) is marginal in the 140-195 °C temperature range. A rise of the GPC outside this range is associated with the dominant thermal decomposition at high temperature and the plausible insufficient purging of water vapor at low temperature. A GPC of 0.56-0.58 Å per cycle was measured in the plateau, which agrees with the ~0.5 Å per cycle reported for the hydrolysis of TTIP at 250 °C. 33 The same ALD chemistry was implemented at 80-120 °C (ref. 34) and 160 °C, 35 and GPCs of 0.33 and 0.68 Å per cycle were reported, respectively. The diverging literature data regarding the GPC values might hint at the presence of competing deposition mechanisms. Beyond the great relevance of self-limited reactions for the attainment of conformal coatings on structures with high aspect ratios, such as CNTs, studying the effect of the exposure time helps to understand the ALD process. Investigating the effect of the surface exposure time to water vapor was performed at a deposition temperature of 160 °C, while maintaining the TTIP exposure time at 15 s. The results displayed in Fig. 2b evidence the self-limited hydrolysis reaction step. An exposure time of 8 s is appropriate to completely hydrolyze the adsorbed TTIP, which enables a maximal GPC of ~0.6 Å per cycle. A non-complete hydrolysis at short exposures to water vapor leaves ligand moieties that poison the surface and yield a reduced GPC.
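One simple way to quantify the saturation behaviour described above is to fit the GPC versus exposure-time data with a first-order (Langmuir-type) adsorption model; the sketch below illustrates this. The model form and the example data points are assumptions for illustration only, not the measured values of Fig. 2b or the authors' analysis.

```python
# Hedged sketch: quantifying GPC saturation with the water exposure time using a
# simple first-order (Langmuir-type) model, GPC(t) = GPC_max * (1 - exp(-t / tau)).
# The model and the example data below are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def gpc_model(t, gpc_max, tau):
    return gpc_max * (1.0 - np.exp(-t / tau))

# Illustrative (exposure time in s, GPC in angstrom per cycle) pairs:
t_exp = np.array([1.0, 2.0, 4.0, 8.0, 12.0])
gpc = np.array([0.30, 0.45, 0.55, 0.60, 0.60])

(gpc_max, tau), _ = curve_fit(gpc_model, t_exp, gpc, p0=(0.6, 2.0))
print(f"saturated GPC ~ {gpc_max:.2f} A/cycle, time constant ~ {tau:.1f} s")
```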
Unlike the self-limited behavior observed for the hydrolysis reaction, the TTIP adsorption gives a continuous increase at 160 °C as displayed in Fig. 2c. A strong rise of the GPC with the TTIP exposure time is observed, reaching 2.3 Å per cycle after 120 s. No saturation was observed, which indicates a significant CVD contribution. In this case, the thermolysis of TTIP leads to the growth of a TiO 2 film even in the absence of water vapor.
Decreasing the deposition temperature from 160 °C to 140 °C significantly limited the rise of the GPC with the TTIP exposure, but did not suppress it. Decreasing the deposition temperature would logically further limit the contribution of the CVD component, and likely enable the ALD-typical self-limited growth. Nevertheless, it is worth mentioning that 140 °C is at the low-temperature side of the processing window featuring a constant GPC (Fig. 2a). Therefore, the CVD growth contribution persists in the optimized pseudo-ALD process. The omnipresent CVD contribution might be the reason behind the diverging literature data regarding the reported values of the GPC. The occurrence of a competing CVD pathway was demonstrated below the TTIP thermolysis temperature. [36][37][38] This behavior was attributed to the catalytic effect of the under-coordinated Ti +4 , which is assumed to induce the dehydration of TTIP or of the formed isopropanol. 36,37 As a result, further growth occurs instead of a surface saturation upon exposure to TTIP. Dosing isopropanol onto a surface of TiO 2 (110) shows that the associative dehydration reaction extends from 30 to 180 °C. 39 Therefore, only a pseudo-ALD of TiO 2 can be expected from the hydrolysis of TTIP; nevertheless, limiting the surface exposure to TTIP would reduce the CVD contribution.
CNT-TiO 2 core-shell structure

The ALD of TiO 2 was performed on the CNT layers within the identified temperature window (140-200 °C). The SEM cross-section displayed in Fig. 3 corresponds to a film grown at 160 °C. At first glance, the initial porous structure of the randomly oriented CNTs is retained after the deposition of TiO 2 . The apparent diameter of the CNTs is however significantly larger, 35 nm, relative to the non-coated CNTs (12 nm), and their surfaces feature faceted crystallites. The outer diameter hints at the deposition of a shell with a thickness of 11.5 nm around the CNT core after 200 cycles. This corresponds to a GPC of 0.58 Å per cycle, which is comparable to the growth on planar silicon (Fig. 2). The resulting morphology was further inspected across the thickness of the CNT layer. It is worth highlighting that a slightly higher CNT density is observed at the interface with silicon for the as-grown films. The surface and interface regions, Fig. 3, reveal an identical morphology, and the coated CNTs feature a similar diameter across the layer, which is strong evidence of the conformality of the TiO 2 coating. The further densification of the CNT-TiO 2 layer at the interface is additional evidence of the ability of the ALD to enable a full infiltration.
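The shell thickness and GPC quoted above follow directly from the apparent diameters; a minimal sketch of that arithmetic, using only the figures quoted in the text, is given below.

```python
# Minimal sketch: TiO2 shell thickness and implied growth per cycle, using the
# figures quoted in the text (coated diameter 35 nm, bare CNT diameter 12 nm,
# 200 ALD cycles).
d_coated_nm = 35.0
d_bare_nm = 12.0
n_cycles = 200

shell_nm = (d_coated_nm - d_bare_nm) / 2.0      # radial shell thickness
gpc_angstrom = shell_nm * 10.0 / n_cycles        # growth per cycle in angstrom

print(f"shell thickness ~ {shell_nm:.1f} nm")            # ~11.5 nm
print(f"implied GPC ~ {gpc_angstrom:.3f} A/cycle")        # ~0.575, quoted as 0.58 in the text
```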
Raman scattering and X-ray diffraction on the ALD-grown titanium oxide over CNTs at 140, 160 and 175 °C are displayed in Fig. 4. It is worth mentioning that these films have the same thickness, as the GPC in these temperature conditions is similar, but their analyses show a significant contrast. The CNT characteristic Raman bands at 1345 cm−1 (D band) and 1589 cm−1 (G band) are observed with a low I G /I D ratio for all samples, which was associated with the presence of defects at the outer graphene layer of the MW-CNTs. 30 The anatase fingerprint is only observed for films grown at 160 °C and 175 °C. The most intense and sharp peak at ~140 cm−1, in addition to the peaks at ~200 and 630 cm−1, are attributed to the E g modes. 40 The peak at 395 cm−1 was assigned to a B 1g mode, whereas the peak at 513 cm−1 involves components from A 1g and B 1g . 40 Relative to films grown at 175 °C, the signals are weaker for the films grown at 160 °C, while no peaks can be distinguished for the films grown at 140 °C.
The performed XRD analysis confirms the polycrystalline nature of the film grown at 175 °C. The recorded peaks in the XRD pattern correspond to anatase TiO 2 (pdf 01-075-2547). The films grown at lower temperatures show weak peak intensities of the same phase, indicating their poor crystallinity. The thermal activation during deposition favours the surface diffusion of atoms towards sites with minimized energy. Therefore, the crystallization process improves to reach saturation at sufficiently high temperatures. Weak diffraction peaks corresponding to the Si substrate and the CNTs can be identified.
Photoelectrochemical measurement
TiO 2 deposition at 175 °C was retained for the films destined for the PEC measurements. The evolution of the morphology with the thickness of TiO 2 is illustrated in Fig. 5. The preservation of the porous structure is noteworthy even after the deposition of a 45 nm thick TiO 2 layer. High resolution TEM displayed in Fig. 6 shows the conformal coating of the TiO 2 layer on the CNTs, confirming the formation of a core-shell structure.
The oxygen evolution reaction (OER) occurs at the anode and involves holes, whereas the hydrogen evolution reaction (HER) occurs at the cathode and involves electrons, as shown in eqn (2). The OER requires a potential of 1.23 V RHE , a requirement that might be relaxed when implementing photocatalysts such as TiO 2 . 14

4OH− → O 2 + 2H 2 O + 4e− (OER) (2)
2H 2 O + 2e− → H 2 + 2OH− (HER)
2H 2 O → O 2 + 2H 2 (overall reaction at 1.23 V RHE)

Due to its n-type characteristics, TiO 2 structures are used mostly as anodes for the OER. When n-type semiconductors, such as TiO 2 , are immersed in an electrolyte, an equilibrium is reached by the transfer of electrons from the semiconductor to the electrolyte. The space charge formed at the interface features an internal electric field and inhibits the further transfer of electrons to the electrolyte. However, upon illumination, electron-hole pairs are generated, and the built-in electric field contributes to their separation. The photogenerated holes drift to the surface of the semiconductor and participate in the oxidation of adsorbed water molecules, whereas electrons drift to the bulk under the bias effect and are further transported to the cathode. 6 The O 2 evolution reaction involves 4 holes along with the formation of an O-O bond. In principle, a potential beyond 1.23 V RHE is required for the OER, while the overpotential required for H 2 evolution is far smaller. Hence, the OER is typically considered as the rate limiting step in the water splitting reaction. 41 The extent of water oxidation is assessed by measuring the photocurrent density.
The investigated TiO 2 was applied either on a planar Si substrate or on Si-CNTs. The surface area of CNT-TiO 2 is significantly higher than that of planar TiO 2 , which offers an extended interface with the electrolyte. The surface area in this case was approximated by combining the geometric thickness around the CNTs, as extracted from the SEM observation, and the weight gain as a result of TiO 2 deposition (ESI†). Hereby the weight gain resulting from the ALD of TiO 2 was assumed to be proportional to the real surface on which it is deposited. The surface area resulting from the 10 nm TiO 2 deposition corresponds to 401 cm 2 cm−2. The surface area varies from 401 to 209 cm 2 cm−2 when the thickness of TiO 2 is varied from 10 to 78 nm, which is related to the partial obstruction of the channels between CNTs. While the electrochemical reactivity of CNT-TiO 2 might be related to the entire available TiO 2 -electrolyte interface area, the photoelectrochemical reactivity should take into consideration the light penetration depth and the competing light absorption by the CNTs. These effects reduce substantially the effective surface area of TiO 2 , which can be estimated using cyclic voltammetry with a varied scan rate in the negative potential range. 42 Notwithstanding the relevance of the effective surface area, the photoelectrochemical characterization in this study refers to the geometric area, as sunlight is the factor that triggers the reactivity. The results displayed here correspond to illumination with a flux of 1 sun.
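The detailed surface-area estimate is given only in the ESI; the sketch below shows one plausible way to combine a measured ALD weight gain with the deposited shell thickness to obtain a real-to-geometric area ratio. The anatase density and the example mass value are assumptions for illustration only and are not taken from the paper.

```python
# Hedged sketch: real-to-geometric surface area ratio from the ALD weight gain,
# assuming the TiO2 mass is spread as a conformal shell of known thickness:
#   A_real = delta_m / (rho_TiO2 * t_shell)
# The anatase density (~3.9 g cm^-3) and the example numbers are assumptions.
RHO_TIO2_G_CM3 = 3.9

def area_ratio(delta_m_g: float, t_shell_nm: float, a_geom_cm2: float) -> float:
    """Real surface area per geometric area (cm^2 per cm^2)."""
    t_cm = t_shell_nm * 1e-7                     # nm -> cm
    a_real_cm2 = delta_m_g / (RHO_TIO2_G_CM3 * t_cm)
    return a_real_cm2 / a_geom_cm2

if __name__ == "__main__":
    # Illustrative numbers only: a ~3.1 mg weight gain over a 2 cm^2 electrode with
    # a 10 nm shell would correspond to a ratio of roughly 400 cm^2 per cm^2.
    print(f"area ratio ~ {area_ratio(3.1e-3, 10.0, 2.0):.0f}")
```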
Steady state chronoamperometry measurement was performed at a bias potential of 1.23 V RHE to assess the photocurrent generated during intermittent illumination periods. The current is normalised to the geometric area and the results related to pristine CNTs and bare silicon substrates are depicted in the ESI (Fig. S1†). The results reveal a photocurrent in the order of 1-2 µA cm−2. Fig. 7 shows the equivalent results with the application of various thicknesses of TiO 2 . Upon illumination the current density rose quickly from 0 to reach an equilibrium plateau in the case of TiO 2 films on silicon substrates. However, the CNT-TiO 2 core-shell structure features a residual dark current density that reduced gradually. Here the dark current is attributed to the presence of surface charge trapping, whose suppression needs an extended time in the electrolyte. The current density response to light switching of thick TiO 2 on CNTs is slow relative to the grown TiO 2 on silicon substrates, which is also associated with charge trapping that is emphasized by the large surface area. 43 Trapped charges might either witness a transfer across the interface or a recombination. 44 The CNT-TiO 2 core-shell structure features a photocurrent density of 0.17 mA cm−2 at 1.23 V RHE , which is 425% higher than bare TiO 2 with a similar thickness (i.e. 0.04 mA cm−2). One of the primary limitations of TiO 2 is the short diffusion length of minority charge carriers, ~10 to 100 nm, 45 which is associated with a high recombination rate. This hinders holes (h + ) from reaching the interface with the electrolyte. In the case of the CNT-TiO 2 core-shell structure, the photogenerated electrons are likely to witness a transfer to the CNTs, which would diminish the risk of bulk recombination in TiO 2 . In both cases the photocurrent density value increased with the thickness of TiO 2 . The photocurrent density is mostly affected by the photogenerated charge, which depends on the thickness. The photocurrent density increases with the TiO 2 thickness on Si up to 45 nm, above which a plateau is observed, in contrast to the TiO 2 grown on CNTs.
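For clarity, the ratio behind the quoted improvement follows directly from the two current densities given above; a one-line check using the values quoted in the text is shown below.

```python
# Minimal check of the quoted improvement, using the photocurrent densities from the text
# (0.17 mA cm^-2 for CNT-TiO2 vs 0.04 mA cm^-2 for bare TiO2 at 1.23 V_RHE).
j_core_shell = 0.17   # mA cm^-2
j_bare = 0.04         # mA cm^-2
ratio = j_core_shell / j_bare
# This 4.25x ratio is the basis of the >400% improvement factor quoted in the abstract.
print(f"ratio = {ratio:.2f}x, i.e. {100 * ratio:.0f}% of the bare-TiO2 value")
```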
An n-type silicon/n-type TiO 2 heterostructure promotes the recombination of photogenerated charge carriers at the interface. 46 Electrically resistive undoped silicon substrates were used in this study, which forces a lateral electron transport in the TiO 2 phase, resulting in an enhanced recombination of the charge carriers. In the case of CNT-TiO 2 structures, bulk recombination is likely to be limited due to the high electron conductivity of the CNTs and the short distance that electrons have to cross in TiO 2 prior to being collected. Here work functions of 4.95 eV (ref. 47) and 4.5 eV (ref. 48) were reported for metallic CNTs and for TiO 2 , respectively. Therefore, the contact between CNT and TiO 2 favours the transfer of electrons towards the CNT π-system. 49 The transfer of electrons from TiO 2 to the CNT leads to the attainment of an equilibrium by balancing the Fermi levels. A built-in electric field at this interface inhibits the further flow of electrons towards the CNT, forming a Schottky barrier with negatively charged metallic multiwalled CNTs. The height of this barrier can however be reduced by applying an external bias, 50 enabling the flow of photogenerated electrons from TiO 2 to the CNT as illustrated in Fig. 8. The TiO 2 -electrolyte interface will also feature a built-in electric field that further enhances the photogenerated charge separation by driving the photogenerated holes towards the surface.
Cyclic voltammetry (CV) measurements were performed in the 0-2.2 V potential range at a 0.1 V s⁻¹ scan rate, and the results are displayed in Fig. 9. Si-TiO2 samples (Fig. 9a) show negligible dark current densities throughout the potential range. Increasing the thickness of TiO2 induces a perceptible increase of the dark current, indicating the occurrence of electrocatalytic reactions. Exposing the surface to solar radiation brings a relatively prominent increase of the current density, which grows significantly with both the bias potential and the thickness of TiO2. The onset potential is defined as the bias potential at which the anodic photocurrent starts to increase. Under light exposure this onset potential is observed at 0.96 V for the Si-10 nm TiO2 sample, shifting negatively to 0.82 V at a TiO2 thickness of 78 nm.
The CV measurements of CNT-TiO2 samples (Fig. 9b-d) show significant forward and reverse dark currents that increase with the bias potential, giving rise to hysteresis. A marginal increase in current density is observed upon illumination, and the hysteresis characteristics are retained. This behaviour is qualitatively similar to that observed for non-coated CNTs (Fig. S1b†). The current density obtained from the voltammetry measurements can be categorised into faradaic and non-faradaic contributions. The faradaic response is due to redox reactions with electron transfer at the electrode-electrolyte interface, while the capacitive current is related to the charging of the electrochemical double layer formed at that interface. 51 Electrochemical water oxidation occurs over pristine CNT sites at high overpotential. 52,53 The steady-state current density at 1.23 V RHE was measured under periodic exposure to light (Fig. S1a†). While the steady dark current is high for pristine CNTs, the sensitivity to light exposure is relatively marginal. The variation in current density between steady state and the 0.1 V s⁻¹ scan rate shows the presence of a large non-faradaic capacitive current with a marginal faradaic contribution. This behaviour is presumably attributable to electrocatalytic reactivity over TiO2, partially covered CNTs, or cobalt-decorated CNTs. The potential contribution of electrocatalysis is prominent because it concerns the entire CNT-TiO2 layer, in contrast to the photo-electrocatalytic reaction, which is limited to the penetration depth of light. The charge-carrier density of TiO2 was assessed as a function of thickness using the Si-TiO2 model system and Mott-Schottky analysis (Fig. S2 in the ESI†). A decrease from 3.8 × 10¹⁶ to 2.6 × 10¹⁵ cm⁻³ was observed upon increasing the thickness from 10 to 78 nm, which was related to a lower density of grain boundaries. With the charge-carrier density known, impedance spectroscopy was used to assess the photoelectrochemically effective surface area as a function of the TiO2 thickness on CNTs. The results reveal that only 7% of the available surface is photo-electrochemically effective at a TiO2 thickness of 10 nm; this percentage rises to 36% for the thicker TiO2 film (78 nm).
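For reference, the Mott-Schottky extraction of the donor density follows the standard relation N_D = 2/(e ε_r ε₀ s), where s is the slope of 1/C² versus applied potential. A minimal sketch of this calculation is given below; the relative permittivity value is an assumption for illustration (reported anatase values vary widely) and is not taken from this work.

```python
# Minimal sketch: donor density from a Mott-Schottky slope.
# Assumption (not from this work): eps_r = 38 for anatase TiO2.
# Capacitance is taken per unit geometric area (F cm^-2).
E_CHARGE = 1.602e-19   # elementary charge, C
EPS0 = 8.854e-14       # vacuum permittivity, F cm^-1
EPS_R = 38.0           # assumed relative permittivity of anatase

def donor_density(slope_mott_schottky: float) -> float:
    """slope in cm^4 F^-2 V^-1 from the linear 1/C^2 vs V region;
    returns N_D in cm^-3."""
    return 2.0 / (E_CHARGE * EPS_R * EPS0 * slope_mott_schottky)
```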
The investigation of the CNT-TiO2 system reveals advantages leading to an enhancement of the photocurrent by a factor exceeding four. These include the enlargement of the surface area and the withdrawal of electrons from TiO2. The detailed analysis nevertheless indicates several limitations with further optimization potential, among them the band alignment between the electron collector and TiO2, competing light absorption, and the contribution of parasitic capacitive current. It is worth mentioning that the oxidative selective removal of the CNTs leads to a semi-transparent film of randomly oriented TiO2 nanotubes with a high interface area with the electrolyte. This structure does not suffer from competing light absorption by the CNTs, but it lacks the fast electron-conduction channel. The resulting weak photocurrent (not shown) evidences that the enhanced electron transfer and transport in the CNTs outweighs their competing light absorption. This result also shows that the increase in surface area afforded by the CNT support is not the only factor dominating the photoelectrochemical response of the core-shell structure.
Conclusion
CNT-TiO2 nanocomposite coatings were grown in this study and their PEC characterization was performed. A single-step thermal CVD process was used for the growth of the CNT film, resulting in randomly oriented CNTs, which served as the substrate for ALD growth of the oxide layer (TiO2). The anatase phase was grown via the hydrolysis of titanium tetra-isopropoxide, exhibiting a constant growth rate of 0.056 nm per cycle between 140 and 190 °C; the crystallinity of the film nonetheless improves with temperature in this range. The cycle counts implied by this growth rate are illustrated below.
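At the reported growth per cycle, the number of ALD cycles needed for the film thicknesses discussed above can be estimated as follows (a back-of-the-envelope sketch; real recipes may include nucleation delays not captured here):

```python
# Rough ALD cycle-count estimate at the reported 0.056 nm per cycle.
GPC_NM = 0.056  # growth per cycle, nm

for target_nm in (10, 45, 78):
    cycles = round(target_nm / GPC_NM)
    print(f"{target_nm} nm -> ~{cycles} cycles")
# 10 nm -> ~179, 45 nm -> ~804, 78 nm -> ~1393 cycles
```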
The CNT-TiO2 core-shell configuration outperforms bare TiO2 films in terms of PEC water-splitting rate at a constant potential bias. The improvement in the photocurrent is attributed to the enlarged TiO2-electrolyte interface and to electron extraction: the CNTs act both as a nano-structuring support and as an electron-transport channel.
Conflicts of interest
The authors declare no conflict of interest. | 2021-10-15T16:06:04.228Z | 2021-10-04T00:00:00.000 | {
"year": 2021,
"sha1": "c469d6caa500437d2186ae978ecee7e4b8f7e132",
"oa_license": "CCBY",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2021/ra/d1ra05723e",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c13ce19184e5ed7b8e6faace145703f1ad440c96",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": []
} |
44454856 | pes2o/s2orc | v3-fos-license | A life-course perspective on legal status stratification and health
Scholars have expressed growing interest in the relationship between legal status stratification and health. Nevertheless, the extant research often lacks theoretical underpinnings. We propose the life-course perspective as a theoretical lens with which to understand relationships between legal status stratification and health outcomes. In particular, the life-course perspective guides researchers' attention to historical contexts that have produced differential social, political, and economic outcomes for immigrants based on legal status, and to the potentially long-term and intergenerational relationships between legal status stratification and health. We review four key dimensions of the life-course perspective and make recommendations for future directions in public health research on legal status and health.
Introduction
Scholars of immigrant health have long focused on immigrants' health advantage relative to U.S.-born counterparts, despite lower average socio-economic status. The so-called "immigrant paradox" is typically linked to migrants' favorable health behaviors (Blue & Fenelon, 2011; Creighton, Goldman, Pebley, & Chung, 2012) and family characteristics (Mulvaney-Day, Alegria, & Sribney, 2007), and the selection of relatively healthy individuals into migration (Jasso, Massey, Rosenzweig, & Smith, 2004). Despite the traditional focus on health advantages relative to the U.S.-born, researchers are increasingly focused on forms of social, economic, and political stratification that may contribute to health disparities within immigrant groups (Acevedo-Garcia, Sanchez-Vaznaugh, Viruell-Fuentes, & Almeida, 2012; Viruell-Fuentes, Miranda, & Abdulrahim, 2012). In particular, immigrants face significantly different life chances based on their legal status. Regardless of the specific strata that they encompass, systems of legal status stratification in each country (Jasso, 2011; Morris, 2002) determine individuals' relative access to the rights enjoyed by citizens, including the right to admission and residence, as well as employment, education, and public benefits. In the United States, 45% of the estimated 40 million foreign-born residents are naturalized U.S. citizens, another 27% possess an authorized legal status, and the remaining 27% are undocumented (Pew Research Center, 2013). Within the U.S. citizenship hierarchy, undocumented migrants enjoy the least access to rights. Other legal status categories include Lawful Permanent Residence (LPR), the precursor to naturalization; asylee or refugee statuses; and temporary, often precarious, lawfully residing statuses, such as Temporary Protected Status or deferred action. These statuses confer authorization to be in the U.S., but not all entail rights to obtain employment or receive public benefits.
The hierarchy of legal classifications shapes corresponding social, political, and economic conditions that may influence health outcomes and health inequalities. Legal status shapes (1) differential health risks (e.g. stress, working conditions), (2) resources to manage those risks (e.g. income), and (3) access to health-promoting services (e.g. public benefits, health care). In particular, there is increasing concern related to the health impact of anti-immigrant political and social environments faced by immigrants around the globe. Enforcement policies and activities create environments that are harmful to health (Hacker et al., 2011; Hardy et al., 2012; Rhodes et al., 2015). Undocumented immigrants, Lawful Permanent Residents with fewer than 5 years of residence, and many temporary status groups are not only excluded from public benefits but also vulnerable to deportation (ICE, 2015). Further, there are likely 'spillover' effects of these enforcement activities on individuals across legal status categories, including increased questioning and harassment about legal status by police and potential employers, or concern about the deportation of friends and family members (Aranda, Menjivar, & Donato, 2014). Such experiences may contribute to acute and chronic stress, with adverse consequences for health outcomes for significant portions of the population that are immigrants or have immigrant family members.
Public health research has made a growing number of empirical contributions to our understanding of the relationship between legal status and health. Nevertheless, much of this work has focused on health services use or has examined descriptive health differences by legal status, with few theoretical underpinnings that hypothesize how the political, social, or economic elements of legal status may affect health. In particular, much of the available research pays limited attention to the historical factors that shape heterogeneous social, economic, and political circumstances for different immigrant groups, including immigrants of varying legal statuses. Moreover, studies are often interpreted without regard to the future, including potential long-term and intergenerational effects of legal status stratification on health.
A life-course perspective on legal status stratification and health
We propose the life-course perspective (LCP) as a framework that offers a comprehensive theoretical lens for studying the health consequences of legal status stratification and for understanding legal status stratification as a social and structural determinant of health (Viruell-Fuentes, 2007). The life-course perspective draws attention to the effects of social and structural factors experienced across the individual life-course and among subsequent generations, all placed within a historical context (Elder, Johnson, & Crosnoe, 2003; Lynch & Smith, 2005).
Despite the emphasis on social and structural factors, the broad framework of LCP can encompass the processes described in the broader immigrant health literature, such as the "immigrant paradox". Specifically, the LCP allows scholars to bridge exposures related to legal status stratification with more proximal behavioral or cultural factors that may affect health (Lara, Gamboa, Kahramanian, Morales, & Hayes Bautista, 2005). For example, undocumented immigrants may experience disproportionate occupational and economic stressors, which may strain family networks and contribute to the erosion of family cohesion (Menjívar, 2000), with potentially adverse consequences for health (Rivera et al., 2008). Moreover, the deportation and detention of undocumented individuals may threaten the family and community networks that are often cited as a driving force behind observed health advantages among immigrant populations (Brabeck & Xu, 2010).
Scholars have applied the life-course perspective to the study of multiple forms of inequality in relation to health outcomes, including racism (Gee, Walsemann, & Brondolo, 2012; Hertzman, 2004) and socio-economic status (Kahn & Pearlin, 2006; McLaughlin et al., 2011). Over a decade ago, Jasso (2003) proposed the application of a life-course perspective to immigrant health, with an empirical example that took into account the experiences of legal migrants arriving in the U.S. Recently, scholars have continued to emphasize the importance of bringing a life-course perspective to the field of immigrant health more generally (Acevedo-Garcia et al., 2012), and several studies have applied dimensions of the life-course perspective to analyses of legal status and health with a focus on older migrant adults (de Oca, García, Sáenz, & Guillén, 2011; Gubernskaya, Bean, & Van Hook, 2013; Miller-Martinez & Wallace, 2007). We expand upon this previous work to consider the health impact of legal status stratification on undocumented migrants, formerly undocumented migrants, and those with temporary or 'liminal' legal statuses (Menjivar, 2006), as well as the family members of those who are undocumented, given the health-related vulnerabilities faced by these groups. In particular, we propose that the life-course perspective will help advance research on legal status stratification and health by acknowledging (1) the long-term and dynamic effects of legal status stratification and (2) the potentially inter-generational effects of legal status on health.
Long-term and dynamic effects of legal status stratification
Viewing the health of immigrants through a perspective that takes into account the past, future, and changing conditions experienced by immigrants across legal status categories is critical: immigrants, including undocumented immigrants, are increasingly long-term residents of the U.S., raising families that include U.S.-born children or foreign-born children who have lived the vast majority of their lives in the U.S. (Passel, Cohn, Krogstad, & Gonzalez-Barrera, 2014). Given the embeddedness of immigrants within families that are rooted in the U.S., many immigrants are likely to age in place and continue to be stratified by legal status. Further, because an individual's legal status may change over the course of their lives, the LCP provides a theoretical lens for considering the impact of individual legal status trajectories, which may entail experiences across tiers of legal status stratification. For example, there may be long-term health impacts of having been undocumented or in a temporary status, including lasting effects of delays in healthcare access and exposure to chronic stressors, even after obtaining permanent residence or naturalization.
Generational effects of legal status stratification
The legal status of immigrants is intimately linked to individual and family biography, as well as historical time and place (Jasso, 2011). Individual and family circumstances, including pre-migration economic and human capital accumulation and social ties, intersect with the economic and political climate of the receiving society. Agreements and conflict between sending and receiving states at a given point in time generate and reinforce stratification among immigrants by legal status. Differential health-related exposures experienced across legal status categories in one generation may have consequences for health, and for differences in health outcomes, in subsequent generations.
Understanding the dynamic nature of immigrants' heterogeneous social, economic, and political circumstances allows for consideration of both context and change in the conditions that shape health over time and across generations. This is critical given the ever-changing nature of legal status categories and the associated risks and protective factors based on federal, state, and local-level policies.
We now turn to four key concepts from the life-course perspective and consider how they can inform research on the relationship between legal status stratification and health outcomes (Supplemental Table 1). Because legal status stratification in each nation is produced by different legal and social systems, we focus on examples particular to the U.S. Scholars can, however, apply life-course perspective concepts across international contexts to understand the relationship between legal status stratification and health.
Historical time and place
The political and social conditions at a given historical time and place shape the definition of legal status categories and their demographic composition, as well as the significance of legal status for health. As a result, the social, political, and economic context of particular historical periods and locations provides a critical backdrop for understanding the possible influence of legal status categories on health. Legal status categories are created by policies that have changed over the course of U.S. history in concert with shifting social attitudes towards specific immigrant groups and changes in the political and economic climate (Motomura, 2007, 2014). For example, during the 19th and early 20th centuries there were no numerical restrictions on migration from Europe, while there was formal exclusion of migrants from China. These divergent policies were based on explicitly racialized social attitudes, as well as economic and political factors (Motomura, 2007). Due to open migration for European immigrants, there was no unauthorized migration, or "undocumented" status as we know it today, until quotas on migration from this region were first established in 1921, during a time of post-war economic downturn (Tichenor, 2002).
The size, distribution, and health-related characteristics of migrant populations across legal status categories are also historically and geographically contingent, resulting from a confluence of demographic, economic, political, and social factors in sending and receiving contexts (Massey, Durand, & Pren, 2014). For example, shifting circumstances in sending countries contribute to the number of individuals seeking entry into the U.S.: recent numbers suggest a tapering of U.S. migration from Mexico, due in part to declining labor demand in the U.S. as well as demographic shifts in Mexico towards lower fertility and an aging population (Massey et al., 2014; Passel, Cohn, & Gonzalez-Barrera, 2013; Passel et al., 2014). Meanwhile, rates of migration from Central America are expanding due to poverty, violence, and political instability in the region (Massey et al., 2014). Central American migrants currently arriving and settling in the U.S., including a large number of unaccompanied minors (Robinson, 2015), hold a number of diverse legal statuses, including citizenship, LPR, 'liminal' statuses such as Temporary Protected Status (Menjivar, 2006), and undocumented status.
The conditions of migrant sending countries are dynamic across time and shape pre-migration exposures that intersect with legal status stratification (Gushulak & MacPherson, 2011). Migrants may come from regions with high rates of infectious or chronic disease, or both. Those who survived childhood and migrated from regions with high rates of infectious disease may be more highly selected on characteristics of early-life nutrition and socio-economic status, contributing to better health in later-life. Others may face greater pre-migration exposure to conditions that contribute to poor health along the life-course, including high rates of tobacco use, violence, and discrimination. Migrants' position within sending societies shapes pre-migration health-related exposures and the post-migration legal status they hold. For example, individuals from high socio-economic strata in countries of origin may be less likely to face early-life poverty and more likely to migrate with Lawful Permanent Residency. Thus, the selectivity of migrants into varying legal statuses in the U.S. is likely based on characteristics that may be influential for health across countries of origin and various migrant cohorts.
Policies also vary across time and place in the provision of rights and services based on legal status. Federal and state policies have expanded and constricted rights based on legal status during different periods, and state and local policies can extend access to public benefits for those that have been excluded at the federal level (Marrow, 2012). For example, while the Affordable Care Act excludes undocumented immigrants and some lawfully residing immigrants from federally funded health insurance (Zuckerman, Waidman, & Lawton, 2011), select states and municipalities have expanded coverage to those otherwise excluded (Raymond-Flesch et al., 2014;Wallace, Torres, Nobari, & Pourat, 2013). Other recent examples include access to affordable higher education and driver's licenses for undocumented immigrants, despite federal laws designed to restrict access to those resources. These policies influence access to services that have a direct impact on health (Kullgren, 2003;Torres & Waldinger, 2015), and contribute to structural stigma (Abrego, 2011; Phelan, Lucas, Ridgeway, & Taylor, 2014) experienced by immigrants who face restricted rights, which may be an important determinant of mental and physical health outcomes.
Time and place-specific trends underscore the importance of considering contextual factors in research on legal status and health. In addition, the dynamic nature of legal status categorization suggests the need for understanding the health impact for those in emergent legal status categories, such as those with Deferred Action for Childhood Arrivals or migrants from countries newly designated for Temporary Protective Status.
Latent effects and critical periods
Legal status stratification contributes to differences in early-life exposures that may have lasting impacts on health later in life for both immigrants and their children. The concepts of latent effects and critical periods provide a theoretical basis for examining both the long-term and the unique impacts that legal status stratification may have on health at key periods of development. Latent effects refer to the persistent, long-term effect of an exposure from an earlier point in time, regardless of intervening circumstances (Hertzman, 2004). Critical periods refer to developmental stages along the life-course in which exposures may have particularly acute and irreversible effects on later-life health (Barker, 1993; Raposo, Mackenzie, Henriksen, & Afifi, 2014). These critical periods may include in utero and early childhood, given the sensitivity of developing neurocognitive, endocrine, and other systems in early life (Fagundes, Glaser, & Kielcolt-Glaser, 2013; Kim et al., 2013; Schlotz & Phillips, 2009). Adolescence and young adulthood are also thought to represent critical periods, given the neurodevelopmental and hormonal changes and the social, behavioral, and identity development that take place during this phase in the life-course (Viner et al., 2012).
Exposures to acute and chronic stressors that may have lasting effects on health later along the life-course are often differentially distributed by immigrant legal status. In particular, undocumented migration might entail traumatic physical and psychological experiences (Holmes, 2013) that have lasting health effects. Furthermore, chronic and acute work-related exposures with potentially long-term impacts on health may also be disproportionately distributed across legal status strata. Undocumented workers have limited power to change occupational exposures (Holmes, 2007) that increase risk for injuries or illness with lasting, irreversible impacts on their physical and mental health (Negi, 2011; Walter, Bourgois, & Loinaz, 2003). The long-term impact of poor working conditions may not be apparent until late life. Moreover, there may be lasting health consequences of trauma, injury, or stress earlier in life even if political or individual-level changes lead to a change in legal status. Some exposures may be particularly salient for child health as children undergo critical periods of development. For example, early childhood exposure to pesticides used for farming in the U.S. has in turn been linked to adverse health and neurodevelopmental outcomes for the children of farmworkers (Raanan et al., 2014), who may be disproportionately undocumented (Mehta et al., 2000). Moreover, the broader social, economic, and health inequities experienced by parents who face barriers to healthcare (Ortega et al., 2007; Vargas Bustamante et al., 2012) and low wages (Hall et al., 2010) as the result of their legal status may, in turn, result in early and long-term disadvantages for their children, including barriers to early childhood education (Yoshikawa, 2011), greater developmental risk (Ortega et al., 2009), and lower educational attainment (Bean et al., 2011). Fewer opportunities for cognitive and social development during early and critical developmental stages may contribute to poor health later in life, regardless of intervening experiences. The life-course construct of critical periods is therefore closely tied to the concepts of linked lives and intergenerational effects.
Linked lives and intergenerational effects
Immigrants' individual-level exposure to inequitable structural factors can have an impact on families and broader communities. The construct of linked lives provides a theoretical basis for examining how conditions related to an individual's legal status may influence or be influenced by the conditions of others, with potential consequences for health (Elder Jr., 1998; Gee et al., 2012). The concept of intergenerational effects points to the ways in which an individual's life-course is shaped by individuals from previous generations and shapes the exposures of those in future generations (Kane, 2015; Serbin & Karp, 2004).
Most often, linked lives and intergenerational effects refer to connections between family members, given evidence linking the health exposures of those in previous generations to the health outcomes of children and grandchildren (e.g. Lê-Sherban, Diez Roux, Li, & Morgenstern, 2014). An estimated nine million individuals in the U.S. live in mixed legal status families, and family members who are authorized to be in the U.S. can experience the consequences of policies that are meant to limit access to services for their undocumented relatives (Castañeda & Melo, 2014; Chavez, Lopez, Englebrecht, & Viramontez Anguiano, 2012). Lives may also be linked across neighborhoods, communities, and institutions: enforcement or service exclusions aimed at undocumented immigrants may affect the wellbeing of community members, regardless of legal status (Hacker et al., 2011; Rhodes et al., 2015).
One example of the intergenerational impact of legal status stratification on health relates to the separation of foreign- and U.S.-born family members, including parents and children, due to immigrant detention and deportation (Chaudry et al., 2010; Dreby, 2015). Parental separation due to deportation has been associated with reduced child well-being (Brabeck & Xu, 2010; Chaudry et al., 2010; Koball et al., 2015). In findings from a longitudinal birth cohort study, Yoshikawa (2011) reports that mothers' fear of deportation, even without its actual occurrence, was associated with higher levels of maternal depression, which was in turn associated with poorer cognitive skills among pre-school-age children.
As with other aspects of childhood adversity, deportation-related family separation has the potential to impact a wide range of physical and mental health outcomes by generating "toxic stress", which refers to exposure to chronic stressors without resources, familial or otherwise, to buffer the effects of these exposures (Shonkoff & Garner, 2012). Toxic stress caused by family separation or similar adverse experiences in childhood can impact health through mechanisms of systemic inflammation, immune dysregulation, changes in neurocognitive development, heightened psychological reactivity, cellular aging, and DNA methylation (Drury et al., 2014; Fagundes et al., 2013; Franklin et al., 2010; Lacey, Kumari, & McMunn, 2013).
Even earlier along the life-course, poor birth outcomes have been linked to restricted access to prenatal care due to fear of deportation and policies that create formal barriers to care for immigrants based on legal status. Reed and co-authors (2008) found that undocumented women were more likely to have complications during delivery, such as fetal distress and need for assisted ventilation compared with women holding other legal statuses. A study of the impact of Arizona SB 1070 found a significant decline in utilization of routine pediatric care (Toomey et al., 2014) among Mexican-origin families after the passage of the bill, regardless of parental legal status.
Another example of linked lives in the context of legal status stratification and health that merits further attention is the 'spillover' effect of policies on health and healthcare for immigrants in general, or for members of larger racial/ethnic minority groups (Aranda et al., 2014; Hacker et al., 2011). Spillover effects related to concern about deportation can also be seen among foreign-born respondents to a 2002-2003 nationally representative survey of Latino residents in the U.S.: 11% of foreign-born respondents reported thinking they might be deported if they went to a social or government agency and/or avoiding health services due to fear of immigration authorities; 2% of naturalized citizens and nearly 19% of those holding other legal statuses reported this worry. A full quarter of foreign-born Latinos reported being questioned about their legal status, including 17% of naturalized citizens and 30% of those holding other legal statuses (see Supplemental Table A). More recent national survey data suggest that nearly half of Latino residents and 16% of Asian-American residents are concerned that a family member or close friend could be deported, regardless of the respondents' own immigration or legal status (Hugo Lopez, Taylor, Funk, & Gonzalez-Barrera, 2013).
Legal status stratification and its health consequences are shaped by historical and geographic context. But the concept of linked lives points to the idea that the individual experience of legal status and policies targeting those with undocumented or other precarious legal statuses has reciprocal impacts on family and community-level contexts. These impacts can include spillover effects within families and communities, and across generations.
Transitions and trajectories
Life-course transitions and trajectories for immigrants and their children may be shaped by legal status stratification, with consequences for health. Trajectories refer to long-term patterns of stability and change within multiple dimensions of individual, familial, and social life (e.g. employment trajectories, marital trajectories) (George, 1993). Divergent life-course trajectories are often linked to health disparities. For example, individuals who experience downward socio-economic mobility across the life-course have been observed to have poorer health outcomes in later-life than those who experience trajectories of upward mobility (Luo & Waite, 2005).
Transitions refer to life changes that are defined by developmental stage, and may mark entry and exit in and out of social roles. They are also embedded in cultural, social, and historical contexts that define normative social roles and the timing of entry and exit (George, 1993). Non-normative changes may also result in significantly different life trajectories for an individual or group. For example, a transition that occurs outside of socially defined expectations and timing (e.g. teen pregnancy, early exit from education) may be associated with adverse health outcomes (Martin, Blozis, Boeninger, Masarik, & Conger, 2014).
Migration itself may be considered a transition along the individual life-course. For example, migration from historic 'sending' communities (e.g. Western Mexico) may be considered a normative transition for young adults, as part of an expected trajectory of life and work in the U.S. (Massey, Alarcon, Durand, & González, 1987). Migration may also reflect a non-normative transition that results from individual or macro-level conditions (e.g. civil conflict, economic crises). Moreover, the experience of migration may represent a transition in one's individual health: longitudinal evidence from Mexico suggests that recent migrants to the U.S. experienced significant declines in self-rated health status (Goldman et al., 2014), which may be the result of the challenging physical and emotional experience of migration and undocumented migration (Holmes, 2013), as well as the disruption to social networks and hostile reception context faced in the U.S.
Studies suggest that immigrant youth may experience life transitions differently based on their legal status. Gonzales (2011) and Abrego (2011) describe how adolescents who are undocumented often learn of their legal status and its implications while undergoing normative adolescent experiences: attempting to apply for a driver's license, jobs, or college admission. As a result of being blocked from these opportunities, many undocumented youth describe feelings of low self-esteem, stigmatization, and a sense of hopelessness about the future. As they contend with the limitations of their legal status, they may then be discouraged or barred from the trajectories of upward socio-economic mobility taken by their peers. Undocumented youth are often observed to exit early from secondary or higher education and to enter low-skilled occupations (Gonzales et al., 2013; Raymond-Flesch et al., 2014). Some U.S. states have enacted formal barriers to entry into higher education for undocumented students, and even undocumented students who gain entry into higher education may face significant financial burden due to the lack of options for financial aid (NCSL, 2014). The differing barriers that youth face during these critical educational transitions based on their legal status may contribute to highly divergent educational trajectories; educational attainment is in turn one of the most robust predictors of population health and longevity (e.g. Ross & Wu, 1996).
There is some evidence that legal status may shape healthrelated trajectories later along the life-course as well. One study found that naturalization was associated with fewer functional limitations among older adults who migrated to the U.S. during childhood or young adulthood and had accumulated decades of greater civic, occupational, and economic incorporation relative to their non-naturalized, lawful permanent resident counterparts (Gubernskaya et al., 2013). Another qualitative study of older Mexican migrants reported undocumented migrants' own accounts of rapid health decline in late-life relative to documented peers given a lifetime of manual, low-wage labor (de Oca et al., 2011). Taken together, these studies highlight the potential importance of legal status across the life-course, including changes in legal status, in shaping later-life health outcomes.
Future research on legal status stratification and health
The four life-course perspective concepts discussed above provide a framework for future, theoretically grounded research that examines how legal status stratification exposes individuals to different health risks depending on historical place and time, developmental period in life, generational or community connections, and life transitions. The social and structural conditions described by the life-course perspective suggest that differential exposures may result in cumulative experiences of disadvantage, marginalization, and increased exposure to health risks based on legal status. As immigrants continue to build their lives in the U.S. (Passel et al., 2014), exposure to the conditions shaped by legal status stratification is likely to persist over the long term, with potential influences on their health outcomes later in life. These concepts indicate areas of inquiry to guide further theoretical and empirical developments.
Research on legal status stratification and health has been hampered by the limited inclusion of legal status measures in health studies, in part due to ethical concerns about collecting legal status data (Carter-Pokras & Zambrana, 2006). Nevertheless, data sources that include large, representative samples of foreign-born respondents, as well as measures of legal status and health, are critical to testing the legal status-health relationships suggested by the life-course perspective. Further, data should include repeated measures of legal status, given its dynamic nature historically and due to policy shifts (e.g. DACA) and individual transitions (e.g. naturalization). Already, population-based surveys with short-term follow-up have been used to understand the relationship between legal status and intergenerational health outcomes (Landale, Hardie Halliday, Oropesa, & Hillemeier, 2015), although data representing the foreign-born population at the national level are needed.
Even in the absence of available data for studying longitudinal outcomes associated with legal status stratification, scholars might draw on theoretical concepts from the life-course perspective to (1) situate cross-sectional analyses in the particular time and place in which data was captured and (2) make inferences about the long-term consequences of findings captured at one point in time (e.g. the long-term consequences of adverse birth outcomes, or family separation due to deportation).
Qualitative studies have already advanced our understanding of the differential conditions migrants face by individual and family legal status, and at key life transitions. In the future, long-term qualitative and mixed-methods research that follows respondents, families, or communities over time will be particularly critical to furthering our understanding of legal status stratification and health across the life-course. For example, qualitative research that follows individuals or communities before and after policy changes may help shed light on how historic shifts in legal status categories and their meaning lead to changes in health and its social determinants. Similar longitudinal research could be used to follow health and healthcare outcomes for individuals and families as they undergo key life transitions, including transitions in late life, and how these outcomes differ by legal status.
Finally, population-based surveys, administrative records, and policy databases are increasingly being used to study the association between policies and health for other minority groups (e.g. Lukachko, Hatzenbuehler, & Keyes, 2014), and might be extended to research on legal status stratification and health within or across national contexts. While these data sources may not be equipped to examine policy impacts by legal status, they might capture the 'spillover' effects of these policies on communities or on racial and ethnic minorities in general. The connected nature of health and health-related exposures across linked lives could also be further explored in studies of family and community-based social networks, including respondent-driven sampling or other techniques to understand social network characteristics and health outcomes.
Conclusion
We present the life-course perspective as a set of concepts that can advance future research and theory related to legal status stratification and health. In particular, the life-course perspective underscores the importance of thinking about how exposures relate to health outcomes across the individual life-course, across generations, and within historical context. It brings attention to social and structural factors and their contributions to population health inequalities, while remaining inclusive of individual-level factors. The focus on structural factors is particularly important, given that legal status reflects a system of stratification that positions immigrants within a hierarchy of relative access to the rights and responsibilities enjoyed by citizens. However, the life-course perspective may serve to bridge scholars' increasing interest in structural determinants of immigrant health with individual and family-level factors that may be shaped by and also interact with structural conditions. As long as legal status stratification persists, there will continue to be a need for both descriptive and theoretically driven research that documents the potential impacts on health for the growing population of migrants across the globe, as well as their children and community members. | 2018-04-03T05:47:34.565Z | 2016-03-19T00:00:00.000 | {
"year": 2016,
"sha1": "c194304ed340a4a97db5824a0aa60aa819b96dc5",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.ssmph.2016.02.011",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c194304ed340a4a97db5824a0aa60aa819b96dc5",
"s2fieldsofstudy": [
"Law",
"Sociology"
],
"extfieldsofstudy": [
"Sociology",
"Medicine"
]
} |
119096954 | pes2o/s2orc | v3-fos-license | Classical limit for the scattering of Dirac particles in a magnetic field
We present a relativistic quantum calculation at first order in perturbation theory of the differential cross section for a Dirac particle scattered by a solenoidal magnetic field. The resulting cross section is symmetric in the scattering angle, as are those obtained by Aharonov and Bohm (AB) in the string limit and by Landau and Lifshitz (LL) in the non-relativistic case. We show that taking pr_0 |sin(\theta/2)| / \hbar << 1 in our expression for the differential cross section reduces it to the one reported by AB, and if we additionally assume \theta << 1 our result becomes the one obtained by LL. However, these limits are explicitly singular in \hbar, as opposed to our initial result. We analyze the singular behavior in \hbar and show that the perturbative Planck's limit (\hbar -> 0) is consistent, in contrast to those of the AB and LL expressions. We also discuss the scattering in a uniform and constant magnetic field, which resembles some features of QCD.
Introduction.
We know that in the classical scattering of charged particles by magnetic fields the scattered particles describe circular trajectories with fixed radii, so they have a preferential direction of motion. In this work, the relativistic quantum version of the problem is studied at lowest order in perturbation theory, and a symmetric behavior in the scattering angle is found for uniform and solenoidal magnetic fields. We focus on the study of some interesting limiting cases of the differential cross section and compare them with previous non-relativistic results reported by Aharonov and Bohm (AB) [1] and Landau and Lifshitz (LL) [2].
Although the scattering of charged particles by a solenoidal magnetic field, the Aharonov-Bohm (AB) effect, has been studied perturbatively before by other authors [3], in this work we are interested in the classical limit of the differential cross section.
As is known, the AB effect [1] is considered one of the most important confirmed [4] predictions of quantum mechanics, because it shows that the vector potential has physical significance and can be viewed as more than a mathematical convenience. Interest in this effect has increased recently, both for basic reasons that have changed the understanding of gauge fields and forces in nature and because it has many connections with new physics, such as the quantum Hall effect [5], mesoscopic physics [6], and the physics of anyons [7].
In the last two decades, several treatments of the AB scattering problem have appeared. First, the magnetic phase factor in the non-relativistic case was treated directly (e.g. [8]). Works that consider spin have also been carried out, especially studying the behavior of the wave function in the presence of a delta-function-like potential at the origin [9]. Although AB originally solved the problem exactly, perturbative analysis has played an important role [10,11,3], giving rise to discussions about the form in which the incident wave must be treated [12]. Various QED processes of scalar and Dirac particles in the AB potential have also been carried out, with special interest devoted to polarization properties in bremsstrahlung and synchrotron radiation [13,14].
Our interest in this problem stems from the fact that an electron in a uniform infinite magnetic field is trapped in two dimensions in a potential of the harmonic-oscillator form. It is thus a confined point-like fermion and therefore resembles the dynamical confinement produced in Quantum Chromodynamics (QCD) for quarks in three dimensions. In order to keep our analogy as close as possible to parton-model ideas, our computation is done in a perturbative fashion. In this model the use of free-particle asymptotic states is very common; nevertheless, with a simple model calculation in Quantum Electrodynamics (QED) we show that this procedure could be misleading, at least at lowest order.
In this work we focus our analysis on the classical limit. First, we are puzzled by the symmetric results for the differential cross section with respect to the scattering angle θ, in contradistinction to the classical scattering, which favors an asymmetrical result. Second, the classical limit of field theory is a long-unsolved problem; it is therefore tempting to understand the above-mentioned quantum (symmetric) vs classical (asymmetric) results in order to clarify the classical limit: are we in the presence of a process that is purely quantum in nature, as LL suggest?
Previous results in the non-relativistic case.
Let us recall two landmark results of the non-relativistic case for the differential cross section of the scattering of electrons by solenoidal magnetic fields. Chronologically, the first result was presented by Aharonov and Bohm [1]. They obtain the exact solution for the scattering problem when the radius of the solenoid is very small for a constant magnetic flux; in fact, they consider only a quantum of magnetic flux (Φ = 2πħc/e ∼ 4 × 10⁻⁷ gauss cm²). Their result is†

\[ \frac{d\sigma}{d\theta} = \frac{\hbar}{2\pi p}\,\frac{\sin^2(e\Phi/2\hbar c)}{\sin^2(\theta/2)}. \tag{1} \]

† In the paper of Aharonov and Bohm the cross section appears inversely proportional to cos²(θ/2), but the reference frame they use is such that θ is translated by π; also, the factor ħ/p is not explicitly shown due to a change of variable (r′ = kr).

Independently, Landau and Lifshitz [2] study the same scattering problem with the use of the eikonal approximation. Including only the contribution of the vector potential from the exterior of the solenoid, they obtain precisely the same result as Aharonov and Bohm. Notice that this cross section is symmetric in θ.
Landau and Lifshitz compute the differential cross section for small scattering angles in the case of a small magnetic flux, eΦ/2ħc ≪ 1, where perturbation theory is applicable; then sin(eΦ/2ħc) ≈ eΦ/2ħc, and the resulting cross section develops a singular behavior in ħ:

\[ \frac{d\sigma}{d\theta} = \frac{e^2\Phi^2}{2\pi\hbar c^2 p\,\theta^2}. \tag{2} \]

They remark that the singular behavior of the cross section as θ goes to zero is specifically a quantum effect, without further elaboration. We will study this problem in the next sections.
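The reduction is elementary; writing α ≡ eΦ/2ħc and expanding both sines in equation (1) as reconstructed above, for α ≪ 1 and θ ≪ 1,

\[ \frac{\hbar}{2\pi p}\,\frac{\sin^2\alpha}{\sin^2(\theta/2)} \approx \frac{\hbar}{2\pi p}\,\frac{\alpha^2}{(\theta/2)^2} = \frac{e^2\Phi^2}{2\pi\hbar c^2 p\,\theta^2}, \]

which is the form quoted in equation (2), with the 1/ħ singularity made explicit.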
3. Solenoidal magnetic field in the relativistic case.
Let us consider the scattering of a Dirac particle by the magnetic field of a solenoid with a constant magnetic flux. This is a problem in which free-particle asymptotic states can be used. The beam polarization will be taken into account. As mentioned before, this problem has been studied by other authors, also using perturbation theory, but here our interest is quite different: we want to study the classical limit of the result and discuss the proper way to obtain it. Consider a long solenoid of length L and radius r₀ ≪ L centered on the î₃ axis. Inside the solenoid, where r < r₀, the magnetic field is uniform, B = B₀î₃, with B₀ a constant, while outside the solenoid, where r > r₀, the magnetic field is null. For r > r₀, the magnetic flux Φ is constant: Φ = πr₀²B₀. We follow the Bjorken and Drell conventions [15].
A vector potential that describes such a magnetic field in the interior of the solenoid is A = ½B₀î₃ × x. Using the Levi-Civita symbol in three indices, ǫ_ijk, the vector potential of the solenoid field (interior and exterior) can be written as

\[ A_i(\mathbf{x}) = \tfrac{1}{2}\,\epsilon_{i3j}\,x_j\!\left[B_0\,\theta(r_0 - r) + \frac{\Phi}{\pi r^2}\,\theta(r - r_0)\right], \]
with the scalar potential A⁰ = 0. Inserting this vector potential into the S matrix, equation (A.2), with the Dirac particle solutions of equation (A.1), we obtain the first-order amplitude for f ≠ i. Notice that there exists a global phase e^{ieΦθ/hc} in the free-particle wave function, due to the presence of a pure gauge field in the exterior of the solenoid, but it does not contribute to S_fi. The parts of the integrals corresponding to dx⁰ = dt and dx³ are proportional to 2πδ(q⁰) and 2πδ(q³), respectively, so energy-momentum conservation in the scattering process is guaranteed and the particles do not change their momentum along the direction of the magnetic field. The integrals in the transverse plane yield Bessel functions, where J_n denotes the Bessel function of order n [17].
In this form, the S matrix for f ≠ i is obtained. We note that at lowest order in the S matrix there is a net contribution from the interior of the solenoid, where the magnetic field is not null, in contradistinction to the LL calculation, where only the vector potential in the exterior of the solenoid is considered. From this S matrix we construct the differential cross section per unit length of the solenoid. Averaging over incident polarizations, the result does not depend on the final polarization. After some algebra we get

\[ \frac{1}{2}\sum_{s_i} \left|\bar{u}_f\,\slashed{q}\,u_i\right|^2 = 16\,p^4 \sin^2\frac{\theta}{2}, \]

where p = |p_f| = |p_i|. Finally, introducing ħ and c explicitly, we have

\[ \frac{d\sigma}{d\theta} = \frac{\hbar}{2\pi p}\left(\frac{e\Phi}{2\hbar c}\right)^{\!2} \frac{1}{\sin^2(\theta/2)}\left[\frac{2J_1(x)}{x}\right]^{2}, \qquad x \equiv \frac{2pr_0\,|\sin(\theta/2)|}{\hbar}, \tag{3} \]

which has the same form whether or not the final polarization of the beam is actually measured (f = 1 or f = 2). As can be observed, the differential cross section is symmetric in θ. This is reminiscent of the Stern-Gerlach result, in which an unpolarized beam interacting with an inhomogeneous magnetic field is split equally into two parts, each with opposite spin; but, as we have mentioned, equation (3) does not depend on the final polarization of the particles. This symmetric behavior in θ should thus be a consequence of the perturbation theory, although notice that it is also present in non-perturbative results like those of AB and LL. Figure 1 shows the behavior of the cross section of equation (3) in a polar plot, scaled by a factor of 10⁵², for a quantum of magnetic flux (Φ ∼ 4 × 10⁻⁷ gauss cm²), r₀ = 1 cm, and incident-particle energies running from 1 MeV to 50 MeV in steps of 2 MeV, as a function of the scattering angle θ.
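A minimal numerical sketch of the Figure 1 construction, assuming the form of equation (3) given above, ultrarelativistic kinematics (pc ≈ E), and natural-unit bookkeeping via ħc ≈ 1.973 × 10⁻¹¹ MeV cm; the 10⁵² scale factor is taken directly from the text:

```python
# Minimal sketch of the Fig. 1 polar plot of eq. (3): one flux quantum
# (e*Phi/(2*hbar*c) = pi), r0 = 1 cm, E = 1..50 MeV in 2 MeV steps.
import numpy as np
from scipy.special import j1
import matplotlib.pyplot as plt

HBAR_C = 1.973e-11                 # MeV cm
R0 = 1.0                           # solenoid radius, cm
ALPHA = np.pi                      # e*Phi/(2*hbar*c) for one flux quantum

theta = np.linspace(1e-4, 2*np.pi - 1e-4, 4000)
ax = plt.subplot(projection="polar")
for E in range(1, 51, 2):          # ultrarelativistic: p*c ~ E (MeV)
    x = 2*E*R0*np.abs(np.sin(theta/2))/HBAR_C
    dsig = (HBAR_C/(2*np.pi*E)) * ALPHA**2 * (2*j1(x)/x)**2 / np.sin(theta/2)**2
    ax.plot(theta, 1e52*dsig)      # scale factor as quoted in the text
plt.show()
```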
When helicity states (HS) are considered, the resulting differential cross section is

\[ \frac{d\sigma}{d\theta} = \frac{1+\lambda_i\lambda_f}{2}\,\frac{\hbar}{2\pi p}\left(\frac{e\Phi}{2\hbar c}\right)^{\!2} \frac{1}{\sin^2(\theta/2)}\left[\frac{2J_1(x)}{x}\right]^{2}, \tag{4} \]

where λ_i and λ_f stand for the initial and final helicities and take only the values +1 or −1. A zero differential cross section is obtained if λ_i = −λ_f. When λ_i = λ_f, which implies helicity conservation, the differential cross section is

\[ \frac{d\sigma}{d\theta} = \frac{\hbar}{2\pi p}\left(\frac{e\Phi}{2\hbar c}\right)^{\!2} \frac{1}{\sin^2(\theta/2)}\left[\frac{2J_1(x)}{x}\right]^{2}, \tag{5} \]

which has the same form as equation (3).
Non-relativistic reduction.
To make connection with previous results, we study the limiting case of small scattering angles. If we assume pr₀|sin(θ/2)|/ħ ≪ 1, then equation (3) (or equation (5)) reduces to

\[ \frac{d\sigma}{d\theta} = \frac{\hbar}{2\pi p}\left(\frac{e\Phi}{2\hbar c}\right)^{\!2} \frac{1}{\sin^2(\theta/2)}, \tag{6} \]

which agrees with the result reported by Aharonov-Bohm when eΦ/2ħc ≪ 1. If we additionally impose on equation (6) the condition θ ≪ 1, we obtain

\[ \frac{d\sigma}{d\theta} = \frac{e^2\Phi^2}{2\pi\hbar c^2 p\,\theta^2}, \tag{7} \]

which is precisely the result reported by Landau and Lifshitz.
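These reductions rest on the small-argument behavior of the Bessel function; a one-line verification (standard expansion, independent of the reconstructed prefactors):

\[ J_1(x) = \frac{x}{2} - \frac{x^3}{16} + \cdots \;\Longrightarrow\; \left[\frac{2J_1(x)}{x}\right]^{2} \xrightarrow{\,x\to 0\,} 1, \]

so the finite-radius factor in equation (3) drops out, leaving equation (6); the further replacement sin(θ/2) → θ/2 yields equation (7).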
We want to point out that it does not make sense to take the Planck's limit (ħ → 0) in equation (6) or equation (7), because both expressions were obtained assuming the condition pr₀|sin(θ/2)|/ħ ≪ 1. Hence, we have to take the classical limit using the expression for the differential cross section given in equations (3) or (5).
Classical limit (Planck's limit).
Let us now study the classical limit of the differential cross section of equation (3). For this purpose, consider the new dimensionless variable x = 2pr₀|sin(θ/2)|/ħ = r₀q and define J(x) = |J₁(x)|²/x. Observe that the limit ħ → 0 implies x → ∞, or pr₀ → ∞ [16], for fixed θ. We can rewrite equation (3) as follows:

\[ \frac{d\sigma}{d\theta} = \frac{e^2\Phi^2}{4\pi c^2 p^2 r_0\,|\sin(\theta/2)|^{3}}\,J(x). \]

Because the asymptotic behavior of the Bessel function [17] is

\[ J_1(x) \xrightarrow{\,x\to\infty\,} \sqrt{\frac{2}{\pi x}}\,\cos\!\left(x - \frac{3\pi}{4}\right), \]

the resulting Planck's limit of equation (3) is identically zero for fixed e, p, r₀, Φ and θ:

\[ \lim_{\hbar\to 0}\frac{d\sigma}{d\theta} = 0, \tag{8} \]

which is also obtained for pr₀ → ∞. So the perturbative result gives a consistent, finite classical limit and reduces correctly to the eikonal and zero-radius limits. If the classical limit is attempted in equations (6) or (7), the result would be singular in ħ, but this is clearly a misleading procedure.
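The vanishing of J(x), and hence of the rewritten cross section, at large x is easy to check numerically; a minimal script using only SciPy's standard Bessel routine:

```python
# Numerical check that J(x) = J1(x)^2 / x -> 0 as x -> infinity,
# i.e. as hbar -> 0 at fixed scattering angle.
from scipy.special import j1

for x in (1e1, 1e3, 1e5, 1e7):
    print(f"x = {x:9.0e}   J(x) = {j1(x)**2 / x:.3e}")
# The envelope falls off as 2/(pi*x^2), reproducing the hbar^2
# behaviour of the cross section quoted in the conclusions.
```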
The apparent difference between the classical limits of equation (3) (see equation (8)) and of equations (6) and (7) comes from the fact that, in taking the limit ħ → 0 of the perturbative result, equation (3), the Bessel function decreases as J₁(x) ∼ 1/√x and generates an ħ contribution to the cross section. On the other hand, if one begins by taking pr₀|sin(θ/2)|/ħ ≪ 1, the Bessel function approximates to J₁(x) ∼ x, and this behaves like 1/ħ. The overall difference between these two procedures is an ħ³ factor. It is important to notice that loop corrections to the perturbative expansion do not modify the ħ behavior of the amplitude, as can be proved with the use of the loop expansion.
Quantized magnetic flux.
One can obtain a non-divergent expression for the Landau-Lifshitz and Aharonov-Bohm results when ħ → 0 if, in place of e, one fixes the magnetic flux Φ. The rationale behind this is that, instead of a classical limit with fixed e, one imposes the magnetic flux quantization condition, Φ = nΦ₀.
For a quantized magnetic flux, where Φ₀ = hc/e = 4.136 × 10⁻⁷ gauss cm², the cross section of equation (3) takes the form

\[ \frac{d\sigma}{d\theta} = \frac{\pi n^2 \hbar^3}{2 p^3 r_0^2}\,\frac{J_1^2(x)}{\sin^4(\theta/2)}, \tag{9} \]

which, apart from being independent of the charge of the particles, is the cross section of a purely quantum effect. Cast in this way it is not singular in ħ, as it is in the form that Landau-Lifshitz report. In particular, for small scattering angles it takes the form

\[ \frac{d\sigma}{d\theta} = \frac{2\pi\hbar n^2}{p\,\theta^2}, \tag{10} \]

which also has a null classical limit. If we recall equation (2), obtained by Landau-Lifshitz, we see that the same form can be recovered when Φ = nhc/e; but these authors did not quantize the magnetic flux, and thus they cannot obtain a cross section of a pure quantum problem such as that of equation (9). Also note that the zero classical limit with quantized magnetic flux is obtained from equation (1):

\[ \frac{d\sigma}{d\theta} = \frac{\hbar\,\sin^2(n\pi)}{2\pi p\,\sin^2(\theta/2)} \xrightarrow{\,\hbar\to 0\,} 0. \]
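A quick numerical sanity check of the flux-quantum value in Gaussian units (textbook constants, not taken from this paper):

```python
# Flux quantum Phi_0 = hc/e in Gaussian units (gauss cm^2).
h = 6.62607e-27       # Planck constant, erg s
c = 2.99792458e10     # speed of light, cm/s
e = 4.80320e-10       # elementary charge, esu
print(h * c / e)      # ~4.136e-7 gauss cm^2
```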
Conclusions.
In this work we present a relativistic quantum study, at first order in perturbation theory, of the cross section for the scattering of a Dirac particle by magnetic fields. We have especially focused on the classical limit for the solenoidal magnetic field case.
In order to fulfill the requirements of perturbation theory, the magnetic field was restricted to a solenoidal one with constant flux. We obtained that the cross section of the scattering problem is given by equation (3) and has the same form whether or not the final polarization of the beam is actually measured. This indicates that the symmetry in the scattering angle is most likely a consequence of the perturbation theory.
We have shown that the perturbative classical limit, for all scattering angles and all radii of the solenoid with e, Φ, p, r₀ and θ fixed, is identically zero, because the cross section behaves like ħ² and hence is not singular in ħ, unlike that of Landau-Lifshitz (see equation (2)). We point out that the same zero classical limit can be obtained in the limit pr₀ → ∞ with fixed e, Φ and θ. Notice that the apparent difference in the classical limits comes from the fact that the asymptotic behavior of the Bessel function goes like J₁(x) ∼ 1/√x for x ∼ 1/ħ, which generates an ħ contribution to the cross section, while taking first the approximation pr₀|sin(θ/2)|/ħ ≪ 1, the Bessel function behaves like J₁(x) ∼ x ∼ 1/ħ. So, the overall difference between these two procedures is an ħ³ factor.
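The ħ counting behind this factor can be summarized compactly (standard Bessel asymptotics, with x = 2pr₀|sin(θ/2)|/ħ, so x ∝ 1/ħ):

\[ \left[\frac{2J_1(x)}{x}\right]^{2} \sim \begin{cases} 1, & x \ll 1,\\[4pt] \dfrac{8\cos^2\!\left(x - 3\pi/4\right)}{\pi x^{3}} \propto \hbar^{3}, & x \gg 1, \end{cases} \]

so the two procedures differ precisely by the ħ³ carried by the large-x envelope.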
When the magnetic flux is quantized, the cross section is proportional to ħ³, again giving a null classical limit. This limit can be recovered from the Aharonov-Bohm and Landau-Lifshitz results because Φ = nhc/e and then sin(eΦ/2ħc) = sin(nπ) ≡ 0, a result independent of ħ.
Finally, we want to point out that although our result is consistent, in the sense that the Aharonov-Bohm and the Landau-Lifshitz results are recovered, there is no direct classical correspondence via the Planck limit (see equation (8)), because in particular the cross section is symmetric in $\theta$. This problem is also shared by the AB and LL solutions and is possibly solved by higher-order corrections in the external magnetic field.
where we have denoted $u_i = u_i(p_i, s_i)$, and similarly for $u_f$. The integrals can be solved immediately because three of them are proportional to a Dirac delta function, while the integral in the $x_2$ direction is equal to $2\pi i\,\delta(q_2)/q_2$. Then the S matrix of this problem follows. With the usual replacement $|2\pi\delta(q_i)|^2 = 2\pi L_i\,\delta(q_i)$, we obtain the differential cross section per unit of magnetic-field volume. Note that the resulting differential cross section is proportional to a Dirac delta function of the momentum transfer in the incident direction of the particles, meaning that the particles do not change their momentum after their interaction with the magnetic field. This is in apparent contradiction with common sense, because we know that in the classical situation a particle that interacts with a magnetic field changes its momentum, and its orbit is a circumference. Also note that we assumed that the magnetic field fills all of space and we used free-particle solutions to solve the scattering problem. These conditions are physically questionable because the presence of the magnetic field binds the particles, and therefore the asymptotic states cannot be plane waves, just as in perturbative QCD.
To use perturbation theory it is necessary to verify its applicability limits. For perturbation theory to be valid one must satisfy $|U| \ll \hbar^2/(ma^2)$ [18], where the potential $U$ is significant over the range $a$ and $m$ is the mass of the particle. For the case we are studying the range of the potential is infinite, so to use perturbation theory we need to modify the potential in such a way that it goes rapidly to zero at infinity, making the system compatible with our calculation and with the physics; the particles can then be treated as free asymptotically.
A more natural way to study the scattering of particles by an external magnetic field is therefore to confine the field in space, as was done in section 3. Curiously enough, we notice that the $r_0 \to \infty$ limit of equation (3) gives precisely the result of equation (A.3). | 2019-04-14T03:18:57.267Z | 2002-11-29T00:00:00.000 | {
"year": 2002,
"sha1": "4c8c7c870e289346fd20f6aac3c3729545622a0a",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/quant-ph/0211195",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "4c8c7c870e289346fd20f6aac3c3729545622a0a",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
252516020 | pes2o/s2orc | v3-fos-license | Assessing the Contact Angle Between Dentin Treated With Irrigation and Calcium Hydroxide and Root Canal Sealers
This article has been retracted at the request of the authors due to a grouping error affecting the results and conclusion of the article. As stated by the authors, "Specimens G4, G5, and G6 were tested with bioceramic (BC) sealer rather than Tubli-Seal as stated in the article. Additionally, they were subjected to irrigation for two weeks rather than the required four weeks. This also led to misinterpretation of the data by the authors. We sincerely regret the error." The Cureus editorial staff has reviewed the article and agreed to retract it per the authors' request. Abstract The long-term use of calcium hydroxide, or Ca(OH)2, on dentin has been established in the literature. However, scarce data are available on dentin wettability with Ca(OH)2. The present study was conducted to assess the outcomes of Ca(OH)2 use on the wettability of dentin following two and four weeks of using sealers of bioceramic (BC) and Tubli-Seal™ in the root canal.
Aim
The present study was conducted to assess the outcomes of Ca(OH)2 use on the wettability of dentin following two and four weeks of using sealers of bioceramic (BC) and Tubli-Seal™ in the root canal.
Methods
In this study, 168 specimens were divided into 12 groups of 14 specimens each, numbered G1 to G12. Groups G1-G6 were tested with Tubli-Seal. G1 received sterile water irrigation for two weeks (14 days), followed by two minutes of chemical irrigation with 6% sodium hypochlorite (NaOCl) and 17% EDTA (10 ml). G2 and G3 were subjected to Ca(OH)2 (0.1 ml of UltraCal) for two weeks, followed by two minutes of irrigation with either 10 ml of sterile water or the chemical irrigation regimen. G4 to G6 were given the same treatments for four weeks. G7-G12 were assessed for BC sealer in the same way as G1-G6. Incubation with sterile water or Ca(OH)2 for two or four weeks was carried out in 100% humidity at 37°C. SEM and EDX were performed to evaluate the surface morphology of G1 and G6, and the results were recorded.
Results
Significantly smaller contact angles were seen for Tubli-Seal (G1 to G6) than for BC (G7 to G12) (p<0.05). Application of water irrigation and Ca(OH)2 (in G2, G5, and G11) showed a statistically significantly smaller contact angle (p<0.05) than the use of chemical irrigation agents and Ca(OH)2 (in G3, G6, and G12), except for G8 and G9. Based on EDX and SEM, more Ca(OH)2 remnants were seen with water irrigation than with chemical irrigation combined with Ca(OH)2, whereas no remnants were seen with chemical irrigants.
Conclusion
Better dentin wettability is seen with Tubli-Seal than with BC sealer. A smaller contact angle between root canal sealers and dentin is seen in the samples with remaining calcium hydroxide. Also, calcium hydroxide can be removed from the polished dentin surface with two minutes of irrigation with 17% EDTA and 6% NaOCl.
Introduction
Although high success rates are reported for calcium hydroxide used as an intracanal medicament, adverse effects of long-term use on dentin's mechanical properties are also reported, making dentin susceptible to fracture. It has been reported that using Ca(OH)2 for four weeks or longer results in decreased mechanical properties and increased fracture susceptibility in in-vitro studies [2]. For long-term success, communication between periradicular tissues and the oral cavity must be prevented by achieving a three-dimensional (3D) fluid-tight seal, which cannot be achieved using solid-core root filling materials alone, making the use of root canal sealers a vital step. Sealers are used in root canals to fill gaps and irregularities in the space between the walls of the root canal and the filling materials. Various sealers used in the root canal include calcium phosphate, methacrylate resin-based, calcium silicate phosphate, mineral trioxide aggregate (MTA), silicone, epoxy resin, and zinc oxide-eugenol-based sealers [3].
The most commonly used sealers are zinc oxide-eugenol and epoxy resin-based, used in nearly 74% and 25% of endodontic treatments, respectively [2]. Tubli-Seal™ is a eugenol-based sealer used in the majority of endodontic practice, as it is associated with various advantages, including minimal shrinkage of nearly 0.14% compared to resin-based sealers [3]. Additionally, eugenol-based sealers have been shown to have an antibacterial effect on the numerous microorganisms that may be found in a root canal for up to seven days. However, they have the limitation of the highest solubility among the available contemporary sealers [4].
Proper adaptation of the root canal sealer to the wall is vital to attain a maximal three-dimensional seal while obturating. The wettability and flowability of root canal sealers can affect their adaptability to the walls of the root canal. The wetting behavior of any liquid can be accurately indicated by the contact angle [4]. The contact angle of sealers on root canal dentin is affected by various dentin conditions through their effects on dentin surface tension. There is a paucity of information in the scientific literature concerning the wettability of sealers on dentin treated with Ca(OH)2 [5]. Hence, the present study was conducted to assess the outcomes of Ca(OH)2 use on the wettability of dentin following two and four weeks of using two different sealers in the root canal.
Materials And Methods
The present study was conducted to assess the outcomes of Ca(OH)2 use on the wettability of dentin following two and four weeks of using two different sealers in the root canal (ethical approval no: YMTDCH/2021/21/309).
In this study, 168 caries-free third molars were procured and kept at 4°C in 0.1% thymol solution. Using a saw and constant irrigation, 2 mm sections were cut parallel to the crown and close to the pulp. A deep coronal section was used as it provides a wider area. To remove enamel from the sample teeth, 1200-grit silicon carbide paper was used with water irrigation, followed by flattening of the pulpal side with carbide papers. The specimens were sonicated in deionized water and then placed for three minutes under running deionized water, followed by polishing with spray. The samples were then immersed again in running deionized water. For smear layer removal, specimens were treated with 6% NaOCl and 17% EDTA for five minutes.
These 168 specimens were divided into 12 groups of 14 specimens each, denoted G1 to G12. Groups G1-G6 were tested with Tubli-Seal. G1 was subjected to irrigation using sterile water for 14 days (two weeks), and then two minutes of chemical irrigation with 6% NaOCl (10 ml) and 10 ml of 17% EDTA. G2 and G3 were subjected to Ca(OH)2 (0.1 ml of UltraCal) for two weeks, followed by two minutes of irrigation with either 10 ml of sterile water or chemical irrigation. G4 to G6 were given the same treatments for four weeks. G7-G12 were assessed for BC sealer in the same way as G1-G6. Incubation with sterile water or Ca(OH)2 for two or four weeks was done in 100% humidity at 37°C (Figures 1-4). The contact angle was measured between the calcium silicate-based sealer (BC sealer) or the zinc oxide-eugenol-based sealer (Tubli-Seal) and the treated dentin surface. These sealers are available in auto-mix and premix syringes, eliminating manual mixing errors. Before the contact angle was measured, specimens were air-dried for two seconds from a six-inch distance, followed by placing a 2 µL drop of sealer on the dentin. A PGX goniometer was used to assess the contact angle immediately after sealer detachment from the pipette. The contact angle was measured at room temperature, in triplicate, for each dentin disc.
Also, 18 additional dentin discs were made and split into six groups (n=3), which were treated similarly and used for EDX (energy-dispersive X-ray) measurement to assess surface chemical changes following the different treatments. Following treatment, samples were dried for 48 hours, and weight percentages were assessed for nitrogen (N), carbon (C), phosphorus (P), and calcium (Ca) using SEM (scanning electron microscopy). EDX analysis was done on five random spots at X1000 magnification and 15 kV. Any remaining Ca(OH)2 and morphological changes in the treated dentin surface were assessed on SEM in the 18 samples, each of which was air-dried for 48 hours in a low-vacuum-pressure desiccator. Three scanning electron microscopy images were captured at X1000 magnification for all samples at the treatment site: one from the center and two from the edges. Cleanliness analysis was done using the 5-grade scale of Alturaiki et al. (2015) [6].
The collected data were subjected to statistical evaluation using SPSS software version 21 (IBM Corp., Armonk, NY), with one-way ANOVA and the t-test used to formulate the results. The data were expressed as number and percentage, and as mean and standard deviation. The level of significance was set at p<0.05.
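As an illustration of the statistical procedure described above (one-way ANOVA followed by pairwise t-tests at p<0.05), the following minimal Python sketch reproduces the analysis shape on hypothetical contact-angle data; SciPy stands in for SPSS, and the group names and values are placeholders, not the study's raw measurements:

from scipy import stats

# Hypothetical contact-angle measurements (degrees) for three of the
# twelve groups; values are illustrative only, not the study's data.
g1 = [92.1, 95.4, 90.8, 97.0, 93.5, 91.2, 96.3]
g4 = [104.2, 108.9, 99.7, 106.1, 103.4, 110.0, 101.5]
g5 = [85.0, 79.8, 92.6, 88.1, 83.9, 76.4, 90.2]

# One-way ANOVA across the groups, mirroring the SPSS workflow.
f_stat, p_anova = stats.f_oneway(g1, g4, g5)
print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.4f}")

# Pairwise comparison of two groups with an independent-samples t-test.
t_stat, p_t = stats.ttest_ind(g4, g5)
print(f"G4 vs G5 t-test: t={t_stat:.2f}, p={p_t:.4f}")
print("significant at p<0.05" if p_t < 0.05 else "not significant")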
Results
The study results showed that, for contact angle, in groups G1-G6 (Tubli-Seal) the highest and lowest contact angles were seen in G4 and G5, at 104.7±6.9° and 85.2±15.3°, respectively. In groups G7-G12 (BC), the highest and lowest contact angles were seen for G10 and G11, at 145.2±5.0° and 128.6±8.3°, respectively (Table 1). For the duration effect with Tubli-Seal, G4 had a significantly higher contact angle than G1, whereas a significantly lower contact angle was seen for G5 than for G2. Between G6 and G3, no significant difference was seen. For BC sealer, between four weeks (G10-G12) and two weeks (G7-G9), no significant difference was seen between G10 and G7. A significantly lower contact angle was seen for G11 than for G8, whereas G12 had a higher contact angle than G9 (p<0.0001). At two weeks for Tubli-Seal, a significantly higher contact angle was seen for G3 than for G2. Similarly, at four weeks, a higher contact angle was seen for G6 than for G5. At two weeks for BC sealer, a significantly lower contact angle was seen for G9 than for G8, and at four weeks, a higher angle was seen for G12 than for G11 (Table 1).
For EDX, the measurement of the four elements showed that the group-by-duration interaction was significant for N. At two and four weeks, the use of water irrigation with calcium hydroxide yielded significantly higher Ca and P, relative to N and C, than Ca(OH)2 with chemical irrigation and chemical irrigation without Ca(OH)2. Also, significantly lower C was seen in Ca(OH)2 with chemical treatment compared to chemical irrigation without Ca(OH)2, and no difference was seen for Ca, P, or N. The same treatment at two weeks showed p-values significantly lower than at four weeks. N was significantly higher at two weeks than at four weeks. No difference was seen for Ca(OH)2 with water versus chemical irrigation. For two and four weeks, no significant difference was seen for C in either treatment (Table 2).
Concerning the SEM analysis, the effects of treatment duration and group on calcium hydroxide particle removal were assessed, and duration was found not to affect the degree of calcium hydroxide particle removal. The irrigation solution, however, showed a significant effect on calcium hydroxide particle removal and cleanliness (p=0.0006). More remnants of calcium hydroxide particles were seen in the water irrigation group than with chemical irrigation. After chemical irrigation no Ca(OH)2 remained, the same as seen in the control group (Tables 3-4).
TABLE 5: Alturaiki et al. cleanliness scores in study groups
On assessing the percentages of elements, including calcium, phosphorus, nitrogen, and carbon, in the study specimens, the interaction of duration and group was statistically significant for nitrogen (p=0.01). A statistically significant association was also seen for nitrogen with treatment type and with duration-by-treatment type (p<0.001 for both). For calcium, a significant association was seen only with treatment type (p<0.0001); the associations with duration and with duration-by-treatment type were non-significant (p-values of 0.61 and 0.24, respectively). For carbon, a significant association was seen only with treatment type (p<0.001). The percentage of phosphorus in the study specimens showed statistically significant associations only with duration and treatment type (p-values of 0.03 and <0.0001, respectively), as shown in Table 5.
Discussion
Obturating the root canals following biomechanical preparation with root canal sealers has various proven advantages, including the sealing of irregularities of the root canal walls such as deltas, spaces inaccessible to obturating materials, and apical ramifications [6]. Root canal sealers also act as binding agents between the root canal filling material and the walls of the root canal [7]. Hence, adequate wetting and flow remain vital physicochemical properties of a root canal sealer during obturation. The wettability of the intra-radicular dentin surface largely governs the adhesion of the root canal sealer, making the contact angle a useful indicator of liquid wettability [8,9].
The contact angle is formed by the tendency of a liquid to spread on the surface of a solid. When the contact angle is greater than 90°, the liquid is considered non-wetting; in cases with a contact angle of <90°, the liquid is considered to wet the substrate. Complete wetting is represented by a contact angle of zero. A better interaction between the solid and liquid surfaces is considered to exist when the contact angle is low [10,11].
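For background, the standard relation connecting the equilibrium contact angle to the interfacial tensions is Young's equation, a textbook result not derived in this paper: for a sessile drop on a smooth solid,

$$
\gamma_{SV} = \gamma_{SL} + \gamma_{LV}\cos\theta_c
\quad\Longleftrightarrow\quad
\cos\theta_c = \frac{\gamma_{SV} - \gamma_{SL}}{\gamma_{LV}},
$$

where $\gamma_{SV}$, $\gamma_{SL}$ and $\gamma_{LV}$ are the solid-vapor, solid-liquid and liquid-vapor interfacial tensions; $\cos\theta_c > 0$ (i.e., $\theta_c < 90^\circ$) corresponds to the wetting case described above.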
The results of the present study showed that the highest contact angle was seen in the Tubli-Seal group. For duration effects in Tubli-Seal, a significantly lower contact angle was seen in G1 compared to G4, and in G5 compared to G2. In comparing G6 with G3, no significant difference was observed. In the BC sealer groups, no significant difference was seen between G10 and G7. These results were consistent with the findings of Yassen et al. [12] in 2015 and Tummala et al. [13] in 2012, in which the authors reported contact angles and effects comparable to those in the present study. The lower contact angle of Tubli-Seal shows that it has better wettability than the other sealer used in the present study.
On EDX measurement, the assessment of duration within the groups was significant for the weight percentage of nitrogen. Significantly lower C and N, relative to Ca and P, were seen in samples treated with Ca(OH)2 and water irrigation compared to chemical irrigation without calcium hydroxide and chemical irrigation with calcium hydroxide. Compared to chemical irrigation without Ca(OH)2, significantly lower C was seen for Ca(OH)2 with chemical treatment, with a significantly smaller difference at two weeks than at four weeks. Lower N was seen at four weeks than at two weeks. These findings agree with the results of Nagas et al. [14] in 2012 and Mohammadi et al. [15] in 2012, in which the authors reported similar differences between water and chemical irrigation on dentin wettability. Following EDX, an increase was seen in calcium and phosphate particles.
After the SEM analysis, the effects of treatment duration on the removal of calcium hydroxide particles were considered; duration did not affect the degree of calcium hydroxide particle removal. A significant effect on cleanliness and calcium hydroxide particle removal was shown by the irrigation solution (p=0.0006). Compared to chemical irrigation, more remnants of calcium hydroxide particles were seen in the water irrigation group. These results were comparable to the studies of Bohn and Ilie [16] in 2014 and Ballal et al. [17] in 2013, in which similar SEM findings were seen following the use of calcium hydroxide with different irrigants and the removal of remaining calcium hydroxide particles. SEM analysis showed that following chemical irrigation, calcium particles were not seen irrespective of treatment duration; following irrigation with water, however, calcium particles were seen.
Study limitations include that these findings should be evaluated in vivo; human-based trials should therefore be carried out so that other sealer systems can also be evaluated. Additionally, this study had a single assessment time and a small number of included samples, warranting longer-term studies on a greater number of samples.
Conclusions
Within its limitations, the present study concludes that better dentin wettability is seen with Tubli-Seal than with BC sealer. The contact angle between dentin and the root canal sealers was smaller in the samples with remaining calcium hydroxide. Also, calcium hydroxide can be removed from the polished dentin surface with two minutes of irrigation with 17% EDTA and 6% NaOCl. The limitations of the present study were a single assessment time and a small number of included samples, warranting longer-term studies on a greater number of samples.
Additional Information Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Approval YMTDCH/2021/21/309 was issued. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work. | 2022-09-25T15:18:46.076Z | 2022-09-01T00:00:00.000 | {
"year": 2022,
"sha1": "52063ab313c1e20cdc6297fc33257748bea0efe9",
"oa_license": "CCBY",
"oa_url": "https://www.cureus.com/articles/105859-assessing-the-contact-angle-between-dentin-treated-with-irrigation-and-calcium-hydroxide-and-root-canal-sealers.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a49f61cbb1ed3080b82253c07247a1bab8aef17e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
52027470 | pes2o/s2orc | v3-fos-license | An Exploratory Survey of Sound Levels in New York City Restaurants and Bars
For several decades, there has been a significant need to better educate the public about noise pollution. A small number of small-scale studies have focused on the sound levels of restaurants and their impact on health and hearing. There have also been an increasing number of media articles stating that eating and drinking venues are getting increasingly loud, making it more difficult for people to connect with others in conversation. This study reports on an exploratory large-scale noise survey of the sound levels of 2376 restaurants and bars in New York City using a novel smartphone application, and categorizes them based on how quiet or loud they were. The results suggest that: 1) a significant number of venues have high sound levels that are not conducive to conversation and may be endangering the health of patrons and employees; 2) the sound levels reported by venue managers on their online public business pages generally underestimated actual sound levels; and 3) the average sound levels in restaurants and bars are correlated with neighborhood and type of cuisine.
Introduction
Noise pollution is a widespread issue that a plethora of studies, both in the United States and abroad, have addressed. A small number of these studies have focused on the sound levels of restaurants and the impact of noise on health and hearing. There have also been an increasing number of media articles from several cities stating that eating and drinking establishments are getting increasingly loud, either because of background music or because of architectural design that enhances rather than abates interior sound [1]-[12]. This trend of increased noise can be attributed to modern design changes that are viewed as creating an exciting and bustling atmosphere [13]. This translates into more open kitchens; stripped-down or hard surfaces such as wooden floors; more windows; fewer tablecloths; no carpeting; and fewer plants and less paneling, which help absorb sound. High table density, with people in close proximity to one another, and background music are also large contributors. There are also financial incentives for venue managers to maintain higher sound levels, since studies show that noise causes people to drink more and eat faster, generating greater table turnover [14] [15] [16].
High sound levels in restaurants and bars can negatively impact the health and quality of life of patrons and employees, as hearing loss is the third most common chronic physical condition in the US [17], and noise is the most common "modifiable environmental cause" of hearing loss, which is present in 24% of adults [18]. Noise is also associated with tinnitus, a noise or ringing in the ears, which affects approximately 11% of adults, and hyperacusis, an increased sensitivity to sound, which affects approximately 6% of adults [19] [20] [21]. Research also shows that noise can contribute to exhaustion and release cortisol in the body, as it is linked to increased stress [22] [23], hypertension, ischaemic heart disease, stroke [24] and obesity [25].
A recent study showed that a large number of U.S. adults may be exposed to noise levels above the EPA-recommended daily noise dosage limit of 70 dBA [26]. Another study of more than 4500 adults in New York City suggested that nine out of ten adults exceeded the same EPA daily noise dosage limit [27].
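The daily limits cited here are conventionally expressed as an equivalent continuous sound level over a period T (24 hours for the EPA daily figure); as a reminder, the standard definition, which is general acoustics background rather than a formula from this paper, is

$$
L_{eq,T} = 10 \log_{10}\!\left(\frac{1}{T}\int_0^T 10^{L(t)/10}\,dt\right),
$$

so short loud episodes are weighted energetically rather than by a simple arithmetic average of dBA values.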
To address the dangers of noise pollution, the Centers for Disease Control and Prevention (CDC) recommends avoiding prolonged exposure to loud environments to prevent noise-induced hearing loss (NIHL) [18] [28].
Noise pollution also affects quality of life [29]. Even for people with normal hearing, noise is a barrier to the enjoyment of communicating, socializing and connecting with colleagues, friends and families through intimate conversation. The focus of this study is on indoor noise, specifically in restaurants and bars. Restaurants are not just eating establishments but social spaces important for gathering and communicating, and noise has hindered and negatively affected customers' dining experience [30]. And for those with hearing loss, the difficulty of hearing others in loud venues can lead many to withdraw from social situations, leading to increased isolation [31].
The number of people exposed to high sound levels has tripled since the 1980s [32] due, in part, to the rise of personal music players and louder restaurants and bars. The Speak Easy 2016 survey of 1461 people with and without hearing loss in the United Kingdom (UK) showed that 77% of respondents reported increasing sound levels in restaurants since 2011 [33].
There are increasing numbers of reports from patrons complaining about the difficulty of conversing with fellow diners in loud venues. In New York City, noise was the number one complaint [34]. Zagat also reported that 72% of New Yorkers actively avoid restaurants that are too loud [35]. In 2015, nearly 80% of 1232 surveyed people in New Orleans said they "loathe" restaurant noise, with noise annoying them the most (27%), followed by poor service (24%) [36]. The Speak Easy survey also found that, due to venues with loud noise: 1) close to eight out of ten people had left a restaurant, café or bar; 2) 79% had difficulty holding a conversation in such venues; and 3) 91% stated they would not return to such a venue. Even among respondents with no hearing loss, 84% said they considered lower noise levels important when deciding where to eat [33]. And an organization in the UK called Pipedown, formed to campaign for freedom from background music, has now expanded to the U.S., Canada, Germany, Austria, Switzerland, Holland and South Africa [37].
Noise is clearly an important health and quality-of-life issue. The general public could benefit from information about the sound levels of a particular venue, both for health and for the ability to converse with others. Because many may not know which sound levels constitute a quiet or loud auditory environment, they may be unknowingly endangering their health by patronizing an establishment they believe to be quiet but that is actually loud.
Advances in smartphone technology are providing new ways to capture sound levels with digital sound level meters, which could become a valuable tool for improving the public's noise pollution awareness [18]. Ideally, a database would be maintained by a local governmental entity and contain frequent, recurring, precise sound level measurements of long duration for all venues in a city. And, similar to the health grades that are prominently displayed on venue windows, a noise grade would enable patrons to know the general sound level of each venue. Unfortunately, this is impractical due to the high costs of labor, equipment and time. While some people do employ a digital sound level meter on their smartphone to take real-time measurements, such data is not collected, aggregated and widely disseminated for public consumption. What is suggested is a method to systematically quantify the noise levels of a large number of venues employing digital sound level meters and to make such information easily accessible to the public.
A few studies have attempted a systematic quantification of restaurant and bar sound levels in an urban environment, but the numbers of venues measured were small, each covering fewer than 100 restaurants, ranging from Lebo's study of 27 restaurants in the San Francisco Bay Area in 1994 [38] and Yu [43], to Nielsen's study of five restaurants in Denmark in 2016 [13].
Because these studies were based on small numbers of restaurants and were not conducted on a continuing basis, the data collected do not accurately represent current sound levels and do not help people determine which venues are quiet or loud. In addition, such information is not easily accessible to the public. Some online websites that rank or describe restaurants and bars, such as Yelp, do show a venue's sound levels, but where such information is included, it is based on subjective interpretation and reporting by venue managers and/or patrons. Such information may not provide accurate data about sound levels.
This exploratory study is based on an effort to capture sound level data on a large-scale basis in an urban environment (New York City) using the SoundPrint app ("SoundPrint"), a free digital sound level meter available on the iPhone at no cost to users, which measures, aggregates, categorizes and displays the average sound levels of venues in quiet, moderate, loud or very loud categories on an ongoing basis and is easily accessible to the public.
We believe the public lacks sufficient awareness in determining whether a certain auditory environment is quiet or loud. For most people without digital sound level meters, such a determination is merely subjective. If most venues are loud, people may erroneously believe that such noise levels are the "norm" and therefore acceptable and safe. This study aims to compare people's subjective interpretation and reporting of sound levels to actual quantitative measurements.
Another purpose of this study is to determine whether the recent increase in the number of media articles and qualitative surveys indicating that venues are too loud is accurate. In answering this, data from SoundPrint was utilized to explore the following: 1) What percentage of venues in Manhattan are quiet or loud? 2) Are there certain types of venues, or venues in certain neighborhoods, that are generally quieter or louder than others? And 3) Do managers and/or patrons accurately assess and report the sound levels of these venues?
Methodology
2) Venues measured
A total of 3137 restaurants and bars were measured at least once for their sound levels. To qualify for inclusion in the analysis, however, each venue had to be measured a minimum of three times during prime days and hours, and many were measured more than three times 2,3. A total of 2376 restaurants and bars met this minimum requirement for inclusion in this report. Only restaurants that had on-the-premises seating with waiter-based service were included in this analysis; venues that had pick-up-at-the-counter service or no waiters were excluded.
There are several reasons why a venue may not have been measured the minimum three times: a) the venue launched after the beginning of the collection stage and thus was not measured at least three times; b) the venue was either permanently or temporarily closed (i.e., for a private event, remodeling, or a change of location); or c) the collector missed the venue when surveying the selected neighborhood or streets.
3) Instrument of measurement
The measurements were conducted with the SoundPrint app, a free digital sound level meter and aggregator available on the iPhone platform. SoundPrint measures dBA with slow response and automates the sound level calibration across different iPhone hardware devices for a more consistent measurement.
The main output is the average dBA (an arithmetic average), as this represents sufficient information for individuals to measure, understand and employ in making decisions about whether to patronize a venue. The venue-level output is the result of collecting the average dBA of each individual measurement and averaging across those measurements. To illustrate, if one venue had three separate average measurements of 72 dBA, 74 dBA and 78 dBA, then the venue's dBA used for this analysis would be 75 dBA.
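A minimal sketch of this venue-level averaging, with function names that are illustrative rather than SoundPrint's actual code. Note two details: the 72/74/78 example yields 75 dBA only after rounding (the raw mean is 74.67), and the paper's arithmetic mean of dBA values differs from an energy-based (Leq-style) mean, which weights louder measurements more heavily and is therefore always at least as large:

import math

def venue_average_dba(measurements):
    # Arithmetic mean of per-measurement average dBA values, as in the study.
    return sum(measurements) / len(measurements)

def venue_energy_average_dba(measurements):
    # Energy-based (Leq-style) mean, shown only for contrast; not the study's metric.
    mean_power = sum(10 ** (m / 10) for m in measurements) / len(measurements)
    return 10 * math.log10(mean_power)

readings = [72.0, 74.0, 78.0]
print(round(venue_average_dba(readings)))            # 75 (74.67 before rounding)
print(round(venue_energy_average_dba(readings), 1))  # ~75.4, slightly higher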
The SoundPrint app's sound level measurement accuracy was tested in two separate ways. First, the app's sound measurement capabilities were tested side by side with a Type-1 sound level meter at various levels of pink noise; agreement was within ±0-1.2 dB at all levels 4. Second, it was compared to the Faber SoundMeter iPhone app, available on the Apple store, which was found by Chuck Kardous at NIOSH to be the most accurate, within 0.2% of an OSHA-certified sound level meter [44]. Results show that across seven different general static sound level decibel ranges, SoundPrint was within ±0-1 dBA of the SoundMeter app 5.
1 The West Village is comprised of the West Village and Greenwich Village. Flatiron/Gramercy also includes Union Square, and SoHo includes Nolita. 2 The highest number of times a venue was measured was 18. 3 The breakdown of the number of venues measured "x" times is as follows:
4) How measurements were taken and submitted
To take a measurement, the 24 collectors, recruited through friends and posting sites, were instructed to measure from a central spot in the venue. If the collector deemed this not to be an effective location, then the measurement took place closer to the source of the ambient sound. The collector was instructed to ensure that there was at least three feet of space extending outward at all radial and azimuthal angles from the microphone during the measurement.
Note that regular users of the SoundPrint app also crowdsourced their sound level measurements to the database.
Each measurement runs for a minimum duration of 15 seconds and calculates the average sound level over the length of the measurement. Note that while the minimum duration is 15 seconds, the collector was instructed to measure for the amount of time necessary to sufficiently reflect the actual sound level; thus, many of the measurements exceeded 15 seconds. When the collector ended the measurement, he or she then tagged the measurement to the venue name and submitted it to the SoundPrint database. The database then produced an arithmetic average sound level for each venue based on all the submitted measurements tagged to that venue.
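Putting the tagging, the three-measurement inclusion rule and the averaging together, a hedged sketch of the aggregation step (a plain-Python stand-in for the SoundPrint database, with hypothetical venue names and readings):

from collections import defaultdict

# (venue, average dBA of one submitted measurement) pairs, as tagged by collectors.
submissions = [
    ("Cafe A", 72.0), ("Cafe A", 74.0), ("Cafe A", 78.0),
    ("Bar B", 83.5), ("Bar B", 81.0),  # only two measurements: excluded
]

by_venue = defaultdict(list)
for venue, dba in submissions:
    by_venue[venue].append(dba)

# Keep only venues meeting the minimum of three measurements, then average.
venue_avg = {v: sum(ms) / len(ms) for v, ms in by_venue.items() if len(ms) >= 3}
print(venue_avg)  # {'Cafe A': 74.666...}; 'Bar B' fails the inclusion rule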
6) Sound level categories
As shown in Table 1, the EPA has shown that people with normal hearing have difficulty following a conversation above 75 dBA [45]. This is supported by Lebo et al.'s description of restaurants with noise levels of 75+ dBA as those that made conversation difficult for patrons with normal hearing and very difficult for those with hearing loss [38]. Loud noise could alternatively be described as an auditory environment in which people must raise their voice to be heard by someone sitting within three feet of them. And in 1990, the National Institutes of Health Consensus Development Conference on Noise and Hearing Loss stated that 75 dBA is the threshold below which sound is unlikely to cause permanent hearing loss [50]. This implies that as the sound level increases above 75 dBA, the likelihood of incurring permanent hearing loss increases. Hence SoundPrint sets 75 dBA as the threshold between the Moderate and Loud categories.
An alternative interpretation for the Loud category is that it could represent an auditory environment that is more likely to be conducive to a fun, exciting atmosphere with lively music, lots of people in attendance, and loud conversation or shouting.
In summary, the four sound level categories are: i) 70 dBA or lower (Quiet); ii) between 71 and 75 dBA (Moderate); iii) between 76 and 80 dBA (Loud); and iv) 81 dBA or higher (Very Loud).
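These thresholds translate directly into a small classification function; a sketch follows, where the handling of non-integer averages at the boundaries is my assumption, since the paper states the categories in whole dBA:

def categorize(avg_dba: float) -> str:
    # Thresholds from the study: <=70 Quiet, 71-75 Moderate,
    # 76-80 Loud, 81+ Very Loud.
    if avg_dba <= 70:
        return "Quiet"
    if avg_dba <= 75:
        return "Moderate"
    if avg_dba <= 80:
        return "Loud"
    return "Very Loud"

for level in (68, 74.7, 77, 81):
    print(level, categorize(level))
# 68 Quiet / 74.7 Moderate / 77 Loud / 81 Very Loud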
The presentation of the data in this study uses the categories Low, Moderate and High, with the High category separated into two subcategories, Loud and Very Loud. The High category reflects sound levels of 76 dBA and above; as the dBA level increases, the likelihood of incurring noise-induced hearing loss increases and the ability to converse with others becomes more difficult.
7) Restaurant classifications
The analysis also categorizes restaurants by various classifications, including user ratings, cost, ambience and type of cuisine. This information was gleaned from each venue's business listing on the Yelp website ("Yelp"), which displays the aforementioned variables.
8) Analysis of managers' and/or patrons' assessment and reporting of venues' sound levels
The study also attempts to gauge the accuracy of the managers' and patrons' reporting of a venue's sound level.
The study has several limitations. First, a digital sound level meter app was used rather than a highly advanced sound level measuring instrument; although SoundPrint provides a reasonable approximate measurement of the sound level, it is not sufficiently accurate for legally-based measurements. Second, only one variable, the average dBA, was measured and aggregated; other variables that could provide additional insight, such as minimum or maximum sound levels, occupational density, reverberation or other room characteristics such as clarity, were not measured. Third, the length of the recordings is typically shorter than the measurements conducted in prior restaurant noise studies. This was due to the restrictive timeframe of primetime days and hours, the large number of venues that needed to be measured at least three times, and the limited human resources available to complete the measurements.
Fourth, there is no uniform recording length for each submission, as the length of the recording varies; despite the 15-second minimum, it is the user's decision when to stop each recording. The vast majority of submissions were under one minute. Fifth, due to the nature of the app being a crowdsourcing app, and despite giving instructions to collectors and users, there is no verifiable way to ensure they followed precise instructions (i.e., record from the center of the room, ensure at least three feet of space at a 360-degree angle) during their measurements. Additionally, the resulting average sound level for a particular venue could differ from the sound levels at other times on the same day. It is possible that a collector took a measurement of a venue that was quieter or louder at a specific time, and an hour later the venue produced a louder or quieter sound level that may be more typical for the venue.
Data
In general, Tables 2-4 show the percentage breakdown of venues by sound level category. Tables 5-8 show the average sound level of restaurants by various segments, such as user ratings, cost, ambience, and cuisine type. Table 9 represents the percentage breakdown of correct assessments and reporting by managers and/or patrons, as gleaned from the venue's Yelp business listing.
In detail, Table 2 and Table 3 show the percentage breakdown by sound level category of "All Restaurants" and "Mainstream Restaurants." The tables also show the breakdown by neighborhood to enable comparisons between different areas in Manhattan. Table 2 represents all the restaurant venues in the study. The study analyzed restaurants that had waiter-based service and on-the-premises dining for patrons. However, some of these restaurants also offered takeout and delivery options and generated much of their revenue from one of those two options. It became evident to collectors that these restaurants were often either relatively empty or had minimal patrons dining on the premises, which meant the sound level often measured as Quiet.
These venues do not represent the type of restaurant the study targets, namely those that generate their business primarily from customers dining on the premises. Consequently, a second category, titled "Mainstream Restaurants," was formed as an alternative, but not exact, approach to analyzing restaurants that are mainly on-the-premises dining establishments. The restaurants that earned most of their business via takeout and delivery, and also had on-the-premises seating with mostly empty tables, were predominantly Asian and Indian cuisine-based restaurants. Thus, the Mainstream Restaurant table excludes all Asian and Indian restaurants, at the expense of excluding those Asian and Indian restaurants that generate most of their revenue via on-the-premises dining.
1) Restaurants
For restaurants, as shown in Table 2, including those that have a significant delivery and takeout presence, 10% of the venues are Quiet, 27% are Moderate, and 63% are High, with 38% being Loud and 25% Very Loud 6. The average sound level for all restaurants was 77 dBA.
2) Mainstream Restaurants (On-the-Premises Dining)
For Mainstream Restaurants, as shown in Table 3, 6% are Quiet, 23% are Moderate, and 71% are High, with 40% being Loud and 31% Very Loud. The average sound level for all mainstream restaurants was 78 dBA. Many of the neighborhoods below 34th Street (excluding the Financial District, Little Italy and Chinatown) tend to be louder than those above 34th Street. On the Lower East Side of Manhattan, 0% of the restaurants measured are Quiet, 9% are Moderate and 91% are High, with 38% being Loud and 54% being Very Loud. The Lower East Side, Tribeca, SoHo, Chelsea, East Village, West Village, Flatiron and Murray Hill all have over 70% of their venues in the High category. Not surprisingly, these neighborhoods also have the highest average sound levels for restaurants. In contrast, neighborhoods above 34th Street (except Murray Hill), such as the Upper West Side, Upper East Side, Midtown East and Midtown West, tend to have a lower percentage of venues in the High category and also lower average sound levels.
3) Bars
For bars, as shown in Table 4, 2% are Quiet, 8% are Moderate and 90% are High, with 31% being Loud and 60% Very Loud. The average sound level for all bars was 81 dBA. Segmenting the data by neighborhood shows a pattern consistent with the restaurant data, as the same neighborhoods below 34th Street tend to have higher sound levels than their uptown brethren above 34th Street.
4) Restaurants by Cost, User Ratings, Ambience and Cuisine
For restaurants segmented by User Ratings, as shown in Table 5, a positive correlation was found between sound levels and User Ratings; while statistically significant at the p<0.01 level, the relationship is not strong (R = 0.093).
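The form of this correlation test (though not its reported value, since the underlying data are not published) can be reproduced with a Pearson test; a sketch on illustrative arrays:

from scipy import stats

# Illustrative paired observations: numeric user rating and venue average dBA.
ratings = [3.0, 3.5, 4.0, 4.5, 4.0, 3.5, 5.0, 4.5, 3.0, 4.0]
dba     = [74.2, 75.0, 77.8, 79.1, 76.5, 74.9, 80.3, 78.0, 73.8, 77.1]

r, p = stats.pearsonr(ratings, dba)
# The paper reports R = 0.093 with p < 0.01 on its full sample; whether the
# "9% of variation" quoted in the Discussion refers to R or to R^2 is
# ambiguous in the text, so both are printed here.
print(f"R={r:.3f}, p={p:.4f}, R^2={r*r:.3f}")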
For restaurants segmented by Cost, as shown in Table 6, there was no statistically significant relationship found between the price of a venue and the average sound level experienced. For restaurants segmented by Ambience, as shown in Table 7, the different ambiences are associated with differences in sound levels, confirmed by a significant (p = 0.000) one-way ANOVA test and Games-Howell post-hoc values (all significant at the p < 0.05 level); Trendy, Classy and Upscale (TCU) restaurants are associated with the highest levels, Intimate with the lowest, and Casual between the two. For restaurants segmented by Cuisine, as shown in Table 8, the different cuisine types were found to have statistically significant differences in measured sound level. An ANOVA test conducted with the different cuisine groups was found to be highly significant (p = 0.000). Indian (73 dBA), Chinese (73 dBA), Vietnamese (73 dBA) and Japanese (74 dBA) restaurants comprise the relatively quieter restaurants, compared to Mexican (80 dBA), Latin (79 dBA), American (79 dBA), Spanish (79 dBA), Korean (78 dBA) and Mediterranean (77 dBA) restaurants.
5) Manager's assessment and reporting of sound levels
For the Quiet venues, as shown in Table 9, 26% of the Yelp venue listings matched SoundPrint's data as being Quiet, and 74% mismatched as a category other than Quiet. For the Loud or Very Loud venues, 7% matched SoundPrint's data as being Loud or Very Loud, and 93% mismatched with SoundPrint's data.
Discussion
1) A significant majority of restaurants are likely too loud for conversation and connecting with others
As Tables 2-4 suggest, the number of venues that are Loud or Very Loud is significantly high. Among Mainstream Restaurants, 71% produce average sound levels above the threshold at which it is likely too difficult to have a conversation without the need to raise one's voice. This is supported by the average sound level of 78 dBA, which means that if a patron were to visit a randomly selected restaurant during peak days and hours in Manhattan, it would likely be Loud. Also, a significant majority of bars (90%) are Loud or Very Loud.
This data lends credence to the anecdotal comments, recent surveys and increasing number of media articles that suggest that restaurants and bars are most often too loud for conversation and socializing.
2) A significant number of restaurants and bars are dangerous to people's hearing health Approximately 31% of Mainstream restaurants and 60% of all bars have sound levels during peak days and hours that are potentially dangerous to the hearing health of their patrons, and even more importantly, to their employees (i.e.waiters, hostesses, bartenders, chefs and managers) who are often subject to a longer duration of exposure to occupational noise.during the evening hours compared to the daytime.There are still a good number of venues that are well attended in the evenings, just less so compared to their downtown brethren.
This data could assist people by informing them of the neighborhoods that have quieter or noisier restaurants and bars based on their sound level preference and could be a deciding factor in choosing which neighborhood to live in.
The data could also assist local governance in Manhattan, notably the community boards that represent certain districts and recommend various policies to government agencies.These community boards now have access to data that, should they deem it an important health issue, could help them undertake and direct noise pollution awareness efforts.
4) User Ratings, Cost and Ambience trends
Table 5 suggests that restaurants with High user ratings are louder than those with Average user ratings, which in turn are louder than those with Low user ratings. This is not surprising, as one might expect more people to attend higher-rated restaurants, but without accounting for occupational density this cannot be truly tested. The statistical correlation between the two factors can only account for approximately 9% of the variation in the sound level, so the relationship is not strong.
Table 6 suggests that Average-priced restaurants tend to be louder than Moderate, Expensive and Cheap restaurants. We hypothesize that more people patronize an Average-priced restaurant compared to a Moderate or Expensive one, and hence such restaurants are more crowded. However, when a statistical correlation was run, no significant relationship was found.
Table 7 suggests that Intimate restaurants are quieter than Casual or Trendy, Classy or Upscale ones. The collectors noted that Intimate restaurants tended to have lower table density, more carpeting and tablecloths, and lower or less background music. However, each individual restaurant was not quantifiably surveyed based on these features, and hence this trend cannot be deemed a reliable predictor of sound levels. The differences between the means of the values in the three categories were found to be statistically different from one another.
5) Restaurants serving certain cuisine types may be louder or quieter than others
Table 8 suggests that restaurants may be quieter or louder based on their cuisine type. This is not surprising, since the collectors observed that Indian, Chinese and Japanese restaurants tended to have less background music, and more carpeting and tablecloths to absorb sound, and that their patrons tended not to talk as loudly compared to patrons of venues with other cuisines. Whether the latter was due to less background music that would otherwise require patrons to raise their voices (otherwise known as the Lombard effect), or due to the type of people who patronize these venues, cannot be determined without resorting to cultural stereotypes (i.e., that Latin, Greek, American and Korean venues tend to have patrons who are louder).
Thus, if a patron desires a quieter restaurant, the chances of finding one are likely to be greater at either Indian or Chinese venues.
Table 9 suggests that user-based subjective reporting of sound levels by managers and/or reviewers on Yelp does not match the objective measurements of SoundPrint's data: a) 74% of SoundPrint's Quiet venues are reportedly mismatched as being Average, Loud, or Very Loud on Yelp, and b) 93% of SoundPrint's Loud or Very Loud venues are reportedly mismatched as being either Quiet or Average. This suggests that the general public lacks sufficient awareness of what constitutes a quiet or loud auditory environment. Consequently, should people rely on such subjective interpretation, they may be unknowingly placing themselves in Loud or Very Loud auditory environments that they believe to be Quiet or Moderate, which could be dangerous to their hearing health.
7) Sound level trends over time: to be determined
There appears to be an increase in the sound levels of restaurants over the past 10 years, as noted by the number of media articles, qualitative surveys and anecdotal comments about the so-called "increasing din" of restaurants. Thus, it would be beneficial to quantifiably gauge whether restaurants are actually getting louder over time. Because this is the first exploratory study, we cannot determine trends or make comparisons at this time. However, we aim to collect additional data and conduct comparative analyses against this study's data on an annual or biennial basis in the future.
Conclusion and Recommendations
In this exploratory study, the data suggest that the increasing number of media articles and anecdotal comments from qualitative noise surveys about sound levels being too loud are correct. A significant number of restaurants and bars in New York City have average sound levels that 1) approach levels that are too difficult for patrons to have a conversation without the need to raise their voice, and 2) approach levels that are known to be dangerous to hearing health. The average dBA for all restaurants included in this study is 77 dBA, for mainstream restaurants 78 dBA, and for all bars 81 dBA. A person randomly walking into a restaurant or a bar in New York City during prime days and hours is more likely than not to encounter a Loud or Very Loud auditory environment.
Furthermore, the sound levels of venues in New York City tend to be correlated with certain neighborhoods, possibly as a reflection of the venues in a neighborhood attracting a certain demographic. The sound levels also tend to be correlated with types of ethnic cuisine, possibly reflecting certain cultural preferences for the type of dining experience that appeals to venue owners and patrons. Further investigation is needed to support these findings, notably by conducting measurements using precision sound level meters; measuring more variables, including minimum and maximum sound levels, occupational density and reverberation; and measuring a significantly larger number of restaurants and bars compared to the numbers measured in the smaller-scale noise surveys. The relationship between sound levels and other variables, such as a venue's user rating, cost or ambience, also needs to be explored on a large scale.
The data also suggest that venue managers and patrons are not very accurate in assessing and reporting the actual sound levels of venues: 93% of venues that are either Loud or Very Loud by SoundPrint's objective measurements were reportedly mismatched, on a qualitative basis, as either Quiet or Average. This means that people may be patronizing or working in venues that are Loud or Very Loud while mistakenly believing them to be Quiet or Moderate.
Because traditional scientific sound level measurement practices are time- and labor-intensive, large-scale collection of sound level data on individual venues has been difficult. This has resulted in a public that lacks access to, and knowledge about, the sound levels of venues in their neighborhood. Such access is needed to help people determine whether a particular venue is likely to be quiet or loud, whether for social purposes (to be able to have a conversation) or to protect their hearing health. But with the advent of smartphone technology, the public now has the available tools, via digital sound level meters, to collect, crowdsource and create large sets of evidence-based sound level data for researchers, public health agencies and local governments to begin monitoring the effect of noise on hearing health and to more effectively raise noise pollution awareness.
SoundPrint allows the user to see the average sound level of an individual venue that has been measured and submitted to the database. Users can see the sound levels of individual venues in the SoundPrint app in three ways: a) in a list sorted by sound level (how quiet or loud venues are); b) on a visual map showing the sound levels of venues in a certain geographical area (i.e., the East Village); or c) by searching for a specific venue by name in the search bar. Note that the sound levels of individual venues available in SoundPrint are not shown or analyzed in this study; only aggregated data is.
Table 1
The Quiet and Very Loud thresholds are anchored in published exposure guidelines, but the Moderate and Loud categories are also guided by the threshold sound level at which the ability to hear and converse with others becomes either easier or more difficult. Note that the Discussion section discusses and compares SoundPrint's four categories of Quiet, Moderate, Loud, and Very Loud to Yelp's four noise level categories of Quiet, Average, Loud and Very Loud.
i) 70 dBA or lower (Quiet): ...hours in a day [49]. Two years later, the Environmental Protection Agency (EPA) calculated a safe noise exposure level for the general public on a daily basis to be 70 dBA [45]. This means that, on average, during a typical 24-hour day, one's noise exposure should be 70 dBA or lower to protect against noise-induced hearing loss, according to the EPA. This threshold has been supported by the World Health Organization (WHO), which has recommended that people avoid sound level exposures above 70 dBA [46]. Hence the Quiet category reflects an average dBA of 70 or lower.
ii) 81 dBA or louder (Very Loud): Notwithstanding the above, NIOSH's 85 dBA standard is not meant to protect all workers, as NIOSH acknowledged that 8% of workers could still develop material hearing loss under its guidelines. Also, the 85 dBA threshold assumes that the employee has no further noise exposure during the remaining 16 non-working hours of each day, and it accounted for only 40 occupational-work years of noise exposure rather than the 80-year life expectancy of today [47] [48]. In 2003, the European Parliament established new, and more stringent, standards for acceptable occupational noise exposure, lowering the threshold at which companies must make hearing protection available from 85 dBA to 80 dBA [49]. Thus, SoundPrint employs the more conservative 80 dBA as the threshold for a venue being Very Loud and potentially dangerous to one's hearing health.
iii) Between 71 and 75 dBA (Moderate) and between 76 and 80 dBA (Loud).
Table 1 .
Summary of sound level categories.
When filling out their business listing survey on Yelp, managers can choose one of four sound level categories: Quiet, Average, Loud or Very Loud. Patrons, or a Yelp reviewer, can subsequently fill out a similar survey when writing a Yelp review. Note that Yelp does not give the user the numerical dBA range associated with each sound level category; instead it relies on the subjective assessment and reporting of the manager and/or patron. The data was collected and analyzed as follows: First, the venue's sound level was quantitatively measured and categorized by SoundPrint as either Quiet, Moderate, Loud or Very Loud. Then this categorization was compared to Yelp's sound level categories. A venue was marked as "Matched" when its Yelp business listing reported the same sound level category as SoundPrint's categorization. For example, if SoundPrint measured "John Smith Restaurant" as having an average sound level of 69 dBA over three submitted measurements, it would be categorized as Quiet; if "John Smith Restaurant's" Yelp business page reported the sound level as Quiet, then it was marked as "Matched," while if the venue had listed itself as Average, Loud, or Very Loud, then it was marked as "Mismatched." Similarly, if SoundPrint categorized a venue as either Loud or Very Loud and the venue's Yelp business page reported the sound level as either Loud or Very Loud, then it was marked as "Matched"; if it was reported as Quiet or Average, it was marked as "Mismatched."
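The matched/mismatched rule described above can be expressed compactly. In the sketch below, the rules are implemented only for the two cases the paper describes (Quiet, and Loud/Very Loud treated as one bucket); the venue records are hypothetical:

def is_match(soundprint_cat: str, yelp_cat: str) -> bool:
    # Rules as described for Table 9: Quiet venues match only a Yelp "Quiet"
    # label; Loud/Very Loud venues match either "Loud" or "Very Loud".
    if soundprint_cat == "Quiet":
        return yelp_cat == "Quiet"
    if soundprint_cat in ("Loud", "Very Loud"):
        return yelp_cat in ("Loud", "Very Loud")
    raise ValueError("matching rule not described for this category in the paper")

# Hypothetical records: (SoundPrint category, Yelp-reported category).
venues = [("Quiet", "Average"), ("Loud", "Quiet"), ("Very Loud", "Loud")]
matched = sum(is_match(s, y) for s, y in venues)
print(f"{matched}/{len(venues)} matched")  # 1/3 matched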
Table 2 .
Sound levels by categories for all restaurants.
Table 3 .
Sound levels by categories for Mainstream restaurants (excluding Asian and Indian restaurants).
Table 4 .
Sound levels by categories for all bars.
Table 5 .
Average sound levels by user ratings.
Table 6 .
Average sound levels by cost.
Table 7 .
Average sound levels by ambience.
Table 9 .
Matching of self-reporting by venues/patrons-bars and restaurants.
Throughout this analysis, percentages may not add to 100% due to rounding.
As shown in Table 9, 26% of the Yelp venue listings matched SoundPrint's data as being Quiet, while 74% were mismatched as a category other than Quiet. For the Loud or Very Loud venues, 7% matched SoundPrint's data as being Loud or Very Loud, and 93% were mismatched.
Noise-induced hearing loss often does not appear until years later, by which time it is too late [51][52]. And for patrons, their hearing health also depends greatly on the amount and degree of noise exposure they experience during the rest of their 24-hour day, that is, whether their average daily noise exposure falls below the 70 dBA threshold recommended by the EPA. The Upper West Side and Upper East Side are known to be more family-oriented and residential, to skew older in age, and to have less of a vibrant night life. Midtown West and Midtown East are neighborhoods with a mix of residential units and business offices, with fewer people frequenting restaurants.
Table 7 suggests that Intimate restaurants are quieter than Casual or Trendy, Classy or Upscale ones. The collectors noted that Intimate restaurants tended to have lower table density, more carpeting and tablecloths, and lower or less background music. However, each individual restaurant was not quantifiably surveyed for these features.
Table 9 suggests that user-based subjective reporting of sound levels by managers and/or reviewers on Yelp does not match the objective measurements of SoundPrint's data: a) 74% of SoundPrint's Quiet venues are mismatched on Yelp as being Average, Loud, or Very Loud, and b) 93% of SoundPrint's Loud or Very Loud venues are mismatched as being either Quiet or Average. This suggests that the general public lacks sufficient awareness of what constitutes a quiet or loud auditory environment. Consequently, people relying on such subjective reports may unknowingly place themselves in Loud or Very Loud auditory environments that they believe to be Quiet or Moderate, which could be dangerous to their hearing health.
"year": 2018,
"sha1": "3c58d63911b15c6a5de3bdc8a8d713ca57ba10f9",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=86590",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "3c58d63911b15c6a5de3bdc8a8d713ca57ba10f9",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Psychology"
]
} |
Core prescription pattern of Chinese herbal medicine for depressive disorders in Taiwan: a nationwide population-based study
Background Depressive disorders (DD) affect not only mood and behavior but also various physical functions. Traditional Chinese medicine (TCM) has been shown to have some benefits in treating DD. However, one formula or one single herb might not show high efficacy when used to treat depression. Thus, this study aimed to examine the core prescription pattern of Chinese herbal medicine (CHM) among patients with DD in Taiwan as a reference for related research and clinical applications. Methods All patients who had been diagnosed with major depressive disorder, minor depression or dysthymia without any other baseline diseases and who had at least one CHM outpatient clinical visit from 2002 to 2011 were extracted from three randomly sampled cohorts, namely the 2000, 2005 and 2010 cohorts of the National Health Insurance Research Database (NHIRD) of Taiwan. The collected data were analyzed to explore the patterns of herbal products. Results There were 197,146 patients with a diagnosis of DD, and of these 1806 subjects had only a diagnosis of DD and utilized CHM. The most common formula was Gan-Mai-Da-Zao-Tang (12.19%), while Suan-Zao-Ren (3.99%) was the most commonly prescribed single herb. The core pattern of prescriptions consisted of a combination of Gan-Mai-Da-Zao-Tang, Jia-Wei-Xiao-Yao-San, Chai-Hu-Jia-Long-Gu-Mu-Li-Tang, He-Huan-Pi, Yuan-Zhi and Shi-Chang-Pu. Conclusions This study describes the CHM core prescription pattern used to treat patients in Taiwan with DD, and it is a potential candidate for study in future pharmacological or clinical trials targeting DD.
Introduction
According to the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), depressive disorders (DD) include several mood disorders (major depressive disorder, dysthymia, and so on) that affect not only emotions (anxiety, sadness), mind (slow thinking, indecisiveness) and behaviors (irritability, suicide attempts), but also various physical functions. Many studies have been carried out with the aim of discovering the underlying mechanisms behind DD. To date, several related factors have been implicated, such as the monoaminergic systems, brain-derived neurotrophic factor (BDNF), the hypothalamic-pituitary-adrenal axis, and neuroinflammation. 5 The treatment of DD is based on the use of antidepressants and/or various psychological treatments. Antidepressants are effective when treating moderate and severe depression, although they are not recommended as a first-line treatment for either mild depression in adults or depression in adolescents. Moreover, many of the side effects of antidepressants, including nausea, headache, insomnia, anxiety, weight gain, sexual dysfunction and so on, often make patients reluctant to continue using them. 6 On the other hand, the use of psychological treatments is limited by factors such as the time-consuming process, high cost (in many countries, the fee for psychological treatment is not covered by insurance), and a lack of professionals (psychiatrists, psychologists, and so on). 7 For the reasons mentioned above, patients with DD have looked for other treatments. In Taiwan, because Traditional Chinese Medicine (TCM) has been incorporated into the health insurance system, many people use TCM to promote their health or to treat various diseases. 8,9 A study has shown that more than 40% of depression patients in Taiwan used TCM in 2003, and that younger individuals, women, individuals with other chronic diseases, and individuals with less exposure to psychiatric treatment are likely to use TCM frequently. 10 Chinese herbal medicine (CHM) has been demonstrated to show benefits when used to treat depression. To investigate the effects, safety, and types of CHM for depression, Yeung and his collaborators conducted a meta-analysis. This showed that the three most common formulae used in clinical trials were Xiao-Yao-Tang or its modifications, Chai-Hu-Shu-Gan-Tang and Gan-Mai-Da-Zao-Tang, and that CHM had better efficacy than the placebo control group. The effects were equal to those of antidepressants, and, furthermore, the integration of CHM and antidepressants resulted in fewer side effects when treating depressive disorders. 11 However, due to the low methodological quality of most of the included studies, more randomized and better controlled trials using internationally accepted methods and standards are required to confirm the benefits of CHM for the treatment of DD.
As mentioned above, DD has been suggested to develop via a number of different possible mechanisms. One formula or one single herb might not be able to regulate all the pathways relevant to depression. Thus, it is necessary to examine whether combinations of different formulae or single herbs are able to enhance the treatment's effectiveness. In 1995, Taiwan established its National Health Insurance program, and 99.9% of Taiwan's population was enrolled in the system by the end of 2014. The National Health Insurance Research Database (NHIRD) provides a platform for understanding the core pattern of prescribed CHM among the depressed patient population in Taiwan. Therefore, the purpose of this study was to analyze three randomly extracted cohort samples from the NHIRD in order to investigate the core pattern of CHM prescriptions for patients with DD in Taiwan; this can then be used as a reference for related research and for specific clinical applications.
Data sources
NHIRD is a nationwide population-based claims database with long-term follow-up. Annually, data are collected from the National Health Insurance program and de-identified before being sent to the National Health Research Institutes to form the NHIRD. The NHIRD's data include patients' gender, age, dates of clinical visits, major disease diagnosis codes in the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) format, and details of any prescription provided to the patient. All identity information of the beneficiaries and the medical facilities used is encoded in order to protect the patients' and hospitals' privacy. This study used a retrospective observational population-based design to analyze three random cohort samples from the Longitudinal Health Insurance Database (LHID2000, LHID2005, and LHID2010) to survey the patterns of prescribed CHM among patients with only a diagnosis of depressive disorder between January 1, 2002 and December 31, 2011. LHID2000, LHID2005, and LHID2010 are three datasets, each including one million beneficiaries randomly extracted from the NHIRD in 2000, 2005, and 2010, respectively. The random sampling method assigns serial numbers to all people in the insured population, uses a random number generator to generate at least one million random values, and then selects the beneficiaries whose serial numbers match those values. There are no differences in demographic factors between the randomly selected samples and the entire datasets; thus, these samples can be regarded as representative of the general population. This study was approved by the Institutional Review Board of Taipei Veterans General Hospital (VGHIRB-2018-03-010CC).
Study subjects
The flowchart of subject extraction from the 3 million random samples forming the Taiwan NHIRD is presented in Fig. 1. First, all patients diagnosed with major depressive disorder (ICD-9 296.2x or 296.3x), minor depression (ICD-9 300.4) or dysthymia (ICD-9 311) were extracted. From 2002 to 2011 in Taiwan, there were 76,425 depressed patients within LHID2000, 68,142 within LHID2005, and 52,579 within LHID2010. Second, within these DD cohorts, patients were separated into CHM users, who had received at least one CHM prescription between 2002 and 2011 (n = 877 within LHID2000, n = 859 within LHID2005, and n = 780 within LHID2010, respectively), and non-CHM users, who had received no CHM prescription based on the outpatient records. Finally, only CHM users with only a diagnosis of DD were included. Claims with only one of the diagnosis codes for DD, and without any other baseline diseases, were defined as subjects with only a diagnosis of depression. Restricting the CHM visits to those with only a diagnosis of DD should diminish measurement bias caused by CHM visits for the treatment of non-depressive disorders. Among the CHM users, a total of 1806 subjects (622 in LHID2000, 627 in LHID2005, and 557 in LHID2010, respectively) met this criterion.
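A sketch of this extraction step is given below, assuming the claims sit in a flat pandas table; the column names ('patient_id', 'icd9_code', 'visit_date', 'rx_type') are hypothetical and do not reflect the NHIRD's actual schema.

```python
# A minimal sketch of the cohort extraction, under the assumption of a flat
# claims table with hypothetical columns; NHIRD's real layout differs.
import pandas as pd

DD_CODES = ("296.2", "296.3", "300.4", "311")  # DD ICD-9-CM code prefixes

def extract_chm_dd_cohort(claims: pd.DataFrame) -> pd.DataFrame:
    """Keep patients whose only diagnoses are DD codes and who had >= 1 CHM visit."""
    window = claims[claims["visit_date"].between("2002-01-01", "2011-12-31")]
    is_dd = window["icd9_code"].astype(str).str.startswith(DD_CODES)
    # Subjects with only a diagnosis of DD: at least one DD claim, no non-DD claim
    dd_only = set(window.loc[is_dd, "patient_id"]) - set(window.loc[~is_dd, "patient_id"])
    chm_users = set(window.loc[window["rx_type"] == "CHM", "patient_id"])
    keep = dd_only & chm_users
    return window[window["patient_id"].isin(keep)]
```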
Statistical analysis
Data analysis comprised descriptive statistics, including the basic characteristics of the patients, the most common formulae used to treat DD, and the most common single herbs used to treat DD. This study used SAS software, version 9.4 (SAS Institute Inc., Cary, NC, U.S.A.) to analyze the data. In addition, the open-source freeware NodeXL was used to discover the core pattern of CHM used when treating patients with DD, and the most frequent combinations of two formulae/single herbs were then utilized for the network analysis. Within the network, formulae and single herbs are connected via lines. The number of combinations between a certain CHM and a co-prescribed CHM determines the width of the line connecting them, and the thickest line connections are used to identify important prescription patterns. For example, if CHM-A and CHM-B are more frequently co-prescribed than CHM-A and CHM-C, the width of the line between CHM-A and CHM-B will be greater than that between CHM-A and CHM-C. This approach allows the core pattern of CHM utilization to be clearly detected, and it has been used for core prescription pattern analysis in a previous study. 12
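The original analysis used NodeXL; a minimal equivalent in Python with networkx is sketched below. The two pair counts are the entries recoverable from Table 4, and everything else is illustrative.

```python
# A minimal sketch of the co-prescription network: nodes are formulae/single
# herbs, edge weights are co-prescription counts (here, the two recovered
# entries of Table 4).
import networkx as nx

pair_counts = {
    ("Jia-Wei-Xiao-Yao-San", "Gan-Mai-Da-Zao-Tang"): 580,
    ("Gan-Mai-Da-Zao-Tang", "Chai-Hu-Jia-Long-Gu-Mu-Li-Tang"): 440,
}

G = nx.Graph()
for (a, b), count in pair_counts.items():
    G.add_edge(a, b, weight=count)  # thicker line = more frequent combination

# CHMs with the largest total co-prescription weight sit at the network's core
core = sorted(G.degree(weight="weight"), key=lambda item: -item[1])
print(core)
```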
Characteristics of DD patients
From 2002 to 2011 in Taiwan, there were 1806 subjects with only a diagnosis of DD who used CHM. Table 1 presents the demographic characteristics of the CHM users. Females had a higher tendency to use CHM than males. Among the various age groups, the group with the highest percentage of utilization was the 35-49 years old group (37.3%), and the overall mean ± SD age was 44.88 ± 14.34 years. Patients with a diagnosis of dysthymia used CHM more frequently than the other two groups. In addition, almost three quarters of the patients (72.3%) used CHM 1-3 times.
Top ten CHM formulae and single herbs for DD patients
The top ten CHM formulae for treating DD from 2002 to 2011 are listed in Table 2. Gan-Mai-Da-Zao-Tang was the most commonly prescribed formula (12.19%), with an average dose of 4.36 g, followed by Jia-Wei-Xiao-Yao-San (10.08%) and Chai-Hu-Jia-Long-Gu-Mu-Li-Tang (6.83%).

The top ten single herbs for treating DD are also listed; the most frequently used single herb was Suan-Zao-Ren (3.99%), with an average dose of 1.36 g (Table 3). The next most common single herbs were Da-Huang (3.07%) and Yuan-Zhi (2.89%).

Table 2. Ten most commonly prescribed formulae for depressive disorders (total prescription number = 12,748).
(Table 2 lists, for each herbal formula: its ingredients/scientific names, therapeutic actions, and average dose in grams.)
The prescription patterns between formulae and single herbs
The most common prescription patterns with regard to formula/single herb associations are shown in Table 4 and Fig. 2.
Discussion
This study investigated the most common formulae and single herbs, as well as the core pattern of the prescriptions, for patients with only a diagnosis of DD in Taiwan. As presented in Table 2, the most commonly prescribed formula for DD was Gan-Mai-Da-Zao-Tang. A previous study has reported that Gan-Mai-Da-Zao-Tang decoction was able to ameliorate depressive-like behaviors, attenuate glutamate levels, and enhance the expression of N-methyl-D-aspartate receptors in the frontal cortex and hippocampus of unpredictably chronically mildly stressed rats, 13 as well as decreasing immobility times and regulating the concentration of monoamines in a forced swimming test model. 14 In other animal studies, this formula has been shown to have a sedative effect and to lengthen the hexobarbital sleeping time, 15 as well as inhibiting sodium, calcium and potassium currents in neurons, in association with a local anesthetic action in a nerve fiber model. 16 A meta-analysis found that Gan-Mai-Da-Zao-Tang was as efficient as antidepressants; furthermore, when combined with antidepressants, Gan-Mai-Da-Zao decoction showed increased effectiveness, as well as a reduction in side effects, compared to antidepressants alone. 17
Table 4. The most common prescription patterns for two- and three-drug combinations in a single prescription for depressive disorders. Two-drug combinations (recovered entries): 1. Jia-Wei-Xiao-Yao-San + Gan-Mai-Da-Zao-Tang (580); 2. Gan-Mai-Da-Zao-Tang + Chai-Hu-Jia-Long-Gu-Mu-Li-Tang (440); 3. Gan- ...

The next most commonly prescribed formula was Jia-Wei-Xiao-Yao-San, which has been widely used in TCM to treat psychological disorders including depression, sleep disturbances, and anxiety disorder. Reportedly, Jia-Wei-Xiao-Yao-San has an antidepressant-like effect in animal models of depression, via a hippocampal neurogenesis mechanism. 18 In addition, other experimental studies have identified anxiolytic, 19 antioxidant, neuroprotective, 20 and anti-inflammatory effects 21 of Jia-Wei-Xiao-Yao-San. In clinical trials, the formula has been shown to effectively improve the quality of sleep in peri-menopausal and postmenopausal women, 22 to improve depression in patients with premenstrual dysphoric disorder, 23 and to reduce vasomotor and psychological symptoms in climacteric patients. 24 The third most commonly used formula was Chai-Hu-Jia-Long-Gu-Mu-Li-Tang. This formula was shown by Kazushige Mizoguchi et al. to attenuate chronic stress-induced abnormalities of the hypothalamic-pituitary-adrenal axis, which has been shown to be related to depression. 25 In another study by the same authors, the results indicated that Chai-Hu-Jia-Long-Gu-Mu-Li-Tang is able to relieve a chronic stress-induced depressive state by preventing dysfunction of the prefrontal cortex. 26 Additionally, this formula has been shown to decrease corticosterone levels during psychological stress and conditioned-fear stress in a mouse model, which implies that it could be useful when treating stress that involves emotional factors. 27 In Taiwan, TCM physicians frequently use Tian-Wang-Bu-Xin-Wan, Suan-Zao-Ren-Tang, and Wen-Dan-Tang for the treatment of sleep disorders. 28 Experimental studies have suggested that Tian-Wang-Bu-Xin-Wan is able to promote sleep in hyposomnia models. 29 In a human study, Tian-Wang-Bu-Xin-Wan showed a significant effect when combined with dormancy hygiene education for treating insomnia patients. 30 Suan-Zao-Ren-Tang appears to have a sedative effect in pharmacological and clinical studies. 31,32 As for Wen-Dan-Tang, several clinical and case studies have revealed that it is able to relieve the symptoms of somatic disorders 33 and melancholia. 34 The next formula, Yi-Gan-San, has been demonstrated in preclinical studies to prevent an accumulation of cerebral Aβ while bringing about a reduction in anxiety-like behaviors. 35 Other recent studies have found that this formula improves the quality of sleep in psychological insomnia, 36 as well as being able to ameliorate the psychiatric symptoms of both dementia and borderline personality disorder, including low mood, anxiety, and irritability. 37,38 A meta-analysis found that a combination of Gui-Pi-Tang and antidepressants was able to ameliorate depressive symptoms better than antidepressants alone. 39 Finally, Ban-Xia-Hou-Po-Tang has been shown in a number of current studies and case reports to be effective when treating depression, 40 anxiety, 41 and insomnia. 42
Most of the common formulae in this study are frequently used by TCM practitioners to treat insomnia, depression, and anxiety, the exception being Wu-Zhu-Yu-Tang. By way of contrast, the latter formula is widely used to treat headache and migraine. Experimental studies and randomized controlled trials have reported that it is useful when treating headache. 43,44 This formula also has an anti-emetic effect in animal models. 45 Thus, TCM physicians may be using Wu-Zhu-Yu-Tang to treat the physical symptoms of DD, including headache and nausea, rather than the depression itself. 46 Table 3 presents the top ten most frequently used single herbs for DD in Taiwan. The most commonly prescribed single herb for DD from 2002 to 2011 was Suan-Zao-Ren. Sanjoinine A, one of Suan-Zao-Ren's active compounds, has been shown to have an anti-anxiety effect in a mouse model; it seems to act by increasing chloride influx, activating GAD65/67 expression, and thus increasing GABA transmission. 47 Another main constituent of this herb is jujuboside, which has been reported to have a sedative-hypnotic effect. 48 Da-Huang is used as a purgative medicine in TCM and is often used to treat constipation. Modern experimental studies have shown that rhubarb, the main component of Da-Huang, increases the contractile frequency of gastric body circular muscle and improves gut motility. 49,50 According to the study of Haug et al., depression is often associated with constipation. 51 Lifestyle and diet changes during depression, such as a decrease in physical activity, the consumption of many foods high in sugar or fat, and/or a loss of appetite, might be the reasons for constipation in depressive patients. In addition, one of the side effects of antidepressants is constipation, 6 and it therefore seems likely that TCM physicians might be using Da-Huang to relieve these symptoms/side-effects in depressive patients.
Yuan-Zhi has been shown to have antidepressant, 52 antistress, 53 anxiolytic, sleep-enhancing, 54 and anti-inflammatory activities. 55 The underlying mechanisms would seem to include increasing the expression of CAM-L1, pCREB and BDNF in the hippocampus, protecting neurons and promoting their proliferation, inhibiting norepinephrine in the locus coeruleus, stimulating various GABAergic systems, suppressing various noradrenergic systems, and restraining the NF-κB/MAPK pathways. Moreover, several main chemicals in Yuan-Zhi, including polygalasaponin XXXII and onjisaponin B, have been shown to ameliorate cognitive impairments in in vivo studies. 56,57 Studies of TCM for the treatment of insomnia have indicated that Ye-Jiao-Teng is one of the herbs most frequently used to treat sleep disorders. 58 In TCM terms, its action is related to the nourishment of the Heart Yin and Blood, as well as calming the Spirit; thus, it could be addressing the Heart Yin Deficiency or Heart Blood Deficiency that are associated with insomnia and irritability. In addition, Ye-Jiao-Teng has been shown to have a sedative-hypnotic effect in mouse and rat models 59 as well as anti-oxidant activity in an in vitro study. 60 The major constituents of Bai-He and He-Huan-Pi have been reported to have antidepressant effects. 61,62 Two recent in vivo studies have revealed that He-Huan-Pi has an anti-anxiety effect via the regulation of neurotransmitters 63 and the serotonergic nervous system. 64 Additionally, a study using a Chinese formula consisting of He-Huan-Pi, Suan-Zao-Ren, Bai-Shao and Bai-Zi-Ren in a mouse model of depression indicated that this formula was able to reduce the immobility time of depressed mice by inhibiting the monoamine oxidase enzyme system, as well as by increasing serotonin and noradrenaline levels. 65 Regarding Dan-Shen, a pharmacological study has shown that it has a sedative-hypnotic effect when combined with Suan-Zao-Ren; the results suggest that a combination of these two herbs prolongs sleeping time as well as reducing sleep latency. 66 Magnesium lithospermate B, an active compound extracted from Dan-Shen, has been reported to have an antidepressant-like effect in a rat model. 67 Furthermore, curcumin, the main component of Yu-Jin, 68 has been shown to reduce depression and anxiety symptoms. 69 The possible mechanism behind this antidepressant activity of curcumin seems to be a promotion of hippocampal BDNF and ERK levels. 70 In this context curcumin also has anti-inflammatory effects, 71 enhances neurotransmitters, 72 and suppresses monoamine oxidase. 73 Finally, Fu-Shen has a long history in TCM of being used for the treatment of insomnia and memory disorders. 74 In TCM, one disease can include various TCM syndromes, and these are used to guide the practitioner toward a treatment principle and thereby to the specific herbs and herbal formulae that can be used for treatment. However, one formula or one single herb alone may not alleviate all of a patient's symptoms and signs, which vary in severity, because there are differences in the effects and targets of each formula or single herb. In addition, as mentioned above, DD is thought to involve various mechanisms. 5 Therefore, TCM physicians often combine different formulae and single herbs to enhance the treatment, depending on a specific patient's symptoms and signs.
In this study, we aimed to investigate which formulae and single herbs TCM physicians usually combine in clinical practice to treat DD; thus, the hundred most common combinations of formulae and single herbs were analyzed to examine the core prescription pattern used to treat DD. The results showed that the core pattern was the association of Gan-Mai-Da-Zao-Tang, Jia-Wei-Xiao-Yao-San, Chai-Hu-Jia-Long-Gu-Mu-Li-Tang, He-Huan-Pi, Yuan-Zhi and Shi-Chang-Pu (Fig. 2). In the famous classic TCM book "Essentials from the Golden Cabinet," Gan-Mai-Da-Zao-Tang is mentioned as a treatment for "Zang Zao," one of the traditional terms for DD. On the other hand, Jia-Wei-Xiao-Yao-San has been used to treat Liver Qi stagnation turning towards Heat when there is underlying Spleen and Blood Deficiency, which is also related to depressive disorders. The clinical manifestations associated with Chai-Hu-Jia-Long-Gu-Mu-Li-Tang include anxiety, insomnia, irritability, agitation, depression, fatigue and so on. In modern studies, these three formulae have been found to have antidepressant as well as sedative effects via a variety of mechanisms. As discussed above, He-Huan-Pi and Yuan-Zhi seem to alleviate depression-like or anxiety-related behaviors, as well as reduce inflammatory activity, in animal models. In addition, Yuan-Zhi has been shown to ameliorate cognitive impairment, which ought to be helpful when treating DD. Rhizoma Acori Tatarinowii, used in TCM as Shi-Chang-Pu, has been shown to have antidepressant activity 75 both in vivo and in vitro. Moreover, Yuan-Zhi and Shi-Chang-Pu together have been demonstrated to have an anti-amnestic effect on memory impairment. 76 In summary, because DD involve many different biological mechanisms and no single formula or single herb is able to affect all the pathways involved in a given disease, a combination of a number of different TCM medicines is necessary. Moreover, several previous studies in Taiwan have shown that combinations of several formulae and single herbs can improve some diseases. 77,78 Therefore, the core pattern described in Fig. 2 provides a significant number of potential candidates for future pharmacological/clinical trials targeting DD.
There are some limitations to this study. Firstly, this study did not include folk medicines or herbal diets that may have been purchased by patients directly from TCM herbal pharmacies; thus, the use of CHM among depressed patients might have been underestimated. Secondly, this study focused only on the utilization of CHM in patients with DD; the utilization of acupuncture and/or other TCM treatments, which may also have been offered by TCM practitioners at the same time as the CHM in order to treat depression, was not included in this study and has not been investigated.
In conclusion, this study describes the Chinese herbal medicine prescription patterns of patients with DD. Gan-Mai-Da-Zao-Tang is the most commonly prescribed formula, and Suan-Zao-Ren is the most commonly prescribed single herb. The core prescription pattern comprises Gan-Mai-Da-Zao-Tang, Jia-Wei-Xiao-Yao-San, He-Huan-Pi, Yuan-Zhi, Shi-Chang-Pu, and Chai-Hu-Jia-Long-Gu-Mu-Li-Tang. Although previous studies have shown that CHM can be efficacious in relieving the symptoms of DD, there have been only a limited number of such studies, and their quality is often low. Therefore, further pharmacological studies, as well as clinical trials, need to be conducted to examine the mechanisms, efficacy and safety of this CHM core prescription pattern in depression treatment.
Conflict of interest
The authors have no conflicts of interest to declare.
Funding
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Ethical statement
This study was performed under the recognition of the Institutional Review Board of Taipei Veterans General Hospital (VGHIRB-2018-03-010CC).
Data availability
The data that support the findings of this study are available from the National Health Insurance Research Database, provided by the Bureau of National Health Insurance, Department of Health, and managed by the National Health Research Institutes. Restrictions apply to the availability of these data, which were used under license for the current study, and so they are not publicly available.
"year": 2020,
"sha1": "1f50a020f6d6226cc625a6a725dc96f0fc1c27b3",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.imr.2020.100707",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3495130106b44986cda1439e7d8c50692e01d5c0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Automatic pacing output optimization system causes pacing failure: Two case reports
Introduction
Automatic pacing threshold measurement and output adjustment systems are now used in almost all modern pacemakers. 1-4 As surgeries for pacemaker generator replacement may result in complications such as lead fracture or infection, the automatic adjustment system benefits patients with a pacemaker; however, this system may also fail. Herein, we report 2 cases wherein pacing failure was attributed to the pacemaker's automatic threshold measurement and output adjustment function.
Case Report
Informed consent was obtained from all patients.
Case 1
The first case involved a 70-year-old man with a dual-chamber pacemaker that was implanted in his left chest in 2006 for symptomatic Mobitz type 2 atrioventricular block. Pacemaker generator replacement (Adapta ADDR01; Medtronic, Minneapolis, MN) was performed in 2015. His medical history was unremarkable. He had visited our clinic regularly and was asymptomatic. The pacemaker was programmed in DDD mode with a base rate of 50 beats/min.
Since September 2021, the patient had complained of persistent dizziness and shortness of breath. A 12-lead electrocardiogram (ECG) showed an atrial-sensed and ventricular-paced rhythm, and he was pacemaker dependent (Figure 1A). Chest radiographs revealed that the atrial lead was implanted in the right atrial appendage, while the ventricular lead was implanted in the right ventricular apex (Figure 1B); both are Medtronic tined leads (5554-53 cm and 5054-58 cm, respectively). Lead dislodgement or fracture was not evident. Holter ECG monitoring showed continuous loss of ventricular capture after the P wave, the longest episode being 11.4 seconds without junctional escape rhythms (Figure 1C); this recurred throughout the day. Pacemaker interrogation revealed that the ventricular pacing burden was 100% and that the ventricular lead impedance was unchanged from previous measurements (1596 ohms). Meanwhile, the ventricular pacing threshold was 0.5 V with a pulse width of 0.4 ms, which was a good value. The ventricular pacing output was programmed with ventricular capture management (VCM). The output safety margin was programmed at 1.5 times the measured threshold, while the minimum adjusted output was programmed at 1.5 V with a pulse width of 0.4 ms. Hence, the ventricular pacing output was 1.5 V with a pulse width of 0.4 ms. Ventricular sensitivity was set to 2.8 mV, as no intrinsic R waves were observed owing to atrioventricular block without escape rhythm. Far-field sensing of the P wave was not evident. Additionally, pacemaker recordings did not show high-rate episodes consistent with electromagnetic interference, and noise signals due to upper limb movement were not detected.
KEY TEACHING POINTS

• The automatic pacing threshold and output adjustment system is beneficial. However, it may cause life-threatening events in some patients.

• Indications for automatic pacing systems should be carefully considered, especially in pacemaker-dependent patients with highly variable pacing thresholds.

• The Holter electrocardiogram is essential in symptomatic patients with pacemakers because the pacemaker cannot detect errors on its own.

When the VCM threshold recordings over the past year were reviewed, we found that these varied widely, from 0.5 V at the bottom to 1.5 V at the top (Figure 1D). We shared information about this case with the company representatives; however, no conclusive explanation was obtained. Based on our findings, long-term capture failure owing to over-sensing or lead problems was considered unlikely, and our team strongly suspected that the VCM was causing the pacing failure. The VCM was turned off, and the pacemaker output was increased (3.5 V with a pulse width of 0.4 ms). Subsequently, the dizziness and loss of consciousness resolved, and the Holter ECG thereafter showed no evidence of pacing failure.
We then investigated the cause of the pacing threshold fluctuation. Blood test results did not reveal any electrolyte imbalances or other abnormalities. Transthoracic echocardiography revealed normal cardiac function, while coronary angiography did not reveal significant stenosis. There were also no problems in the living environment that could have affected pacing. Although the cause of the threshold fluctuation remained unknown, our team concluded that the pacing failure was caused by the VCM. We opted not to replace the pacemaker generator but would consider it during follow-up, if necessary. The patient was discharged 3 days after pacemaker reprogramming. We monitored the threshold at the pacemaker clinic for 18 months after discharge and concluded that the VCM could be used safely. We turned it on, with the output safety margin programmed at twice the measured threshold and the minimum adjusted output programmed at 2.5 V with a pulse width of 0.4 ms. The patient was monitored for the next 8 months, and no symptoms or evidence of pacing failure were noted.
Case 2
The second case involved an 87-year-old man with a dual-chamber pacemaker implanted in 2004 for complete atrioventricular block; the generator (Adapta ADDRL1; Medtronic, Minneapolis, MN) was replaced in 2012. He had coronary vasospasm, hypertension, and dyslipidemia. He regularly visited our clinic and was asymptomatic. The pacemaker was programmed in VDD mode owing to the high pacing threshold of the atrial lead, and the base rate was set to 50 beats/min.
In January 2022, he was admitted because of recurrent loss of consciousness. A 12-lead ECG revealed an atrial-sensed and ventricular-paced rhythm (Figure 2A), and he was pacemaker dependent. Chest radiographs revealed that the atrial lead was implanted in the right atrial appendage, while the ventricular lead was implanted in the right ventricular apex; both were Medtronic tined leads (5554-45 cm and 5054-52 cm, respectively). Lead dislodgement or fracture was not evident (Figure 2B). Pacemaker interrogation showed that the ventricular pacing burden was 100%. The ventricular lead impedance was 1172 ohms, unchanged from previous measurements, and the ventricular pacing threshold was 1.0 V with a pulse width of 0.4 ms, which was a good value. The ventricular pacing output was programmed with the VCM, the output safety margin was set to 1.5 times the measured threshold, and the minimum output was set to 1.5 V, resulting in a ventricular pacing output of 2.0 V with a pulse width of 0.4 ms. Ventricular sensitivity was set to 2.8 mV, as no intrinsic R waves were observed in the atrioventricular block. As in case 1, pacemaker recordings did not reveal lead malfunction. However, Holter ECG monitoring after hospitalization showed that ventricular capture after the P wave was continuously lost, similar to case 1 (Figure 2C), and that this recurred throughout the day. Additionally, there were 288 pauses longer than 2 seconds reflecting pacing failure, the longest being 16.5 seconds without an escape rhythm. When the pacing threshold variations over the past year were reviewed, we found large daily variations ranging from 1.125 V to 2.25 V (Figure 2D). We then reprogrammed the ventricular pacing threshold measurement interval to every 30 minutes and checked the threshold fluctuation. Surprisingly, we confirmed a large daily fluctuation of 1.125 V to 2.5 V, comparable to the programmed output (Figure 2E).
Although the investigations were similar to those in case 1, we did not identify any abnormalities that could affect the pacing threshold. We shared information with the company representatives, but no conclusive explanation was obtained. Hence, we reprogrammed the pacemaker settings and increased the output (3.0 V with a pulse width of 0.4 ms), resulting in resolution of the symptoms and pacing failure. We identified the VCM function as the cause of pacing failure in this case as well. As in case 1, we did not replace the pacemaker generator. Based on the threshold fluctuation measured after admission, and keeping case 1 in mind, we activated the VCM function 3 days after admission, setting the output safety margin to twice the measured threshold and the minimum adjusted output to 3.0 V with a pulse width of 0.4 ms. The patient was discharged 4 days later. We followed the patient for the next 8 months and confirmed that the VCM was functioning properly.
Discussion
Automatic measurement of the pacing threshold and optimization of the output system effectively reduce unnecessarily high ventricular pacing outputs, reducing battery depletion and the frequency of generator replacements. This system detects the evoked response after the pacing stimulus and thereby determines the pacing threshold. Currently, almost all pacemakers use an automatic output adjustment system. A previous study revealed that the VCM function reliably measures thresholds in almost all patients, and that the use of an automatic pacing system reduces ventricular pacing output and potentially prolongs device longevity. 5 However, we experienced 2 cases of pacing failure due to the VCM. With the VCM of Adapta™ pacemaker models, ventricular pacing thresholds can be measured at programmed time intervals, but the output can only be adjusted once per day, and a backup pulse function is absent in the event of pacing capture failure (Table 1). Even if a threshold change occurs between threshold measurements, the pacemaker may be unable to detect it. In pacemaker-dependent patients, this increases the risk of prolonged cardiac arrest, which may result in sudden death or other serious events. In a previous study, threshold changes of 1.0 V were observed in 7.5% of patients during automatic threshold measurements. 6 For pacemakers that cannot deliver an automatic beat-to-beat backup pulse, pacing failure may occur between the output adjustment intervals in patients with large pacing threshold variations. Although cases of pacing failure owing to automatic optimization of the pacing output system have occurred, 7,8 to the best of our knowledge this is the first report of multiple cases of pacing failure owing to the automatic optimization of the pacing output system. Although the number of reports is limited, we experienced similar cases at a single institution within a short period of time; therefore, it can be assumed that many similar cases may be prevalent globally. In patients with nonspecific symptoms such as dizziness, as in case 1, the cause of pacing failure may not be identified, and appropriate treatment may not be provided. We must recognize that the pacemaker itself may not detect abnormalities. Hence, if a patient presents with nonspecific symptoms, it is important to conduct additional diagnostics aside from pacemaker interrogation, including Holter ECG monitoring.
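To make the mechanism concrete, here is a minimal simulation sketch. The intraday threshold trace is invented, while the adjustment rule (safety margin times the measured threshold, floored at a programmed minimum, recomputed only once daily, with no beat-to-beat backup pulse) follows the settings reported in case 1.

```python
# A minimal sketch of once-daily output adjustment vs. intraday threshold
# fluctuation. The threshold trace is invented for illustration.

SAFETY_MARGIN = 1.5   # times the measured threshold (as programmed in case 1)
MIN_OUTPUT_V = 1.5    # programmed minimum adjusted output, volts

def adjusted_output(measured_threshold_v: float) -> float:
    """Pacing output held until the next (once-daily) adjustment."""
    return max(SAFETY_MARGIN * measured_threshold_v, MIN_OUTPUT_V)

# Hypothetical intraday thresholds (V); the device measures only at hour 0
thresholds = [0.5, 0.7, 1.0, 1.6, 2.2, 1.8, 0.9]
output = adjusted_output(thresholds[0])

for hour, thr in enumerate(thresholds):
    status = "capture" if output >= thr else "LOSS OF CAPTURE"
    print(f"hour {hour}: threshold {thr:.1f} V, output {output:.2f} V -> {status}")
```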
The causes of the pacing threshold fluctuation were unknown in both cases. The patient in case 2 was being managed for vasospastic angina, and it is possible that myocardial ischemia influenced the pacing threshold change, although there was no obvious chest pain or ST-T segment changes on ECG or Holter ECG. Hence, clinicians must be cautious when using VCM in patients with underlying diseases that may cause pacing threshold fluctuations. Although symptoms resolved after altering the output settings, close monitoring and follow-up of patients are warranted.
Conclusion
We experienced 2 rare cases of pacing failure while using the VCM system, the automatic pacing threshold measurement and output adjustment function. The automatic output adjustment function should be used with caution in patients with large threshold variations, especially pacemaker-dependent patients.
Funding Sources: This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Figure 1. A: Chest radiograph showing the ventricular lead in the right ventricular apex and the atrial lead in the right atrial appendage. B: A 12-lead electrocardiogram showing the atrial-sensed and ventricular-paced rhythm. C: Holter electrocardiogram showing ventricular capture loss; the longest pause was 11.4 seconds. D: Variation of the ventricular pacing threshold in case 1, recorded every 24 hours over the past year. The maximum pacing threshold was 1.5 V / 0.4 ms, and the minimum was 0.5 V / 0.4 ms.
Figure 2. A: Chest radiograph showing the ventricular lead in the right ventricular apex and the atrial lead in the right atrial appendage. B: A 12-lead electrocardiogram showing the atrial-sensed and ventricular-paced rhythm. C: Holter electrocardiogram showing repeated loss of ventricular pacing capture. D: Variation of the ventricular pacing threshold in case 2, recorded every 24 hours over the past year. The maximum threshold was 2.25 V / 0.4 ms, and the minimum was 1.125 V / 0.4 ms. E: Variation of the ventricular pacing threshold in case 2, recorded every 30 minutes for 24 hours after admission. The maximum threshold reached 2.5 V / 0.4 ms.
Table 1. Comparison of pacemaker automatic threshold measurement and output adjustment functions. LOC = loss of capture.
"year": 2024,
"sha1": "7c8560fa05905dbcf30f83fbc06b4ca5379bcd9e",
"oa_license": "CCBYNCND",
"oa_url": "http://www.heartrhythmcasereports.com/article/S2214027124000319/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7c8560fa05905dbcf30f83fbc06b4ca5379bcd9e",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Fusion rules for admissible representations of affine algebras: the case of $A_2^{(1)}$
We derive the fusion rules for a basic series of admissible representations of $\hat{sl}(3)$ at fractional level $3/p-3$. The formulae admit an interpretation in terms of the affine Weyl group introduced by Kac and Wakimoto. It replaces the ordinary affine Weyl group in the analogous formula for the fusion rule multiplicities of integrable representations. Elements of the representation theory of a hidden finite-dimensional graded algebra behind the admissible representations are briefly discussed.
Introduction
The fusion rules (FR) are a basic ingredient in any 2-dimensional conformal field theory [1], [2]. In [3] Awata and Yamada derived the FR for admissible irreducible representations of $\hat{sl}(2)$, characterised by rational values of the level and the highest weights. The aim of this work is to extend this result to higher rank cases; for simplicity we present here the case of $\hat{sl}(3)$. More details and the general case $\hat{sl}(n)$ will appear elsewhere.
There is a formula for the FR multiplicities [4], [5], [6] in the case of integrable representations of $\hat{sl}(n)$, equivalent to the better known Verlinde formula. It generalises the classical expression for the multiplicity of an irreducible representation in the tensor product of two finite-dimensional $sl(n)$ representations, resulting from the Weyl character formula. The derivation in [6] was based essentially on the fundamental role played by the representation theory of $sl(n)$ and of its quantum counterpart $U_q(sl(n))$ at roots of unity.
What complicates the problem under consideration is precisely the lack of knowledge of the finite-dimensional algebra and its quantum counterpart whose representation theory lies behind the fusion rules of admissible representations. In the case of $\hat{sl}(2)$, Feigin and Malikov [7] noticed that the relevant algebra is the superalgebra $osp(1|2)$ and its deformed counterpart. Inverting somewhat the argumentation in [6], we shall try to show that understanding the fusion rules for admissible representations leads naturally to a set of finite-dimensional representations of some (graded) algebra with a well defined ordinary tensor product.
2. Admissible weights.
We start by introducing some notation. The simple roots of $\hat{sl}(3)$ are $\alpha_i$, $i = 0, 1, 2$. The affine Weyl group $W$ is generated by the three simple reflections $w_i = w_{\alpha_i}$.
Given a fractional level $k$ such that $\kappa \equiv k + 3 = p'/p$ with $p, p'$ coprime integers and $p' \ge 3$, the set $P_{p',p}$ of admissible weights of $\hat{sl}(3)$ is defined as in [8] (formula (2.1)). Due to invariance with respect to a Coxeter-element-generated subgroup of the horizontal Weyl group $\overline{W}$ (see [8] for details), the domain in (2.1) can be equivalently represented using other elements $w \in \overline{W}$.
We shall refer to $P_{p',p}$ as the admissible alcove, and to its first and second subsets as its first and second leaf. We have reversed the traditional notation, putting the prime on the integer part of the weights and leaving the fractional part unprimed; the reason is that in this paper we shall restrict ourselves mostly to the particular series of admissible representations defined by $p' = 3$, $p \ge 4$, in which only $\lambda' = 0$ survives in the pairs $(\lambda', \lambda)$ of $sl(3)$ weights appearing in (2.1). The admissible alcove $P_{3,p}$ (to be called sometimes "the double alcove") is described by a collection of $\binom{p+1}{2} + \binom{p}{2} = p^2$ integrable weights at integer levels $p-1$ and $p-2$, entering the fractional parts of the weights of the first and second leaf, respectively. The choice $p' = 3$ is not very restrictive, since the novel features of the fusion rules are essentially governed by the fractional part of the admissible highest weights; furthermore, the subseries $p' = 3$ is interesting in itself. The shorthand notation $[n_1, n_2]$ and $[\![n_1, n_2]\!]$, with $n_i = \langle \lambda, \alpha_i \rangle$ and $n_3 = n_1 + n_2$, will be used for the weights $\Lambda = -\lambda\kappa$ on the 1st leaf and $\Lambda = w_3 \cdot (-\lambda\kappa)$ on the 2nd leaf of $P_{3,p}$ in (2.1), respectively. For $\Lambda \in P_{3,p}$ we shall exploit the automorphism groups $\mathbb{Z}_3$ of the alcoves $P^{p-1}_{+}$ and $P^{p+1}_{++}$, generated as in (2.2). The $sl(3)$ Verma modules labelled by admissible highest weights are reducible, with submodules determined by the Kac-Kazhdan theorem [9]. In general, for an arbitrary admissible weight there is a singular vector of weight $w_{\beta_i} \cdot \Lambda$ (or $w_{\beta_{\hat{i}}} \cdot \Lambda$) in the Verma module of highest weight $\Lambda$; it corresponds to the affine real positive root $\beta_i$ (or $\beta_{\hat{i}}$), respectively. These Weyl reflections can be represented as in (2.3). (We neglect the terms $d\delta$ in the full admissible highest weights, as irrelevant to our purposes.)
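As a short consistency check of this count:

$$\binom{p+1}{2} + \binom{p}{2} \;=\; \frac{p(p+1)}{2} + \frac{p(p-1)}{2} \;=\; \frac{p\,[(p+1)+(p-1)]}{2} \;=\; p^2 .$$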
The corresponding singular vectors were constructed in [10]. To a decomposition of the reflections (2.3) into a product of simple reflections corresponds a monomial in the lowering generators of $sl(3)$: every $w_i$, $i = 0, 1, 2$, is substituted by $E_{-\alpha_i}$ raised to an appropriate (in general complex) power; see [11] for a more explicit presentation in the case of $sl(3)$.
Consider the Weyl groups $W_\lambda$ and $\widetilde{W}_\lambda$ generated, for the representations on the first (second) leaf of (2.1), by the reflections introduced in [8] in the study of the characters of admissible representations. We shall refer to these groups, which will play a crucial role in what follows, as the KW groups. Apparently any element of the KW group $W_\lambda$ depends on (and is determined by) the point on which it acts, so in a sense this is a "local" group acting on the double alcove and "spreading" it, much in the same way as the ordinary affine Weyl group acts on the fundamental integrable alcove. This is more transparent in an alternative description of $P_{3,p}$.
3. Alternative description of the admissible alcove $P_{3,p}$. Affine Weyl group graph replacing the weight lattice.
First recall that the affine Weyl group $W$ can be represented as a graph, to be denoted $\mathcal{W}$. This is the well-known "honeycomb" lattice (which we saw for the first time in [12]). Consider the highest weights $\Lambda$ such that $w_i \cdot \Lambda \in P_{3,p}$, or $w_{\hat{i}} \cdot \Lambda \in P_{3,p}$, for some $i = 0, 1, 2$; they precisely exhaust the weights labelling the border points of the two alcoves $P^{p-1}_{+}$, $P^{p+1}_{++}$. "Reflecting" $\Lambda$ in the three boundary lines enclosing the alcove, we land at a weight of a singular vector in the $sl(3)$ Verma module of highest weight $\Lambda$. These reflections generate a group that coincides with $W_\lambda$. The alcove is a fundamental domain with respect to the action of this group. In this realisation the affine Weyl group graph $\mathcal{W}$ plays the role of a "weight lattice". The choice of the reflecting "hyperplanes", described by $\langle \Lambda' + \rho + \kappa\omega_0, \beta_i \rangle = 0$ and $\langle \Lambda' + \rho + \kappa\omega_0, \beta_{\hat{i}} \rangle = 0$, $i = 0, 1, 2$ (i.e., their precise identification with the three planes cutting the big triangle), depends on the value of $p$ and the given highest weight $\Lambda$ in the alcove.
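For experimentation with this graph, here is a minimal sketch of the shifted action of the three simple affine reflections on $sl(3)$ Dynkin labels at fractional $\kappa$; the conventions (the standard shifted action, with the level entering only through $w_0$) are our assumption and are not spelled out above.

```python
# A minimal sketch of the shifted affine Weyl action w . L = w(L + rho) - rho
# on an sl(3) weight with Dynkin labels (a1, a2), at kappa = k + 3 = 3/p.
from fractions import Fraction

def w1(a1, a2, kappa):
    return (-a1 - 2, a1 + a2 + 1)

def w2(a1, a2, kappa):
    return (a1 + a2 + 1, -a2 - 2)

def w0(a1, a2, kappa):
    # alpha_0 = delta - theta, so <L + rho, alpha_0> = kappa - (a1 + a2 + 2)
    t = kappa - (a1 + a2 + 2)
    return (a1 + t, a2 + t)

def graph_points(start, kappa, depth=3):
    """BFS over words in w0, w1, w2: the weights labelling points of the graph."""
    seen = {start}
    frontier = [start]
    for _ in range(depth):
        frontier = list({w(a1, a2, kappa) for (a1, a2) in frontier
                         for w in (w0, w1, w2)} - seen)
        seen.update(frontier)
    return seen

kappa = Fraction(3, 4)  # p = 4, i.e. level k = 3/4 - 3
print(sorted(graph_points((Fraction(0), Fraction(0)), kappa)))
```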
As is clear from Fig. 1, any "big" alcove $P_{3,p}$ can be canonically mapped into $P^{3p}_{++}$, the admissible highest weights being identified with a subset of the triality-0 integrable (shifted by $\rho$) highest weights at level $3p - 3$. We now turn to the fusion rules of the representation $f = [1, 0]$ (for short we will call it fundamental). Using the standard realisation of the induced representations of $SL(3)$ in terms of functions $\varphi(x, y, z)$ depending on ("isospin") coordinates $(x, y, z)$, described by a triangular matrix in the Gauss decomposition of the elements of $SL(3)$, we realise the generators by the corresponding differential operators (see, e.g., [13]).
The isospin coordinates (as well as the space-time coordinates) of the first and third fields in the 3-point functions are fixed to 0 and "$\infty$". For the weight $f$ (labelling the field at the first point) there are two singular vectors, corresponding to the roots $\beta_1 = \delta + \alpha_1$ and $\alpha_2$. Since the second of the singular vectors reduces simply to the $sl(3)$ generator $E_{-\alpha_2} = -(\partial_y + x\,\partial_z)$ of $y$-translations, which annihilates the monomials of type $x^a (xy - z)^c$, we look for a solution of the equation corresponding to the first MFF operator (see [11] for details) in terms of such monomials (in general $x^a y^b (xy - z)^c z^d$); it reduces to an algebraic system of two equations for the unknowns $(a, c)$. Due to Ward identities corresponding to the two Cartan generators, the powers $a, b$ (or $(a, b, c, d)$ in general) are expressed in terms of the three weights $\Lambda^{(i)}$, $i = 1, 2, 3$, labelling the fields in a 3-point function (4.1). Taking $\Lambda^{(1)} = f$ and $\Lambda^{(2)} = \Lambda'$, with $\langle \Lambda^{(2)} + \rho, \alpha_i \rangle = M_i$, $i = 1, 2$, we find 7 solutions for the resulting representation $\Lambda^{(3)}$ in the fusion $f \otimes \Lambda'$, described by 7 values of $(a, c)$ beginning with $(a, c) = (0, 0)$, while the corresponding highest weights read as in (4.2).

Proposition 1: The result (4.2) holds for an arbitrary generic admissible highest weight $\Lambda' \in P_{p',p}$ (i.e., such that all representations in the r.h.s. of (4.2) belong to the admissible domain (2.1)) and arbitrary $p'$ and $p \ge 4$; i.e., we do not restrict here to the subseries $p' = 3$. Analogously, the fusion rule of the conjugate representation $f^* = [0, 1]$ with a generic $\Lambda'$ reads as in (4.2) with $w_1$ and $w_2$ interchanged.
Taking $\Lambda' = 0$, the set of 7 weights in the r.h.s. of (4.2) replaces the three weights of the $sl(3)$ fundamental representation. The latter have clear counterparts in (4.2), represented by the highest weight $f = -\kappa\omega_1$ and the two last weights of (4.2). We can visualize this 7-point "weight diagram", to be denoted $G_f$ (or its version "shifted" by $\Lambda'$, defined by the weights in the r.h.s. of (4.2) and denoted $G_f \bullet \Lambda'$), as a collection of points on the affine Weyl group graph $\mathcal{W}$, see Fig. 2, identifying a reference point on this graph with a highest weight $\Lambda = -\kappa\lambda$ on the first leaf of $P_{p',p}$ (in our case $p' = 3$). On Fig. 3 we have depicted the weight diagram $G_{[2,0]}$. The same picture, with the reference point $\Lambda$ substituted by $\Lambda + \Lambda'$, represents the shifted weight diagram $G_{[2,0]} \bullet \Lambda'$ obtained by solving the decoupling equations for the fusion of $[2, 0]$ with a generic $\Lambda' \in P_{p',p}$. The working hypothesis which emerges after analysing the decoupling equations for a couple of examples is that in general these generalised weight diagrams $G_\Lambda$, for a highest weight $\Lambda = -\kappa\lambda$ from the 1st leaf of the admissible alcove, are obtained according to the following rule. Let $\Gamma_\lambda \subset \lambda - Q_+$ be the weight diagram of the $sl(3)$ irreducible representation of highest weight $\lambda$. Embed $-\kappa\Gamma_\lambda$ (as a set of points) into the sublattice $\Lambda + \kappa Q_+$ of $\mathcal{W}^+_\Lambda$. Draw all paths on $\mathcal{W}^+_\Lambda$, starting from the highest weight $\Lambda$, that lie within the borders of $-\kappa\Gamma_\lambda$, including the "border" path connecting all points on the outmost (multiplicity one) layer of $-\kappa\Gamma_\lambda$. The diagram $G_\Lambda$ is the resulting finite set of weights on $\mathcal{W}^+_\Lambda$. For a general computation of this kind, recovering the $sl(3)$ representation spaces, see [13]. Before starting this analysis we first find that the representations described by the $\mathbb{Z}_3$ orbit of the identity element $1 = [0, 0]$, i.e., the corner points of the big alcove, are simple currents: their fusion with an arbitrary representation on the first or second leaf of the double alcove produces only one representation living on the same leaf, namely its image under the corresponding $\mathbb{Z}_3$ map. The result of this analysis is summarised in the following
Proposition 2:
For interior points of $P_{3,p}$ the FR read as in (5.2). On the border lines of the two alcoves constituting $P_{3,p}$ the FR multiplicities read as in (5.3), the rest being determined by the $\mathbb{Z}_3$ symmetries (5.1).
Following the approach of [14], [15], we can look at the set of rules (5.2), (5.3) as defining, for each $p$, the adjacency matrix $G_{ab} = N_{fa}^{\;b}$ of a fusion graph, the vertices of which correspond to the points of the corresponding double alcove $P_{3,p}$; we do not indicate explicitly the links of this graph. The complex conjugate representation $f^*$ provides the conjugate adjacency matrix $G^*_{ab} = N_{f^* a}^{\;b}$, describing the same graph with inverted orientation of the links; $G^*_{ab} = G_{ba}$. Unlike in the case of the known fusion graphs for integrable representations of $sl(3)$ and their nondiagonal generalisations studied in [15] (see also [16]), the resulting formula (5.5) for the FR multiplicities generalises the Verlinde formula with the eigenvector matrix $\psi^{(\mu)}_a$ replacing the (symmetric) modular matrix. In (5.5) the vectors $\psi^{(\mu)}$ are labelled by a set $\{\mu\}$; however, the description of that set is not needed in formula (5.5), the latter providing the full information about the fusion rule. In writing (5.5) we have essentially assumed that the admissible representations fusion algebra is a "C-algebra" (from "Characters-algebra", following the terminology in [17]), i.e., an associative, commutative algebra over $\mathbb{C}$ with real structure constants, with a finite basis, an identity element, and an involution (here the map of a weight to its adjoint), requiring some standard properties of the structure constants. The knowledge of the fundamental matrix $N_f$ specifies the algebra and allows one to determine the eigenvector matrix $\psi^{(\mu)}_a$ common to all matrices $N_a$. The fact that the general formula (5.5) for the "C-algebra" structure constants gives nonnegative integers is highly nontrivial. Comparing with [15] (where such "C-algebras" were discussed and used), recall that the nonnegativity of the integers in the l.h.s. of formulae analogous to (5.5) selects a subclass of the graphs related to modular invariants of the integrable models; a counterexample, where this nonnegativity cannot be achieved, is provided, e.g., by the $E_7$ Dynkin diagram.
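To make this concrete, here is a minimal numerical sketch of the Verlinde-type computation: structure constants from the eigenvector matrix of a fundamental adjacency matrix. As an assumption for illustration, it is run on the familiar integrable $\hat{sl}(2)$ level-$k$ case, whose fundamental fusion graph is the $A_{k+1}$ Dynkin diagram and whose multiplicities are known to come out as nonnegative integers; the admissible-case graphs would enter through their own $N_f$.

```python
# A minimal sketch of a Verlinde-type computation of fusion multiplicities
# from the eigenvector matrix psi of a fundamental adjacency matrix N_f.
# Toy input: the A_{k+1} Dynkin diagram (integrable su(2) level k fusion graph).
import numpy as np

k = 3
n = k + 1
N_f = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)

# Columns of psi: orthonormal eigenvectors of N_f, with signs fixed so that
# the row of the identity vertex (index 0) is positive.
_, psi = np.linalg.eigh(N_f)
psi = psi * np.sign(psi[0])

def multiplicity(a, b, c):
    """N_ab^c = sum_mu psi_a^(mu) psi_b^(mu) conj(psi_c^(mu)) / psi_1^(mu)."""
    return np.sum(psi[a] * psi[b] * psi[c].conj() / psi[0])

N = np.array([[[multiplicity(a, b, c) for c in range(n)]
               for b in range(n)] for a in range(n)])
print(np.round(N).astype(int))  # nonnegative integers for this graph
```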
Thus the knowledge of the fundamental fusion graph allows to determine in principle all FR multiplicities.
Remark:
The graphs appearing in the study of the admissible representations, and the structures they determine, deserve further investigation. In particular it is yet unclear whether the "dual" version of (5.5), describing the structure constants of a dual "C-algebra", has any importance. It would also be interesting to check whether the set {μ} can be interpreted as some set of "exponents" in the sense of [16]. These questions already make sense for the case of admissible representations of sl(2) at level 2/p − 2, where the corresponding graphs (or their unfolded colourable ladder-type counterparts) look rather simple.
The basic FR (5.2), (5.3) which we have derived are checked to admit another interpretation, which generalises the corresponding fusion formula of [4], [5], [6] in the case of integrable representations. First, a representation Λ″ appears in the fusion f ⊗ Λ′ only if it belongs to the intersection of the 7-point "shifted" weight diagram G_f ∘ Λ′ of the fundamental representation with the double alcove P_{3,p}. Another example of this truncation mechanism is provided by the fusions of [1,1].
Taking a sufficiently large Λ′ in the fusion [1, 1] ⊗ Λ′, i.e., Λ′ and p such that G_{[1,1]} ∘ Λ′ ⊂ P_{3,p}, so that no truncations occur, the FR multiplicities coincide with the weight multiplicities of [1,1] and can be computed from the general formula (5.5). Alternatively the same weight multiplicities can be recovered (and that is how we originally obtained the values indicated on fig. 4) by the above mechanism of truncation along orbits of the KW group, given the FR multiplicities for "smaller" Λ′ and p.
6. "Verma modules" and weight diagrams. The role of the KW affine Weyl group as a "truncating" group.
We shall now formulate, in a more concise general form, the analog of the FR multiplicities formula of [4], [5], [6] for integrable representations, "inverting" the argumentation of [6]. As a motivation for what follows, recall that the sl(3) finite dimensional representations, equivalently their supports, i.e., the weight diagrams Γ_λ, can be resolved via the action of W in terms of sl(3) Verma modules.
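At the level of characters this resolution is the standard BGG statement; for orientation, the alternating sum over the finite Weyl group reads

\operatorname{ch} L_{\lambda} \;=\; \sum_{w \in W} (-1)^{\ell(w)}\, \operatorname{ch} M_{w\cdot\lambda}\,, \qquad w\cdot\lambda := w(\lambda+\rho)-\rho\,,

so the multiplicities of the weight diagram Γ_λ are alternating sums of Verma module (Kostant partition function) multiplicities. It is this pattern that is imitated below, with W replaced by the KW group.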
Vice versa, given an sl(3) root and a corresponding reflection, there is a unique affine root among the above subset and a corresponding reflection in W. Similarly for any set of sl(3) roots w(S), w ∈ W, S = {α_1, α_2}, there is a set S_w comprising a pair of affine roots which projects to w(S). For even (odd) length w ∈ W the sets S_w consist of roots in {α_i} ({α̂_i}), i = 0, 1, 2, respectively. Let Q_w = ⊕_{α∈w(S)} Z_+ α for w ∈ W. We recover Q_+ for w = id. For w ∈ W of even length let T(w) be a group isomorphic to W generated by the affine reflections labelled by the roots in S_w: T(id) ≡ W, T(w_{21}) is generated by {w_0, w_1}, etc. For odd length w ∈ W the set T(w) (which also projects to W) is defined as {1, w_î, w_ĵ, w_{jk}, w_{ik}, w_k}, k ≠ î, ĵ, if S_w = {α_î, α_ĵ}, i, j = 0, 1, 2, i ≠ j.
Recalling that the affine Weyl group W is a semidirect product of the horizontal (finite) Weyl group W and the root lattice Q, one can view the affine Weyl group graph W introduced above as the translations of the basic hexagon, the graph of W, by κQ, i.e., the action on the six vertices of the basic hexagon by powers of w_{iî} or w_{îi}, i = 0, 1, 2. In particular W^{id}_Λ coincides with W^+_Λ from above (see figs. 2, 5), while the subgraph W^−_Λ := W^{w_3}_Λ is illustrated on fig. 6. By analogy with the sl(3) case, where the set λ − Q_+ describes the support of the Verma module of highest weight λ, we shall refer to W^w_Λ as "Verma modules" of highest weight Λ. The range of Λ is not confined to the admissible domain (2.1), and accordingly we can drop at this point the requirement of rationality of the parameter κ.
In the case when the affine Verma module of highest weight Λ = −λκ or Λ = w_3 · (−λκ) contains, by the Kac-Kazhdan theorem, a singular vector of weight w_{β_i} · Λ or w_{β̂_i} · Λ, i = 1, 2 (i.e., λ is integral dominant, λ ∈ P_+, or strictly integral dominant, λ ∈ P_{++}), then this weight belongs to the graph W^+_Λ or W^−_Λ, respectively. We can view these weights as the highest weights of "Verma submodules" W^{w_i}_{w_{β_i}·Λ} and W^{w_{i3}}_{w_{β̂_i}·Λ}, respectively. One can extend this to the other four types of "Verma modules", with a proper infinite range of Λ, dropping the upper bounds of the alcoves. Thus we have six different analogs of the sl(3) reducible Verma modules (of integral dominant highest weights), the singular vectors of which are governed by the corresponding KW horizontal group W_λ.
One can think of the "Verma modules" introduced here as comprising some states, "extended" in the sense of [10], given by compositions of noninteger powers of the three lowering generators E_{−α_i}, i = 0, 1, 2, of sl(3); these powers are dictated by the (shifted) affine Weyl reflections indicated on the graph W. In particular, any path from the origin to a "singular vector" corresponds to an MFF expression for the "true" singular vector of the affine algebra Verma module. This intuitive picture is quite precise in the sl(2) case, where all states of the corresponding "extended" Verma modules have multiplicity one. Then these modules are seen to be isomorphic to Verma modules of the superalgebra osp(1|2); the two sublattices of the corresponding graphs, associated with the two elements of the sl(2) Weyl group, correspond to even or odd submodules with respect to sl(2).
The problem of multiplicities, however, makes doubtful the usefulness of such an "extended Verma modules" interpretation in our case, and what replaces the above superalgebra is yet to be seen. Instead we follow further the analogy with the sl(3) case and assign multiplicities K^Λ_μ to the weights μ of a general "Verma module" of highest weight Λ, starting from a multiplicity 1 assigned to the points on the border rays, and with each step inward increasing by 1 the multiplicity at the points of the subsequent pair of lines parallel to them (going through diagonals of the honeycomb hexagons); see figs. 5, 6, where this assignment is depicted. Not to overburden the notation, we shall not indicate explicitly the dependence of K^Λ_μ on the type of module. There are simple formulae (to be presented elsewhere) for the generating functions of the weight multiplicities assigned to any of the 6 sublattices in (6.1), which generalise the Kostant generating function for sl(3) Verma modules.
Let Λ = −λκ, or Λ = w_3 · (−λκ), with λ ∈ P_+, or λ ∈ P_{++}, respectively. We now define "weight diagram" multiplicities m^Λ_μ according to (6.2), and we shall refer to the (finite) collection of points {μ} with nonzero m^Λ_μ as the "weight diagram" of Λ, to be denoted G_Λ. The proof that this definition makes sense, i.e., that the numbers in the l.h.s. of (6.2) are nonnegative integers, extends the corresponding argument for the weight diagrams of the finite dimensional representations of sl(3). The same pictures admit another interpretation, which may serve as an alternative definition leading to (6.2): refining by 3 the lattice Qκ (i.e., p → 3p) and scaling by 3 the weights λ, we can identify the two types of diagrams of highest weight Λ with standard sl(3) weight diagrams with some points removed. Namely, we map the highest weight Λ = −λκ (λ ∈ P_+) to i(Λ) := 3λ, and Λ = w_3 · (−λκ) (λ ∈ P_{++}) to i(w_3 · (−λκ)) = 3λ − 2ρ, respectively. The points removed from the sl(3) weight diagrams originate from the centers of the honeycomb hexagons shaping G_Λ, i.e., those having at least 3 (2) common links with G_Λ; see figs. 8, 9, where these points are indicated by squares. The weight multiplicities defined in (6.2) coincide with the sl(3) weight multiplicities of the corresponding surviving points.
The same embedding applies to the supports of the Verma modules. The sl(3) counterparts of the weights μ ∈ W^±_Λ are recovered from the highest weight Λ. More explicitly, the same definition of the map i applies to the points μ on the first lattice in (6.1), w = 1, containing the highest weight. The rest are recovered from such points. Namely, for Λ = −λκ and μ ∈ Λ + Q_+ κ, we have i(w · μ) = i(μ) + w^{−1} · (0, 0), for w ∈ W. The points removed from the support 3λ − Q_+ of the sl(3) Verma module of highest weight 3λ (not representing images of W^+_Λ) belong to the subset ∪_{α>0} (3λ − 3Q_+ − (3 − (α, ρ))α). This rule can also be used to recover the images of the "Verma modules" of highest weight w_3 · (−λκ), representing each point of W^−_Λ as w · μ, with w ∈ W and some μ ∈ −λκ + Q_+ κ. The definition (6.2) of the weight diagram multiplicities and the mapping into sl(3) representations extend to the other 4 of the 6 different types of "Verma modules" (6.1), taken with a proper choice of the simple root system generating W_λ; we omit here the details.
Apparently this map preserves (up to permutation) the positions of the singular vectors. We have already discussed the "shifted weight diagram" G_Λ ∘ Λ′ assigned to a highest weight Λ sitting on the first leaf of P_{3,p}. In general, for Λ, Λ′ ∈ P_{3,p}, the shifted weight diagram is defined according to whether Λ is on the first or on the second leaf of P_{3,p}, respectively. The weight multiplicities of G_Λ ∘ Λ′ are given by (6.3). We are now in a position to formulate the analog of the FR multiplicities formula in [4], [5], [6].
Proposition 3. The FR multiplicities of (a triple of) admissible weights on the double alcove P_{3,p} are given by the formula (6.4). An example illustrating (6.4) with Λ sitting on the second leaf of the alcove is provided by the representation h = [[1,1]]. Its weight diagram appeared in fig. 7, and (6.5) gives its fusions for any representation Λ′ on the second leaf and for generic (i.e., not on the border lines of the alcove) representations Λ′ on the first leaf. For border representations on the first leaf, (6.5) reduces to three multiplicity-one terms (or to one such term if Λ′ = σ^l(1)). Apparently the latter truncation is the result of a 2-point orbit, leading to m_{Λ′} − m_{w_i·Λ′} = 2 − 1 = 1 for Λ′ such that w_i · Λ′ ∈ P_{3,p} for some (one) i = 0, 1, 2, or of a 3-point orbit for Λ′ = σ^l(1), in which case w_i · Λ′ ∈ P_{3,p} for at least two i = 0, 1, 2. The example Λ′ = σ^2(1) is illustrated on fig. 10.
For another example see fig. 11 and the Appendix. Note that unlike the integrable case there are no analogs here of "q-dim = 0" representations, since the walls of the admissible alcove and its images under the action of the KW group do not support (being on the lattice dual to the graph W) weights of "classical" representations.
The nonnegativity of the FR multiplicities in (6.4) is ensured in general by the fact that the r.h.s. of (6.4) can be expressed in terms of sl(3) weight diagram multiplicities and the ordinary affine Weyl group W, adapting the map i discussed above. One recovers in this way the r.h.s. of the formula in [4], [5], [6] for the FR multiplicities N_{i(Λ) i(Λ′)}{}^{i(Λ″)}(3p) for particular triples of triality-zero highest weights in an integrable theory at level 3p − 3.
(Recall that the Verlinde multiplicities are Z_3 graded.) The details will be presented elsewhere; here we only illustrate this statement with an example depicted on fig. 12, see the Appendix for more details. The essential property used is that the (shifted action) Weyl group orbit of a "removed" point contains only "removed" points. This reformulation of (6.4) is a generalisation of the same type of relation between admissible and integrable representations FR multiplicities in the case of sl(2), pointed out in (4.1) of [18]; in particular we expect that the general fusion rules for p′ ≥ 3 will admit a similar factorised form in terms of fusion multiplicities for integrable representations at levels p′ − 3 and 3p − 3, see Appendix.
This is a strong indication that our weight diagrams are the supports of finite dimensional representations of a "hidden" algebra. Its q-version at roots of unity would eventually provide a truncated tensor product equivalent to the fusion rules.
Summary and conclusions.
There are several different descriptions and derivations of the fusion rules at integer level. None of them is easily extendable to the case of fractional level. Instead we have used a combination of several methods, none of which is completely rigorous or thorough at present.
We started with the method which directly generalises the approach exploited by [3] in the derivation of the admissible representations fusion rules in the sl(2) case. Unlike the sl(2) case, the decoupling method is here technically rather involved, due to the complexity of the general MFF singular vectors, and presumably it has to be further elaborated in order to treat cases with nontrivial multiplicities. Instead we have followed a strategy influenced to some extent by the study [14], [15] of graphs (generalising the ADE Dynkin diagrams) related to nondiagonal modular invariants of the integrable WZNW theories. Namely, selecting and analysing a subset of the decoupling systems of equations, corresponding to representations on the border paths of the admissible alcove, we have determined (under some additional assumptions), for p′ = 3 and any p ≥ 4, a fundamental fusion graph described by the fusion matrix N_f. This allowed us to write a formula, borrowed from [14], [15], for the general FR multiplicities at level 3/p − 3. It generalises the Verlinde formula, in which the symmetric modular matrix is replaced by the eigenvector matrix ψ_a^{(μ)} of N_f. It is highly nontrivial that a formula of this kind gives nonnegative integers; we have no general proof of this fact, which has been checked for small values of p.
Knowing the matrix N_f one can determine, in principle, for any p its eigenvectors; yet the proposed Pasquier-Verlinde type formula (5.5) is still not very explicit, in view of the absence so far of a general analytic formula for ψ_a^{(μ)}. So our next step was to look for an alternative formula for the FR multiplicities, generalising our old work in [6], see also [4], [5].
While the previous approach can be looked at as related to a resolution of the irreducible admissible representations in terms of a kind of generalised reducible "Fock modules" (since the differential operators realisation of the generators of sl(3) is equivalent to a generalised free field bosonic realisation) the formula (6.4) described in section 6 rather relies on the idea of "Verma modules" resolution.
We recall that the starting point in [6] was the standard Weyl formula for the multiplicities of irreps in tensor products of sl(n) finite dimensional representations. This formula involves the weight multiplicities of sl(n) finite dimensional representations, i.e., their weight diagrams, which can be recovered by resolution of sl(n) Verma modules. So it was natural to try to interpret similarly the generalised weight diagrams we have encountered in section 4 -formula (6.2) is precisely of this type with the Weyl group W replaced by the horizontal KW group W λ .
The hard part in this alternative approach is the absence, initially, of an obvious candidate for the finite dimensional algebra whose representation theory matches the structures introduced in section 6. Instead we have described the "Verma modules" by their supports, i.e., the set of weights and their multiplicities. Our final step was, as in [6], to "deform" the classical formulae, replacing the horizontal Weyl group with its affine analog, in our case the affine KW group W_λ. While in [6] we were guided at this step by the representation theory of the deformed algebra U_q(sl(n)) for q a root of unity, here once again we so far lack the q-counterpart of the hidden algebra; rather, the consistency of (6.4) with the alternative approaches suggests the existence of such a deformation.
The emerging finite dimensional (graded) algebra (and its q-counterpart) behind the series κ = 3/p of A_2^{(1)} admissible representations is the most interesting outcome of this work. It might be possible to recover this algebra (containing sl(3) as a subalgebra) from the supports of the (reducible) Verma modules introduced above. In fact there is some evidence that the algebra is encoded by the 43-dimensional representation [1,1], the fractional "adjoint" representation. In particular the map i discussed in section 6 provides a natural way of introducing a root system related to the weight diagram of this representation. While a subset of these roots sits on the sl(3) weight lattice P (which contains the sl(3) root lattice Q), there are "fractional" roots beyond it.
Another remaining related problem is the description of the characters of the finite dimensional representations of this algebra and their q-version, i.e., the derivation of an explicit formula for the eigenvector matrix ψ_a^{(μ)}. Starting from an explicit finite dimensional algebra may also simplify a more abstract derivation of the sl(3) admissible representations fusion rules, as well as their generalisation to sl(n).
These questions are under investigation.
Let us finally mention that the problem of deriving the admissible representations fusion rules might be relevant for the analogous problem for representations of W-algebras obtained by quantum (non-principal) Drinfeld-Sokolov reduction from the affine algebras; see the analogous reduction of Verma module singular vectors in [11].
"year": 1997,
"sha1": "60ba2d36bd2997a60ac4639f0f41895a3cfc3efe",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-th/9709103",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "8b06fa5d09680957657aca72bf718239b28d7621",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Efficient colorectal polyp segmentation using wavelet transformation and AdaptUNet: A hybrid U-Net
The prevalence of colorectal cancer, primarily emerging from polyps, underscores the importance of their early detection in colonoscopy images. Due to the inherent complexity and variability of polyp appearances, the task remains difficult despite recent advances in medical technology. To tackle these challenges, a deep learning model featuring a customized U-Net architecture, AdaptUNet, is proposed. Attention mechanisms and skip connections facilitate the effective combination of low-level details and high-level contextual information for accurate polyp segmentation. Further, wavelet transformations are used to extract useful features overlooked in conventional image processing. The model achieves benchmark results with a Dice coefficient of 0.9104, an Intersection over Union (IoU) coefficient of 0.8368, and a Balanced Accuracy of 0.9880 on the CVC-300 dataset. Additionally, it shows exceptional performance on other datasets, including Kvasir-SEG and ETIS-LaribDB. Training was performed using the HyperKvasir segmented images dataset, further evidencing the model's ability to handle diverse data inputs. The proposed method offers a comprehensive and efficient implementation for polyp detection without compromising performance, thus promising improved precision and a reduction in manual labour for colorectal polyp detection.
Introduction
A new era of medical diagnostics and patient care has begun with the advent of technology in healthcare. Recent advancements in machine learning and computer vision have yielded promising results for automated disease detection, a previously unexplored field. This newly discovered territory is especially important for the early detection of conditions such as colorectal polyps, abnormal tissue growths in the bowel that may progress to colorectal cancer if left undiagnosed and untreated.
The morphologies of colorectal polyps range from flat to pedunculated, and their colours and surface patterns are distinct. Their subtle appearances frequently result in oversights during manual examination, highlighting the need for efficient automatic detection tools. One of the pivotal advancements within the medical diagnostics domain is the application of deep learning techniques, such as the U-Net architecture, for automated disease detection. Polyp segmentation is the process of identifying and delineating these polyps in medical images, which is a challenging task due to their varied appearances. Specifically, the term "U-Net" refers to a convolutional neural network (CNN) architecture that is particularly well-suited for biomedical image segmentation due to its ability to capture both local and global information in images [1,2].
In this research study, the authors have introduced a novel deep learning approach utilizing a customized U-Net framework enhanced with attention mechanisms and skip connections. Wavelet transformation is a mathematical tool used to decompose an image into its frequency components, allowing for the extraction of essential features that might be overlooked in standard image processing. In the context of this study, wavelet transformations are employed to enhance the feature representation of colonoscopy images. This combination improves the accuracy of polyp segmentation by preserving spatial details and facilitating information flow throughout the network. This inventive approach, which combines deep learning with wavelet transformations and data augmentation, has the potential to advance polyp detection, resulting in earlier interventions and better patient outcomes [3,4].
The research content can be summarized as follows: Section 2 provides an insight into the motivations and unique contributions of the study. Section 3 delves into existing methods and relevant studies in the field. The datasets used for model training and testing are elucidated in Section 4. Methodological details, including data pre-processing and the deployment of the AdaptUNet model, are covered in Section 5. Section 6 is dedicated to the presentation and in-depth discussion of the results across various datasets. The paper concludes with Section 7, where a summary and the implications of the findings are articulated.
Motivation and objectives
Deep learning methodologies have emerged as potent tools, exhibiting profound potential in the realm of image-guided medical diagnostics. These methods have found significant applicability in the field of colorectal cancer detection, where they examine colonoscopy images with precision. By teaching these complex artificial intelligence-driven models to distinguish and segment polyps from the milieu of background tissues, they improve detection accuracy while simultaneously reducing the amount of manual labour required for polyp identification. As a result, these developments may pave the way for decreased screening costs, reduced patient distress, and increased patient compliance with examinations.
This research is motivated by the critical need to identify and eliminate colorectal polyps at an early stage, aiming to prevent the development of cancer in the colon. Despite its efficacy, the current gold standard for screening, colonoscopy, is hampered by its time-consuming nature, invasiveness, and susceptibility to human error, resulting in occasional missed polyps and increased cancer risks. The incorporation of deep learning techniques for automated polyp detection and segmentation appears to be a promising countermeasure against these obstacles.
Related works
Over the recent years, the rapid advancements in machine learning techniques have displayed immense potential in transforming healthcare outcomes, particularly in the field of colorectal cancer detection and diagnosis. Researchers have explored various techniques and methodologies to develop increasingly precise and efficient models, including convolutional neural networks (CNNs), ensemble classifiers, hybrid models, and the integration of deep learning with classical machine learning approaches.
Tharwat et al. outlined the overall potential of the recent advancements in the application of machine learning techniques for the early detection of colon cancer [5]. Specifically, various research has verified the efficiency of CNNs in this domain. For instance, Collins et al. showed the effectiveness of a 3D CNN in conjunction with Support Vector Machines (SVM) in detecting colon and esophagogastric cancer tissues, achieving an impressive ROC-AUC of 0.93 [6]. Similarly, González-Bueno Puyal et al. introduced a hybrid 2D/3D CNN architecture for enhancing polyp detection in colonoscopy videos [7]. Nisha et al. proposed a Dual-Path CNN (DP-CNN) for polyp detection, showing higher precision, recall, and F1-score on two databases [8]. Yeung et al. built upon the U-Net architecture, introducing the Focus U-Net, a dual attention-gated deep neural network, to enhance polyp segmentation with mean DSC and IoU scores of 0.878 and 0.809, respectively [9].
Several other studies introduced unique deep learning architectures for the detection and classification of colon cancer. The concept of model fusion was manifested by Sharma et al., who championed an ensemble classifier approach integrating various classification models, yielding impressive performance metrics [10]. This approach was mirrored by Talukder et al., who developed a hybrid model for lung and colon cancer detection, utilizing preprocessing, cross-validation, transfer learning, and ensemble learning techniques [11]. Ho et al. developed a composite algorithm combining deep learning with a Faster Region-Based Convolutional Neural Network (Faster-RCNN) architecture, paired with a ResNet-101 feature extraction backbone for glandular segmentation [12]. Escorcia-Gutierrez et al. combined Galactic Swarm Optimization (GSO) with deep transfer learning, achieving an accuracy of 95 % on a test set of histopathological colorectal cancer images [13].
Dealing with challenges such as low contrast, blurred images, and noisy data, numerous studies employed pre-processing techniques alongside deep learning. Murugesan et al. used the YOLOv3 Multi-Scale Framework (YOLOv3-MSF) for effective detection and classification of various stages of colon cancer [14]. Similarly, Khan et al. developed an AI-based screening method for lymph node metastases in CRC, combining a segmentation model for lymph node tissues with a CNN using the Xception architecture and a Vision Transformer (ViT16) [15]. Chen et al. employed a self-attention-based Faster R-CNN for polyp detection from colonoscopy images, achieving an accuracy of 93.4 % [16].
Notable strides were made in detecting and classifying colon adenocarcinomas. Hasan et al. applied a deep convolutional neural network (DCNN) and achieved an impressive accuracy of 99.80 % [17]. Xu et al. utilized the Inception V3 model to a similar effect [18]. Tanwar et al. used SSD for colorectal polyp detection and classification, obtaining an accuracy of 92 % [19]. S. Hosseinzadeh Kassani and colleagues investigated the efficacy of numerous deep learning architectures for the automatic segmentation of colorectal tumor tissue samples. They found that a shared DenseNet and LinkNet architecture outperformed other methods, achieving a dice similarity index of 82.74 % ± 1.77, an accuracy of 87.07 % ± 1.56, and an f1-score of 82.79 % ± 1.79 [20].
Zhang et al. developed a method for label-free colorectal cancer screening that combines spatial light interference microscopy and AI. They manually segmented the images and used the VGG16 network for classifying cancerous and benign tissue, yielding an accuracy of 97 % [21]. The AMNet architecture, as developed by Song et al., combines advanced feature fusion and attention mechanisms for improved polyp segmentation. Employing the Res2Net backbone, it focuses on high-level features to enhance performance and efficiency. The network features a multi-scale fusion model, utilizing bilinear upsampling and 2 × 2 convolution for feature integration. Central to its design are the Contextual Attention Module, Polarised Self-Attention (PSA), and Reverse Context Fusion (RCF) modules. These elements collaboratively enhance segmentation by optimizing multi-scale feature extraction, adjusting weight distributions, and fusing features and contextual guidance. Specifically, the CFP module captures varied contextual information through dilation convolution, and the PSA module refines features by redistributing weights in both channel and spatial dimensions. The RCF module further improves segmentation by merging PSA outputs with prior contextual insights, showcasing the network's modular and efficient approach to medical image segmentation [22]. Billah et al. (2017) implemented a methodological approach that applies a three-level, two-dimensional discrete wavelet transformation to each color channel of RGB images, with a particular focus on the middle wavelet detail images for the purpose of textural analysis. Additionally, they incorporate a co-occurrence matrix to analyze spatial relationships within these images. This approach is rooted in the statistical examination of texture feature distributions by analyzing the spatial relationships of pixels, specifically how pixel values co-occur within an image at predetermined orientations and distances. The utilization of a co-occurrence matrix is especially effective in capturing both texture and structural information within the images, offering a nuanced understanding of texture and spatial relationships [23].
Overall, the integration and augmentation of deep learning techniques are propelling advancements in the early detection and diagnosis of colorectal cancer. These methods enhance detection accuracy, streamline disease detection, and improve patient outcomes. This promising field could potentially lead to major breakthroughs in colorectal cancer diagnosis and treatment.
Datasets
In the study, the HyperKvasir Segmented Images dataset, consisting of 1000 images focused on the polyp class, was employed for training the machine learning model. This dataset was divided into training, validation, and testing sets in a 70:15:15 ratio. Specifically, 70 % of the images were used for training the model, while 15 % were set aside for validation and the remaining 15 % for testing. This split ensures a comprehensive assessment of the model's performance across different subsets of the data. The dataset provides the original images, their corresponding segmentation masks, and bounding boxes for each polyp image. The segmentation masks are binary images that differentiate the region of interest, which in this case is the polyp tissue (highlighted in white), from the background (depicted in black). This detailed demarcation of the polyp regions assists in accurate segmentation and serves as a crucial tool in training this model.
For the purpose of testing and evaluating the effectiveness of AdaptUNet, a set of four datasets has been employed:
1. CVC-300 [24]: This dataset comprises 60 endoscopic images depicting various gastrointestinal (GI) tract diseases, including polyps, ulcers, and inflammatory conditions. It serves as a robust benchmark for evaluating the efficacy of algorithms and models in identifying and classifying GI tract diseases.
2. ETIS-LaribDB [25]: The ETIS-LaribDB dataset has a particular focus on colorectal lesions, especially polyps. It contains 196 endoscopic images along with ground truth annotations, including binary masks for polyp segmentation. This dataset has been widely utilized for benchmarking computer-aided diagnosis systems and evaluating polyp detection and segmentation algorithms.
3. Kvasir-SEG [26]: The Kvasir-SEG dataset comprises 100 gastrointestinal endoscopy images that include annotations specifically designed for semantic segmentation. It includes a variety of anatomical structures and pathological conditions, providing pixel-level annotations for organs, lesions, and background regions. It has proven instrumental in the development and evaluation of segmentation models in the field of gastroenterology.
4. CVC-ColonDB [27]: The CVC-ColonDB dataset focuses specifically on colonoscopy images and provides a collection of 380 images with corresponding ground truth annotations. It is designed for evaluating the performance of algorithms and models in detecting and segmenting polyps in the colon. The dataset includes various types of polyps, along with normal tissue and other abnormalities found in colonoscopy images.
During the experiments, the model trained on the HyperKvasir Segmented Images dataset was tested on the CVC-300, ETIS-LaribDB, and Kvasir-SEG datasets, leveraging the weights obtained from the training process. This approach allowed us to assess the adaptability and robustness of the trained model on different datasets.
Proposed methodology
The proposed methodology, as shown in Fig. 1, for polyp segmentation encompasses two key components: pre-processing of images and a modified U-Net model architecture with attention mechanisms. The workflow begins with the input of original images and their corresponding masks, which are subjected to a Data Preprocessing stage. Within this stage:
1. Image Augmentation is performed to artificially increase the dataset's size and variability, enhancing the model's ability to generalize.
2. The images then undergo a Wavelet Transformation, specifically a 2-D Discrete Wavelet Transform (DWT), where they are processed using the 'bior1.3' biorthogonal wavelet type. This transformation is conducted with the PyWavelets library, which performs a single level of decomposition. This operation divides the image into four sets of coefficients: the approximation coefficient (LL) and detail coefficients (LH, horizontal; HL, vertical; HH, diagonal). Each set of coefficients captures different aspects of the image's structure at various resolutions and orientations. After decomposition, the coefficients are resized back to 256 × 256 pixels to ensure uniformity. The resized components are then concatenated back into a single image, forming a multi-channel image where each channel corresponds to one set of coefficients. This concatenated image is further normalized to have values between 0 and 1, ensuring that the model receives inputs within a standardized range. The transformation enhances edges and textures while retaining the original image's salient features that are important for the segmentation task (a code sketch of this step follows below).
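The wavelet step of item 2 can be sketched as follows using PyWavelets and OpenCV. This is an illustrative implementation rather than the authors' exact code; the function and constant names (wavelet_channels, TARGET_SIZE) and the per-band min-max normalization are assumptions.

import cv2
import numpy as np
import pywt

TARGET_SIZE = (256, 256)  # matches the paper's 256 x 256 input resolution

def wavelet_channels(gray_img: np.ndarray) -> np.ndarray:
    # Single-level 2-D DWT with 'bior1.3': LL is the approximation band,
    # LH/HL/HH hold horizontal, vertical, and diagonal detail coefficients.
    LL, (LH, HL, HH) = pywt.dwt2(gray_img, "bior1.3")
    # Resize each band back to the target size and stack them as channels.
    bands = [cv2.resize(b.astype(np.float32), TARGET_SIZE) for b in (LL, LH, HL, HH)]
    stacked = np.stack(bands, axis=-1)
    # Normalize each band to [0, 1] so the network sees a standardized range.
    stacked -= stacked.min(axis=(0, 1), keepdims=True)
    stacked /= stacked.max(axis=(0, 1), keepdims=True) + 1e-8
    return stacked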
Following preprocessing, the images are fed into the Model Training stage, where the AdaptUNet model is employed. AdaptUNet is a hybrid model that combines the architecture of U-Net with attention blocks. These attention blocks help the model focus on specific regions of the image, enhancing its segmentation accuracy.
After training, the model enters the Model Evaluation stage. For this stage, the trained model was tested on both the validation subset (15 % of the HyperKvasir dataset) and the external datasets (CVC-300, ETIS-LaribDB, Kvasir-SEG, and CVC-ColonDB). Here, its performance is assessed using various metrics, including the Dice coefficient, Intersection over Union (IoU), and Balanced Accuracy. This evaluation process is repeated for 100 epochs to ensure the model's robustness and accuracy over multiple iterations.
The final output of the workflow is a set of images paired with their corresponding segmented masks, showcasing the model's ability to accurately identify and delineate colorectal polyps.
This section introduces these components and provides a brief overview of their significance in achieving accurate and robust polyp segmentation.
Pre-processing
Pre-processing plays a crucial role in preparing the dataset and enhancing the input images for the subsequent segmentation task. The pre-processing techniques employed by the authors include data augmentation and wavelet transformation. These techniques enable the model to learn from a more diverse and informative dataset, enhancing its ability to accurately segment polyps from colonoscopy images.
The following discussion delves deeper into the specific techniques utilized for data augmentation. These components work in synergy to improve the accuracy and robustness of polyp segmentation, contributing to more effective diagnosis of colorectal diseases.
Augmentation techniques
In this study, the authors employed the Albumentations library, which offers a wide range of powerful and efficient augmentation techniques. A combination of geometric and image-only transformations was utilized to augment the training dataset, allowing the model to effectively deal with diverse variations in polyp images. Geometric transformations, such as horizontal and vertical flips, as well as random rotations, were employed to introduce spatial variations, simulate natural distortions, and replicate real-life scenarios.
These transformations enable the model to learn from different polyp orientations, shapes, and appearances, making it more robust and adaptable to diverse input data. Additionally, image-only transformations such as random brightness contrast, random gamma adjustment, and CLAHE were utilized to improve the model's performance under different lighting conditions, noise levels, and contrast variations. By augmenting the dataset with these transformations, the model becomes more capable of accurately segmenting polyps in endoscopic images, ultimately leading to improved performance and generalization.
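A sketch of such a pipeline with Albumentations is given below; the probabilities and the rotation limit are illustrative assumptions, not the paper's reported settings.

import albumentations as A

train_transform = A.Compose([
    A.HorizontalFlip(p=0.5),            # geometric transformations
    A.VerticalFlip(p=0.5),
    A.Rotate(limit=35, p=0.5),          # random rotations
    A.RandomBrightnessContrast(p=0.3),  # image-only transformations
    A.RandomGamma(p=0.3),
    A.CLAHE(p=0.3),
])

# Albumentations applies the same geometric transforms to image and mask:
# augmented = train_transform(image=image, mask=mask)
# image_aug, mask_aug = augmented["image"], augmented["mask"]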
Fig. 2 showcases two example images from the training dataset, highlighting the impact of the various augmentation techniques discussed. The original images and masks represent the raw input data used for training the model. They depict real-world scenarios with inherent variations in lighting, perspective, and object placement. The augmented images exhibit increased diversity in terms of object positions, orientations, and backgrounds, providing a more comprehensive representation of real-world scenarios. This augmented dataset facilitates better training of the model, enabling it to handle a wider range of variations and improve its overall performance in object recognition and localization tasks. Note: The images in Fig. 2 are for illustrative purposes only and do not represent the full range of augmented images and masks used in this research.
Wavelet transformation
In this study, the authors employed a comprehensive data preprocessing approach to prepare the dataset for the polyp segmentation task. The preprocessing pipeline consisted of several key operations aimed at improving the feature representation of the images:
1. Image resizing: Each input image (I) is adjusted to a standard size of (256, 256, c), ensuring consistent dimensions without altering the colour channels c. This is essential for ensuring consistent input dimensions across the dataset.
2. Image normalization: The pixel values of each image (I) are rescaled to a common range. This involves dividing each pixel value by the maximum pixel value (255 for 8-bit images), as represented in equation (1):
I_normalized = I / 255 (1)
3. Grayscale conversion: To convert RGB images to grayscale, a weighted sum of the colour channels is calculated. When dealing with an image containing red (R), green (G), and blue (B) channels, the grayscale image (I_grayscale) is computed as shown in equation (2):
I_grayscale = 0.299 R + 0.587 G + 0.114 B (2)
4. Wavelet transformation: Grayscale images undergo a 2-D discrete wavelet transform using the 'bior1.3' biorthogonal wavelet, where they are divided into two sets of coefficients: approximation coefficients (A) and detail coefficients (H, horizontal; V, vertical; D, diagonal). Utilizing the impulse response of a low-pass filter denoted as h[n], the approximation coefficients are computed as in equation (3):
A[n] = Σ_k I[k] h[2n − k] (3)
5. Image concatenation: The original images (I_normalized) are concatenated with the wavelet-transformed images (A) to create a comprehensive image representation (I_final).
6. Mask preprocessing: The corresponding masks are resized and binarized in a similar manner to the images. If M is the original mask and T is the chosen threshold (usually 0.5 for binary masks), the binarization process can be formulated as in equation (4):
M_binary(x, y) = 1 if M(x, y) ≥ T, and 0 otherwise (4)
Furthermore, the specificity of the 'bior1.3' biorthogonal wavelet in capturing the nuanced textures and edges peculiar to polyp structures in colonoscopy images represents a novel approach within the preprocessing methodology. This wavelet was selected based on its proven effectiveness in medical image analysis for retaining critical spatial frequency characteristics, which are pivotal for the precise delineation of polyp boundaries [28]. Additionally, the subsequent concatenation of wavelet-transformed images with the original dataset is a distinctive strategy that enriches the model's input with a multifaceted representation of polyp features. This enriched set of features is specially crafted to improve the model's ability to distinguish between different elements, offering a novel contribution to the task of identifying polyps. This approach marks a notable improvement on the usual methods of preparing images for analysis.
AdaptUNet model
The model architecture employed, AdaptUNet, is a customized variant of the U-Net architecture, a widely recognized framework for semantic segmentation tasks. This architecture has proven effective in various image analysis tasks, including polyp segmentation.
The proposed model, referred to as AdaptUNet in Fig. 4, adopts an encoder-decoder architecture that takes advantage of skip connections. These connections effectively preserve spatial details while facilitating information flow throughout the network. By incorporating both low-level and high-level features, the model strives to capture intricate patterns and structures in the polyp images. Setting AdaptUNet apart from traditional U-Net architectures is the integration of adaptive spatial and channel attention blocks. These novel components dynamically refine the model's focus on the most informative features within the feature maps, accounting for both spatial and channel-wise relevance. This advanced strategy is designed to yield a more nuanced and accurate segmentation by enhancing the network's sensitivity to critical features in the image data.
During the initial phase, the input images, which have a resolution of 256 × 256 pixels and consist of four channels, are subjected to a series of convolutional layers. These layers, each comprising a 3 × 3 kernel, perform feature extraction operations, boosting the model's proficiency in identifying relevant patterns. To ensure stable training and introduce non-linearity within the network, subsequent steps involve the utilization of batch normalization and rectified linear unit (ReLU) activation functions. To enable downsampling and preserve the most salient information, the authors employ MaxPooling2D layers with a pool size of 2 × 2. Additionally, dropout regularization (rate = 0.1) is introduced to mitigate overfitting during training. The network employs a spatial attention mechanism that computes attention weights based on the features' spatial distribution. Utilizing a dynamically sized convolution kernel, the network adaptively adjusts its focus to different spatial scales, which is critical for capturing the varying sizes and shapes of polyps in colonoscopy images. Similarly, the channel attention mechanism selectively enhances the most informative channels, ensuring a comprehensive feature representation for accurate segmentation.
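A minimal Keras sketch of such attention blocks is given below. The paper does not provide its exact implementation, so this follows the common CBAM/squeeze-and-excitation pattern; the function names, the fixed 7 × 7 kernel, and the reduction factor are assumptions, and the authors' dynamically sized kernel variant may differ.

import tensorflow as tf
from tensorflow.keras import layers

def spatial_attention(x, kernel_size=7):
    # Pool channel statistics, then learn a sigmoid map that re-weights locations.
    avg_pool = layers.Lambda(lambda t: tf.reduce_mean(t, axis=-1, keepdims=True))(x)
    max_pool = layers.Lambda(lambda t: tf.reduce_max(t, axis=-1, keepdims=True))(x)
    attn = layers.Conv2D(1, kernel_size, padding="same", activation="sigmoid")(
        layers.Concatenate()([avg_pool, max_pool]))
    return layers.Multiply()([x, attn])

def channel_attention(x, reduction=8):
    # Squeeze-and-excitation style bottleneck that re-weights channels.
    c = x.shape[-1]
    s = layers.GlobalAveragePooling2D()(x)
    s = layers.Dense(c // reduction, activation="relu")(s)
    s = layers.Dense(c, activation="sigmoid")(s)
    return layers.Multiply()([x, layers.Reshape((1, 1, c))(s)])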
The bridge section of the model utilizes a set of two convolutional layers with 1024 filters and a 3 × 3 kernel size. These layers focus on capturing high-level features that are crucial for accurate polyp segmentation.
The decoder part of the model is crucial for recovering the spatial resolution of the segmentation masks. To achieve this, Conv2DTranspose layers are employed, which perform upsampling by a factor of 2. Subsequently, concatenation operations are applied to combine the upsampled feature maps with the corresponding feature maps from the encoder pathway. This integration of multi-scale information aids in precise localization and context-aware segmentation.
To selectively attend to informative regions, attention blocks are introduced in the decoder pathway. These blocks consist of attention mechanisms, including spatial and channel attention. The spatial attention block adaptively weighs the importance of different spatial regions, while the channel attention block focuses on relevant channels for improved feature representation. These attention mechanisms enhance the model's discriminative power and enable it to concentrate on essential polyp regions. Within each decoder block, the authors utilize convolutional layers with decreasing numbers of filters (512, 256, 128, 64), batch normalization, ReLU activation functions, and dropout regularization (with rates of 0.4, 0.3, 0.2, and 0.1, respectively). This hierarchical decoding process allows for the extraction of increasingly abstract features and aids in the reconstruction of the image. These adaptive attention blocks are integrated at each level of the decoder, ensuring that the upsampled feature maps are refined with both the gating signal and the inter signal. This results in a feature map that is optimized for the subsequent convolutional operations. The spatial attention blocks operate on 2D feature maps using 2D convolutions, while the channel attention blocks process 1D feature maps with 1D convolutions. Consequently, at each stage of the decoder, the feature maps are attentively adjusted, both spatially and channel-wise, before being merged, leading to a more discerning reconstruction of the segmentation mask.
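Building on the attention sketch above, one decoder stage could be assembled as follows. The exact placement of dropout and the ordering of attention relative to concatenation are assumptions.

def decoder_block(x, skip, filters, dropout_rate):
    # Upsample by 2, refine the skip features with attention, fuse, then convolve.
    x = layers.Conv2DTranspose(filters, 2, strides=2, padding="same")(x)
    skip = spatial_attention(channel_attention(skip))
    x = layers.Concatenate()([x, skip])
    for _ in range(2):
        x = layers.Conv2D(filters, 3, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation("relu")(x)
    return layers.Dropout(dropout_rate)(x)

# Decoder stages with the filter counts and dropout rates quoted above:
# d1 = decoder_block(bridge, s4, 512, 0.4); d2 = decoder_block(d1, s3, 256, 0.3)
# d3 = decoder_block(d2, s2, 128, 0.2);     d4 = decoder_block(d3, s1, 64, 0.1)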
Ultimately, the model generates its output through a convolution with a single filter and a kernel size of 1, followed by a sigmoid activation function. This produces the predicted segmentation mask, where each pixel value represents the probability of it belonging to the polyp class. In summary, the customized U-Net architecture, enriched with attention mechanisms and skip connections, facilitates accurate polyp segmentation by effectively capturing detailed information and contextual cues.
In this study, crucial model hyperparameters, including the cyclic learning rate (CLR) and the loss function utilized, Dice binary cross-entropy (BCE), played instrumental roles in improving the model's performance. The CLR, implemented using a custom CyclicLR class, optimized the learning rate during model training. It dynamically modulated the learning rate throughout the process, enabling the model to converge faster and potentially achieve enhanced performance. The CLR's rate was updated at the inception of each batch, fluctuating within each cycle based on the current iteration. This allowed the model to probe different learning rates and potentially discover an optimal range for improved convergence.
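A sketch of such a callback is shown below, following the common triangular CLR policy; the learning-rate bounds, the step size, and the triangular schedule itself are assumptions, as the paper does not specify them.

import numpy as np
from tensorflow.keras import backend as K
from tensorflow.keras.callbacks import Callback

class CyclicLR(Callback):
    # Triangular cyclic learning rate, updated at the start of each batch.
    def __init__(self, base_lr=1e-5, max_lr=1e-3, step_size=2000):
        super().__init__()
        self.base_lr, self.max_lr, self.step_size = base_lr, max_lr, step_size
        self.iteration = 0

    def clr(self):
        # Position within the current cycle determines the interpolated rate.
        cycle = np.floor(1 + self.iteration / (2 * self.step_size))
        x = np.abs(self.iteration / self.step_size - 2 * cycle + 1)
        return self.base_lr + (self.max_lr - self.base_lr) * max(0.0, 1 - x)

    def on_train_batch_begin(self, batch, logs=None):
        self.iteration += 1
        K.set_value(self.model.optimizer.learning_rate, self.clr())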
Complementing this was the Dice BCE loss function, which guided the polyp segmentation training process. This loss function combined the BCE loss, which assessed pixel-wise binary classification error, and the Dice loss, which quantified the similarity between the predicted and target masks. By minimizing the Dice BCE loss during training, the model aimed to augment the precision of polyp segmentation, considering both binary classification and the extent of overlap between predicted and target masks. Consequently, the judicious employment of CLR and Dice BCE loss as key model hyperparameters not only strengthened the training process but also facilitated superior results in polyp segmentation.
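A minimal sketch of the combined loss is shown below, assuming an unweighted sum of the two terms and a smoothing constant of 1; the paper does not state these choices.

import tensorflow as tf

def dice_bce_loss(y_true, y_pred, smooth=1.0):
    # Flatten masks, compute the Dice term, and add pixel-wise binary cross-entropy.
    y_true_f = tf.reshape(tf.cast(y_true, tf.float32), [-1])
    y_pred_f = tf.reshape(y_pred, [-1])
    intersection = tf.reduce_sum(y_true_f * y_pred_f)
    dice = (2.0 * intersection + smooth) / (
        tf.reduce_sum(y_true_f) + tf.reduce_sum(y_pred_f) + smooth)
    bce = tf.keras.losses.binary_crossentropy(y_true_f, y_pred_f)
    return bce + (1.0 - dice)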
Table 1 provides an overview of the key hyperparameters used in the development and training of the proposed AdaptUNet model.
Computational requirements
An essential aspect of our model's design is its computational efficiency, which we quantitatively assess by the total number of parameters and the memory footprint.The model comprises a total of 28,621,413 parameters, divided into 28,609,637 trainable parameters and 11,776 non-trainable parameters.The non-trainable parameters primarily reside in layers such as Batch Normalization, which are utilized in a frozen state during the inference phase to stabilize the network's predictions.
The memory requirement for storing the model parameters is a critical factor, especially when deploying the model on hardware with limited resources. Given that each parameter is represented as a 32-bit floating-point number, the total memory footprint of the model is approximately 109.18 MB. This calculation is based on the assumption that each parameter requires 4 bytes of storage: total memory footprint = 28,621,413 × 4 bytes ≈ 109.18 MB. This compact memory footprint allows the model to be deployed in cloud-based environments and on edge devices, facilitating real-time application.
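The arithmetic can be verified in two lines of Python:

params = 28_621_413
print(f"{params * 4 / 1024 ** 2:.2f} MB")  # 4 bytes per float32 parameter -> 109.18 MB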
Performance measures
The performance measures incorporated in the evaluation encompass the Dice coefficient, IoU, and Balanced Accuracy. These metrics offer quantitative assessments to gauge the precision and efficacy of image segmentation algorithms and models.
1. Dice Coefficient: The Dice coefficient is utilized to assess the similarity or overlap between two sets, particularly in image segmentation. It serves as a metric to evaluate the agreement between the predicted segmentation and the ground truth segmentation. With a scale from 0 to 1, the Dice coefficient signifies the degree of overlap, with a value of 1 representing a complete match between the predicted and ground truth segmentations. The formula presented in equation (5) calculates the Dice coefficient, where X and Y denote the predicted and ground truth masks:
Dice = 2 |X ∩ Y| / (|X| + |Y|) (5)
2. Intersection over Union: IoU, commonly referred to as the Jaccard Index, represents another similarity measure used in image segmentation tasks. It calculates the ratio of the intersection to the union of two sets. In the context of image segmentation, it measures the overlap between the predicted segmentation and the ground truth segmentation. Like the Dice coefficient, the IoU ranges from 0 to 1, with 1 indicating a perfect overlap between the predicted and ground truth segmentations. The formula presented in equation (6) calculates the IoU as follows:
IoU = |X ∩ Y| / |X ∪ Y| (6)
3. Balanced Accuracy (BAcc): The Balanced Accuracy is a performance measure that considers both true positive and true negative rates. It provides an overall assessment of a model's performance across different classes or categories, accounting for imbalanced datasets. Balanced Accuracy is particularly useful on imbalanced datasets because it gives equal weight to the performance on each class, regardless of its frequency in the dataset, which is why this metric was chosen over the F1 score to assess the performance of the proposed architecture. The BAcc is calculated as the average of sensitivity (true positive rate) and specificity (true negative rate). The BAcc ranges from 0 to 1, with a higher value indicating a higher level of overall accuracy in the segmentation task. The formula presented in equation (7) calculates the Balanced Accuracy as follows:
BAcc = 0.5 × (Sensitivity + Specificity) (7)
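For concreteness, the three metrics can be computed on binary masks as follows; this is a sketch, and the small epsilon guarding against empty masks is an assumption.

import numpy as np

def dice(y_true, y_pred, eps=1e-8):
    # 2 |X ∩ Y| / (|X| + |Y|) on binary arrays.
    inter = np.sum(y_true * y_pred)
    return 2 * inter / (np.sum(y_true) + np.sum(y_pred) + eps)

def iou(y_true, y_pred, eps=1e-8):
    # |X ∩ Y| / |X ∪ Y| on binary arrays.
    inter = np.sum(y_true * y_pred)
    union = np.sum(y_true) + np.sum(y_pred) - inter
    return inter / (union + eps)

def balanced_accuracy(y_true, y_pred, eps=1e-8):
    # Average of sensitivity (TPR) and specificity (TNR).
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    sensitivity = tp / (tp + fn + eps)
    specificity = tn / (tn + fp + eps)
    return 0.5 * (sensitivity + specificity)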
Results and discussions
This section presents a comparative analysis of several models, including UNet, SFA, PraNet, UACANet-L, EU-Net, MSNet, BDG-Net, SANet, MFBGR, and AdaptUNet. Each model is evaluated using various metrics such as the Dice coefficient, Intersection over Union (IoU), and Balanced Accuracy across multiple datasets, such as CVC-300, CVC-ColonDB, Kvasir, and ETIS-LaribDB. Their respective performances are concisely summarized in tabular form for easy comparison.
Furthermore, a detailed visual representation illustrating the performance metrics of the AdaptUNet model throughout the training process across the four datasets is provided. This includes the Loss Curve, Accuracy Curve, IoU Evolution, Dice Evolution, and Receiver Operating Characteristic (ROC) curve, offering valuable insights into the model's progressive performance enhancement.
Additional figures showcase the evaluation results of the AdaptUNet model on each of the four datasets (CVC-300, CVC-ColonDB, Kvasir, and ETIS-LaribDB). These include the wavelet-transformed input image, the true mask (ground truth), and the mask predicted by the AdaptUNet model. These depictions underscore the proficiency and precision of the model's polyp detection and segmentation capabilities in comparison to the ground truth. Each subsection emphasizes a particular dataset and offers an in-depth analysis of the model's performance.
Results on the CVC-300 and CVC-ColonDB datasets
This subsection displays the results drawn from an exhaustive analysis that utilizes several metrics to gauge how the model performs on the CVC-300 and CVC-ColonDB datasets. It is accompanied by a comprehensive interpretation of the findings.
Fig. 5 provides a comprehensive visual representation of the model's performance throughout the training process on the CVC-300 and CVC-ColonDB datasets using the Receiver Operating Characteristic (ROC) curve. The ROC curve visualizes the trade-off between sensitivity and specificity, providing a comprehensive measure of the model's performance across various threshold settings. The ROC curve demonstrates a value of 0.99 for both the CVC-300 and CVC-ColonDB datasets. This indicates the model's exceptional performance in distinguishing between positive (presence of polyps) and negative instances, specifically in polyp detection. The high area under the ROC curve (AUC) highlights the model's ability to accurately rank instances and assign higher probabilities to positive cases. It means that the model is exceptionally effective at assigning higher probability scores to actual cases of polyps while giving lower scores to non-polyp instances. Therefore, the model is reliable and effective in aiding early detection or screening for polyps, as evidenced by the strong discriminatory power exhibited in Fig. 5.
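If the AUC is computed at the pixel level, which is an assumption here, it reduces to a single scikit-learn call over the flattened probability maps:

import numpy as np
from sklearn.metrics import roc_auc_score

# Dummy stand-ins: binary ground-truth masks and sigmoid outputs of shape (N, H, W).
y_true = np.random.randint(0, 2, size=(4, 256, 256))
y_prob = np.random.rand(4, 256, 256)

auc = roc_auc_score(y_true.ravel(), y_prob.ravel())
print(f"pixel-level AUC: {auc:.3f}")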
Table 2. Evaluation metrics of AdaptUNet on the CVC-300 and CVC-ColonDB datasets.
In Tables 2 and 4, the upward arrow (↑) next to each performance measure indicates that higher values correspond to better results. Table 2 provides a comparative analysis of the AdaptUNet model's performance on the CVC-300 and CVC-ColonDB datasets.
In the context of the CVC-300 dataset, the AdaptUNet model excelled, achieving superior results compared to all other models evaluated, including state-of-the-art methodologies like UNet, SFA, PraNet, UACANet-L, SANet, MSNet, EU-Net, BDG-Net, and MFBGR. The compared models in these tables use different pre-processing and data partitioning strategies. This is indicated by a Dice coefficient of 0.9104, an IoU of 0.8368, and a Balanced Accuracy of 0.9880 on CVC-300. Together, these results offer valuable insights into the strengths and weaknesses of the different models, guiding researchers in choosing suitable methodologies for polyp segmentation on the CVC-300 and CVC-ColonDB datasets. Opportunities for future research and performance enhancements of the model on these specific datasets are also suggested.
Fig. 6 presents the evaluation results derived from testing the model on the CVC-300 dataset. The figure is composed of three main components:
1. Firstly, it shows the input image which has undergone wavelet transformation, providing a visual representation of how the input data is prepared for the model.
2. Secondly, it displays the true mask corresponding to the input image, serving as a benchmark for the polyp segmentation that the model aims to achieve.
3. Lastly, the figure includes the mask as predicted by the model, demonstrating the effectiveness and accuracy of the model's polyp detection and segmentation capabilities in comparison to the true mask.
This triptych of images provides a comprehensive visualization of the model's performance. Fig. 7 showcases the evaluation of the model on the CVC-ColonDB dataset, providing a concise overview of the model's segmentation capabilities.
Ablation study
In medical image segmentation, understanding the contribution of each component to a model's overall performance is crucial for optimization. This section presents an ablation study designed to dissect the impact of specific elements within our proposed AdaptUNet architecture for colorectal polyp segmentation. The CVC-300 dataset, known for its challenging and diverse set of images, was selected as the focal point for this study due to the exemplary results achieved with our complete model. The components subjected to ablation include the wavelet transform and the extra decoder blocks, chosen for their presumed significance in enhancing model performance. This study aims to elucidate their individual and combined contributions towards the model's accuracy and generalization capabilities.
Methodology
Our ablation study adheres to a consistent experimental setup across all tests to ensure comparability of results.Each variant of the AdaptUNet model-without wavelet transform, without extra decoder blocks, and lacking both-was trained on the CVC-300 dataset for 1000 epochs.This consistency in training duration underscores our commitment to a fair and rigorous evaluation of each component's impact.Performance metrics including the Dice coefficient, Intersection over Union (IoU), and Balanced Accuracy (BAcc) serve as the benchmarks for assessment, providing a multifaceted view of each ablation scenario's effects on segmentation quality.
To present the findings, we employ a comparative format that juxtaposes the performance metrics of the original model against those of its ablated variants. The table below encapsulates these results, offering a clear visualization of the impact exerted by the wavelet transform and extra decoder blocks on the proposed model's efficacy. Table 3 elucidates the profound impact of both the wavelet transform and the extra decoder blocks on the model's performance. The substantial differences in performance metrics observed in the ablation study can be explained by the critical roles that these two components play in the proposed model's architecture.
Wavelet transform
The wavelet transform decomposes an image into various frequency components, capturing both the approximation (low-frequency details) and the detail coefficients (high-frequency details). This operation enriches the feature space available to the model, allowing it to capture and utilize both global structures and fine details within the images more effectively. Removing the wavelet transform leaves a model that relies solely on raw pixel intensities, which may not be as informative or discriminative for complex segmentation tasks.
By incorporating wavelet transforms, the proposed model can analyze the image content at multiple scales, improving its ability to recognize patterns and structures of varying sizes. This is particularly important in medical or detailed imagery, where objects of interest can appear at different scales. The absence of this component forces the model to operate at a single scale, significantly limiting its segmentation capabilities.
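The decomposition described above corresponds to a single-level 2-D discrete wavelet transform. A minimal sketch using PyWavelets (the choice of the Haar wavelet and the channel-stacking step are illustrative assumptions; the paper does not specify its wavelet family here):

```python
import numpy as np
import pywt

image = np.random.rand(256, 256)   # stand-in for a grayscale colonoscopy frame

# Single-level 2-D DWT: cA holds the low-frequency approximation,
# (cH, cV, cD) hold horizontal, vertical and diagonal detail coefficients.
cA, (cH, cV, cD) = pywt.dwt2(image, 'haar')

# Stacking the four sub-bands yields an enriched multi-channel input
# that exposes both coarse structure and fine detail to the network.
wavelet_features = np.stack([cA, cH, cV, cD], axis=0)
print(wavelet_features.shape)   # (4, 128, 128)
```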
Decoder blocks
Decoder blocks in U-Net-like architectures are essential for recovering spatial resolution after the encoding (downsampling) phase. They gradually upsample the feature maps and integrate skip connections from the encoder, which provide rich contextual information. This process is crucial for the accurate pixel-wise classification needed in segmentation tasks. Without the extra decoder blocks, the proposed model loses a significant amount of spatial context, leading to poorer reconstruction of the segmented objects.
The proposed model utilizes attention mechanisms within the decoder blocks to focus on relevant features by weighting the importance of different spatial locations and channels. This targeted approach enhances the model's ability to distinguish between relevant and irrelevant patterns in the image, leading to more precise segmentation. Removing these blocks reduces the model's capacity and its ability to selectively process information, further degrading performance.
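An illustrative PyTorch sketch of an attention-gated decoder block in the spirit described above; this follows the generic attention U-Net pattern and is not the authors' exact layer configuration (channel sizes, normalization, and the gating design are assumptions):

```python
import torch
import torch.nn as nn

class AttentionDecoderBlock(nn.Module):
    """Upsample, attention-gate the skip connection, then refine."""

    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        # Attention gate: produces one weight per spatial location of the skip
        self.att = nn.Sequential(
            nn.Conv2d(out_ch + skip_ch, out_ch, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, 1, kernel_size=1),
            nn.Sigmoid(),
        )
        self.conv = nn.Sequential(
            nn.Conv2d(out_ch + skip_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x, skip):
        x = self.up(x)                                 # recover spatial resolution
        gate = self.att(torch.cat([x, skip], dim=1))   # (N, 1, H, W) attention map
        skip = skip * gate                             # suppress irrelevant regions
        return self.conv(torch.cat([x, skip], dim=1))

block = AttentionDecoderBlock(in_ch=256, skip_ch=128, out_ch=128)
out = block(torch.randn(1, 256, 32, 32), torch.randn(1, 128, 64, 64))
print(out.shape)   # torch.Size([1, 128, 64, 64])
```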
Combined effect of removing both components
The combined removal of both the wavelet transform and the extra decoder blocks drastically reduces the proposed model's ability to capture and utilize information critical for segmentation. The wavelet transform's multi-scale, enriched feature representation and the decoder blocks' spatial context recovery and attention-guided feature selection are both pivotal for achieving high accuracy in segmentation tasks. Without these, the model is significantly handicapped, relying on a much simpler, less informative feature set and lacking the mechanisms to effectively reconstruct detailed segmentations, as reflected in the drastically lower performance metrics observed.
Through this ablation study, it becomes evident that both the wavelet transforms and the extra decoder blocks are integral to the AdaptUNet model's superior performance, highlighting the importance of these components in enhancing edge detection, texture analysis, and the model's focus on relevant image regions for accurate polyp segmentation.
Results on the Kvasir and ETIS-Larib datasets
In this subsection, results obtained from the extensive analysis using various metrics are presented to assess how the model performs on the Kvasir dataset, accompanied by a detailed discussion of the findings. Fig. 8 showcases the model's training performance metrics on the Kvasir and ETIS datasets, including changes in the ROC curve, which collectively illustrate the model's performance evolution.
In Fig. 8, the ROC curves for the Kvasir and ETIS datasets are displayed, showing AUC values of 0.97 and 0.99, respectively. These high AUC values indicate that the models trained on these datasets perform exceptionally well in distinguishing between positive and negative instances. The AUC of 0.97 for Kvasir suggests strong discriminatory power, while the AUC of 0.99 for ETIS suggests an even higher level of accuracy in polyp detection and classification. These results demonstrate the effectiveness of the models in accurately predicting polyp presence, making them valuable tools for diagnostic and screening purposes.
Table 4 provides a detailed comparison of the AdaptUNet model's performance on the Kvasir and ETIS datasets, benchmarked against several other models, including UNet, SFA, PraNet, UACANet-L, SANet, MSNet, EU-Net, BDG-Net, and MFBGR. Focusing on the Kvasir dataset, the Dice score for our method is lower than those of the other methods mentioned, likely due to the limited number of images (only 100) and the unbalanced nature of the dataset. However, as evidenced by our results on the ETIS-Larib dataset, where a larger number of images were available, our polyp segmentation model performs better when the sample size is increased. This suggests that with a more extensive and balanced dataset, our method has the potential to achieve Dice scores comparable to or even surpassing those of the compared methods.
Turning attention to the ETIS dataset, the AdaptUNet model outperformed all other models, with a Dice coefficient of 0.8075 and an IoU score of 0.7215. Furthermore, the high Balanced Accuracy score of 0.9687 indicates exceptional overall accuracy, thereby further confirming the model's effectiveness and reliability. These results collectively underscore the potential of the AdaptUNet model for achieving highly accurate and effective polyp segmentation across diverse datasets. Further enhancements and applications of this model may yield significant advancements in polyp detection and diagnosis, thereby enriching the body of research in this field. Fig. 9 showcases the evaluation of the model on the Kvasir dataset, providing a concise overview of the model's segmentation capabilities. Fig. 10 displays evaluation results from testing the model on the ETIS dataset.
Conclusion
This study introduces AdaptUNet, a deep learning architecture tailored for the precise detection and segmentation of colorectal polyps in colonoscopy images. Its exceptional performance, demonstrated through high Dice, IoU, and Balanced Accuracy scores on benchmark datasets including CVC-300, Kvasir-SEG, and ETIS-LaribDB, underscores its superiority over existing models. The model's seamless integration of attention mechanisms, skip connections, wavelet transformations, and data augmentation techniques delivers a well-rounded solution for efficient and accurate polyp segmentation. The efficiency of AdaptUNet is marked by its accurate segmentation and its ease of implementation, offering a distinct advantage: compared to existing models, AdaptUNet offers a straightforward implementation process without compromising efficiency or accuracy. A notable feature of this study is the innovative application of wavelet transformation, which ensures comprehensive feature extraction. The model's performance is further optimized by strategically employing the cyclic learning rate (CLR) and the Dice binary cross-entropy (BCE) loss function. Demonstrating its robustness, AdaptUNet exhibits strong performance across various datasets, a testament to its effective generalization capabilities. The meticulous selection and tuning of hyperparameters, as detailed in Table 1, contribute to the model's superior results in polyp segmentation. The significance of this research lies in its contribution to the field of automated disease detection, specifically in the context of colorectal polyps. The model's performance and potential for real-world application highlight its importance in advancing colorectal cancer screening and improving patient outcomes. Further research and validation can be conducted to assess its performance in clinical settings and facilitate its integration into medical practice. In conclusion, AdaptUNet presents a promising new direction for detecting and segmenting colorectal polyps. Its distinctive features and strong performance serve as a valuable guide for researchers seeking to tailor polyp segmentation tasks to specific requirements and datasets. This research paves the way for future advancements in automated disease detection, emphasizing the necessity of continuous innovation in this critical field.
Fig. 2. Visual comparison of original and augmented images and masks.
Fig. 3. Comparison of images before and after wavelet transformation.
Table 1
Hyperparameter table of the suggested AdaptUNet model.
Table 3
Ablation study on the CVC-300 dataset.
Table 4
Evaluation metrics of AdaptUNet on the Kvasir and ETIS-Larib datasets. | 2024-06-28T15:20:34.818Z | 2024-06-26T00:00:00.000 | {
"year": 2024,
"sha1": "cf0d0ea6c688e28b4366daf7493991cac2415890",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.heliyon.2024.e33655",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dec6d927fb87c82173b3a27fc5bab51d0f4c5776",
"s2fieldsofstudy": [
"Computer Science",
"Medicine"
],
"extfieldsofstudy": []
} |
53998002 | pes2o/s2orc | v3-fos-license | Speciation of Coagulase Negative Staphylococcal Isolates from Clinically Significant Specimens and their Antibiogram
Coagulase-negative staphylococci (CONS) are the indigenous flora of the human skin and mucous membranes. They are usually contaminants when isolated from clinical specimens, but CONS have become important nosocomial pathogens, accounting for 9% of all nosocomial infections. These infections are difficult to treat because of the risk factors and the multiple drug resistance of these organisms. A total of 74 clinically significant CONS were isolated from pus, urine, blood, sputum and ear swabs. These isolates were initially identified by colony morphology, Gram staining, catalase test, slide coagulase test and tube coagulase test. After confirming the isolates as CONS, species-level identification was performed by simple, inexpensive conventional methods, and antibiotic sensitivity testing was carried out by the Kirby-Bauer disc diffusion method. Among the 74 isolates, S. epidermidis was the most common species (29.7%), followed by S. hemolyticus (20.2%), S. saprophyticus (14.8%), S. lugdunensis (13.5%), S. capitis (10.8%), S. cohni (4%), S. schleiferi (2.7%), and S. xylosus and S. hominis (1.3% each). Most of the isolates showed resistance to penicillin (83.7%), followed by ampicillin (77%), erythromycin (54%), cotrimoxazole (27%), gentamicin (16%), amikacin (12%), and piperacillin/tazobactam and linezolid (3% each). The increased recognition of the pathogenic potential of CONS and the emergence of drug resistance among them demonstrate the need to adopt simple laboratory methods to identify the species and determine antibiotic resistance patterns; this will help clinicians in treating infections caused by CONS.
Introduction
Coagulase-negative staphylococci (CONS) are normal skin flora that have emerged as predominant pathogens in hospital-acquired infections (Usha et al., 2013), often associated with implanted devices such as joint prostheses, shunts and intravascular catheters, especially in very young, old and immunocompromised patients. These infections are difficult to treat because of the risk factors and the multiple-drug-resistant nature of the organisms. Hence, this study was undertaken to identify and speciate CONS and determine their antibiogram (Golia et al., 2015). The main objectives of this study were to speciate CONS isolates from various clinical samples and to determine the antibiotic susceptibility pattern of CONS by the Kirby-Bauer disc diffusion method.
Materials and Methods
The study was carried out in the Department of Microbiology, VIMS, Ballari, over a period of 6 months, from January 2016 to June 2016.
A total of 74 clinically significant CONS were isolated from pus, urine, blood, sputum and ear swabs. The isolates were identified as CONS by colony morphology, Gram stain, catalase test and coagulase tests (slide and tube). Bacitracin susceptibility testing was performed to exclude Micrococcus and Stomatococcus species.
After confirming the isolates as CONS, species-level identification was performed by simple, inexpensive conventional methods. These included the ornithine decarboxylase test, nitrate reduction test, Voges-Proskauer test, urease test, and fermentation of sucrose, lactose, maltose, mannose, mannitol, xylose and trehalose. Susceptibility to novobiocin and polymyxin B was tested as per the standard procedure (Washington et al.; Patricia; Collee et al.), and antibiotic sensitivity testing was carried out by the Kirby-Bauer disc diffusion method according to CLSI guidelines (2016).
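Interpretation of disc diffusion results reduces to comparing each measured inhibition-zone diameter against the CLSI breakpoints for that drug-organism pair. A minimal sketch of the logic (the breakpoint numbers below are placeholders for illustration only; real values must be read from the current CLSI tables):

```python
# Hypothetical breakpoints in mm: (resistant_max, susceptible_min).
# Actual values must come from the applicable CLSI document.
BREAKPOINTS = {
    "penicillin": (28, 29),
    "erythromycin": (13, 23),
}

def interpret_zone(drug, zone_mm):
    """Classify a Kirby-Bauer inhibition zone as Resistant, Intermediate or Susceptible."""
    r_max, s_min = BREAKPOINTS[drug]
    if zone_mm <= r_max:
        return "Resistant"
    if zone_mm >= s_min:
        return "Susceptible"
    return "Intermediate"

print(interpret_zone("erythromycin", 18))   # -> Intermediate
```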
Identification of CONS to the species level followed a simple scheme (Washington et al.; Patricia; Collee et al.).
Although the pathogenic role of CONS is now well established, the clinical significance of the various species is still being defined. We should not disregard any of these organisms until their clinical significance is resolved. In the hospital microbiology laboratory, non-aureus isolates are often simply reported as CONS without speciation. Because of the increasing pathogenicity of these organisms, CONS should be identified to the species level by simple, reliable and preferably inexpensive methods (Usha et al., 2013).
In the present study, CONS infection was more common in males (48%) than in females (36%), which is similar to other studies: males (59%) and females (41%) as shown by Usha et al. (2013), and males (64.9%) and females (47%) as shown by Golia et al. (2015). When the different age groups were compared, the most commonly affected age group was between 30 and 50 years.
In the present study, most CONS isolates showed resistance to penicillin (83.7%), followed by ampicillin (77%), erythromycin (54%), cotrimoxazole (27%), gentamicin (16%), amikacin (12%), and piperacillin/tazobactam and linezolid (3% each); no resistance to vancomycin was seen. The high resistance to penicillin (83.7%) correlates with the studies by Golia et al. (2015) (95.5%) and Rajyalakshmi Gunti et al. (2016) (90%). Our study also agrees with a study by Shubhra Singh, Gopa Banerjee et al., in which antibiotic susceptibility testing showed maximum resistance to penicillin and ampicillin (80%), with 38% of strains resistant to oxacillin. CONS, primarily S. epidermidis and S. hemolyticus, are often resistant to multiple antibiotics, and glycopeptides have been considered the drugs of choice for the management of infections caused by these organisms (Silvia et al., 1992); however, a study by Del' Alamo, Cereda et al. (1999) showed that glycopeptide resistance is emerging among CONS isolates.
CONS have become a major cause of nosocomial bloodstream infections as a result of the combination of the increased use of intravascular devices and an increased number of hospitalized immunocompromised patients. S. epidermidis and S. hemolyticus are the most commonly identified isolates, and CONS are often resistant to multiple antibiotics (penicillin, ampicillin, oxacillin, etc.); glycopeptides have been considered the drugs of choice for the management of infections caused by these organisms.
The increased pathogenic potential and multiple drug resistance demonstrate the need to adopt simple, reliable and inexpensive methods to identify the species and determine antibiotic resistance patterns. This will help clinicians in treating infections caused by CONS. | 2019-04-02T13:08:13.769Z | 2017-06-20T00:00:00.000 | {
"year": 2017,
"sha1": "c0073890ad1067da80ab6a8192ed10f7b01cae11",
"oa_license": null,
"oa_url": "https://www.ijcmas.com/6-6-2017/Mariraj%20Jeer,%20et%20al.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "b65216cd387ac3e71526848d5cc4fadf1ed894e9",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
234401544 | pes2o/s2orc | v3-fos-license | Geochemical Characteristics of the Lower Cretaceous Hengtongshan Formation in the Tonghua Basin, Northeast China: Implications for Depositional Environment and Shale Oil Potential Evaluation
The Tonghua Basin in Northeast China potentially contains shale oil and gas resources, but the exploration and development of these resources has been limited. The Sankeyushu depression represents the sedimentary center of the Tonghua Basin, and a large thickness of shale, the Hengtongshan Formation, was deposited in this depression. Exploratory engineering discoveries in recent years have confirmed that the Hengtongshan Formation has the potential to produce oil and gas. A series of methods, including inorganic and organic geochemistry and organic petrology, have been used to study the source material, organic matter maturity, depositional environment and oil-generating potential of the Hengtongshan Formation. Investigation of drill core samples has revealed that the Hengtongshan Formation in the Sankeyushu depression is mainly composed of black shale, with a small amount of plant fossils and thin volcanic rocks, and the content of brittle minerals (quartz + carbonate minerals) is high. The provenance of organic matter in the source rocks in the Hengtongshan Formation is a mixture of aquatic organisms (algae and bacteria) and higher plants, and there may be some marine organic components present in some strata. The organic matter was deposited and preserved in a saline reducing environment. Volcanism may have promoted the formation of a reducing environment by stratification of the lake bottom water, and the lake may have experienced a short-term marine ingression with the increase in the salinity. The maturity of the organic matter in all the source rocks in the Hengtongshan Formation is relatively high, and hydrocarbons have been generated. Some source rocks may have been affected by volcanism, and the organic matter in these rocks is overmature. In terms of the shale oil resource potential, the second member of the Hengtongshan Formation is obviously superior to the other members, with an average total organic carbon (TOC) of 1.37% and an average hydrogen index (HI) of 560.93 mg HC/g TOC. Most of the samples can be classified as good to very good source rocks with good resource potential. The second member can be regarded as a potential production stratum. According to the results of geochemical analysis and observations of shale oil and natural gas during drilling, it is predicted that the shale oil is present in the form of a self-sourced reservoir, but the migration range of natural gas is likely relatively large.
Introduction
The Tonghua Basin, located in Northeastern China and bordering North Korea, has an area of approximately 1417.5 km² and is an important oil and gas basin. The study area is located in the Sankeyushu depression within the Tonghua Basin (Figure 1). There are three sets of potential source rocks in the Tonghua Basin, namely the Yingzuilazi Formation, the Xiahuapidianzi Formation and the Hengtongshan Formation. The Sankeyushu depression is the sedimentary center of the basin and is considered to be the area with the greatest potential for oil and gas development in the Tonghua Basin. The Hengtongshan Formation in the Sankeyushu depression features a large section of mud shale [1]. In 2016, the China Geological Survey and Jilin University carried out geological surveys and drilling research in the Sankeyushu depression, during which shale oil and natural gas were found.
At present, organic geochemistry is widely used in the study of organic matter sources, source rock types and hydrocarbon generation processes [2], but data on the organic geochemistry and petroleum geological characteristics of the Tonghua Basin are limited. Previous studies include Wang Yubo's geophysical exploration of natural resources in the Tonghua Basin in 2011 and Han Xinpeng's petroleum geology research published in 2013 [3,4]. Dandan et al. [1] and Shan et al. [5] studied hydrocarbon accumulation in the basin. These studies do not provide detailed descriptions or comprehensive oil and gas resource evaluations of the source rock characteristics of the Hengtongshan Formation in the Tonghua Basin. At present, the Tonghua Basin is in the primary stage of exploration and development, and there is a lack of systematic research on the main hydrocarbon source rocks and the oil and gas resources in the basin. Detailed research is needed to guide follow-up exploration and development work to reduce the risk of economic investment. Although the Tonghua Basin is an important petroliferous basin in Northeast China, the number of studies on its petroleum geology is limited, which constrains the study of the entire East Asian basin group and the evaluation of oil and gas resources. In this study, a highly detailed organic geochemistry study of the Hengtongshan Formation is carried out on the basis of samples obtained through drilling in the Sankeyushu depression of the Tonghua Basin, and the characteristics of the source rock, the maturity, the sedimentary environment and the shale oil resource potential of the Hengtongshan Formation are studied in detail. In addition, through the study of the source material, the sedimentary environment and the shale oil and gas resource potential of the Lower Cretaceous Hengtongshan Formation in the Tonghua Basin, this study provides a reference for the study of the petroleum geological characteristics of the eastern basin group around the Songliao Basin.
Geological Background
The basement of the Tonghua Basin includes two sets of Archean to Proterozoic metamorphic rock series, and the sedimentary cover strata mainly include the Middle Jurassic Houjiatun Formation and the Lower Cretaceous Guosong, Yingzuilazi, Linzitou, Xiahuapidianzi, Hengtongshan and Heiweizi Formations [6] (Figures 1 and 2). The target layer studied in this paper is the Lower Cretaceous Hengtongshan Formation, which was mainly formed under humid and warm climate conditions after volcanic activity [7,8]. The basin experienced a late Middle Jurassic rifting stage (deposition of the Houjiatun Formation), an early Early Cretaceous volcanic filling stage (Changliucun Formation and Guosong Formation), a late Early Cretaceous posteruption depositional stage (Yingzuilazi Formation, Xiahuapidianzi Formation and Hengtongshan Formation) and a late Cretaceous volcanic filling stage (Sankeyushu Formation) [9,10]. The fault structures in the basin are mainly developed in the NE and NW directions. The NE faults control the distribution of Mesozoic strata and the development of the basin, while NW faults extend to the surface and produce visible faults on the surface (Figure 1). According to the basement structure and the sedimentary characteristics of the caprock, the basin can be divided into three parts: the Sankeyushu depression, the Sanyuanpu depression and the Yingebu uplift [4].
The Hengtongshan Formation is widely developed in the Tonghua Basin, and its sedimentary center is in the Sankeyushu depression, where its thickness is more than 600 m. According to the characteristics of surface outcrops and previously obtained drill cores, the Hengtongshan Formation is mainly composed of continental deposits that formed in a deep to semi-deep lake. The outer edges of the strata feature delta plain facies, delta-front facies and lake bottom fan facies, and the unit is underlain and overlain by volcanic units (Figures 2 and 3). The source rocks in the Hengtongshan Formation are mainly black shale, followed by grey black shale, mud shale and silty shale, and contain a small amount of thin-layer volcanic rocks.
Samples
In this study, a geological survey well was drilled in the Sankeyushu depression in the thickest section of the Hengtongshan Formation, and all the strata were cored. In the process of drilling, gas logging was carried out to obtain data on the organic gas composition of the natural gas in the formation. After the geological study of all the cores, the next step involved sampling the cores.
In total, 13 representative samples were selected for observation of the lithology, kerogen organic matter and vitrinite reflectance (Ro), and the whole-rock mineralogy was analyzed. Moreover, 102 source rock samples were selected for organic element analysis and rock pyrolysis experiments, and 32 source rock samples were extracted. The 32 organic extracts and six shale oil samples were quantitatively separated into organic compound-class fractions, and the 38 samples were analyzed by GC-MS for saturated hydrocarbon biomarker compounds.
Experimental Methods
The organic petrographic study was conducted in the Key Laboratory of Oil Shale and Coexistent Energy Minerals of Jilin Province. To prepare polished sections for the organic petrography study, a small rock sample (20 × 20 × 10 mm) was first embedded in a hardening agent mixture, and the polyester resin was allowed to set slowly. After preliminary treatment, the samples were polished with a diamond grinding plate, silicon carbide paper and alumina powder. The petrographic examination used a high-power optical microscope equipped with a photometry system and a fluorescent lamp. Mean random vitrinite reflectance measurements (Ro, %) and organic maceral observations were conducted on 24 samples following ASTM Standard D7708-14 (2014).
Using a Philip-PW 1830/40-CuKα radiation device (1.54 Å, 35 kV and 35 mA), the collected mineral data were qualitatively and quantitatively processed by X-ray diffraction (XRD) analysis at the Center for Scientific Test of Jilin University (China).
The rock samples were crushed to 200 mesh powder, the carbonate in each sample was removed with excess dilute hydrochloric acid, and the carbonate content was calculated. The total organic carbon (TOC), total nitrogen (TN) and total sulfur (TS) contents of all samples were determined with a vario PYRO cube elemental analyzer and corrected for the carbonate content at the Key Laboratory of Oil Shale and Coexistent Energy Minerals of Jilin Province, following standard GBT-19145-2003. The instrument used for the rock pyrolysis analysis was a Rock-Eval 6 analyzer at the Center for Scientific Test of Jilin University (China). The parameters S1, S2, S3 and Tmax were measured, and the hydrogen index (HI), oxygen index (OI) and production index (PI) were calculated accordingly. In these methods, the amount of pyrolysate released from kerogen is normalized to TOC to give the HI. The temperature of maximum hydrocarbon generation (Tmax) is defined as the temperature at the maximum of the S2 peak and serves as a maturity indicator.
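The derived indices follow the standard Rock-Eval definitions (HI = S2/TOC × 100, OI = S3/TOC × 100, PI = S1/(S1 + S2)). A minimal sketch of the calculation; the example S2 and TOC values loosely follow the second-member averages reported below, while S1 and S3 are invented for illustration:

```python
def rock_eval_indices(s1, s2, s3, toc):
    """Standard Rock-Eval derived indices.

    s1, s2 in mg HC/g rock; s3 in mg CO2/g rock; toc in wt%.
    """
    hi = s2 / toc * 100   # hydrogen index, mg HC/g TOC
    oi = s3 / toc * 100   # oxygen index, mg CO2/g TOC
    pi = s1 / (s1 + s2)   # production index, dimensionless
    return hi, oi, pi

hi, oi, pi = rock_eval_indices(s1=0.5, s2=7.34, s3=0.4, toc=1.37)
print(f"HI={hi:.0f} mg HC/g TOC, OI={oi:.0f} mg CO2/g TOC, PI={pi:.2f}")
```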
For the organic geochemical analysis, the thirty-eight selected samples were extracted using a Soxhlet apparatus with a mixture of dichloromethane and methanol (93:7). Elemental sulfur was removed by the addition of activated copper turnings. Asphaltenes were precipitated from a hexane-dichloromethane solution (80:1) and separated by centrifugation. The extracted organic matter was separated into saturated hydrocarbons, aromatic hydrocarbons and nitrogen, sulfur or oxygen (NSO) compounds by liquid chromatography. At the Key Laboratory of Oil Shale and Coexistent Energy Minerals of Jilin Province, the saturated fractions were dissolved in petroleum ether and analyzed by Agilent gas chromatography-mass spectrometry with an HP-5 MS elastic quartz capillary GC column (60 m × 0.25 mm × 0.25 µm). The temperature was ramped from 40 to 300 °C at a rate of 4 °C/min and then held for 30 min at 300 °C. The hydrocarbon fractions were subsequently analyzed with an Agilent 5975B inert MSD mass spectrometer with the gas chromatograph attached directly to the ion source (70 eV ionization voltage, 100 mA filament emission current, 230 °C interface temperature). Compound separation was performed on a column similar to the GC column. The saturated hydrocarbon ratios and the relative abundances of steranes and triterpanes were calculated from the integrated peak areas of the relevant ion chromatograms (m/z 191 and m/z 217). The aromatic hydrocarbon ratios were also calculated from the integrated peak areas of the relevant ion chromatograms, following standard GBT-30431-2013.
During drilling, anomalous occurrences of natural gas were detected by gas logging. The gas returned to the surface from the bit at the bottom of the well was separated from the mud by a degasser. The separated gases then entered an SK-3Q04 hydrogen flame chromatograph. The total amount of natural gas and the C1-C5 components were analyzed with the chromatographic analyzer every 90 s, following standard SY/T5191-93 (technical standard for gas chromatographic logging tools).
Lithology
Observation of the drill cores of the Hengtongshan Formation in the Tonghua Basin shows that the source rocks of the formation are mainly black shale, grey shale and grey silty shale. The maximum thickness of a single shale bed is 27.36 m, the cumulative thickness of shale is 191 m, and a small amount of plant fossils and thin tuff layers can be seen in the shale (Figure 4).
Minerals
The mineral and carbonate content data of the Hengtongshan Formation can be seen in Table 1. The carbonate content is between 1.91 and 65.17 wt%, with an average of 11.65 wt%. The average carbonate contents of the first member, second member and third member of the Hengtongshan Formation are 18.58, 12.27 and 8.96 wt%, respectively. The overall trend is decreasing, but the differences are small. However, through targeted core observations, some samples with more than 50% carbonate content were found to be associated with carbonate dikes, resulting in high measured carbonate values. According to the XRD test, the mineral composition of the Hengtongshan Formation is primarily clay (average 38.62%), followed by quartz (average 35.54%, Table 1). The content of plagioclase is higher than that of K-feldspar, which is probably due to the loss of K-feldspar in the process of terrigenous clastic migration and diagenesis, and some samples feature values of less than 1% [11]. The calcite content is relatively high as a whole, with an average of 13.92%, which is highly consistent with the carbonate content observed in the previous test (Tables 1 and 2). TIC: total inorganic carbon; TOC: total organic carbon; TN: total nitrogen; TS: total sulfur; S1: free hydrocarbons; S2: hydrocarbons generated during Rock Eval pyrolysis; S3: carbon dioxide content in rock pyrolysis; S1 + S2: generative potential; PI: production index; Tmax: temperature with maximum hydrocarbon generation; HI: hydrogen index; OI: Oxygen index; Ro: Vitrinite reflectance.
Bulk Geochemical Parameters
The abundance and quality of organic matter in the source rock determine the hydrocarbon generation capacity. For the quantitative characterization of the organic matter abundance, many parameters have been proposed, such as TOC, extractable organic matter (EOM) and the HI, while TOC refers to the mass percentage of organic carbon in a unit mass of rock [12,13]. According to the test, the TOC content of the first member is 0.49~1.25 wt% (average 1.06 wt%), the TOC content of the second member is 0.63~3.49 wt% (average 1.38 wt%), and the TOC content of the third member is 0.14~1.47 wt% (average 0.79 wt%). The TOC value measured in this study is the residual organic matter abundance of the source rock after massive hydrocarbon expulsion. Tissot [12] stated that the organic matter abundance index of a source rock cannot be applied to a source rock with a high maturity because the initial TOC value may be much higher than the TOC measured at present [12].
TN is an important index in the study of the characteristics of organic matter in sedimentary rocks [14]. The TN content of the first member is 0.26-0.59 wt% (average 0.43 wt%), that of the second member is 0.08-0.6 wt% (average 0.27 wt%), and that of the third member is 0.05-0.26 wt% (average 0.12 wt%). The average TS values of the first, second and third members of the Hengtongshan Formation are 0.12, 0.12 and 0.06 wt%, respectively ( Table 2).
The rock pyrolysis data for the Hengtongshan Formation are shown in Table 2. The analysis shows that the HI values of the three members of the Hengtongshan Formation differ considerably; the HI values of the second member are very high overall, ranging from 238.07 to 1276.47 mg HC/g TOC with an average of 560.93 mg HC/g TOC. In contrast, the average HI value of the first and third members is 247.01 mg HC/g TOC. The Tmax values of most samples are between 445 and 460 °C, indicating a high maturity, but the samples from depths of 233-238 m and 287-294 m all have Tmax values of approximately 480 °C. Combined with the geological conditions of the Tonghua Basin and the core observations, this may be due to short-term volcanism that enhanced the thermal maturity of the source rock [9,10], resulting in overmaturity of the source rocks in these two sections.
Bitumen Bulk Geochemical Parameters
Through the organic matter extraction experiment on the 32 source rock samples of the Hengtongshan Formation, it was found that the EOM content was relatively high, at 512.15-3589.26 mg HC/g TOC, with an average of 1490.80 mg HC/g TOC, indicating good organic abundance. The results of the EOM component separation experiment on the source rock samples and six shale oil samples are shown in Table 3 and Figure 5. The EOM from the shale samples has the same characteristics as that from the shale oil samples. In these samples, the content of saturated hydrocarbons is the highest. The content of saturated hydrocarbons extracted from the shale samples is 40.01-72.3%, with an average of 55.34%, and the content of saturated hydrocarbons in the shale oil samples is 54.71-82.84%, with an average of 69.8%. The average content of aromatic hydrocarbons in the EOM from the shale samples is almost equal to the average content of NSO, with values of 21.91% and 22.75%, respectively. The average content of aromatic hydrocarbons in the shale oil samples is higher than the average content of NSO, with values of 20.99% and 9.21%, respectively. These indexes are widely used in hydrocarbon potential evaluations of source rocks [15]. Therefore, most of the samples from the Tonghua Basin appear to be a rich source of oil and have the potential to produce a wealth of naphthenic oil.
Organic Geochemistry
Biomarker compounds are widely used in organic geochemistry research and are mainly used to study the source of organic matter in sediments, organic matter maturity and paleo-depositional environments [16]. This study investigated the organic matter composition and biomarker compound characteristics of organic extracts and shale oil, such as n-alkane, isoprenoid, sterane and triterpane, from the Hengtongshan Formation shale (Figures 6 and 7; Table 4). By comparing the components of the shale extract and shale oil samples with the GC-MS test results, it was found that the organic matter composition and relative content characteristics of the two were similar. The core and drilling data reveal that the layers of shale oil production coincide with the source rock layer, so it is possible that the hydrocarbon material generated by kerogen has not undergone migration or has migrated only a short distance and remains stored directly in the source rock.
n-Alkanes and Isoprenoids
The distribution of n-alkanes in the Hengtongshan Formation samples shows that low to medium molecular weight compounds (n-C13 to n-C23) are dominant, and there are significant waxy alkanes (>n-C23), mainly with odd carbon numbers, yielding a medium carbon preference index (CPI) (Figure 6; Table 4). The distribution of n-alkanes in the samples is similar to that of the abundant long-chain alkanes present in algae and plants [17,18]. The same conclusion was reached based on kerogen microscopy observations of the source rock in the Hengtongshan Formation, because a large amount of alginite and amorphous organic matter can be seen under the microscope (Figure 4).
In biomarker studies, pristane (Pr) and phytane (Ph) are often detected, and their concentrations are very important for environmental research [19,20]. The acyclic isoprenoids Pr and Ph were found in all samples from the Hengtongshan Formation (Figure 6a-d). The ratio of Pr to Ph has been widely used as a redox condition parameter in sedimentary environment research [21]. Previous studies have shown that Pr/Ph ratios below 0.6 indicate anoxic conditions, ratios between 0.6 and 3.0 indicate sub-oxic conditions, and ratios above 3.0 indicate oxic conditions [22]. For the 38 samples from the source rocks of the Hengtongshan Formation, the Pr/Ph ratios are 0.54 to 0.92 (average 0.76), which indicates that the organic matter was deposited under sub-oxic conditions, which are conducive to the preservation of organic matter.
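The classification applied here is a simple threshold rule; a minimal sketch (the 0.6 and 3.0 cutoffs follow the criteria of [22] as quoted above):

```python
def redox_from_pr_ph(pr_ph):
    """Classify depositional redox conditions from the Pr/Ph ratio."""
    if pr_ph < 0.6:
        return "anoxic"
    if pr_ph <= 3.0:
        return "sub-oxic"
    return "oxic"

# The average Hengtongshan Formation value falls in the sub-oxic field:
print(redox_from_pr_ph(0.76))   # -> sub-oxic
```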
The relationship between Pr/n-C17 and Ph/n-C18 is often used to study the source of organic matter and the sedimentary paleoenvironment of source rocks [23-26]. In this study, the Pr/n-C17 and Ph/n-C18 values are 0.23-0.64 (average 0.50) and 0.16-0.93 (average 0.54), respectively. The overall numerical fluctuation is relatively small, and the data indicate that the source rock of the Hengtongshan Formation was deposited under reducing conditions and that the organic matter was from a mixed source [27,28] (Figure 7; Table 3); however, additional analysis is needed to determine the specific components.
Hopanoids and Steroids
In this study, the m/z 191 mass chromatograms of the samples are broadly similar. The relative abundance of hopanes is higher than that of tricyclic terpanes (Figure 8a-d). The C29/C30 17α(H)-hopane ratios of the samples are between 0.38 and 0.54 (average 0.46; Table 3). Previous studies have suggested that this is a typical feature of clastic-source sedimentary rocks [29,30].
Because the molecular energy of Ts (18α(H)-trisnorhopane) is higher than that of Tm (17α(H)-trisnorhopane), the stability of Ts is higher than that of Tm, and the Ts/(Ts + Tm) value increases gradually with increasing maturity [31]. The Ts/(Ts + Tm) values of the samples are between 0.46 and 0.89 (average 0.70), reflecting the high thermal maturity of the source rock (Table 4). Previous studies have also shown that with increasing organic matter maturity, the C30 moretane/C30 hopane ratio (C30M/C30H) decreases from 0.8 to below 0.1 [16]. The C30M/C30H values of these samples are between 0.11 and 0.19 (average 0.14), reflecting a high thermal maturity (Table 4).
The C31R/C30H hopane ratio is used to distinguish marine and lacustrine environments. The C31R/C30H hopane ratios of organic matter in marine environments are generally higher than 0.25, but the ratios of organic matter in lake environments are generally lower [16]. The C31R/C30H hopane ratios of the extracts from the Hengtongshan Formation range from 0.15 to 0.33 (average 0.22). Because the lithofacies of the Hengtongshan Formation samples show the characteristics of continental lakes, the lake environment may have been invaded by sea water. The cross plot of the C31R/C30H hopane ratio and the Pr/Ph ratio (Figure 9) supports this conclusion. Gammacerane was initially considered to be a high-salinity indicator [32]. It is also thought to be related to increases in salinity in marine and lacustrine environments [21]. The gammacerane index (GI = gammacerane/αβ C30 hopane) of the extracted samples of the Hengtongshan Formation reflects a high-salinity, reducing environment [16] (Figures 8a-d and 10). In this study, the m/z 217 mass chromatograms of saturated hydrocarbons in the 38 samples from the Hengtongshan Formation show a series of diasteranes and steranes. Among these compounds are the conventional C27, C28 and C29 steranes, which are related to the source of the parent materials of the organic matter (Figures 8e-h and 11; Table 4). Previous studies have found that the conventional C27, C28 and C29 steranes are each specific to different sources of organic matter, and the relative proportions among them can be used to study the contribution of various types of organic matter in shale oil [33]. The relative proportions of the C27, C28 and C29 steranes (Figure 11) were calculated for the Hengtongshan Formation. Previous studies on the sources of the C27, C28 and C29 steranes have found that C27 steranes come from zooplankton and red algae; C28 steranes are believed to come from diatoms, green algae and higher plants; and C29 steranes come from higher plants, some brown algae and green algae [18,33-35]. According to the relative contents of the C27, C28 and C29 steranes, it is speculated that the samples in this study are dominated by plankton and higher plants and may contain some marine organic matter.
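The relative proportions referred to above are simply each sterane's share of the summed C27 + C28 + C29 peak areas from the m/z 217 chromatogram. A minimal sketch (the peak areas below are hypothetical, for illustration only):

```python
def sterane_proportions(c27, c28, c29):
    """Relative C27/C28/C29 regular sterane proportions (%) from peak areas."""
    total = c27 + c28 + c29
    return tuple(round(100 * x / total, 1) for x in (c27, c28, c29))

# Hypothetical integrated m/z 217 peak areas:
p27, p28, p29 = sterane_proportions(250.0, 180.0, 520.0)
print(f"C27 {p27}% / C28 {p28}% / C29 {p29}%")
# A C29 share above 50% would point to a strong higher-plant contribution.
```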
Natural Gas Geochemical Characteristics
The natural gas in the Tonghua Basin and the Hengtongshan Formation in the Sankeyushu depression is dominated by hydrocarbons, with a low non-hydrocarbon content ( Table 5). The non-hydrocarbon gas is present at proportions of between 0.04% and 7.29% (average 2.95%). The hydrocarbon gas is mainly methane, and the content of gases above ethane is low. The methane content is 90.06-98.53% (average 94.36%), the ethane content is 0.75-5.52% (average 2.47%), and the propane content is 0.02-1.08% (average 0.16%).
Oil Source Input
The TOC/TN ratio in sedimentary rocks is an effective indicator for analyzing the source of organic matter [14]. The original protein content of microalgae in lakes is higher than that of terrestrial higher plants, so the TOC/TN ratio of aquatic phytoplankton is relatively low, usually from 4 to 10, whereas the TOC/TN ratio of terrestrial plants is often greater than 10 [36]. Therefore, in the study of lake environments, the TOC/TN ratio is often used to determine whether the organic matter in the sediment originates from microorganisms or higher plants [36].
In this study, systematic testing and calculation of the TOC/TN values of the source rocks in the Hengtongshan Formation showed that the TOC/TN values of samples between 107.84 and 184.3 m and between 264.2 and 292.16 m are greater than 10, with a maximum of 21.00, whereas the values in the other sections are less than 10 (Table 2). Therefore, higher plants may have made a greater contribution to the organic matter in the sediments of the first two sections. However, in terrestrial formations, organic nitrogen is easily mineralized or oxidized, and exchangeable fixed ammonium in terrestrial sediments can account for about 10% of the organic nitrogen [37], which can lower the apparent TOC/TN ratio. Therefore, determining the source of organic matter from this value alone is very uncertain, so in this study, biomarker compounds were used to further constrain the source of the organic matter.
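The source-discrimination rule described above is a simple ratio threshold; a minimal sketch (the 4-10 and >10 cutoffs follow [36] as quoted above, and the example values are the second-member average TOC and TN reported earlier):

```python
def organic_source_from_c_n(toc, tn):
    """Qualitative organic-matter source from the TOC/TN ratio:
    4-10 suggests aquatic phytoplankton, >10 terrestrial higher plants."""
    ratio = toc / tn
    if ratio > 10:
        return ratio, "terrestrial higher-plant dominated"
    if ratio >= 4:
        return ratio, "aquatic phytoplankton dominated"
    return ratio, "below typical phytoplankton range (possible fixed ammonium)"

print(organic_source_from_c_n(toc=1.38, tn=0.27))   # ratio ~5.1 -> aquatic
```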
Previous studies have shown that lower aquatic organisms are rich in lipid compounds whose n-alkanes are mainly composed of low carbon number components, while higher plants are often rich in wax and mainly composed of high carbon number components [16]. Therefore, the n-alkane distribution characteristics of the saturated hydrocarbons can reflect the type of organic matter in source rocks. It is generally believed that for lower aquatic organisms the main carbon peak is below n-C23 and the distribution is unimodal, that the main carbon peak of higher plants is usually above n-C25, and that the main carbon peak of mixed organic matter lies between n-C23 and n-C25 [16]. In this study, it was found that the main carbon peaks of the n-alkanes in both the extracts and the shale oil in the m/z 85 chromatograms are below n-C23, which indicates that the contribution of lower aquatic organisms to the organic matter is larger than that of higher plants. Steroids and hopanoids represent eukaryotic and prokaryotic contributions to the source material of sediments, respectively [27], and the steroid/hopanoid ratios in this study are 0.12-0.51 (average 0.26). Low steroid/hopanoid ratios are characteristic of lacustrine or special bacteria-influenced facies [18,38], indicating strong microbial activity [16].
According to the Pr/n-C17 and Ph/n-C18 results, the source of the organic matter in the sediments of the Hengtongshan Formation may have been mixed (Figure 7). This is further confirmed by the relative proportions of the C27, C28 and C29 steranes and the C27/C29 ratio, which indicate that the organic matter in the source rock of the Hengtongshan Formation came from a mixed source, with plankton and higher plants the main contributors in many samples. Combined with the trend between sample depth and the TOC/TN value, the relative proportions of C29 steranes in the 107.84-184.3 m and 235.69-292.16 m depth ranges are greater than 50%, indicating that higher plants are dominant in the oil-prone organic matter of these intervals, which is consistent with the conclusion based on the TOC/TN values. As a whole, the source material of the source rock of the Hengtongshan Formation is a mixture of aquatic organisms (algae and bacteria) and higher plants and may include marine biological components.
Maturity
In this study, a variety of methods were used to evaluate the thermal maturity of the source rock, and a detailed maturity evaluation was carried out on the Hengtongshan Formation. According to the vitrinite reflectance (Ro, %) data of 21 samples obtained from each source rock horizon, the maturity of the samples is between 0.97 and 1.53 (average 1.10; Table 2), which indicates that the samples have entered the oil generation window and the kerogen has begun to produce hydrocarbon compounds [18,39]. Based on the Tmax values from a large number of samples, it was found that Tmax, which is indicative of thermal maturity, is highly correlated with Ro (Figure 12). The sections between 233 and 238 m and between 287 and 294 m are of high maturity, which indicates that, compared with other sections, these sections may have experienced more intense thermal alteration, because the kerogen has a high maturity. Combined with the geological background of the Tonghua Basin, volcanic activity may have caused these sections to have especially high thermal maturities; however, in the depositional stage of the Hengtongshan Formation, there was no large-scale, long-term, strong volcanic activity [7,8]. Thus, local volcanic activity only caused a stronger thermal effect on some strata. The biomarkers that can reflect the maturity of the organic matter in the source rock extracts and shale oil, such as the CPI, Ts/(Ts + Tm), C30M/C30H and C32 17α(H),21β(H)-hopane ratios, were studied. These biomarker parameters all show that the source rocks in the Hengtongshan Formation have a high thermal maturity. The biomarker parameters of the shale oil and the shale extracts are similar, and the vertical variation throughout the formation is not significant, indicating that hydrocarbons have been expelled from the source rock. Figures 13 and 14 also confirm that the organic matter has reached the mature to highly mature levels and that the source rock has crossed the hydrocarbon generation threshold. Kerogen generates hydrocarbons as the thermal maturity increases, and the generated hydrocarbon gas and liquid hydrocarbons can have different molecular weights. Furthermore, hydrocarbon macromolecules can be heated further in the process of pyrolysis to produce solid bitumen and a large amount of pyrolysis gas [18]. The natural gas in the Hengtongshan Formation is mainly composed of methane with small amounts of ethane and propane. The methane content accounts for 90.06-98.53%, with an average of 94.36%; the gas is not dry gas. According to the cross plot of ln(C1/C2) and ln(C2/C3), the gas may be a mixture of kerogen degradation gas and oil cracking gas, and most of the gas was produced when the Ro exceeded 1.5% (Figure 15).
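The two logarithmic ratios used in the discrimination plot are straightforward to compute from the gas-logging composition data. A minimal sketch (the example values are the average methane, ethane and propane percentages quoted above):

```python
import math

def gas_maturity_ratios(c1, c2, c3):
    """ln(C1/C2) and ln(C2/C3) from volume (or molar) percentages,
    as used in kerogen-degradation vs. oil-cracking gas discrimination plots."""
    return math.log(c1 / c2), math.log(c2 / c3)

ln_c1_c2, ln_c2_c3 = gas_maturity_ratios(c1=94.36, c2=2.47, c3=0.16)
print(f"ln(C1/C2)={ln_c1_c2:.2f}, ln(C2/C3)={ln_c2_c3:.2f}")
```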
Depositional Environment
The Hengtongshan Formation was deposited in the warm and moist Early Cretaceous, and there was no large-scale tectonic activity in the basin during this period [1,5,9,10]. According to previous studies, during the depositional period of the Hengtongshan Formation, only occasional small-scale volcanic activity occurred near the faults in the Tonghua Basin, resulting in a basin-wide, relatively quiescent stage, and sediments were deposited under a humid and warm climate [7,37]. The volcanic activity released a large amount of gas at the bottom of the lake basin, causing stratification of water masses. Reducing environments provide good conditions for preserving organic matter in the sediment, and volcanic activity can release some nutrients, stimulating the growth of microorganisms at the bottom of the lake and, therefore, increasing the accumulation of organic matter.
Through the plots of Pr/Ph and Pr/n-C17 vs. Ph/n-C18, it can be shown that the organic matter was deposited under anoxic conditions, and the variation in the TOC value in the vertical direction demonstrates that the organic matter has been enriched and preserved (Figure 1). Figures 9 and 10 also show that the aquatic environment of the Hengtongshan Formation during the depositional period was saline, and some strata show the characteristics of marine strata. In the discussion of the organic matter, it is also suspected that the source rocks of the Hengtongshan Formation include marine organic matter.
Previous researchers have also used the TS content in sedimentary strata to determine whether the formation environment was a marine environment or a freshwater lake environment. A high TS content often indicates that the sediments formed in a marine environment, while a low sulfur content often indicates that the sediments formed in a continental lake environment [41,42]. The TS values of the samples in this study show that most of the samples formed in a continental lake sedimentary environment and that a small number of samples formed in a marine sedimentary environment ( Figure 16). Combined with the observations of the sedimentary facies, these findings also indicate a continental origin, and the inclusion of marine organic matter components may be the result of a short-term transgression.
Resource Potential of the Shale Oil
A large number of TOC and pyrolysis evaluation tests on the source rocks in the Hengtongshan Formation show that the TOC contents of the three members of the formation are quite different. The hydrocarbon-generating ability of the source rocks in the Hengtongshan Formation was studied using the pyrolysis data. Previous studies have shown that samples with HI values higher than 300 mg HC/g TOC and high TOC contents (>1 wt%) are oil-prone [2]. The second member is clearly better than the first and third members, with an average TOC of 1.37% and an average S2 of 7.34 mg HC/g rock. Most of the samples are classified as good to very good source rocks [43] (Table 2, Figure 17). The relationships among the TOC content, EOM and hydrocarbon yield (Table 3 and Figure 18) show that the sediments in the second member are classified as good to very good source rocks with good oil-generating potential [43]. This paper discusses the maturity of the source rocks in the Hengtongshan Formation and concludes that the source rocks have crossed the threshold of hydrocarbon generation. Table 1 shows that the content of brittle minerals (quartz + carbonate) in the formation is relatively high, with an average of 50.61%, which is also favorable for oil exploitation. The comprehensive analysis shows that the Hengtongshan Formation may be rich in shale oil and natural gas, and the second member can be regarded as a potential production stratum.
Conclusions
The Hengtongshan Formation in the Tonghua Basin in Northeastern China is an important shale oil- and gas-bearing horizon. In this paper, through systematic sampling of the Hengtongshan Formation in the Sankeyushu depression of the Tonghua Basin, observations of the petrology and organic petrology and a study of the geochemical characteristics, we found that the Hengtongshan Formation has the following characteristics.
The Hengtongshan Formation is mainly composed of black shale and contains a small amount of plant fossils and thin-layered volcanic rocks. The content of brittle minerals (quartz + carbonate) is high, with an average of 50.61%, and the content of clay is 38.62%. The organic matter of the Hengtongshan Formation has the characteristics of lacustrine or bacteria-influenced facies. On the whole, the source material of the source rock of the Hengtongshan Formation is a mixture of aquatic organisms (algae and bacteria) and higher plants, and some geochemical parameters also indicate that the organic matter in the formation may include some marine organic matter.
The organic matter in the Hengtongshan Formation was deposited in a saline, reducing environment, which was conducive to the enrichment and preservation of the organic matter. Volcanism may have promoted the formation of this reducing environment by stratifying the water layers at the bottom of the lake, while a short-term transgression may have increased the salinity of the lake water.
The maturity of the organic matter in all the source rocks in the Hengtongshan Formation is relatively high and has entered the oil generation window, resulting in the generation of a large amount of hydrocarbons. During the drilling process, shale oil and natural gas were also encountered. The source rocks at 233-238 m and 287-294 m are overmature, which may be due to volcanic activity enhancing thermal maturation in some strata.
Through a large number of TOC and pyrolysis evaluation tests, we found that the second member of the formation is obviously superior to the first and third members, with an average TOC of 1.37% and an average HI of 560.93 mg HC/g TOC. Most of the samples can be categorized as good to very good source rocks with a good resource potential. Based on the shale oil and natural gas encountered in the process of drilling, it is predicted that the shale oil is present in the form of a self-sourced reservoir, but the migration range of natural gas may have been relatively large. The comprehensive analysis shows that the Hengtongshan Formation may be rich in shale oil and natural gas, and the second member can be regarded as a potential production horizon. | 2020-12-24T09:08:22.027Z | 2020-12-22T00:00:00.000 | {
"year": 2020,
"sha1": "a5378d96fae09430954c89ff65727cebf2e62f7f",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3417/11/1/23/pdf?version=1609834796",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "5be575e602d3b450807d7278c72b5b4b449e8c6c",
"s2fieldsofstudy": [
"Geology",
"Environmental Science"
],
"extfieldsofstudy": [
"Geology"
]
} |
233704650 | pes2o/s2orc | v3-fos-license | Maximizing the Economic Benefits of Hydraulic Fracturing while Mitigating the Risk to Human Health in Colorado
Executive Summary: Over the past two decades, hydraulic fracturing, commonly known as 'fracking', in Colorado has increased crude oil and natural gas production exponentially. This growth continues to benefit the Colorado economy and employs hundreds of thousands of residents across the state (U.S. EIA 2020a; Hochman 2019). However, despite these economic benefits, studies over the past ten years demonstrate that fracking presents serious environmental and human health risks, particularly to those who live near wells. Hydraulically fractured wells can release toxic hydrocarbons into the atmosphere as well as contaminate land and water supplies, which puts Colorado residents living within 1 kilometer of these wells at an increased risk for adverse dermal and upper respiratory symptoms (Jackson et al. 2014, 347-348; Rabinowitz et al. 2015, 25). Additionally, people living within 1/2 mile of a well are at an increased risk for developing cancer (McKenzie et al. 2012, 85). Colorado Senate Bill 19-181 responded to this issue in 2019 by delegating regulation of fracking to local jurisdictions (SB 19-181). However, this legislation attempts to solve a statewide issue at a local level and is therefore an inconsistent and insufficient response. For this reason, I urge the Colorado state government to reclaim the authority to regulate fracking and implement a policy to ban all wells within 3000 feet of residential areas and schools, effective 2 years from date of passage. This measure will reduce residents' exposure to toxic chemicals and their risk of disease while allowing the fracking industry to continue to benefit the Colorado economy and energy sector.
I. Introduction and background
In the modern world, energy is everything. While renewable sources comprise 20% of the energy produced in the United States, most energy comes from non-renewable sources, including coal, natural gas, and nuclear power (U.S. EIA 2020c). Since fossil fuels are a limited resource and crucial to energy production, it is important to maximize the amount that can be extracted from the Earth. Hydraulic fracturing is a drilling technique that allows for the extraction of natural gas and oil from impermeable rocks that previously prevented the extraction of fossil fuels. The fracking process works by injecting a liquid consisting of water and chemical additives, such as lubricants and sand to prop open fractures, approximately 1-3 km into the earth via a vertical well (Bazant et al. 2014, 101010-1). The liquid is then channeled through horizontal boreholes using high-power pumps at the surface. The pressure created by the surface pumps in the horizontal boreholes causes fractures to form in the impermeable rock layer, allowing for the extraction of trapped, previously inaccessible oil and natural gas (Bazant et al. 2014, 101010-1-101010-2). The combination of horizontal drilling and hydraulic fracturing technology has greatly increased the yield of oil and gas from the Earth's subsurface (U.S. EIA 2020a; U.S. EIA 2020b).
i. History
Fracking originated in the Civil War era, when Col. Edward Roberts discovered that detonating torpedoes in artesian oil wells increased their yield (AOGHS 2020). In the 1940s, the use of high-pressure liquid blasts replaced explosives as the primary means for fracturing subterranean rock (Denchak 2019). Modern fracking began in the 1990s when Nick Steinsberger utilized a "slick-water frack" for the first time. Steinsberger's liquid consisted of water, sand, and other chemicals that proved to be more effective in extracting fossil fuels (Gold 2018).
ii. Associated hazards
While fracking has increased fossil fuel extraction, it is a controversial technique in the U.S. and around the world due to its effects on the environment and human health (Davis & Fisk 2014, 6-13; Aczel et al. 2018, 431-438; Thomas et al. 2017, 4-13). One primary concern is the release of greenhouse gases and volatile organic compounds (VOCs) into the atmosphere. The presence of VOCs in the troposphere causes the abundance of ozone, a greenhouse gas, to increase. For this reason, VOCs can be considered an indirect greenhouse gas and thus contribute to climate change (Albritton et al. 2001, 44). Additionally, these substances are generally toxic. High concentrations of VOCs, such as benzene and toluene, were reported less than 500 feet downwind from well pads in an air sampling study conducted in Garfield County, Colorado (Jackson et al. 2014, 347-348). Furthermore, in the oil and gas-rich Denver basin region, 70% of the total VOC emissions stem from approximately 6,000 oil and condensate storage tanks that contain liquid hydrocarbons produced by natural gas wells (Jackson et al. 2014, 348; Snyder et al. 2017; U.S. EIA 2013). These toxic emissions are detrimental to the health of nearby Colorado residents. The other major issue with fracking is its potential to contaminate groundwater. When exposed to high pressure, wells can release contaminated fracking liquids into local groundwater sources (John 2020, 3). Furthermore, the chemical additives that make the water-based solution effective for fracturing shale are toxic and will contaminate groundwater if they are not filtered out (Kharaka et al. 2013, 420-421). Wells with poor integrity due to hydraulic fracturing and poor cement casings also have the potential to leak gaseous toxins and hazardous chemicals into groundwater sources (Jackson et al. 2014, 337-338). This potential leakage is hazardous to Colorado residents who rely on local water wells for drinking water. Finally, hydraulic fracturing requires millions of gallons of freshwater, straining limited water resources in the western U.S. (Goodwin et al. 2014, 5993-5995). To exploit the benefits of fracking while mitigating the risks, evidence-based, scientifically sound policies are necessary.
i. Current regulation
Fracking regulation in Colorado is administered primarily by the Colorado Oil & Gas Conservation Commission (COGCC). Recently, Colorado Senate Bill 19-181 rebranded the COGCC as the chief state agency regulating oil and natural gas production, with the intent of protecting public health and the environment. SB 19-181 also delegated more authority to local jurisdictions, allowing them to place additional regulations on the surface impacts of drilling. Some potential restrictions that could be imposed by local governments include increased setback distances for surface drilling, more stringent inspection standards, higher inspection fees, drilling moratoriums, and an increased authority to withhold or delay drilling permits (Little & Prulhiere 2019, 120-121; Avery 2019). SB 19-181 is controversial since Colorado oil production accounts for over 89,000 jobs and adds more than $13.5 billion to the state GDP, and the law is viewed by some as a threat to the oil and gas sector of the state economy (Orlando 2019, 12, 14; Clark 2019).
ii. Oil and natural gas sector growth
In 2008, Colorado began to employ shale oil extraction, allowing drillers to extract oil trapped in shale reserves, including the Mancos Shale in the Piceance Basin, the second-largest deposit in the U.S. (Orlando 2019, 9-10; Loris & Tubb 2016). The yearly production of crude oil since 2005 follows an exponential trendline, as depicted in Fig. 1 (U.S. EIA 2020a). Since fracking is generally used to extract shale oil and gas, this exponential growth can be attributed to the growing prevalence of fracking in Colorado (Rosa et al. 2018, 745). If the current growth trend continues, Colorado will produce approximately 316,000 barrels of oil in 2022, a 66% increase from 2019 (U.S. EIA 2020a). Therefore, any substantial restriction on fracking would inhibit oil production and adversely affect employment and energy production revenue. Additionally, restrictions on fracking could limit natural gas production, which has grown steadily over the past 15 years.
ii. Human health risks
Despite the potential detrimental economic consequences of limiting fracking, increased regulation may be warranted due to the harmful effects it can have on the environment and the health of people who live near wells. According to a 2015 study, people living less than 1 kilometer, or approximately 3000 feet, from a fracking well are four times as likely to show adverse dermal symptoms, such as rashes and dermatitis, and three times as likely to show adverse upper respiratory symptoms than those living more than 2 kilometers from a well (Rabinowitz et al. 2015, 25). Furthermore, people living within ½ mile of a well are 66% more likely to be diagnosed with cancer due to the increased concentration of harmful hydrocarbons released into the atmosphere by fracking wells (McKenzie et al. 2012, 85). This is likely because 75% of the chemicals identified in natural gas operations can affect the skin and respiratory system, and 25% are carcinogens (Colborn et al. 2011, 1039, 1046). In addition to negative health implications, fracking has also been shown to cause an increased crime rate in communities surrounding wells, likely associated with a rise in young male laborers (Bartik et al. 2019, 134). For these reasons, policymakers must weigh the consequences to human health and the environment against the economic benefits of fracking.
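As a side note on the production forecast quoted in subsection ii (oil and natural gas sector growth) above: a projection of that kind follows from fitting an exponential trendline to the annual production series and extrapolating. The sketch below shows the procedure; the production values are illustrative placeholders, not the EIA series cited in the text.

```python
import numpy as np

# Illustrative annual crude-oil production values (arbitrary units);
# placeholders only, not the EIA data cited above.
years = np.array([2015, 2016, 2017, 2018, 2019])
prod = np.array([125.0, 118.0, 130.0, 165.0, 190.0])

# Fit an exponential trend y = exp(a*t + b) by regressing log(y) on t.
a, b = np.polyfit(years, np.log(prod), 1)
projection_2022 = np.exp(a * 2022 + b)
print(f"Projected 2022 production: {projection_2022:.0f} (same units as input)")
```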
III. Courses of action
It is important to note that all three presented options would replace Colorado SB 19-181, returning jurisdiction over fracking to the state government.
i. Option 1: Ban hydraulic fracturing in Colorado
The first course of action to address fracking in Colorado is a statewide ban and mandated plugging of all wells due to the potential health and environmental risks. This option has several benefits. First, it eliminates the human health risks associated with fracking. Over 255,000 Colorado residents live within 3000 feet of a fracking well and are thus at an increased risk for health issues (Czolowski et al. 2017, Table S3). By banning fracking, residents would no longer be subject to harmful pollutants or the increased crime rate in their communities correlated with the presence of fracking wells (Bartik et al. 2019, 134). Moreover, the plugging of all wells would greatly reduce methane emissions in Colorado (Marcacci 2018).
One enormous cost associated with this option is the loss of fracking employment, which would put hundreds of thousands of Colorado residents out of work and therefore negatively impact the economic well-being of their communities. Colorado produces over twice as much energy as it consumes annually, 91% of which comes from oil and natural gas (U.S. EIA 2020b). As fracking is the main source of these natural resources, banning the technology would prevent Colorado from being energy independent and force the state to import natural resources to meet its energy needs. Mitigating these effects would require a massive investment in alternative energy sources.
ii. Option 2: Increase crack density of hydraulically fractured wells
The second option is a law mandating that drillers increase the density of cracks within hydraulically fractured wells while decreasing the size of the cracks. Creating a "densely fractured volume with many narrow cracks", as opposed to fewer wide cracks, reduces contaminated water flowback (Bazant et al. 2014, 101010-9). This course of action would thus reduce the amount of contaminated water flowback resulting from hydraulic fracturing while also decreasing the likelihood that contaminated water and hazardous gases would reach groundwater sources via subsurface cracks. One cost of this option is the need to restructure fracking wells. Without government aid, this cost would fall on fracking companies, thus inhibiting their growth. This option could also be detrimental to the environment as it would require the use of more contaminated water to refracture existing wells. Finally, this option does little to address gas emissions and their effects on the health of nearby residents.
iii. Option 3: Increase surface setback distance requirement for hydraulically fractured wells
The third option is a law mandating that all fracking wells be set back at least 3000 feet from residential areas and schools, effective 2 years from passage of the legislation. Additionally, this option would enact an immediate ban on wells within 1000 feet of residential areas and schools and mandate that all abandoned wells be plugged. Like the first proposed action, this option reduces the risk of health problems due to fracking for over 255,000 Colorado residents (Czolowski et al. 2017, Table S3). However, this law would not dismantle the fracking industry. Instead, it would allow drillers to construct new wells away from residential areas while allotting 2 years for existing close-proximity wells between 1000 and 3000 feet from residential areas and schools to continue operation before being shut down. Therefore, local economies could continue to take advantage of the economic benefits of fracking, enabling Colorado to maintain a large degree of energy independence.
Additionally, the 2,556 active wells within 1000 feet of schools and residential areas make up less than 7% of Colorado's hydraulically fractured wells, and ceasing their operation would result in only a 1% subsurface resource loss (Finley 2019; FracTracker 2020; Ericson et al. 2019). Given that these close-proximity wells pose the greatest risk to human health, the immediate ban on them would be worth the minimal resource loss. The main cost associated with this law is the large proportion of the subsurface that would become inaccessible, thus limiting producers' ability to extract oil and gas (Ericson et al. 2019). However, by increasing the subsurface horizontal drilling distance of wells to three miles, the estimated resource loss is only 25% (Ericson et al. 2019, Appendix Figure S3). This resource loss could be further mitigated by increasing the horizontal drilling distance beyond three miles, as there is no current regulatory limit on subsurface drilling. The subsurface extension of wells is a viable option with the potential to yield oil and natural gas safely, as some horizontal boreholes are already several kilometers long (Bazant 2014, 101010-1). Nevertheless, after the 2-year transition period expires, oil and gas production could decrease and cause the Colorado economy to shrink.
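As a rough consistency check on these figures, the snippet below back-calculates the implied statewide well count; treating "less than 7%" as exactly 7% is an assumption made purely for illustration.

```python
close_wells = 2556   # active wells within 1000 ft of homes and schools
max_share = 0.07     # "less than 7%" of all hydraulically fractured wells

# If 2,556 wells are at most 7% of the total, the total is at least:
total_wells = close_wells / max_share
print(f"Implied statewide count: more than {total_wells:,.0f} wells")
```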
IV. Policy recommendation
The most advantageous solution to the issue of fracking in Colorado is a policy that would reclaim the Colorado state government's jurisdiction over fracking and implement a ban on all fracking within 3000 feet of schools and residential areas, fully effective 2 years after its passage. The law would also prohibit wells within 1000 feet of residential areas and schools, effective immediately, and ensure that all abandoned wells are plugged. This policy is beneficial for several reasons. First, and most importantly, it addresses the human health risks posed by fracking. For people living more than 3000 feet from a fracking well, there is no statistically significant relationship between well proximity and an increase in dermal, respiratory, cardiac, gastrointestinal, or neurological symptoms (Rabinowitz et al. 2015, 25). Therefore, by mandating a well setback distance of 3000 feet from schools and residential areas, 255,000 Colorado residents would be at a lower risk for various health conditions. Furthermore, the policy would mitigate this risk while appealing to those who oppose increased fracking regulation, without sacrificing the economic benefits.
Banning wells within 1000 feet of residential areas and schools immediately is an acceptable loss, as the ban would protect those at highest risk of inhaling toxic hydrocarbons. The policy would also alleviate a potentially significant disruption to the oil and natural gas industries that would result from a sudden ban on all wells within 3000 feet of residential areas. Instead, it will allow companies 2 years to increase the subsurface horizontal drilling distance in their wells to compensate for the resources lost from their close-proximity wells. Finally, this policy would likely gain a substantial level of public support, as a 2020 survey showed that 70% of Colorado residents believe that the effect of drilling on local land, air, and water is at least a "somewhat serious" problem (Colorado College 2020).
V. Conclusion
Fracking and the extent to which the technology is regulated present a complex issue in Colorado. The energy industry is an important part of Colorado's economic welfare, and what some would consider 'overregulation' of fracking would arguably be detrimental to economic growth. In 2019, oil and gas production in Colorado contributed over $30 billion to the state GDP and accounted for over 200,000 jobs (Hochman 2019). As depicted in Fig. 2, the onset of the 2020 COVID-19 pandemic caused crude oil prices to plummet. This sudden decrease, however, is likely only a temporary roadblock for the Colorado oil and natural gas industry, which is projected to rebound in the next 1-2 years and thrive for decades to come (Collins 2020).
While the increase in oil and natural gas production due to fracking benefits workers and the Colorado economy, the health risks that it poses are detrimental to hundreds of thousands of Colorado residents. Recent studies have determined that people living within 3000 feet of a hydraulically fractured well are at a much higher risk for adverse dermal and upper respiratory symptoms (Rabinowitz et al. 2015, 25). Living in proximity to wells has also been linked to an increased risk of cancer (McKenzie et al. 2012, 85). For these reasons, the Colorado state government should move swiftly to reclaim its authority to regulate fracking and implement a ban on fracking wells within 3000 feet of residential areas and schools. This policy will allow the oil and natural gas industry to continue to prosper while protecting the health and welfare of all Colorado citizens.
Figure 2. West Texas Intermediate (WTI) crude oil price since 2018. The price declined sharply following the onset of the COVID-19 pandemic but has since begun to recover as the demand for crude oil continues to increase with the reopening of the worldwide economy (U.S. EIA 2021).
"year": 2021,
"sha1": "c0f7b80bf94505d3cb8a51729dcf872dd33e3324",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.sciencepolicyjournal.org/uploads/5/4/3/4/5434385/boyle_jpsg_18.1.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "88ac3e90099d9b8d08cf721a7010bc981c5a0af2",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
Potential Domoic Acid (Neurotoxin) Producing Phytoplankton Pseudonitzschia in Indian Coastal Water - Do We Need To Care?
Pseudonitzschia species are potential producers of domoic acid, a neurotoxin, and were responsible for the infamous human HAB intoxication at Prince Edward Island, Canada, in 1987, which cost human lives. Global warming has widened the reach of these phytoplankton species, and they are now being reported in Indian waters. We report the occurrence of ten Pseudonitzschia species in the northwestern coastal waters of India, of which seven are potential domoic acid producers. The question arises: are we vulnerable to HAB (domoic acid) toxicity? In light of the observation that Pseudonitzschia dominates the coastal waters of Veraval and that its abundance is increasing with time, the present study briefly synthesizes the available information on the ecology, metabolism, and other relevant knowledge related to domoic acid production by Pseudonitzschia, and assesses the risk of human intoxication through trend and forecast analysis, together with the possible preventive measures required.
Introduction
About half of the total human population lives within 200 km of coastlines (Creel, 2003). According to World Bank data on the blue economy (Blue economy, 2022), oceanic resources contribute about $1.5 trillion annually to the world economy, apart from providing much-needed nutritional benefits to human beings. Fish and other edible oceanic natural resources are among the cheapest and best sources of protein for a large population of coastal communities. According to fish landing data from the Central Marine Research Institute, India produces 3,820,207 tons of fish resources annually (Annual data, 2022), which supports a large coastal community; the health of such aquatic ecosystems is therefore of critical importance to human health as well as economic wellbeing. One of the emerging challenges faced by coastal ecosystems is the outbreak of harmful algal blooms (HABs) of phytoplankton, which sometimes release toxic secondary metabolites into the water, thereby posing a serious threat to the coastal ecosystem, biodiversity and human health. Phytoplankton are microscopic, free-floating, pigment-containing organisms that perform the ecological function of fixing non-bioavailable energy (light energy) into a bioavailable chemical form (carbohydrate), much like the plants of terrestrial ecosystems. Apart from the normal function of photosynthesis, a few phytoplankton species produce harmful secondary metabolic products which may damage the ecosystem and/or cause toxicity to other organisms. According to a study by D'Silva et al. (2012), out of the 5000 species of phytoplankton existing in the world's oceans, 7% are reported to form blooms, including diverse phytoplankton groups such as diatoms, dinoflagellates, raphidophytes, prymnesiophytes and silicophytes. Among the bloom-forming phytoplankton, only 2% were reported to be toxic, and dinoflagellates contributed 75% of the toxic bloom-forming phytoplankton. Thirty-nine phytoplankton species are documented as responsible for the formation of algal blooms in Indian waters. In the context of Indian waters, the earliest event of toxicity was reported by Hornell in 1908, when massive fish mortality occurred due to an unidentified flagellate bloom. Subsequently, many toxic and nontoxic bloom events have been reported from Indian waters. Appendix 1 summarizes the occurrence of bloom events in Indian waters, their causative phytoplankton species and their effects. Considering the western coastline of India, most bloom events have been reported from Kerala, followed by Mangalore and Goa (D'Silva et al., 2012). There has been no report of a Pseudonitzschia bloom or its toxicity from any part of Indian waters to date. This study reports the first occurrence of a Pseudonitzschia (diatom) bloom along the northwestern coastline (Veraval coast, Gujarat) of India. A mini review of bloom events in Indian waters, the biology of Pseudonitzschia, and the eco-physiology of domoic acid production is also briefly presented in this paper. Pseudonitzschia produces a neurotoxin called domoic acid, and various species of Pseudonitzschia are well known for causing toxic blooms in many parts of the globe (Lelong et al., 2012). The study assesses the potential risk of toxicity through trend and forecast analysis.
Mini Review - Pseudonitzschia and domoic acid
Toxin-producing Pseudonitzschia species
Morphologically, Pseudonitzschia is a diatom with a lanceolate or spindle-shaped frustule in its valve view. Frustules overlap at the valve ends to form stepped chains (Tomas 2007; Lelong et al., 2012). Similar to many other diatoms, Pseudonitzschia often blooms in upwelling zones where light and nutrient conditions are most favourable. Trainer et al. (2008) observed that Pseudonitzschia blooms are common along the western coasts of continents due to upwelling and water circulation produced by sea floor and coastal topographies. A distribution map documenting the worldwide occurrence of various species of Pseudonitzschia was given by Lelong et al. (2012).
History of bloom events
The first incidence of harmful effects produced by Pseudonitzschia was reported from the eastern coast of Prince Edward Island, Canada, in 1987, when many people fell ill and three died due to the consumption of intoxicated mussels (Mytilus edulis) (Todd, 1990). Following this, toxic blooms of P. multiseries were observed for three years along the eastern coast of Canada (Smith et al., 1990a; Villac et al., 1993). Since then, Pseudonitzschia blooms and the production of domoic acid have been observed in many other parts of the world (Shumway 1989; Chang 1993; Hallegraeff 1994; Miguez et al. 1996; Beltran et al. 1997; Lelong et al., 2012).
Symptoms of domoic acid toxicity
The most unusual and serious symptom of domoic acid poisoning in humans is loss of short-term memory, and in some cases it causes permanent damage to the brain. The poison is destroyed neither by cooking nor by freezing. Apart from Pseudonitzschia, domoic acid is released by a variety of other macro- and microalgae. The poison was first discovered in 1958 in a red alga (Chondria armata) called 'doumoi' in Japanese, and was used as a folk medicine in Japan to treat intestinal pinworm infestations (Villac et al., 1993; Mos, 2001).
General properties of domoic acid
Domoic acid is a water-soluble and heat-stable amino acid (Hatfield et al., 1995; Leira et al., 1998), but it is degraded by bacterial action (Windust, 1992; Stewart et al., 1998) and by exposure to UV radiation (Wright et al., 1990; Bates et al., 2003). It is also known to chelate iron, and thus iron is considered a potential degrading agent (Rue and Bruland, 2001). Limitation of nutrients such as phosphorus and silica, and of metals such as iron and copper, has been shown to promote toxin production (Pan et al., 1996b; Bates, 1998; Wells et al., 2005). Increased levels of salinity, dissolved inorganic carbon and urea have also been related to enhanced toxin production.
Materials and Methods
Water samples were collected to study phytoplankton diversity from the coastal waters of Veraval, Gujarat, which is part of the northeastern Arabian Sea. The study was conducted from March 2003 to April 2017. The geographical distribution of all the sampling sites is shown in Figure 1. The relative abundance of phytoplankton cells was calculated to study the extent of dominance of Pseudonitzschia cells over other phytoplankton communities. Decadal trend analysis and forecast analysis for the next 10 years were carried out to understand the pattern of rise and to estimate the potential hazard posed by Pseudonitzschia cells. The threshold of bloom initiation and the level of risk were determined according to the method proposed by Siegel et al. (2002). For this, the mean value of the decadal phytoplankton cell counts was calculated, and a rise in Pseudonitzschia cell counts of more than 30% above the mean value was considered bloom initiation.
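A minimal sketch of the bloom-initiation screen described above is given below. The 30% criterion is interpreted here as counts exceeding the decadal mean by more than 30%, and the cell counts are hypothetical values, not the study's data.

```python
import numpy as np

def bloom_initiation(counts, rise_frac=0.30):
    """Flag bloom initiation: counts exceeding the long-term mean by
    more than rise_frac (30%), as the Siegel et al. (2002) criterion
    is interpreted here. Inputs are hypothetical cells/L values."""
    counts = np.asarray(counts, dtype=float)
    threshold = counts.mean() * (1.0 + rise_frac)
    return counts > threshold, threshold

flags, threshold = bloom_initiation([120, 150, 140, 160, 410, 380])
print(threshold, flags)  # onset flagged for the two elevated counts
```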
Results and discussion
Reporting
A study of the phytoplankton assemblage revealed that diatoms dominated the overall phytoplankton diversity (77.93%) in the coastal waters of Gujarat, which form part of the northeastern Arabian Sea.
Pseudonitzschia alone contributed 26% of the overall phytoplankton diversity in the study region.
In order to further understand the effect of seasonal changes on Pseudonitzschia dominance, the study period was temporally classified into fall inter-monsoon, winter monsoon and spring inter-monsoon seasons. Again, diatoms dominated in all the seasons studied, but Pseudonitzschia did not (Table 1). Although the abundance of Pseudonitzschia was significantly high in all the seasons, it did not dominate the overall diversity during the two inter-monsoon seasons. Pseudonitzschia dominated, with a relative abundance value of 0.26, exclusively in the winter monsoon season. As mentioned in the mini-review above, Pseudonitzschia grows best in upwelling zones and coastal waters where nutrient conditions are favourable for its growth and multiplication. During the winter monsoon, cooler water at the surface sinks towards the bottom, causing upwelling (Motwani et al., 2014). This upwelling water brings nutrients up from the bottom, turning the nutrient condition favourable for the growth of Pseudonitzschia. In the fall and spring inter-monsoon seasons, the hydrological condition is reversed and the water is stratified. Nutrient supply in stratified water is low, lowering the overall phytoplankton diversity and thereby the abundance of Pseudonitzschia as well. Despite this high abundance, there was no presence of toxin detected. The possibilities may be either that these Pseudonitzschia species were not producing toxins, or that the toxin produced was too low to cause any harmful effect and was assimilated into the aquatic system.
The decadal trend of Pseudonitzschia abundance showed that its abundance in the coastal waters of Gujarat was almost zero until 2004 (Figure 3). Later, after its introduction in the region, it increased in number continuously, such that it can now be considered an invasive species. In 2015, its cell count outgrew the overall phytoplankton diversity and formed a bloom following the occurrence of two extremely severe cyclones, Chapala and Megh, one after the other in October-November. Recalculated results also showed an increasing trend in Pseudonitzschia cell counts. Forecast analysis for the next 10 years showed that Pseudonitzschia is likely to increase (Figure 4) and is therefore a potential future hazard for the region. Veraval is a known fishing center and a busy shipping port. Apart from fishing and shipping activities, the shipbuilding industries of Veraval also influence the water chemistry of the region: a lot of organic matter and metal waste is disposed of into the adjacent coastal waters. The presence of the Somnath temple (an old Shiva temple of historical importance) makes Veraval an important pilgrimage and tourist center of the state. Commercial activities make Veraval prone to excessive eutrophication and pollution, causing an imbalance in the proportions of essential nutrients such as nitrogen, phosphorus and potassium. Such eutrophic conditions seemingly turned favourable for the growth of Pseudonitzschia and can act as a potential trigger for the release of domoic acid in the future. According to Bhat and Matondkar (2004), pollution and nutrient enrichment due to anthropogenic activities are the major factors that trigger and stimulate the growth of bloom-forming species. Both these triggering agents prevail in Veraval and are undoubtedly responsible for the continuously increasing abundance of Pseudonitzschia.
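The trend and 10-year forecast analysis can be reproduced in outline as below. The paper does not state its forecasting method, so a simple linear least-squares extrapolation is assumed here, and the abundance series is illustrative rather than the study's data.

```python
import numpy as np

# Hypothetical decadal abundance series (cells/L), 2005-2017;
# the study's own counts are not reproduced here.
years = np.arange(2005, 2018)
counts = np.array([0, 5, 12, 20, 35, 50, 80, 110, 150, 210, 400, 320, 380])

coeffs = np.polyfit(years, counts, 1)   # linear trend (assumed method)
future = np.arange(2018, 2028)          # 10-year forecast horizon
forecast = np.polyval(coeffs, future)
print(dict(zip(future.tolist(), forecast.round(0).tolist())))
```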
Risk Assessment
Results of the threshold method showed that Pseudonitzschia cell counts were higher than 50% of the mean value for overall phytoplankton diversity. Pseudonitzschia not only formed a bloom but also posed a major risk to human health and the fishing industry of the region under concern. The dominance of Pseudonitzschia and its increasing trend is definitely an alarming situation. To understand the potential risk due to the presence of Pseudonitzschia in alarmingly high numbers, the risk assessment method proposed by the Government of South Australia, Department of Education and Children's Services, was followed. The outcome of the risk assessment process showed that human health, the ecosystem and the fisheries industry are extremely vulnerable to possible catastrophic effects due to a Pseudonitzschia bloom in the coastal waters off Gujarat. Furthermore, no awareness or management strategies have been undertaken to handle the possible risk (Table 2). Although there have been no reports of any harm by toxin release in the study area, there is an extreme risk of domoic acid production in the near future, and by all means it can affect the commercial activities of the region as well as the health and well-being of the human population that consumes intoxicated fishes.
Need for research
The absence of intoxication reports does not ensure that there is no production of neurotoxins in the region. There may be neurotoxin production at concentrations too low to cause any mortality, or domoic acid may simply have remained unnoticed due to the lack of awareness and research in the region. Detailed studies on the presence of toxic Pseudonitzschia in Indian waters and their toxin production activities are required. The stimuli for toxin production in these species, and highly sensitive protocols to detect the presence of toxins at very low concentrations, are also subjects of research required in this region.
Management approaches
The occurrence of toxin-producing Pseudonitzschia species in significant proportion over other phytoplankton types calls for continuous monitoring of Pseudonitzschia abundance, water quality, nutrient levels and the anthropogenic factors that can trigger the release of domoic acid. Such monitoring can help protect consumers and the seafood industry. Fishermen and people inhabiting the coastal regions of Gujarat also need to be educated about the potential hazard. Eutrophication is one of the causes of Pseudonitzschia dominance in the region; sustainable approaches to control industrial and shipping activities need to be adopted to curb the approaching hazard.
Conclusion
The coastal waters of Veraval are dominated by a potential neurotoxin-producing phytoplankton, Pseudonitzschia. The abundance of Pseudonitzschia cells showed an increasing trend over a decadal time period. This increasing abundance is not only disturbing the present diversity but also posing a potential risk to human health, the ecosystem and the fisheries industry of the state. Presently there is no awareness of, or reporting on, the possible threat. There is indeed a need for awareness, research and management activities to handle the coming threat before it actually takes a toll on human health and finances.
Acknowledgement
The work was funded by DST-SERB, Govt. of India, under the NPDF scheme. We are grateful to Dr. Ashish Jha and Sreejith Thilakan from the Central Institute of Fisheries Technologies, Veraval, and the
"year": 2023,
"sha1": "40058ec65d81d1e2fe1eaa021c6dba6f58f950de",
"oa_license": "CCBYSA",
"oa_url": "https://www.ijfmr.com/papers/2023/3/4012.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "152aa4391e88fc74105da5b6fab8b0c4e274b04f",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
Syngas Production from Rubberwood Biomass in Downdraft Gasifier Combined with Wet Scrubbing: Investigation of Tar and Solid Residue
Production of synthesis gas by gasification is still a challenge due to the tar in the synthesis gas (syngas). This tar needs to be eliminated by appropriate methods before using the syngas as a fuel. Moreover, the solid residue after gasification also needs to be properly managed or destroyed. Therefore, the aim of this study was to investigate tar and solid residue generated by gasification of rubberwood biomass, including rubberwood chips (RWC), rubberwood pellets (RWP), rubberwood unburned char (UBC), and their blends, in a downdraft gasifier. Waste vegetable oil (WVO) and water were used as scrubbing media. Properties of the biomass samples were characterized by proximate and ultimate analysis, as well as for the higher heating value. The downdraft gasifier was operated at 850 °C and equivalence ratio (ER) of 0.25. The concentrations of tar in syngas both before and after passing through the wet scrubber were determined. Chemical compounds in the tar were analysed by GC-MS. The solid residue remaining after gasification was separated into biochar and ash. The biochar was characterized by CHNS/O analyser, FTIR, SEM, and for the iodine number. The compounds in ash were determined by XRF. The results show that biomass type and scrubbing media affected the tar removal efficiency. Scrubbing syngas with WVO had better tar removal efficiency than scrubbing with water. The highest tar removal efficiency with WVO was 82.16%. The tar sample consisted of complex compounds as indicated by GC-MS, and these compounds depended on type of biomass feedstock. The solid residue obtained after gasification process contained biochar (unburned carbon) and ash. Some biochars can be used as solid fuels, depending on carbon content and energy content. The biochar also had a highly porous structure based on SEM imaging, and a high iodine number (930-1134 mg/g). The biochar contained the functional groups OH, C-O, and C-H, as indicated by FTIR. CaO, K2O, SiO2, and MgO were the major components in ash. The spent WVO, biochar, and ash need to be properly managed or utilized for sustainable gasification operations, and these results support that pursuit.
Rapid depletion of fossil fuel resources and the environmental issues caused by their use are major global challenges. To overcome these issues, biofuels and bioenergy from biomasses, such as wood, residues and wastes, are of great importance in seeking to meet the increasing global energy demands (Abdullahi et al. 2017; Plante et al. 2019; Sikarwar et al. 2017). Moreover, biofuels and bioenergy are economically beneficial to peoples and countries (Amin et al. 2019). Biomass has several advantages among the alternative renewable energy resources that include wind, hydro and solar energy. This is because biomass can be grown, stored, managed and transported, in addition to being environmentally friendly (Kirubakaran et al. 2009; Palamanit et al. 2019; Shrivastava et al. 2020; Sikarwar et al. 2017; Werther et al. 2000; Yokoyama et al. 2000). Generally, most biomasses are lignocellulosic, containing hemicellulose, cellulose and lignin. The composition of lignocellulosic biomass varies with plant species, maturity stage of the plants, and growth environment and conditions (Anwar et al. 2014; Kumar et al. 2009). The application of lignocellulosic biomasses as biofuels for bioenergy is of high interest, partly because biomass is abundant globally (Müller-Langer & Kaltschmitt 2015; Nanda et al. 2014; Rajendran et al. 2017).
Thailand is a developing country that relies heavily on fossil fuels for energy, specifically on crude oil, natural gas, and coal. Most of these fuels need to be imported, contributing to low energy security and sustainability, in addition to the environmental impacts. Thus, Thailand has policies to improve energy security and sustainability and to reduce greenhouse gas emissions by increasing the utilization of renewable and alternative energy sources. Biomass is among the main candidate renewable energy resources in Thailand and can be applied as biofuel (Palamanit et al. 2019; Shrivastava et al. 2020). Thailand has high availability of biomass due to the agro-industries producing rice, palm oil, natural rubber, and rubberwood. These industries generate biomass as side products, in the forms of organic residues and wastes. In 2018, the plantation area of rubber trees in Thailand was 3.66 million hectares (Office of Agricultural Economics 2018; Rubber Authority of Thailand 2018), mostly in the southern region of the country. The replantation of rubber trees and the processing of rubberwood provide many types of biomass, such as rubber tree roots, stumps, branches, leaves, sawdust and bark. These can be applied as feedstocks for bioenergy production. Aside from rubberwood biomass in its raw or unprocessed forms, unburned char, i.e. the solid residual carbon that remains in the bottom ash after combustion of rubberwood in a fixed bed combustor, is also an alternative source of energy (James et al. 2012).
Conversion of biomass into biofuels can be performed by mechanical, thermochemical, biochemical, and combined processes (McKendry 2002a; Shrivastava et al. 2020; Tanger et al. 2013; Tursi 2019). Gasification is a type of thermochemical conversion that is widely applied to produce synthesis gas (syngas) or producer gas (McKendry 2002b; Molino et al. 2016; Sikarwar et al. 2017; Watson et al. 2018; Widjaya et al. 2018). Such gas is a high-grade fuel and is relatively easy to use for heat and power generation (Awais et al. 2018; Jia et al. 2017; Kate & Chaurasia 2018). Gasification is a partial oxidation process that can be applied to many feedstocks, such as biomass, coal, and plastic waste, for producing syngas. The main components in syngas are CO, CO2, CH4, and H2 (Abdoulmoumine et al. 2015; Lopez et al. 2018; Pereira et al. 2012; Rasmussen et al. 2020; Watson et al. 2018). The lignocellulosic components in biomass are decomposed to syngas and tar during gasification. Many factors influence the quality and quantity of syngas, as well as tar concentration and composition, for example biomass type and composition, type of gasifier, and operating conditions (i.e. temperature and equivalence ratio) (Farzad et al. 2016; Ku et al. 2017; Molino et al. 2016; Pereira et al. 2012; Susastriawan et al. 2017; Widjaya et al. 2018).
Tar is the main problem in commercial applications of syngas. Tar is normally a sticky, black substance containing complex compounds (Han & Kim 2008; Li & Suzuki 2009; Valderrama Rios et al. 2018). Tar needs to be eliminated from syngas before use as a gas fuel to prevent damage to pipes, blowers, burners or engines (Valderrama Rios et al. 2018). Many methods can be applied to reduce or eliminate tar in syngas, for example thermal treatment, wet scrubbing, bio-filtering, and catalytic treatment (Awais et al. 2018; Fuentes-Cano et al. 2020; Islam 2020; Monir et al. 2020; Nakamura et al. 2015; Shen & Yoshikawa 2013; Vecchione et al. 2016). These methods have varying tar removal efficiencies. However, wet scrubbing is currently widely applied as it is not too complex, has relatively high efficiency, and is cheap and easy to maintain. Nakamura et al. (2015) achieved 73.3% tar removal using bio-oil as absorbent. Another study reported 80.4% tar reduction with waste cooking oil (Tarnpradab et al. 2016). Awais et al. (2018) studied gasification of wood chips and corn cobs, and reported tar removal efficiencies in the range of 35-74% when using a cyclone, wet scrubber, filter and auxiliary filter. Recently, Monir et al. (2020) showed that tar reduction efficiency significantly increased from 81.87% to 97.25% when the thermal treatment temperature changed from 700 to 1000 °C. Fuentes-Cano et al. (2020) reported that in long tests the catalytic conversion of biomass-derived tars over char was 64-80%. Numerous physical, thermal and chemical processes have been applied for tar elimination, but the removal or elimination of tar from syngas generated from rubberwood biomass still needs studies clarifying alternative processes, absorbing media and cleaning methods (Kaewluan & Pipatmanomai 2011a, 2011b). Moreover, the solid residue remaining after gasification needs to be properly handled and managed. The recovery of solid residues can be beneficial for the economy and sustainability of the gasification process. Therefore, the objectives of this study were to investigate the tar concentration and composition in syngas obtained from gasification of rubberwood chips (RWC), rubberwood pellets (RWP), unburned char (UBC), and their blends in a downdraft gasifier; to determine the efficiency of tar removal using water or waste vegetable oil (WVO) as scrubbing media; and to characterize the solid residue remaining after gasification for potential further applications.
RAW MATERIAL PREPARATION
Isopropanol and acetone used in this study were analytical grade (purity >99%). WVO, ice and salt were purchased from a local market. The RWC was obtained from a factory that produces rubberwood chips, located in Khlongngae, Sadao District, Songkhla Province, Thailand. The size of RWC was about 20 × 35 mm. The UBC was separated from bottom ash by sieving, and the size of UBC was about 10-20 mm. The RWP was obtained from a wood pellet production factory located in Rattaphum District, Songkhla Province, Thailand. The RWP had 8 mm diameter and 20-40 mm length. These biomass samples were dried in a solar greenhouse dryer to reduce the moisture content. The blended samples were prepared by mixing UBC and RWC, or UBC and RWP, both in a 50:50 (wt. %) blend ratio. The prepared samples were kept in airtight bags until use in experiments. The waste vegetable oil (WVO) was filtered to remove suspended particles and was well stirred before use. The overall scheme is presented in Figure 1. Representative samples of biomasses and WVO are shown in Figure S1 (Supplementary data).
SETUP OF DOWNDRAFT GASIFIER AND TAR REMOVAL PROCESS
A schematic diagram of the downdraft gasifier equipped with a tar removal system is shown in Figure S2 (Supplementary data). The downdraft gasifier type was chosen because it provides a low tar concentration in syngas compared to an updraft or a moving bed gasifier. The downdraft gasifier was fabricated and installed at Prince of Songkla University (PSU), Hat Yai, Songkhla Province, Thailand. The downdraft gasifier was made of high-grade steel, and the major components of this gasifier include the feeding hopper, blower, air supply ring, gas outlet, ignition point, cyclone, control valves, and solid residue collector below the grate, as shown in Figure S2 (a). A wet scrubber system was installed to eliminate tar using water or WVO as scrubbing media. The syngas leaves the gasifier chamber and flows to the cyclone, which removes solid particles. The syngas then flows to the wet scrubber, in which it contacts the tar-absorbing medium that is continuously sprayed through three spray nozzles. The water or WVO is supplied to the nozzles on top of the wet scrubber column by an electrical pump. The tar sampling train system shown in Figure S2 (b) consists of a series of impingement glass bottles. Hot (40 °C) and cold (-20 °C) water baths were used to sample tar from the syngas both before and after passing through the wet scrubber.
The experiment started by feeding 12 kg of biomass into the gasifier via the hopper, after which the lid was tightly closed. The solid residue collection port and all the valves were closed when burning was started at the ignition port. The biomass was initially ignited and left burning for approximately 5-10 min to make sure that combustion was well established. Then, the cover of the ignition port was closed and the air supply valve was opened before running the blower. Air was supplied to the gasifier at an equivalence ratio (ER) of 0.25, based on preliminary tests. After supplying the air, the temperature in the combustion zone continuously increased, smoke became visible at the flare pipe, and the syngas burned as a flame. The gasifier was operated to maintain the temperature around 850 °C.
When the gasifier was stabilized at a specific temperature, the tar sampling system was connected to draw syngas both before and after it passed the wet scrubber. While running the gasifier, water or WVO was continuously sprayed into the scrubber column. Each tar sampling train had 6 impingement glass bottles to collect tar before and after the wet scrubber. The hot and cold baths were maintained at 40 °C and -20 °C, respectively. The cold bath was prepared by mixing ice, salt and water in an appropriate ratio. The tar sampling glass bottles were filled with 50 mL of isopropanol, except for the last bottle, which was empty. The collection of tar was performed continuously for 60 min. The flow rate of the sampled syngas was maintained at 2 L/min. The amount of tar in isopropanol was determined by evaporating the solvent in a rotary vacuum evaporator at 50 °C. The mass of tar in syngas before and after passing the wet scrubber was weighed, and the tar removal efficiency was then determined from Equation (1):

Efficiency (%) = [(weight of tar before scrubber (g) - weight of tar after scrubber (g)) / weight of tar before scrubber (g)] × 100 (1)

The tar concentration in syngas was determined by Equation (2). The tar sample was diluted with 5 mL isopropanol and analysed for chemical compounds by GC-MS. At the end of each experiment, the gasifier was left to cool and the solid residue was collected. The solid residue was sieved to separate ash and biochar.
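For reference, both calculations can be expressed compactly as below. Equation (1) is as reconstructed above; the body of Equation (2) does not survive in the text, so the usual definition, collected tar mass divided by the sampled gas volume (2 L/min × 60 min = 0.12 m³), is assumed here.

```python
def tar_removal_efficiency(m_before_g, m_after_g):
    """Equation (1): gravimetric tar removal efficiency (%)."""
    return (m_before_g - m_after_g) / m_before_g * 100.0

def tar_concentration(m_tar_g, flow_L_per_min=2.0, duration_min=60.0):
    """Tar concentration in g/m^3. Equation (2) is not reproduced in
    the text; tar mass over sampled gas volume is assumed here."""
    volume_m3 = flow_L_per_min * duration_min / 1000.0  # 0.12 m^3
    return m_tar_g / volume_m3

print(tar_removal_efficiency(2.0, 0.36))  # ~82% removal
print(tar_concentration(2.0))             # ~16.7 g/m^3
```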
ANALYTICAL METHODS
The determination of moisture content, volatile matter, fixed carbon content, and ash content is known as proximate analysis. These components of biomass were determined using a thermogravimetric analyzer (TGA), Macro TGA 701 (LECO, UK), according to the ASTM D7582 procedure (Palamanit et al. 2019; Shrivastava et al. 2020). The elemental components carbon (C), hydrogen (H), nitrogen (N), and sulphur (S) were determined by a Thermo Scientific FLASH 2000 Organic Elemental Analyzer (Thermo Scientific, Italy), while the oxygen (O) content was calculated by difference (Palamanit et al. 2019; Shrivastava et al. 2020). The chemical compounds in the tar were analyzed using a Perkin Elmer 600T Gas Chromatography-Mass Spectrometer (GC-MS) equipped with NIST MS 2.0 software. The DB-5MS column used in the GC was 30 m long, with 0.25 mm diameter and 0.25 µm film thickness. Helium was used as the carrier gas at a flow rate of 1 mL/min. The oven temperature was set at 65 °C for 2 min, then increased to 300 °C at a heating rate of 8 °C/min, and maintained at this temperature for 10 min. The injection volume of each sample was 1 µL.
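As a quick sanity check, the oven programme above implies a total GC run time of about 41 min; a minimal calculation:

```python
# Oven programme: hold 2 min at 65 °C, ramp at 8 °C/min to 300 °C, hold 10 min.
hold1, t_start, t_end, rate, hold2 = 2.0, 65.0, 300.0, 8.0, 10.0
ramp = (t_end - t_start) / rate  # 29.4 min of temperature ramp
print(f"Total run time: {hold1 + ramp + hold2:.1f} min")  # ~41.4 min
```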
The functional groups of biochar were determined with a Fourier transform infrared spectrophotometer (FTIR, Vertex, Bruker, Germany). The samples were scanned over the wavenumber range 400-4000 cm⁻¹. The iodine number, which correlates with the surface area of biochar, was determined by titrimetry. SEM imaging of biochar was done using a Quanta 400 SEM. Ash was separated from biochar mechanically using a 1 mm sieve, and the ash was characterized using X-ray fluorescence (XRF) spectrometry (XRF, Zetium, PANalytical, Netherlands).
BIOMASS COMPOSITION
The proximate analysis results and elemental compositions of the biomass samples are listed in Table 1. The moisture contents of all biomass samples were below 10% due to drying in the solar greenhouse dryer. Biomass with a low moisture content is an appropriate feedstock for syngas production via gasification (Demirbaş 2005; Pereira et al. 2012; Sikarwar et al. 2017; Susastriawan et al. 2017). This is because the moisture content of biomass not only influences syngas quality and tar concentration, but also affects the thermal efficiency of the gasifier. Previous studies have reported that gasification of biomass with high moisture provided syngas with low calorific value due to incomplete pyrolysis (McKendry 2002a; Susastriawan et al. 2017). Plis and Wilk (2011) found that the content of CO in the syngas was higher in the case of dried biomass, while the CO2 content increases with moisture in the feedstock. Additionally, a higher moisture content in the biomass also reduces the molar fraction of combustible components in the syngas, consequently lowering its heating value (Antonopoulos et al. 2012). Schuster et al. (2001) showed that the gasifier temperature and syngas yield decreased, while the tar content was higher, if the feedstock contained more than 30 wt. % moisture. Moreover, gasification of biomass with high moisture content also consumes extra heat to evaporate moisture (Brammer & Bridgwater 2002). In practice, gasification of biomass should be performed with feedstock that has a suitable moisture content to reduce the losses of thermal energy from the gasifier. The moisture content limits for gasifier feedstock depend on the type of gasifier used. The highest moisture content for a downdraft gasifier is generally considered to be 25% wet basis, and not higher than 50% for an updraft gasifier (Seggiani et al. 2012).
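A small helper makes this feedstock screening rule explicit; the 25% and 50% wet-basis limits follow the Seggiani et al. (2012) figures quoted above, and the function name is hypothetical.

```python
def moisture_ok(moisture_wb_pct, gasifier="downdraft"):
    """Check feedstock moisture against the commonly cited limits noted
    above (Seggiani et al. 2012): <=25% wet basis for a downdraft
    gasifier, <=50% for an updraft gasifier."""
    limits = {"downdraft": 25.0, "updraft": 50.0}
    return moisture_wb_pct <= limits[gasifier]

print(moisture_ok(9.5))             # True: dried rubberwood samples (<10%)
print(moisture_ok(35, "updraft"))   # True: still acceptable for updraft
```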
The volatile matter of RWC and RWP was 74.4% and 76%, respectively, while the UBC had a very low volatile matter content (19.0%) in comparison. This is because the UBC had already decomposed during the combustion of rubberwood in a boiler system. The volatile matter of UBC used in this study was similar to that of biochar from pyrolysis (Palamanit et al. 2019). The volatile matter in biomass indicates the degree of combustibility of a solid fuel, and it also indicates the gas and tar generation during gasification. It is well known that high volatile matter in biomass promotes gas and tar generation. Watson et al. (2018) reported that agricultural residues tend to produce a large amount of tar because they tend to have high volatile matter contents. Considering the fixed carbon content of the biomass samples, the contents in RWC, RWP, and UBC were 16.8%, 15% and 50.6%, respectively. The low volatile matter in UBC led to a high fixed carbon content. The fixed carbon of biomass is the component that can be converted into biochar after devolatilization. The fixed carbon content of biomass also indicates the rate of gasification and the syngas yield (Basu 2010; Watson et al. 2018). The inorganic components in biomass are left behind as ash after gasification. Biomass with a high ash content not only provides a high ash amount after gasification, but also causes problems such as reactor plugging, sintering of catalyst, and the need for proper management or disposal of ash residue. Di Gregorio et al. (2014) also reported that when the ash content of the feedstock increased from 17.2% to 25.1%, the gasification efficiency decreased from 63% to 33%, and the contents of H2 and CO decreased significantly, resulting in a loss of higher heating value (HHV) of the syngas. Thus, proximate analysis helps choose appropriate operating conditions, catalysts and gasifier configurations (Watson et al. 2018).
Regarding the elemental composition of the biomass samples, ultimate analysis showed that the contents of carbon, hydrogen, nitrogen, sulphur, and oxygen in RWC, RWP, and UBC were 44.8-58.5%, 1.2-5.84%, 0.2%, 0.02-0.08%, and 9.65-40.22%, respectively. The UBC contained more carbon than RWC and RWP, while the hydrogen content of UBC was the lowest. This is due to the loss of volatile matter from the UBC. The results of the ultimate analysis are consistent with the proximate analysis results. Ultimate analysis is generally performed to assess the potential of a biomass as solid fuel for bioenergy. Normally, biomass with high carbon and hydrogen contents provides a high HHV. The results of the proximate and ultimate analysis of the feedstocks in this study are similar to previous studies (Abdullahi et al. 2017; García et al. 2013; Johari et al. 2014). For gasification, high amounts of carbon and oxygen in the biomass contribute to CO2 and CO formation during gasification, and also increase the yields of CH4 and H2 if the gasifier is operated at suitable conditions. Low nitrogen and sulphur contents in biomass help avoid the formation of NOx and SOx (Mishra & Mohanty 2018). Most of the nitrogen during gasification is in the form of organic complexes and therefore reacts with hydrogen, forming ammonia and even hydrogen cyanide (Watson et al. 2018). A small amount of nitrogen is retained in the unreacted solid residues. In the case of sulphur, it is often emitted in the form of H2S, which leads to difficulty in gas treatment and separation (Watson et al. 2018). Regarding oxygen, the UBC had the lowest oxygen content, due to thermal decomposition of the lignocellulosic components in the biomass. The low oxygen content of UBC led to a high HHV, as seen in Table 1. The HHVs of RWC, RWP, and UBC were 17.80, 17.40, and 19.30 MJ/kg, respectively.
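The HHVs in this study were measured directly, but the stated link between elemental composition and HHV can be illustrated with the widely used Channiwala & Parikh (2002) unified correlation. This is an illustration only, not a method used in the paper, and the ash value in the example is assumed.

```python
def hhv_channiwala_parikh(C, H, S, O, N, ash):
    """Estimate HHV (MJ/kg, dry basis) from ultimate analysis in wt. %
    using the Channiwala & Parikh (2002) correlation."""
    return (0.3491 * C + 1.1783 * H + 0.1005 * S
            - 0.1034 * O - 0.0151 * N - 0.0211 * ash)

# Roughly RWC-like composition from Table 1 (ash assumed for illustration);
# the estimate (~18.3 MJ/kg) is close to the measured 17.80 MJ/kg.
print(f"{hhv_channiwala_parikh(44.8, 5.84, 0.08, 40.22, 0.2, 2.0):.1f} MJ/kg")
```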
TAR REMOVAL EFFICIENCY
In practice, condensation of tar on cool surfaces tends to occur in ducts, heat exchangers, filters and blowers, and is a major problem for commercial syngas applications. Tar not only deposits on and fouls equipment, but it also decreases the process efficiency and increases system operational costs (Valderrama Rios et al. 2018). Thus, tar in syngas needs to be reduced or eliminated to address these problems. The amount of tar in syngas prior to its application in an internal combustion engine or a gas turbine should be below 100 and 5.0 mg/m³, respectively (Valderrama Rios et al. 2018). Table 2 shows the tar concentration in syngas obtained from the different biomasses by gasification, at the inlet and outlet of the wet scrubber, using water and WVO as the scrubbing media. The results indicated that gasification of the selected biomasses in a downdraft gasifier gave tar concentrations of 1.61-20.66 g/m³. The type of biomass affected the tar concentration. Gasification of RWP and RWC gave the most tar, followed by the biomass blends. The syngas from UBC had the lowest tar concentration. The tar concentration in syngas from these biomasses was attributed to their volatile matter, which is emitted as both gases and vapors, the latter of which can condense. The UBC feedstock had a low volatile matter content, resulting in less tar formation during gasification. When the tar was eliminated by the wet scrubber, it can be observed in Table 2 that the tar concentration in syngas was significantly reduced both with water and with WVO scrubbing media, for all the feedstocks. Interestingly, syngas cleaned with WVO had less tar than after water scrubbing. The scrubbing media had different abilities to capture or absorb specific compounds in the tar. Although the tar components have low solubility in water, the decrease in gravimetric tar due to water scrubbing can be attributed to condensation of tar on contact with the sprayed water. The low temperature of the water scrubber with respect to the temperature of the entering gas condensed tar as a separate phase on water surfaces, and this could be observed in the water container. The high tar removal efficiency of WVO is due to the solubility of tar in oil. High tar removal efficiency using an oil scrubber has been found in many studies. Phuphuakrat et al. (2011) reported 31.8% tar removal efficiency using water as the scrubbing medium, whereas the efficiency was 60.4% with vegetable oil. Moreover, the tar removal efficiency of cooking oil or vegetable oil (fresh or waste) has been investigated in many studies, such as Ahmad et al. (2016), Bhoi et al. (2015), Nakamura et al. (2016), Paethanom et al. (2013), Tarnpradab et al. (2016), Thapa et al. (2017), and Unyaphan et al. (2017). They indicate that these oils provide high tar removal efficiencies ranging from 80% to 98%, depending on the type of oil and operating conditions.
In this study, the overall tar removal efficiency was calculated, and the results show that it was in the range of 48.45-82.16%, as shown in Figure 2. The highest tar removal efficiency was obtained for the mixture of RWP+UBC (50:50). These results show that wet scrubbing eliminated a large amount of tar when RWP or RWC was used as feedstock. The wet scrubber can also help remove metals and dust particles that remain in the syngas stream (Stevens 2001). Comparing the tar removal efficiencies of water and WVO, it was clear that WVO was the more efficient choice. The high tar removal efficiency of oil-based scrubbing media is attributed to the lipophilic character of oil. This helps dissolve the non-polar hydrocarbons of tar (Ahmad et al. 2016; Paethanom et al. 2012). Normally, tar compounds are lipophilic in nature and can mix well with vegetable oils, as these oils have saturated and unsaturated fatty acids (Ahmad et al. 2016). Thus, WVO was more efficient for the one-ring aromatic hydrocarbons and other light tar components. According to the results of this study, on employing WVO as the scrubbing medium, the gravimetric tar removal increased to 80.84% from 68.51% for water scrubbing, with RWC feedstock. Similar efficiency in tar reduction was observed with every rubberwood feedstock, as WVO absorbed 76.62% of the tar from RWP, which was 12% more efficient than water scrubbing. In the case of UBC, WVO could reduce tar by 62% and was 13.79% more efficient than water.
CHEMICAL COMPOUNDS IN TAR
A sample of the tar in syngas, taken before the wet scrubber, was analyzed by GC-MS. The results showed that the main tar compounds in syngas differed by type of biomass. The main chemical compounds in tar obtained from gasification of RWC and RWP included aniline-1-(13)C, 13-docosenamide, (Z)-, phorbol, naphthalene, 2-phenyl-, 3-bromobenzoic acid, octadecyl, phenol, and benzothiazole. In the case of UBC, the tar was mainly composed of 3,5-dimethoxy-4-hydroxytoluene, phenol, 2-methoxy-4-(1-propenyl)-, 5-tert-butylpyrogallol, dimethoxy propyl, and phenol, 2,6-dimethoxy-. These results show that tar consists of complex compounds, such as amines, furans, aromatics, and phenols, depending on the biomass type. The composition of tar in syngas from UBC differed from that of RWC and RWP because UBC had a low volatile matter content, as seen in the proximate analysis. Some previous studies have grouped tar into five classes: GC-undetectable tar, heterocyclic compounds, light aromatics (one ring), light PAHs (2-3 rings), and heavy PAHs (4-7 rings) (Han & Kim 2008; Li & Suzuki 2009; Valderrama Rios et al. 2018). Corella et al. (2003), however, classified tar compounds into six categories: benzene; one-ring compounds excluding benzene (toluene, xylenes, styrene, indene, methyl-indene, indan, thiophene, ethyl-benzene, methyl-benzene); naphthalene; two-ring compounds excluding naphthalene (methylnaphthalenes, biphenyls, acenaphthene, acenaphthylene, fluorene, benzofurans, methyl-benzofurans); three- and four-ring compounds (anthracene, phenanthrene, fluoranthene, pyrene, dibenzofuran); and phenolic compounds such as phenols and methyl-phenols. Milne et al. (1998) reported that primary tar compounds are generated during the pyrolysis stage of gasification by thermal decomposition of the biomass, which produces acids, sugars, alcohols, ketones, aldehydes, phenols, catechols, guaiacols, syringols, furans, and other oxygenates. As the temperature increases beyond 500 °C, secondary tar forms through rearrangement of the primary tar into heavier molecules such as phenols and olefins. The alkyl tertiary tar products include methyl derivatives of aromatic compounds, such as methyl acenaphthylene, methyl naphthalene, toluene, and indene. Condensation of tertiary aromatics forms PAHs without substituent atoms, such as benzene, naphthalene, acenaphthylene, anthracene, phenanthrene, and pyrene (Milne et al. 1998). In general, the chemical composition of tar depends on many factors, such as biomass type and composition, gasifier type, and operating conditions (e.g., temperature and ER). Knowing the tar composition helps in selecting appropriate tar removal methods.
ELEMENTAL COMPOSITION OF BIOCHAR
The solid residue remaining after gasification was separated into biochar and ash, and these residues were characterized to identify potential applications. The elemental compositions of the biochar samples are presented in Table 3. The biochars from UBC and the blended biomasses had higher carbon contents than those from RWC and RWP. A high carbon content indicates good potential for use as a solid fuel and is also suitable for carbon sequestration. Since UBC was already free of volatiles, the carbon content of UBC and the blended samples was expected to increase further on heating during gasification. Biochars can be categorized into three classes based on carbon content: 10-30% (class 1), 30-60% (class 2), and more than 60% (class 3). The biochars from UBC and the blended samples fall into class 3, while those from RWC and RWP fall into class 2. The high carbon contents remaining in biochar after gasification are similar to those of biochar from pyrolysis (Palamanit et al. 2019).
The H/C ratio reflects the degree of aromatization and the bonding arrangement in biochar. A low H/C ratio indicates stronger aromatization, where the carbon in biochar is predominantly unsaturated and the C atoms are bonded to other carbon atoms.
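Since the H/C ratio discussed here is an atomic (molar) ratio, the following sketch shows how it can be obtained from elemental weight percentages such as those in Table 3; the composition used in the example is a placeholder, not data from this study.

```python
# Minimal sketch (illustrative values, not the Table 3 data): atomic H/C ratio
# from elemental weight percentages, using standard atomic masses.

M_C = 12.011  # g/mol
M_H = 1.008   # g/mol

def atomic_h_c_ratio(wt_pct_c: float, wt_pct_h: float) -> float:
    """Convert weight-percent C and H into an atomic (molar) H/C ratio."""
    return (wt_pct_h / M_H) / (wt_pct_c / M_C)

ratio = atomic_h_c_ratio(wt_pct_c=65.0, wt_pct_h=2.5)  # placeholder composition
print(f"H/C (atomic) = {ratio:.2f}")  # lower values imply stronger aromatization
```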
The higher heating value (HHV) of a biochar is a crucial factor for further applications. Biochars obtained from blends of UBC with RWC or with RWP had greater HHVs than those from the single feedstocks. The heating values of these biochars were in the range of 24-25 MJ/kg, which is relatively high and appropriate for a solid fuel. The heating values of the biochars from RWC and RWP were lower, but still adequate for solid fuel use; alternatively, their application in soil amendment or wastewater treatment can be an attractive option.
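For readers who wish to estimate a biochar HHV from its ultimate analysis, a common approach is the Channiwala-Parikh (2002) unified correlation, sketched below. This is not the measurement method used in this study, and the composition values shown are placeholders.

```python
# Minimal sketch: estimating HHV (MJ/kg, dry basis) from ultimate analysis via the
# Channiwala-Parikh (2002) unified correlation. NOT the method used in the paper;
# the composition below is a placeholder, not Table 3 data.

def hhv_channiwala_parikh(C, H, S, O, N, ash):
    """All inputs in wt% (dry basis); returns an HHV estimate in MJ/kg."""
    return 0.3491*C + 1.1783*H + 0.1005*S - 0.1034*O - 0.0151*N - 0.0211*ash

est = hhv_channiwala_parikh(C=68.0, H=2.5, S=0.1, O=12.0, N=0.5, ash=15.0)
print(f"HHV ~ {est:.1f} MJ/kg")  # ~25 MJ/kg for this placeholder composition
```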
CHARACTERIZATION OF BIOCHAR BY FTIR
The functional groups in biochar need to be identified if the biochar is considered for use as an adsorbent. The functional groups in the biochars from the various feedstocks were determined by FTIR analysis, as shown in Figure 3. The FTIR spectra show many peaks from different functional groups. The peak in the range of 3300-3400 cm⁻¹ corresponds to the O-H group (Jadhav et al. 2019; Liang et al. 2019). The peaks at 2800-2900 cm⁻¹ are assigned to alkanes, and those in the range of 1900-2000 cm⁻¹ to NCS compounds. The peaks at 1600-1700 cm⁻¹ indicate amides, secondary amines, and nitrates. The peaks at 1300-1400 cm⁻¹ belong to alcohols, while the peaks at 900-1000 cm⁻¹ indicate C-O compounds (Guo & Bustin 1998). The peaks at 700-800 cm⁻¹ correspond to aromatic C-H stretching (Hossain et al. 2011). All biochars showed some variation in their spectra, which is attributed to the gasification process; nevertheless, they exhibited similar trends in their functional groups. The biochar surfaces were rich in O-containing functional groups, which are highly desirable for pollutant adsorption, for example in the treatment of wastewater, dyes, or oil (Jindo et al. 2014).

Figure 4 shows SEM images of the biochar samples from gasification of RWC, RWP, UBC, RWC+UBC, and RWP+UBC. The biochar surfaces are rough and porous, indicating good potential for application as adsorbents. The surface of the biochar from UBC clearly shows large pores, which may be due to two high-temperature treatments: the UBC had already been heated in a boiler system, which generated pores. The surface features of the biochars in this study are similar to those reported by Bensidhom et al. (2018) and Palamanit et al. (2019). Moreover, after suitable treatment, the biochar could also be used as a tar-absorbing medium.

The iodine number is another biochar property important for adsorbent applications (Bamdad et al. 2018), as it is strongly related to the specific surface area. The iodine numbers of the biochars from gasification of RWC, RWP, UBC, and RWC+UBC were 925, 1022, 1018, and 1130 mg/g, respectively. The biochar from RWC+UBC showed the highest iodine number, suggesting that it possibly had the highest surface area among the biochar samples. A high iodine number suggests that a biochar can be an excellent adsorbent with a high specific surface area. These results indicate that the biochars from all types of biomass tested may serve well as adsorbents, in line with a previous study on biochar preparation and activation for use as an adsorbent (Saad et al. 2019).
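To make the band assignments above easier to reuse, the following sketch encodes them as a simple lookup table (the ranges and labels are taken directly from this section) and classifies a list of peak positions; it is a convenience illustration, not part of the original analysis, and the example peaks are arbitrary.

```python
# Minimal sketch: classify FTIR peak positions using the band assignments quoted
# in this section. Ranges and labels come from the text; the peak list is illustrative.

FTIR_BANDS = [  # (low cm^-1, high cm^-1, assignment)
    (3300, 3400, "O-H group"),
    (2800, 2900, "alkanes"),
    (1900, 2000, "NCS compounds"),
    (1600, 1700, "amide / secondary amines / nitrates"),
    (1300, 1400, "alcohols"),
    (900, 1000, "C-O compounds"),
    (700, 800, "aromatic C-H stretching"),
]

def assign_band(wavenumber: float) -> str:
    for low, high, label in FTIR_BANDS:
        if low <= wavenumber <= high:
            return label
    return "unassigned"

for peak in (3350, 1650, 950, 1200):  # example peak positions, cm^-1
    print(f"{peak} cm^-1 -> {assign_band(peak)}")
```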
CHEMICAL COMPOSITION OF ASH
The ash separated from the biochar after gasification of the different feedstocks was analyzed for chemical composition, and the results are shown in Table 4. The main chemical components of the ash samples were CaO (24.28-41.78%), SiO2 (7.34-14.87%), K2O (5.64-12.85%), and MgO (3.70-5.47%), while the Cl content was low (0.03-0.08%). These results indicate that rubberwood biomass has a high content of mineral elements, including K, Ca, Si, and Mg. High contents of these elements are found in many biomasses, both wood-based and herbaceous, such as wood pellets, olive husks, wheat straw, corn stalks, and eucalyptus bark (Chen et al. 2015; Li et al. 2015; Nunes et al. 2016; Yao et al. 2020; Yu et al. 2014). The major and minor elements in biomass normally come from soil minerals or nutrient feeding. High contents of the mentioned elements are disadvantageous for combustion or gasification: a high K concentration in the biomass tends to form chemical compounds with low melting points, which leads to severe slagging and fouling on heating surfaces and limits the attractiveness of the biomass (Nunalapati et al. 2007; Zhu et al. 2014). In biomass gasification, operating the gasifier at high temperature also risks slagging and fouling in the reaction chamber, syngas pipe, and cooling system. However, the ash analysis showed that most of the major and minor elements remained in the form of oxides (SiO2, Al2O3, and TiO2) and alkaline oxides (CaO, MgO, Na2O, and K2O). The presence of these compounds indicates that the ash could be applied in construction materials or as a bio-fertilizer, although such potential applications would need to be properly tested.

CONCLUSION

Tar and solid residues generated from gasification of various rubberwood biomasses in a downdraft gasifier were investigated. The biomasses tested were rubberwood chips (RWC), rubberwood pellets (RWP), rubberwood unburned char (UBC), and blended samples. Waste vegetable oil (WVO) and water were used as alternative wet-scrubbing media. The downdraft gasifier was operated at 850 °C and an equivalence ratio (ER) of 0.25. The tar concentrations in syngas before and after wet scrubbing were determined, and the tar compounds were analyzed by GC-MS. The solid residue remaining from gasification was separated into biochar and ash by sieving. The biochar was characterized for chemical elements, surface features (SEM imaging), functional groups (FTIR), and iodine number; the ash components were determined by XRF. The results indicate that both the biomass type and the choice of scrubbing medium affected the tar removal efficiency, with WVO providing the highest efficiency (82.16%). The main chemical compounds in tar were complex and depended on the biomass feedstock. The solid residue remaining after gasification contained both biochar (unburned carbon) and ash. Some of the biochars showed potential for use as solid fuels, as indicated by their carbon and energy contents. The biochars also had highly porous structures, as seen in the SEM images and reflected in their iodine numbers (930-1134 mg/g), and contained O-H functional groups, as indicated by FTIR. The oxides CaO, K2O, SiO2, and MgO were the major components of the ash. These results support the beneficial use of rubberwood biomass via gasification to syngas, with WVO applied to eliminate tar from the syngas, while also contributing to the management of spent WVO, biochar, and ash.
Impact of bending stiffness on ground-state conformations for semiflexible polymers
Many variants of RNA, DNA, and even proteins can be considered semiflexible polymers, where bending stiffness, as a type of energetic penalty, competes with attractive van der Waals forces in structure formation processes. Here, we systematically investigate the effect of the bending stiffness on ground-state conformations of a generic coarse-grained model for semiflexible polymers. This model possesses multiple transition barriers. Therefore, we employ advanced generalized-ensemble Monte Carlo methods to search for the lowest-energy conformations. As the formation of distinct versatile ground-state conformations, including compact globules, rod-like bundles, and toroids, strongly depends on the strength of the bending restraint, we also performed a detailed analysis of contact and distance maps.
I. INTRODUCTION
Biomolecules form distinct structures that allow them to perform specific functions in the physiological environment. Understanding the effects of different properties of these conformations is crucial in many fields, such as disease studies 1 and drug design 2. With the recent development of computational resources and algorithms, computer simulations have become one of the most powerful tools for studies of macromolecular structures. However, atomistic or quantum-level modeling is still limited by the computational power needed to properly describe complex electron distributions in the system, not to mention the thousands of "force field" parameters to be tuned in semiclassical models [3][4][5]. Moreover, such models are so specific that their results usually lack generality. Thus, coarse-grained polymer models have been widely used in recent years. Focusing on a few main features, while other, less relevant degrees of freedom are considered averaged out, provides a more general view of the generic structural properties of polymers.
Semiflexible polymer models play an important role as they allow for studies of various classes of biopolymers [6][7][8][9][10] , for which the bending stiffness is known to be one of the key factors to be reckoned with in structure formation processes. Bending restraints help DNA strands fold in an organized way enabling efficient translation and transcription processes 11 . RNA stiffness affects self-assembly of virus particles 12 . In addition, protein stiffness has been found to be an important aspect in enzymatic catalysis processes, where proteins increase stiffness to enhance efficiency 13 .
The well-known Kratky-Porod or worm-like chain (WLC) model 14 has frequently been used in studies of basic structural and dynamic properties of semiflexible polymers. However, the lack of self-interactions in this model prevents structural transitions. In this paper, we systematically study the competition between attractive interactions, which are usually caused by hydrophobic van der Waals effects in solvent, and the impact of the bending stiffness on ground-state conformations of a coarse-grained model for semiflexible polymers by means of advanced Monte Carlo (MC) simulations.
Our study helps identify the conditions which allow semiflexible polymers to form distinct geometric structures closely knitted to their biological function. For example, sufficient bending strength of the polymer chain is necessary for the formation of toroidal shapes. Such conformations are relevant for stable DNA-protein complexes 15,16 . Also, DNA spooled into virus capsids tends to form toroidal structures, which support both optimal accommodation of DNA in a tight environment and the fast release due to the tension built up inside the capsid 17,18 .
The paper is organized as follows: The semiflexible polymer model and simulation methods are introduced in Sec. II. Results of the energetic and structural analyses of lowest-energy conformations are discussed in Sec. III. The summary in Sec. IV concludes the paper.

II. MODEL AND SIMULATION METHODS
A. Coarse-grained model for semiflexible polymers
In a generic coarse-grained model for linear homopolymers, the monomers are identical and connected by elastic bonds. Three energetic contributions are considered in the model used in our study: bonded interactions, non-bonded interactions, and an energetic penalty due to bending stiffness. The interaction between non-bonded monomers, which depends on the monomer-monomer distance r, is governed by the standard 12-6 Lennard-Jones (LJ) potential

$$V_\mathrm{LJ}(r) = 4\varepsilon_\mathrm{LJ}\left[\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right].$$

The energy scale is fixed by ε_LJ. The potential minimum is located at r_0 = 2^{1/6} σ, where σ is the van der Waals radius. A cutoff at r_c = 2.5σ is applied to reduce the computational cost, and the potential is shifted by a constant V_shift ≡ V_LJ(r_c) to avoid a discontinuity.
The bond elasticity between adjacent monomers is described by the combination of Lennard-Jones and finitely extensible nonlinear elastic (FENE) potentials [27][28][29], with the minimum located at r_0:

$$V_\mathrm{FENE}(r) = -\frac{K}{2}\,R^2 \ln\left[1 - \left(\frac{r - r_0}{R}\right)^{2}\right].$$

Here, the standard values R = (3/7) r_0 and K = (98/5) ε_LJ r_0^{-2} are used 30. Due to the bond rigidity, the fluctuations of the bond length r are limited to the range [r_0 − R, r_0 + R].
To model the impact of chain rigidity, a bending potential is introduced. The energetic penalty accounts for the deviation of the bond angle θ from the reference angle θ_0 between neighboring bonds:

$$V_\mathrm{bend}(\theta) = \kappa\left[1 - \cos(\theta - \theta_0)\right],$$

where κ is the bending stiffness parameter. In this study we set θ_0 = 0. Eventually, the total energy of a polymer chain with conformation X = (r_1, ..., r_N) is given by

$$E(X) = \sum_{j>i} V_\mathrm{LJ}(r_{i,j}) + \sum_{i=1}^{N-1} V_\mathrm{FENE}(r_{i,i+1}) + \sum_{i=2}^{N-1} V_\mathrm{bend}(\theta_i),$$

where r_{i,j} = |r_i − r_j| represents the distance between the monomers at positions r_i and r_j.
The length scale r_0, the energy scale ε_LJ, and the Boltzmann constant k_B are set to unity in our simulations. The polymer chain consists of N = 55 monomers 19.
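To make the model concrete, a minimal sketch of the total-energy evaluation in these reduced units is given below. It is not the authors' implementation: the potential forms follow the reconstructions above, the bending term in particular should be read as an assumption, and the handling of the LJ cutoff for bonded pairs is a simplification.

```python
# Minimal sketch (not the authors' code): total energy of a conformation x
# (N x 3 array) in reduced units r0 = eps_LJ = 1, theta0 = 0.
import numpy as np

SIGMA = 2.0 ** (-1.0 / 6.0)   # so that the LJ minimum sits at r0 = 1
RC = 2.5 * SIGMA              # cutoff radius
R_FENE = 3.0 / 7.0            # R = (3/7) r0
K_FENE = 98.0 / 5.0           # K = (98/5) eps_LJ / r0^2

def v_lj(r):
    sr6 = (SIGMA / r) ** 6
    return 4.0 * (sr6 * sr6 - sr6)

V_SHIFT = v_lj(RC)            # shift so the potential vanishes at the cutoff

def total_energy(x, kappa):
    n = len(x)
    e = 0.0
    for i in range(n):                   # LJ over all pairs (bonded pairs included)
        for j in range(i + 1, n):
            r = np.linalg.norm(x[i] - x[j])
            if r < RC:
                e += v_lj(r) - V_SHIFT
    for i in range(n - 1):               # FENE bond term
        r = np.linalg.norm(x[i + 1] - x[i])
        e += -0.5 * K_FENE * R_FENE**2 * np.log(1.0 - ((r - 1.0) / R_FENE) ** 2)
    for i in range(1, n - 1):            # bending term (assumed 1 - cos form)
        b1, b2 = x[i] - x[i - 1], x[i + 1] - x[i]
        cos_t = b1 @ b2 / (np.linalg.norm(b1) * np.linalg.norm(b2))
        e += kappa * (1.0 - cos_t)
    return e

straight = np.cumsum(np.tile([1.0, 0.0, 0.0], (55, 1)), axis=0)  # straight 55-mer
print(total_energy(straight, kappa=5.0))
```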
B. Stochastic Sampling Methods
The model we have studied has a complex hyperphase diagram that exhibits a multitude of structural phases. Crossing the transition lines separating these phases in the search for ground-state conformations is a challenging task. Advanced generalized-ensemble Monte Carlo (MC) techniques have been developed to cover the entire energy range of a system, including the lowest-energy states. In this study, we primarily used the replica-exchange Monte Carlo method (parallel tempering) [20][21][22][23][24] and an extended two-dimensional version of it 9 with advanced MC update strategies.
In each parallel tempering simulation thread k, Metropolis Monte Carlo simulations are performed. The Metropolis acceptance probability that satisfies detailed balance is generally written as

$$a(X \to X') = \min\left[1,\ \omega(X, X')\,\sigma(X, X')\right],$$

where ω(X, X') = exp{−[E(X') − E(X)]/k_B T_k} is the ratio of microstate probabilities at temperature T_k, and σ(X, X') = s(X' → X)/s(X → X') is the ratio of forward and backward selection probabilities for specific updates. Replicas with total energies E_k and E_{k+1} are exchanged between adjacent threads k and k+1 with the standard exchange acceptance probability

$$p_\mathrm{exch} = \min\left[1,\ e^{(\beta_k - \beta_{k+1})(E_k - E_{k+1})}\right],$$

where β_k = (k_B T_k)^{−1} and β_{k+1} = (k_B T_{k+1})^{−1} are the corresponding inverse thermal energies. Displacement moves with box sizes adjusted to the different temperatures were used to achieve an acceptance rate of about 50%. A combination of bond-exchange moves 25, crankshaft moves 26, and rotational pivot updates helped to improve the sampling efficiency. In order to expand the replica-exchange simulation space, the total energy of the system was decoupled as E = E_0 + κ E_b, where E_b = E_bend/κ is the bending contribution with the stiffness factored out and E_0 collects the remaining contributions. After every 1500 to 3000 sweeps (a sweep consists of N = 55 MC updates), replicas at neighboring threads (T_k, κ_k) and (T_{k+1}, κ_{k+1}) were proposed for exchange according to the probability 9

$$p = \min\left[1,\ e^{\Delta\beta\,(E_{0,k} - E_{0,k+1}) + \Delta(\beta\kappa)\,(E_{b,k} - E_{b,k+1})}\right].$$

Here Δβ = β_k − β_{k+1} and Δ(βκ) = β_k κ_k − β_{k+1} κ_{k+1}.
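As a concrete illustration of the exchange step, the following minimal sketch implements the standard one-dimensional swap criterion reconstructed above; it is not the authors' code, and the function and variable names are ours.

```python
# Minimal sketch (not the authors' code): the standard parallel-tempering swap
# criterion between adjacent temperature threads, as reconstructed above.
import math, random

def try_swap(E_k, E_kp1, beta_k, beta_kp1):
    """Return True if replicas in threads k and k+1 should be exchanged."""
    p = min(1.0, math.exp((beta_k - beta_kp1) * (E_k - E_kp1)))
    return random.random() < p

# Example: a cold thread holding a high-energy replica next to a hot thread
# holding a low-energy replica is very likely to swap.
print(try_swap(E_k=-120.0, E_kp1=-150.0, beta_k=2.0, beta_kp1=1.5))
```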
III. ENERGETIC AND GEOMETRIC ANALYSIS OF PUTATIVE GROUND-STATE CONFORMATIONS
In this section, we perform a detailed analysis of the different energy contributions governing ground-state conformations of semiflexible polymers and discuss geometric properties based on the gyration tensor. Eventually, we introduce monomer-distance and monomer-contact maps to investigate internal structural patterns.
Energy Contributions
Putative ground-state conformations and their energies obtained from simulations for different choices of the bending stiffness κ are listed in Tab. I. By increasing the bending stiffness κ, the semiflexible polymer folds into different classes of structures: compact globules (κ < 5), rod-like bundles (5 ≤ κ ≤ 9), as well as toroids (κ > 9).
In order to better understand the crossover from one structure type to another, we first investigate the separate contributions from the LJ and bending potentials to the total ground-state energies. Since the bond lengths are almost at their optimal values (≈ r_0), the bonded potential V_FENE can be ignored in the following analysis. The main competition is between the Lennard-Jones energy

$$E_\mathrm{LJ} = \sum_{j>i} V_\mathrm{LJ}(r_{i,j}),$$

including contributions from bonded monomers, and the bending energy

$$E_\mathrm{bend} = \sum_{i} V_\mathrm{bend}(\theta_i).$$

We also introduce the renormalized contribution from the bending potential, ε_bend = E_bend/κ, for studying the relative impact of bending on these conformations.
The total energy E, the Lennard-Jones energy E_LJ, the bending energy E_bend, and the renormalized bending quantity ε_bend are plotted for all ground-state conformations in Fig. 1. Not surprisingly, the total energy E increases as the bending stiffness κ increases. E_LJ also increases with the bending stiffness, but in a rather step-wise fashion. Combining these trends with the corresponding structures, it can be concluded that each major global change in the ground-state conformation with increasing bending stiffness leads to reduced attraction between monomers (an increase in E_LJ). Whereas the bending energy E_bend does not exhibit a specific trend, the renormalized bending energy ε_bend also decreases step-wise with increasing bending stiffness κ, as shown in Fig. 1(b). More interestingly, there are clear alterations of E_LJ and ε_bend within the same structure type (compact globules, rod-like bundles, or toroids).
In certain κ intervals (e.g., 3 < κ < 5 and 9 < κ < 10), a rapid increase in E LJ correlates with a decrease in ε bend , which seems to be counter-intuitive. However, these are the regions, in which the structural type of the ground state changes significantly. This means a loss of energetically favorable contacts between monomers is not primarily caused by a higher bending penalty, but rather the global rearrangement of monomers. For κ = 0, 1 and 2, the overall attraction E LJ does not change much, in contrast to ε bend , suggesting that the polymer chain is able to accommodate the bending penalty without affecting energetically favorable monomer-monomer contacts.
Even though the energetic analysis provides more information about the competition between different energetic terms, conclusions about the structural behavior are still qualitative. Therefore, a more detailed structural analysis is performed in the following.
Gyration Tensor Analysis
In order to provide a quantitative description of the structural features, we calculated the gyration tensor S for the ground-state conformations, with components

$$S_{\alpha\beta} = \frac{1}{N}\sum_{i=1}^{N} \left(r_{i,\alpha} - r_{\mathrm{CM},\alpha}\right)\left(r_{i,\beta} - r_{\mathrm{CM},\beta}\right),$$

where α, β ∈ {x, y, z} and r_CM = (1/N) Σ_{j=1}^{N} r_j is the center of mass of the polymer. After diagonalization, S can be written as

$$S = \mathrm{diag}\left(\lambda_x^2, \lambda_y^2, \lambda_z^2\right),$$

where the eigenvalues are the principal moments, ordered as λ_x^2 ≤ λ_y^2 ≤ λ_z^2. These moments describe the effective extension of the polymer chain along the principal axes. Thus, different invariant shape parameters can be derived from combinations of these moments. Most commonly used for polymers, the square radius of gyration is obtained from the sum of the eigenvalues:

$$R_\mathrm{gyr}^2 = \lambda_x^2 + \lambda_y^2 + \lambda_z^2.$$

The radius of gyration describes the overall effective size of a polymer conformation. In addition, another invariant shape parameter we employed is the relative shape anisotropy A, defined as

$$A = \frac{3}{2}\,\frac{\lambda_x^4 + \lambda_y^4 + \lambda_z^4}{\left(\lambda_x^2 + \lambda_y^2 + \lambda_z^2\right)^2} - \frac{1}{2}.$$

It is a normalized parameter restricted to the interval A ∈ [0, 1], where A = 0 is associated with spherically symmetric polymer chains (λ_x = λ_y = λ_z), and A = 1 is the limit of a perfectly straight linear chain (λ_x = λ_y = 0, λ_z > 0). Between these two limits, A = 1/4 corresponds to perfectly planar conformations (λ_x = 0, 0 < λ_y = λ_z). The square principal components λ_x^2, λ_y^2, λ_z^2, the square radius of gyration R_gyr^2, and the relative shape anisotropy A of the ground-state conformations are plotted in Fig. 2 as functions of κ.
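As an illustration of these definitions, the following minimal sketch (our own, not the authors' code) computes the gyration tensor, R_gyr^2, and the relative shape anisotropy A for a conformation stored as an N x 3 numpy array.

```python
# Minimal sketch (not the authors' code): gyration tensor, R^2_gyr, and relative
# shape anisotropy A, following the definitions reconstructed above.
import numpy as np

def shape_parameters(x):
    centered = x - x.mean(axis=0)            # subtract the center of mass
    S = centered.T @ centered / len(x)       # 3x3 gyration tensor
    lam2 = np.linalg.eigvalsh(S)             # lambda_x^2 <= lambda_y^2 <= lambda_z^2
    r2_gyr = lam2.sum()
    A = 1.5 * (lam2 ** 2).sum() / r2_gyr ** 2 - 0.5
    return lam2, r2_gyr, A

rod = np.cumsum(np.tile([1.0, 0.0, 0.0], (55, 1)), axis=0)  # straight chain
_, r2, A = shape_parameters(rod)
print(f"R^2_gyr = {r2:.2f}, A = {A:.3f}")    # A -> 1 for a straight chain
```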
Starting with κ = 0, 1, 2, and 3, the three principal moments of the corresponding lowest-energy conformations are small and nearly equal. These are the most compact conformations we found (see Tab. I); for these structures, A < 10^{-3}. Furthermore, for κ < 4, the lowest-energy conformations of semiflexible polymers possess an icosahedral-like arrangement of monomers, similar to that of the purely flexible chain (κ = 0). For κ = 4, the increased bending stiffness already forces the conformations to stretch out noticeably, which is reflected by the imbalance of the principal moments. Consequently, A is nonzero, and the overall size of the conformations becomes larger, as R_gyr^2 suggests. If the bending stiffness is increased to κ = 5, 6, and 7, rod-like structures with 7 bundles are formed to minimize the total energy. One principal moment increases dramatically while the other two decrease. As a result, R_gyr^2 reaches a higher level but remains almost constant in this κ range. The relative shape anisotropy climbs to A ≈ 0.69, indicating that the shape straightens out further.
The number of bundles is reduced to six for κ = 8 and 9, resulting in longer rod-like structures. Both R_gyr^2 and A increase further, although this change is not visually obvious in Tab. I.
With the bending energy even more dominant for 10 ≤ κ ≤ 14, the appearance of conformations changes significantly. Toroidal structures with up to 4 windings are energetically more favored than rod-like bundles. Instead of forming a few sharp turns to accommodate the bending penalty as in the bundled conformations, the polymer chain now takes on a rather dense toroidal shape. Successive bending angles are comparatively small. In this case, the two largest principal moments converge to an intermediate value. As a consequence of the more compact structures, R 2 gyr decreases with increased bending stiffness. The asphericity A drops below the characteristic limit 1/4, reflecting the planar symmetry of the toroidal structures.
It becomes more difficult for the polymer in the ground state to maintain the same small bending angles for the increased bending stiffness values κ = 15, 16, and 17. As a result, whereas the small bending angles still lead to toroidal structures similar to the previously discussed case, the radius of the toroids increases and fewer windings are present. Therefore, the two largest principal moments increase, as does R_gyr^2. Meanwhile, the relative shape anisotropy A approaches 1/4. Fewer windings reduce the overall thickness of the toroidal conformations in the normal direction. As can be seen from the conformations in Tab. I, these structures are stabilized by the attraction of the close end monomers.
However, for κ > 17, the attraction of the two end monomers is not sufficient to sustain the structure. Thus, expanding the toroid becomes an advantageous option to offset the strong bending penalties. The toroidal structure stretches out, which is clearly seen in Tab. I for κ = 18 and 19. The radius of the toroid keeps growing, and so does R_gyr^2. We find that A keeps converging to the planar-symmetry limit of 1/4.
It is expected that increasing the bending stiffness further ultimately leads to a loop-like ground state and eventually to an extended chain, in which case no energetic contacts that could maintain the internal structural symmetries are present anymore.
Contact Map Analysis
Even though the previous gyration tensor analysis yields a reasonable quantitative description of the overall structural properties of the ground-state conformations, it does not provide insight into internal structures. Therefore, we now perform a more detailed analysis by means of monomer distance maps and contact maps.
To find the relative monomer positions, we measured the monomer distance r_{i,j} between monomers i and j for all monomer pairs. Furthermore, we consider nonbonded monomer pairs with distances r_{i,j} < 1.2 to be in contact. This limit, which is close to the minimum distance r_0 of the Lennard-Jones potential, allows us to distinguish unique contact features of conformations while avoiding the counting of non-nearest-neighbor contacts. In the figures, the monomers are colored from one end of the chain to the other to visualize the chain orientation.
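The maps themselves are straightforward to compute; the sketch below builds a distance map and a contact map for a conformation given as an N x 3 coordinate array, using the r_{i,j} < 1.2 cutoff and the exclusion of bonded pairs described above. It is an illustration, not the authors' code.

```python
# Minimal sketch (not the authors' code): monomer distance map and contact map,
# using the contact cutoff r_ij < 1.2 quoted in the text.
import numpy as np

def distance_and_contact_maps(x, cutoff=1.2):
    diff = x[:, None, :] - x[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)       # N x N distance map
    contact = dist < cutoff                    # boolean contact map
    np.fill_diagonal(contact, False)           # a monomer is not its own contact
    idx = np.arange(len(x) - 1)                # exclude bonded neighbors (i, i+1):
    contact[idx, idx + 1] = False              # only nonbonded pairs count
    contact[idx + 1, idx] = False
    return dist, contact

coords = np.random.rand(55, 3) * 4.0           # placeholder conformation
dmap, cmap = distance_and_contact_maps(coords)
print(cmap.sum() // 2, "nonbonded contacts")
```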
The combined results for κ ≤ 5 are shown in Fig. 3. For κ = 0 (flexible polymer), the structure is icosahedral, and the maps do not exhibit particularly remarkable structural features. Without an energetic penalty from bending, maximizing the number of nearest neighbors is the optimal way to gain energetic benefit. For κ = 1, the small bond-angle restraint already starts affecting the monomer positions. In the contact map, short anti-diagonal streaks start appearing, which indicate the existence of a U-turn-like segment with two strands in contact. Interestingly, we find similar conformations for κ = 2 and κ = 3, as confirmed by similar distance and contact maps. There are fewer, but longer, anti-diagonal streaks, located in the interior of the compact structure. The formation of new streaks parallel to the diagonal is associated with the helical wrapping of monomers, which is visible in the colored representations. For κ = 4, the ground-state conformation is a compromise between two tendencies: the bending stiffness is neither weak enough (as for κ = 3) for the semiflexible polymer to maintain a spherical compact structure with more turns, nor strong enough (as for κ = 5) for the polymer to form a rod-like bundle structure. Therefore, the lowest-energy conformations shown in Fig. 3 contain only helical turns that try to minimize the overall size, as indicated by several diagonal streaks in the contact map. For κ = 5, the polymer mediates the bending penalty by allowing only a few sharp turns between the rods. For this 7-bundle structure, the randomness completely disappears in both the distance and contact maps. The blue square areas in the distance map mark the separation of monomer groups belonging to the two ends of a bundle. Furthermore, the diagonal streaks indicate the contact of two parallel bundles, while the turns of the chain form anti-diagonal streaks. It is also worth mentioning that in this case the two end monomers are located on opposite sides.

The results for 6 ≤ κ ≤ 11 are shown in Fig. 4. Similar to κ = 5, the polymer still forms a 7-bundle rod-like structure for κ = 6 and κ = 7. The anti-diagonal symmetry in the maps for κ = 6 and κ = 7 is only a consequence of the opposite indexing of monomers. For κ = 8 and κ = 9, the increased bending stiffness reduces the number of sharp turns from 7 to 6, and the two end monomers are now located on the same side. The relative positions of the monomers are almost identical for κ = 8 and κ = 9, as seen in their distance maps. However, the difference in the contact maps is caused by the way the straight rods following the sharp turns are aligned. For κ = 8, four monomers (the orange turn in the colored representation in Fig. 4 for κ = 8) form the sharp turn. This allows the rods to align more closely than in the κ = 9 case, where only 3 monomers are located in the turn that holds two parallel rods (blue shades). For κ = 10 and 11, the optimal way to pack the monomers is by toroidal wrapping. Thus, the contact maps exhibit only three diagonal streaks.
Results for κ ≥ 11 are shown in Fig. 5. Contact maps for κ = 12, 13 and 14 still feature three diagonal streaks. However, for κ = 15, 16, and 17, the increased bending stiffness causes a larger radius of the toroidal structure and the two end monomers are stabilized by Lennard-Jones attraction. Thus, the number of parallel diagonals reduces to two and the attraction of two end monomers is marked in the corners of the maps. Finally, for polymers with even larger bending stiffness, i.e., κ = 18 and κ = 19, the contact between the two end monomers breaks and the whole structure stretches out even more. As a result, the distance map for κ = 19 contains extended sections of increased monomer distances. At the same time, the contact map still shows two streaks slightly shifted to the right, indicating a reduction in the number of contacts.
IV. SUMMARY
In this study, we have examined the effect of bending stiffness on ground-state conformations of semiflexible polymers by using a coarse-grained model. In order to obtain estimates of the ground-state energies, we employed an extended version of parallel tempering Monte Carlo and verified our results by means of global optimization algorithms. We find that the semiflexible polymer folds into compact globules for relatively small bending stiffness, rod-like bundles for intermediate bending strengths, as well as toroids for sufficiently large bending restraints. Eventually, we performed energetic and structural analyses to study the impact of the bending stiffness on the formation of ground-state structures.
We decomposed the energy contributions to gain more insight into the competition between attractive van der Waals forces and the bending restraint. The total energy of ground-state conformations increases smoothly with increasing bending stiffness, but the attraction and bending potentials do not. Interestingly, renormalizing the bending energy reveals that the local bending effects in ground-state conformations actually diminish with increasing bending stiffness.
The structural analysis by means of gyration tensor and invariant shape parameters provided a general picture regarding the size and shape changes of conformations under different bending restraints. In a further step, studying distance maps and contact maps exposed details of internal structure ordering and helped distinguish conformations, especially for small values of the bending stiffness, where the gyration tensor analysis has been inconclusive. Contact map analysis also caught slight differences, where different structure types are almost degenerate.
In conclusion, the bending stiffness significantly influences the formation of low-energy structures for semiflexible polymers. Varying the bending stiffness parameter in our model results in shapes like compact globules, rod-like bundles, and toroids with abundant internal arrangements. Semiflexible polymer structures remain stable within a certain range of bending strengths, which makes them obvious candidates for functional macromolecules. Monomer-monomer attraction provides stability and bending stiffness adaptability to allow semiflexible polymers to form distinct structures under diverse physiological conditions 35 .
ACKNOWLEDGMENTS
This study was supported in part by resources and technical expertise from the Georgia Advanced Computing Resource Center (GACRC).
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available from the corresponding author upon reasonable request.
"year": 2023,
"sha1": "3aaddb17057da90fff2a9c99b8f49dbf7dcc577d",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "4557acfb1e5d9423daab1b4630aab21c78b4b865",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
Utilization of Lead Slag as In Situ Iron Source for Arsenic Removal by Forming Iron Arsenate
In situ treatment of acidic arsenic-containing wastewater from the non-ferrous metal smelting industry has been a great challenge for cleaner production in smelters. Scorodite and iron arsenate have proved to be good arsenic-fixing minerals; thus, we used lead slag as an iron source to remove arsenic from wastewater by forming iron arsenate and scorodite. As the main contaminant in the wastewater, As(III) was oxidized to As(V) by H2O2 and further mineralized to low-crystallinity iron arsenate by the Fe(III) and Fe(II) released (in situ generated) from the lead slag. The calcium ions released from the dissolving lead slag combined with sulfate to form well-crystallized gypsum, which co-precipitated with the iron arsenate and provided attachment sites for it. In addition, a silicate colloid generated from dissolved silicate minerals wrapped around the As-bearing precipitate particles, reducing the arsenic-leaching toxicity. An arsenic removal efficiency of 99.95%, at an initial concentration of 6500 mg/L, was reached at a solid-liquid ratio of 1:10 after 12 h of reaction at room temperature. Moreover, the leaching toxicity of the As-bearing precipitate was 3.36 mg/L (As) and 2.93 mg/L (Pb), below the leaching threshold (5 mg/L). This work can promote the joint treatment of slag and wastewater in smelters, which is conducive to the long-term development of resource utilization and clean production.
Introduction
Anthropogenic activities associated with large-scale enterprises such as mining and metallurgical operations produce large amounts of strongly acidic wastewater (pH < 2) with significant concentrations of arsenic and sulfuric acid [1][2][3]. Large amounts of As2O3 fumes produced during smelting enter the wastewater during the flue-gas scrubbing acid-generation step [4,5]. As a result, arsenic in the wastewater primarily takes the form of H3AsO3 [6,7], which is far more hazardous and difficult to remove than pentavalent arsenic. Although much research has been done on wastewater treatment by adsorption [8,9], ion exchange [10], biological treatment [11], and membrane filtration [12], the most widely used method for arsenic-containing wastewater is still neutralization precipitation [13], in which arsenic is eliminated as calcium arsenate (Ca3(AsO4)2), calcium arsenite (Ca3(AsO3)2), and arsenic sulfide (As2S3). The effective and harmless treatment of acidic arsenic-containing wastewater has received extensive attention in recent years, and targeted research has been carried out [14]. Kong et al. [15] used UV light to accelerate the release of hydrogen sulfide from thiosulfate, removing 99.9% of the arsenic without hydrogen sulfide (H2S) pollution. To prevent the creation of hazardous waste, they also devised a UV/formic acid (UV/HCOOH) technique for the reductive recovery of arsenic from extremely acidic wastewater in the form of a monolithic arsenic (As(0)) product [16]. These studies have shown positive results for arsenic removal, but cost and process constraints make large-scale application difficult. Studies have also examined arsenic elimination by hydrothermal synthesis of scorodite (FeAsO4·2H2O) using in situ iron sources such as magnetite [17], limonite [18], and hematite [19,20]. Scorodite is regarded as an optimal arsenic-fixing mineral [21] due to its high arsenic-loading capacity (20-30%) and low solubility under extremely acidic conditions [22]. Synthetic scorodite is often generated in acidic solution at high temperature [23,24]; however, it has been demonstrated that lower reaction temperatures can also promote scorodite generation under adequate pH and iron supersaturation conditions [25]. Because the scorodite route to arsenic removal requires the introduction of a significant amount of iron, the development of the non-ferrous smelting industry would be better supported if effective arsenic removal could be achieved using bulk industrial solid waste. As a by-product of lead smelting, lead slag is often disposed of as solid waste due to its low utilization value; however, its high iron content and strong alkalinity make it suitable as an in situ iron source for treating acidic arsenic-containing wastewater. Li et al. [26] used lead-zinc smelting slag (LZSS) as an in situ Fe donor and neutralizer to remove arsenic from wastewater in the form of scorodite at 90 °C, achieving a 98.42% arsenic removal efficiency at an initial As concentration of 7530 mg/L and an H2SO4 concentration of 53,420 mg/L. In the present work, we obtained a 99.95% As removal efficiency at room temperature with an initial As concentration of 6500 mg/L and an H2SO4 concentration of 56,000 mg/L.
The As-bearing precipitate exhibited stable leaching characteristics, with an As concentration of 3.36 mg/L and a Pb concentration of 2.93 mg/L, owing to semi-encapsulation by ferric hydroxide and silicate colloid. We therefore propose using lead smelting slag as an in situ iron source and neutralizer in the treatment of acidic arsenic-containing wastewater. Building on previous research, the effects of oxidant dosage, solid-liquid ratio, temperature, and reaction time on the arsenic removal efficiency and the properties of the precipitate were investigated. Meanwhile, the mechanism of arsenic removal and fixation by co-precipitation, precipitation, adsorption, and semi-encapsulation with lead slag at room temperature was established by analyzing the phase structure, chemical composition, valence conversion, and morphological transformation of the As-bearing precipitate using X-ray diffraction (XRD, Bruker Corp., Karlsruhe, Germany), Fourier transform infrared spectroscopy (FTIR, Thermo Fisher Scientific, Waltham, MA, USA), scanning electron microscopy-energy dispersive spectroscopy (SEM-EDS, TESCAN MIRA LMS, Brno, Czech Republic), X-ray photoelectron spectroscopy (XPS, Thermo Fisher Scientific K-Alpha, Waltham, MA, USA), and transmission electron microscopy-energy dispersive spectroscopy (TEM-EDS, Thermo Fisher Scientific Talos F200X, Waltham, MA, USA).
Materials Characterization
The lead slag and smelting wastewater used in this study were collected from a lead smelting plant in Gejiu, Yunnan Province, China. Before the experiments, the lead slag was air-dried and ground to pass through a 75 µm sieve. The mineral constituents of the lead slag were determined by X-ray powder diffraction (XRD) using a pressed sample ground to < 75 µm. The chemical composition of the lead slag was analyzed by X-ray fluorescence (XRF, PANalytical Axios, Almelo, The Netherlands), as shown in Table 1. The concentrations of the main components of the arsenic-containing wastewater were measured by inductively coupled plasma-optical emission spectrometry (ICP-OES), and the sulfuric acid concentration was determined by titration with NaOH [6]; the water sample was diluted 100 times and measured three times to obtain the average value and standard deviation, as shown in Table 2. The chemical agents used in the experiments, including H2O2 (AR, 30 wt.%, Aladdin, Shanghai, China), H2SO4 (AR, Huihong, Changsha, China), and NaOH (≥96%, Macklin, Shanghai, China), were laboratory grade, and all solutions were prepared with deionized water (DI) at standard atmospheric pressure and room temperature.
Experimental Procedures and Methods
Batch leaching experiments were carried out to investigate the main factors influencing the arsenic removal efficiency of lead slag, including oxidant dosage, solid-liquid ratio, reaction time, and temperature. The lead slag and arsenic-containing wastewater were mixed in a conical flask, with the oxidant dosage ranging from 0 to 10%, the solid-liquid ratio from 1:15 to 1:3 g/mL, and the reaction temperature from 25 °C to 85 °C. All batch experiments were conducted in a constant-temperature water bath with magnetic stirring at 240 rpm. ICP-OES was used to determine the concentration of each element in the filtered water samples. After filtration, the precipitates were dried for 12 h at 80 °C before further examination of their chemical composition, leaching stability, and morphological changes. To ensure the reliability of the experimental results, all reactions were set up in triplicate and performed simultaneously. The removal efficiency of arsenic and the leaching efficiency of zinc were calculated by Equation (1).
Removal efficiency (%) = (V0 × C0 − V1 × C1) / (V0 × C0) × 100,  (1)

where V0 (mL) is the initial volume of the wastewater, C0 (mg/L) is the arsenic concentration of the untreated wastewater, V1 (mL) is the final volume of the wastewater after treatment with lead slag, and C1 (mg/L) is the final arsenic concentration of the treated wastewater. The arsenic-bearing solid precipitates produced in the batch experiments were further tested for stability according to the United States Environmental Protection Agency's Toxicity Characteristic Leaching Procedure (TCLP) [27]. The pH value of the standard leaching solution was 2.88 ± 0.05. It was mixed with the precipitate at a liquid-solid ratio of 20 mL/g and shaken continuously for 18 h at 25 °C. After the reaction, the supernatant was collected and filtered through a 0.22 µm membrane filter, and the concentrations of heavy metal ions in the filtrate were detected by inductively coupled plasma-optical emission spectrometry (ICP-OES, PerkinElmer Optima 5300 DV, Waltham, MA, USA).
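For clarity, a minimal sketch of Equation (1) is given below. The function and variable names are ours, and the example volumes are placeholders rather than measured values; the concentrations echo those reported later in the text.

```python
# Minimal sketch (not from the paper): Equation (1) as reconstructed above,
# accounting for the volume change between the raw and treated wastewater.

def removal_efficiency(v0_ml: float, c0_mg_l: float, v1_ml: float, c1_mg_l: float) -> float:
    """Percent of the initial arsenic mass removed from the wastewater."""
    initial_mass = v0_ml * c0_mg_l
    final_mass = v1_ml * c1_mg_l
    return (initial_mass - final_mass) / initial_mass * 100.0

# Illustrative only: volumes are placeholders; concentrations follow the text.
print(f"{removal_efficiency(v0_ml=100.0, c0_mg_l=6500.0, v1_ml=95.0, c1_mg_l=0.59):.2f}%")
```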
Chemical Analysis of Solid Phase
XRD spectra were collected to analyze the crystalline phases of the solid samples using an X-ray diffractometer (Advance D8, Bruker Corp., Karlsruhe, Germany) equipped with Cu-Kα radiation, at a scanning rate of 5°/min over the scanning angle (2θ) range of 5-80°. The chemical and mineral properties of the precipitate were examined by collecting FTIR spectra (KBr mode) on a Nicolet IS 10 spectrometer (Thermo Fisher Scientific, Waltham, MA, USA) in the range of 400-4000 cm⁻¹, with a resolution of 4 cm⁻¹ and 30 scans. The morphological features and structural composition of the solids were observed by scanning electron microscopy (SEM, TESCAN MIRA LMS, Brno, Czech Republic) coupled with energy-dispersive X-ray spectroscopy (EDS, Xplore). The XPS spectra were obtained using a monochromatic Al-Kα X-ray source (Thermo Fisher Scientific K-Alpha, Waltham, MA, USA) with a pass energy of 50 eV and a step size of 0.05 eV; the binding energy of C1s = 284.80 eV was used as the energy standard for charge correction. The microstructure of the precipitate was observed through a transmission electron microscope (TEM, Thermo Fisher Scientific Talos F200X, Waltham, MA, USA) coupled with energy-dispersive X-ray spectroscopy (EDS), with an LaB6 filament, operating at 200 kV on a JEOL JEM-2100 microscope.
Results and Discussion
Characterization of Lead Slag
The lead slag was mainly composed of wustite (Fe0.9536O), calcite (CaCO3), gypsum (CaSO4·H2O), and magnetite (Fe3O4), as shown in the XRD pattern (Figure 1). The chemical composition of the lead slag is shown in Table 1; the iron and silicon contents are 36.93% and 12.28%, respectively, and the combined calcium and magnesium content is about 12%. A paste pH experiment [28] on the lead slag was conducted in deionized water (DI) at a 1:2.5 g/mL solid-liquid ratio with stirring for an hour; the pH value of the leachate was 7.82, indicating that the alkaline oxides in the lead slag may neutralize the wastewater and facilitate the removal of arsenic.
Thermodynamic Analysis
To forecast the dissolution behavior of lead slag and the precipitation of iron arsenate in acidic arsenic-containing wastewater, the electrochemical potential versus pH (Eh-pH) diagram of the Fe-As-H2O system was plotted using FactSage™. Figure 2 shows the stability fields for the Fe-As-H2O system at 25 °C and atmospheric pressure. According to the thermodynamic calculations, pH has a major impact on the speciation of iron and arsenic. Iron and arsenic generate iron arsenate in the acidic region, with the excess iron present as Fe(III) at pH 0-0.8 and as Fe(OH)3 at pH 0.8-4. Arsenate can be immobilized by adsorption on Fe(OH)3 when the pH exceeds 4.3, where iron and arsenic no longer precipitate as iron arsenate. The valence states of iron and arsenic are similarly influenced by the redox potential. In a strongly acidic environment with a low redox potential, arsenic exists in the form of As2O3 molecules [29], whereas iron exists as divalent ions; under these conditions, neither electrostatic adsorption nor the formation of iron arsenate precipitate makes it easy to remove arsenic. As the redox potential and pH increase, arsenic is gradually oxidized and ionized, enabling the precipitation of iron arsenate or scorodite. To remove arsenic in the first stage, it is therefore important to elevate the redox potential and keep the initial solution pH below 4.3 so that FeAsO4 precipitate is generated, and then to remove the remaining arsenic thoroughly in the second stage by Fe(OH)3 adsorption [30].
Effect of Dosage of H2O2
The source of arsenic in smelting wastewater is As2O3-bearing flue dust from smelting operations; thus, trivalent arsenic is predominant, with a ratio of As(III)/As(total) equal to 67% in lead smelting wastewater [7,31]. Pre-oxidation is important because the mobility and toxicity of As(III) are much greater than those of As(V) [32]. It was demonstrated in the last section that As(III) exists mainly in molecular form in a strongly acidic environment, so it is difficult for iron oxides to adsorb it by electrostatic interaction. As the pH increases, the arsenic species ionize and adsorb onto the iron oxide surface; however, the Langmuir constant K_L values were higher for As(V) than for As(III), indicating that As(V) formed stronger bonds with Fe(III) than did As(III) [33]. Nevertheless, it is difficult to transform trivalent arsenic into pentavalent arsenic under atmospheric pressure, and an oxidant needs to be introduced to accelerate the oxidation. A previous study has shown that the H2O2 dosage affects the iron arsenate precipitate and controls the oxidation rate of Fe(II) and As(III) during scorodite synthesis and crystal growth [34]; therefore, in this study, we chose H2O2 as the oxidant to promote the oxidation of As(III) in the wastewater.
The effect of the H2O2 dosage on arsenic removal from the wastewater was studied at H2O2/wastewater volume ratios ranging from 0 to 10%, with a 12 h reaction in room-temperature batch experiments at a solid-liquid ratio of 1 g/5 mL. As shown in Figure 3a, when the H2O2/wastewater volume ratio was 10%, the residual arsenic concentration in the wastewater decreased sharply from an initial value of 6500 mg/L to 0.59 mg/L, corresponding to an arsenic removal efficiency of nearly 100%. Without H2O2, the removal efficiency was only 34.9%, and the residual arsenic concentration in the wastewater was 4035 mg/L. These results show that a higher H2O2 dosage leads to a higher arsenic removal efficiency, implying that the arsenic precipitation pathway is related to the valence of arsenic, with As(V) forming precipitates with iron or adsorbing onto iron oxide surfaces more readily. With increasing H2O2 dosage, the concentration of residual iron decreased (Figure 3c) and the toxicity of the precipitates in the TCLP test decreased (Figure 3b); the likely reason is that the higher H2O2 dosage promoted the oxidation of As(III) and Fe(II), favoring the formation of iron arsenate precipitate, while the hydroxide formed by the hydrolysis of Fe(III) had a strong adsorption capacity for arsenic [35]. The higher H2O2 dosage also promoted the oxidation of Fe(II), and the Fe(III) thus generated was hydrolyzed; the acidity generated by hydrolysis then led to a decrease in the wastewater pH [36].
Effect of the Solid-Liquid Ratio
Lead slag is the iron source for arsenic removal by iron arsenate precipitation; it dissolves in the acidic wastewater, and its dosage directly affects the pH value of the wastewater and the concentrations of Fe and Ca. The undissolved particles act as nucleation sites for gypsum and scorodite, so the dosage of lead slag may also affect the size and crystallinity of the precipitate [19]. A batch experiment at 25 °C under atmospheric pressure was conducted to explore the influence of the dosage of lead slag on arsenic removal efficiency and leaching stability. The dosage of lead slag was controlled by adjusting the solid-liquid ratio to 1:3, 1:5, 1:7, 1:10, and 1:15 g/mL. Figure 4a shows that the residual concentration of arsenic sharply decreased from 2931.54 to 3.35 mg/L. When the solid-liquid ratio was raised from 1:15 to 1:10 g/mL, the removal efficiency increased from 56.57% to 92.73%. After the solid-liquid ratio reached 1:10, the removal efficiency no longer changed drastically, and the leaching toxicity of the arsenic-containing precipitate also dropped below 5 mg/L (Figure 4b), lower than national standards for leaching toxicity [37]. Notably, the leaching toxicity of the As-bearing precipitate in the TCLP test decreased significantly with the increase of the solid-liquid ratio; when the solid-liquid ratio was 1:15, the leaching toxicity was 13.24 mg/L, and when the solid-liquid ratio increased to 1:10, the leaching toxicity was 3.36 mg/L, which meets the threshold value for designating hazardous waste. The concentration of residual iron ions in the wastewater showed the opposite trend with an increasing solid-liquid ratio, as seen in Figure 4c. The acidity of the wastewater can leach only limited iron ions, while, with the increase of the solid-liquid ratio, the alkalinity of the lead slag can raise the pH of the wastewater and promote the hydrolysis of iron ions. Considering the economic cost and environmental safety factors, we believe that a solid-liquid ratio of 1:10 is the best condition for arsenic removal and stabilization from wastewater by lead slag.
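A minimal sketch of the pass/fail screening implied above, using the two leaching data points reported in the text and the 5 mg/L limit cited for hazardous-waste designation:

```python
# Illustrative sketch only: screening the solid-liquid ratios tested above
# against the 5 mg/L TCLP leaching limit cited for hazardous-waste designation.

TCLP_LIMIT_MG_L = 5.0  # regulatory leaching threshold cited in the text

# ratio (g/mL, written "1:x") -> TCLP leaching toxicity of the precipitate, mg/L
leaching = {"1:15": 13.24, "1:10": 3.36}

for ratio, value in leaching.items():
    verdict = "passes" if value < TCLP_LIMIT_MG_L else "fails"
    print(f"solid-liquid ratio {ratio}: {value} mg/L -> {verdict} the {TCLP_LIMIT_MG_L} mg/L limit")
```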
Effect of Reaction Time
Under room-temperature conditions, the effect of reaction time on the wastewater composition and precipitate properties was explored with a solid-liquid ratio of 1:10 g/mL and an H2O2 volume ratio of 10%. The arsenic concentration decreased drastically within 2 h (Figure 5a), which was related to the significant increase in the pH of the wastewater, as shown in Figure 5b. Correspondingly, the leaching concentrations of Ca, Mg, and Al increased steeply within 2 h, as shown in Figure 5c, and iron followed the same trend. One possible explanation is that the iron oxides and alkaline oxides in the lead slag dissolved under the acid corrosion of the sulfuric acid in the wastewater, releasing a large amount of iron, calcium, and magnesium ions and raising the pH, which promoted the formation of poorly crystalline iron arsenate or other co-precipitates associated with Ca-Mg-Al in weakly acidic solutions [38]. It has been demonstrated that in acidic arsenic-containing solutions, the association of Ca(II)-Fe(III)-As(V) caused the precipitation of poorly crystalline arsenic precipitate or co-precipitation products from As(V)-Fe(III) solutions, which were then adsorbed on the surface of hydrated iron oxide through surface coordination. The reaction can be split into three periods based on the trend of decreasing arsenic concentration in the wastewater: (1) a quick period, (2) a slow period, and (3) a final period.
The quick period of reaction was 0-2 h, during which the dissolution of alkaline oxide raised the pH of the wastewater to 2.54, and a large amount of iron, calcium, and magnesium ions promoted the formation of iron arsenate precipitate and co-precipitate, lowering the concentration of arsenic to 1000 mg/L; this step can be described with oxidation and precipitation reactions (see the sketch below). According to previous research, not all arsenic in wastewater can be eliminated by precipitation and co-precipitation; in reality, part of the arsenic is adsorbed on the surface of ferrihydrite and progressively transformed into poorly crystalline iron arsenate precipitate [39][40][41]. The transformation can be described as a progression [42] from arsenate adsorbed (ads) on the ferrihydrite surface (≡), to a poorly crystalline surface precipitate (p-c-surf-prec), and finally to a crystalline phase (crys). After 12 h of reaction, the concentration of arsenic in the wastewater decreased to 3.35 ± 1.94 mg/L, and the removal efficiency reached 99.95%. As the reaction proceeded, the stability of the precipitate increased gradually, and the leaching toxicity of arsenic decreased from 28.02 mg/L at 1 h to 3.36 mg/L at 12 h. From the XRD results (Figure 5d), the main components of the precipitate were gypsum and a small amount of undissolved iron oxide; the source of the newly formed gypsum was calcite in the lead slag, which dissolves in the acidic wastewater and then combines with sulfate. No diffraction peaks of scorodite or iron arsenate were detected in the XRD analysis, probably because the low crystallinity of the iron arsenate precipitate or the weaker signal of amorphous iron arsenate was shielded by gypsum. These findings are consistent with the results of Duan and Li [26,43]. The difference is that diffraction peaks of scorodite were observed by Li et al. in the precipitate after 12 h of reaction, indicating that the formation of scorodite crystals takes some time; in our study, however, the formation of scorodite was not observed, which may be caused by the different reaction conditions.
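As a hedged sketch of the reactions invoked above, the following standard forms describe H2O2 oxidation of As(III) and Fe(II) and the precipitation of scorodite and gypsum; these are textbook equations consistent with the mechanism described, not necessarily the exact equations used in this study.

```latex
% Plausible standard forms, consistent with the described mechanism.
\begin{align}
\mathrm{H_3AsO_3 + H_2O_2} &\rightarrow \mathrm{H_3AsO_4 + H_2O} \\
\mathrm{2Fe^{2+} + H_2O_2 + 2H^{+}} &\rightarrow \mathrm{2Fe^{3+} + 2H_2O} \\
\mathrm{Fe^{3+} + H_3AsO_4 + 2H_2O} &\rightarrow \mathrm{FeAsO_4\cdot 2H_2O\downarrow + 3H^{+}} \\
\mathrm{Ca^{2+} + SO_4^{2-} + 2H_2O} &\rightarrow \mathrm{CaSO_4\cdot 2H_2O\downarrow}
\end{align}
```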
Effect of Reaction Temperature
Reaction temperature affects the rate of nucleation and crystal growth [44], and scorodite synthesized at a high reaction temperature has a larger particle size [21]. Although studies have demonstrated that scorodite crystals grow better under high-temperature and high-pressure settings [45], there is experimental evidence that scorodite can be formed at around 30 °C [21,46]. In this study, the precipitate obtained at 85 °C had a higher crystallinity than the one obtained at 25 °C, according to SEM analysis (Figure 6). A small number of fusiform crystals can be seen on the surface of the precipitate obtained at 85 °C; these are probably scorodite or other crystalline iron arsenate matter. As a result, the precipitate with high crystallinity was safer, with an arsenic leaching toxicity of 0.61 mg/L (Figure 7b), lower than that of the precipitate obtained at 25 °C (3.36 mg/L). As shown in Figure 7c, a high temperature promoted the hydrolysis of iron ions, resulting in a decrease in the concentration of residual iron in the wastewater with increasing temperature. However, no scorodite diffraction peaks were found in the precipitates collected after 12 h of reaction at 25-85 °C. Nevertheless, the removal efficiency of arsenic was hardly affected by temperature, and a very good arsenic removal effect could be achieved at room temperature (Figure 7a). Therefore, we believe that ambient conditions are sufficient for the efficient removal and stabilization of arsenic from wastewater.
Arsenic Removal Mechanism by Lead Slag
The FTIR spectra were used to evaluate the surface properties of the arsenic-bearing precipitates obtained at different times (Figure 8). The peaks at 3543 and 3402 cm−1 can be assigned to the O-H stretching vibrations of the two water molecules of gypsum [39,47], whereas the O-H bending vibration from the water encapsulated in gypsum or iron arsenate is responsible for the peak at 1622 cm−1. The band at 827 cm−1 is ascribed to the As-O stretching vibration of the As-O-Fe coordination of the iron arsenate precipitate [39,48,49]. As the reaction proceeded, the band width gradually decreased, indicating an increase of crystallinity. The As-O-Fe bidentate-binuclear coordination to ferrihydrite is also attributed to this band, demonstrating arsenate adsorption on ferrihydrite [43]. A broad peak that appeared at 1136 cm−1 is assigned to structural SO4^2− ions coordinating with CaSO4·2H2O or ferric sulfate compounds [50][51][52]. Additionally, the peak at 673 cm−1 reflects the incorporation of AsO4^3− ions, by substitution, into CaSO4·2H2O or poorly crystalline iron arsenate during synthesis from a sulfate medium [14,53,54]. The peak at 607 cm−1 corresponds to Fe-O derived from undissolved Fe3O4 [55]. Another peak at 474 cm−1 is probably attributed to the Si-O asymmetric stretching vibration coordinating with the newly formed silicates [6,51].
In order to gain further insight into the mechanism of arsenic removal from arsenic-containing wastewater by lead slag, XPS scans were performed on the lead slag and on the arsenic-bearing precipitates obtained at 25 °C and 85 °C (Figure 9). The evolution of the valence state of the precipitates was further demonstrated by the XPS narrow-scan spectra of Fe (2p3/2) and As (3d5/2), illustrated in Figure 9, and the corresponding peak fitting parameters are summarized in Tables 3-5.
As shown in Figure 9a, due to the presence of small amounts of arsenic, the characteristic peak of arsenic was also detected in the lead slag. However, compared with the As3d peak of the lead slag, the As3d peaks of the As-bearing precipitates clearly shifted to higher energy levels, indicating that arsenic in the wastewater was oxidized and precipitated in the form of As(V). Peaks at 43.18, 43.7, 44.31, and 44.8 eV (Fe-As-O) [20,39], as well as at 45.39 and 46.16 eV (As-O-OH) [56], were observed, indicating the formation of iron arsenate or scorodite and that a portion of the arsenic is adsorbed on the iron hydroxide surface.
The spectrum of Fe (2p3/2) shown in Figure 9b indicates that the precipitate's peak was clearly shifted to a higher energy level compared to the lead slag. The percentage of Fe(II)-O in fresh lead slag was 18.58%, and the Fe(II) could effectively control the supersaturation of iron during dissolution, while no Fe(II)-O was detected in the precipitate, indicating that it had been oxidized. The peaks at 710.07, 710.85, 711.73, 712.86, and 713.32 eV represent scorodite, iron oxyhydroxide, hematite, and jarosite, respectively [20,57,58]. The Fe-As(V)-O component in the Fe2p peak corresponds to the Fe-As(V)-O component in the As3d peak, further demonstrating the formation of iron arsenate or scorodite precipitates, which agrees with the FTIR results.
The morphology and elemental composition of the precipitates generated at different reaction times were examined using SEM-EDS, as shown in Figure 10 and Table 6. The lead slag particles became smaller after 1 h of reaction due to the dissolving impact of the acidic wastewater, and the previously relatively flat surface (Figure 10a,b) became wrapped in fine strips and irregular fine particles, as shown in Figure 10c,d. As the reaction continued for 6 h, the amount of precipitate increased, and sediment adherence caused the surface to become denser. According to the EDS data, the strips were primarily made up of Ca, S, and O, most likely a gypsum phase, which also agrees with the XRD findings (Figure 5d). The gypsum observed in SEM by Li et al. had a similar striped structure. Most of the microscopic particles adhered to the surface of the gypsum were composed of Fe, As, and O, and were most likely co-precipitate-doped amorphous iron arsenate; no diffraction peak was observed in XRD because of the low crystallinity. The precipitate contained fine granular iron arsenate and streaks or plates of gypsum, as shown by the EDS results in Figure 10g,i, as well as agglomerated flocs adsorbed on its surface that may represent ferric hydroxide or silica gel created by the dissolution of silicon oxides. After 12 h of reaction, the amount of arsenic fixed in the precipitate increased noticeably from 3.45 to 7.91 wt.%, showing that the second half of the reaction is equally crucial for the removal of arsenic.
Transmission electron microscopy was used to examine the internal structure of the arsenic-containing precipitate formed after 12 h of reaction at room temperature, as shown in Figure 11. The precipitate's exterior was encapsulated in dispersed, irregular gelatinous material, while its inside was composed of more regularly formed particles. The energy spectrum results show that the inner particles are iron arsenate, and the outer part may be iron hydroxide and silica gel generated by the dissolution of silicon oxide. Due to the semi-encapsulation and adsorption of the surface colloids, the precipitate performed well in the toxicity leaching test, reducing the risk of secondary contamination.
Based on the experimental data and characterization results, we propose a reaction mechanism of arsenic removal and stabilization from wastewater by lead slag, as shown in Figure 12. Lead slag dissolved in the acidic arsenic-containing wastewater, and alkaline oxide dissolution raised the pH value of the wastewater and provided Ca^2+, Fe^2+, Fe^3+, Mg^2+, Al^3+, and SiO3^2−. Ca^2+ first combined with sulfate in the wastewater to form CaSO4·2H2O; Fe^2+ and As^3+ were oxidized by H2O2, combined to form iron arsenate, and gradually transformed to scorodite. Gypsum and iron arsenate used undissolved lead slag particles as growth sites. Notably, the precipitate's surface was covered with iron hydroxide and silicate colloid formed by hydrolysis; this semi-encapsulation and adsorption further enhanced the leaching stability of the precipitate.
Prospects of Treating Wastewater Using Lead Slag
Our method requires only a low temperature, and good arsenic removal efficiency can be achieved at room temperature. The room-temperature precipitate performed well in leaching toxicity tests and meets the concentration limit of less than 5 mg/L despite having little crystallinity and no scorodite formation. The environmental safety of lead slag as a wastewater treatment agent is also reliable; as shown in Table 7, after 12 h of reaction, lead was no longer detectable in the wastewater, and the Pb leaching concentration of the precipitate was below the 5 mg/L limit in the "Identification Standard for Identification of Hazardous Wastes" (GB 5085), as shown in Table 8. If lead slag can be used to treat acidic arsenic-containing wastewater, the disposal of slag and of wastewater in the smelter can be solved at the same time, and the wastewater treatment cost and raw material transportation cost will be greatly reduced. This method provides a reference for the waste-disposal problem of non-ferrous metal smelting, and, for the sake of understanding, a prospective process flow is presented in Figure 13.
Conclusions
In this work, lead slag was used as an in situ iron source and neutralizer to eliminate arsenic in the form of iron arsenate and arsenic-doped gypsum. The dissolution of lead slag provides alkalinity to neutralize the acidity of the wastewater and releases large amounts of Ca, Fe, Al, Mg, and Si ions. As(III) and Fe(II) were oxidized by H2O2, and the arsenic was mainly eliminated in the form of low-crystallinity iron arsenate. In addition, there was a large amount of strip-shaped gypsum in the precipitate, which immobilized part of the arsenic by co-precipitation during its formation and provided attachment sites for the iron arsenate. A 99.95% arsenic removal efficiency was achieved with a solid-liquid ratio of 1:10 g/mL, a 10% H2O2 volume ratio, and a 12 h reaction duration at room temperature. Semi-encapsulation of the As-bearing precipitate by silicate colloid and ferric hydroxide enhanced the leaching stability, with an As leaching concentration of 3.36 mg/L and a Pb leaching concentration of 2.93 mg/L, both lower than the leaching threshold (5 mg/L) in the "Identification Standard for Identification of Hazardous Wastes" (GB 5085). Lead slag has great potential in the treatment of acidic arsenic-containing wastewater and can form a joint slag-wastewater treatment and water recycling system in the smelter, paving a new direction for non-ferrous metal smelting wastewater treatment.
Conflicts of Interest:
We declare that we have no financial or personal relationships with other people or organizations that could inappropriately influence our work. There is no professional or other personal interest in any product, service, and/or company that could be construed as influencing the position presented in, or the review of, the manuscript entitled "Utilization of lead slag as in situ iron source for arsenic removal by forming iron arsenate". | 2022-10-28T15:14:24.075Z | 2022-10-25T00:00:00.000 | {
"year": 2022,
"sha1": "43d9d5bbd29f48ec86e21e13a898a47875d061f0",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/15/21/7471/pdf?version=1667362091",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b8ca6322c5e8b4c4a7569bcf1718732a21fb1709",
"s2fieldsofstudy": [
"Environmental Science",
"Materials Science",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
266214589 | pes2o/s2orc | v3-fos-license | De-identification of free text data containing personal health information: a scoping review of reviews
Abstract
Introduction
Using data in research often requires that the data first be de-identified, particularly in the case of health data, which often include Personal Identifiable Information (PII) and/or Personal Health Identifying Information (PHII). There are established procedures for de-identifying structured data, but de-identifying clinical notes, electronic health records, and other records that include free text data is more complex. Several different ways to achieve this are documented in the literature. This scoping review identifies categories of de-identification methods that can be used for free text data.
Methods
We adopted an established scoping review methodology to examine review articles published up to May 9, 2022, in Ovid MEDLINE; Ovid Embase; Scopus; the ACM Digital Library; IEEE Explore; and Compendex. Our research question was: What methods are used to de-identify free text data? Two independent reviewers conducted title and abstract screening and full-text article screening using the online review management tool Covidence.
Results
The initial literature search retrieved 3,312 articles, most of which focused primarily on structured data. Eighteen publications describing methods of de-identification of free text data met the inclusion criteria for our review. The majority of the included articles focused on removing categories of personal health information identified by the Health Insurance Portability and Accountability Act (HIPAA). The de-identification methods they described combined rule-based methods or machine learning with other strategies such as deep learning.
Conclusion
Our review identifies and categorises de-identification methods for free text data as rule-based methods, machine learning, deep learning and a combination of these and other approaches. Most of the articles we found in our search refer to de-identification methods that target some or all categories of PHII. Our review also highlights how de-identification systems for free text data have evolved over time and points to hybrid approaches as the most promising approach for the future.

Keywords: de-identification; Health Insurance Portability and Accountability Act; electronic medical records; machine learning; personal health information
Introduction
The production, collection and use of population data for research is becoming more prevalent across multiple sectors, but particularly in health and healthcare [1][2][3]. For example, the use of electronic health records has seen a significant increase among researchers and clinicians [4,5]. However, population datasets often contain Personal Identifiable Information (PII) and/or Personal Health Identifying Information (PHII), which researchers have the responsibility to keep confidential. In Canada, the use of population data containing PII and PHII in research is governed by the Canadian Tri-Council Policy Statement on Ethical Conduct for Research Involving Humans, which includes three core principles: respect for persons, concern for welfare, and justice [6]. One effective way to preserve privacy and abide by this ethical framework is to de-identify the data before they are used in research. De-identification refers to the removal or masking of PII/PHII in a dataset; for research purposes, it may be preferable to anonymisation, a process that eliminates all identifying details in a data record with no way of back-tracking to link related data records together [7]. When a record number, file number or other encrypted linkage tool is retained in the original data, the data are not referred to as 'anonymised' but are instead 'de-identified' and can be used in data linkage applications.
The federally mandated Freedom of Information and Protection of Privacy Act (FIPPA) and the provincially mandated Personal Health Information Act (PHIA) provide definitions of PII and PHII and set out guidelines to inform the process of de-identifying structured data [7]. Structured data are organised into specific value sets and are typically stored in a database [8]. Meanwhile, unstructured or free text data do not have pre-defined values; for example, reports created by physicians may contain free text data that vary widely in structure and content [8]. Currently, there is very little formal guidance available on how to de-identify free text data, and none that we could find that differentiates PII and PHII in the de-identification process. In this matter, the distinction between PII (recorded information that could identify an individual or groups of individuals) and PHII (specific health information about an individual or groups of individuals) is important, because specific approaches for de-identification are needed if health information is present in the data [8] (see Table 1 for examples [9]).
The natural language processing (NLP) research community has made great strides in developing methods for automatically de-identifying data. There are currently two primary approaches in use:

1. Rule-based methods use pattern-matching with set conditions to satisfy a rule [7]. A rule-based method could, for example, be used to find names, addresses, or email addresses in data records. Advantages of rule-based methods include that they are relatively simple to create and do not require labelled data [10], and certain sub-types (e.g., generalisation, suppression, and data perturbation) can also be used to prevent individual records from being traced back and re-identified, if this is an important aspect of the research study [10]. However, developing rules can be time-consuming, since it is difficult to include all possible examples in the rules, and the experts who design the rules may make assumptions about the data that could limit the effectiveness of the de-identification process [11].
2. Machine learning (ML)/statistical learning methods use probabilistic or classification modelling to describe the structure of data or generate predictions based on inputs from a dataset. Machine/statistical learning algorithms are classified as either supervised or unsupervised. Supervised ML requires that a sample of data be labelled manually to support the model. The advantage of supervised ML approaches is that they automatically learn sophisticated pattern recognition [12]. However, it can be more difficult to identify sources of error in an unsupervised deep learning ML model than in a rule-based approach. In addition, when it comes to rare types of information, ML methods can also have lower performance compared to rule-based methods [11].
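To make the rule-based idea in item 1 concrete, here is a minimal, illustrative Python sketch. The patterns, the [LABEL] placeholder convention, and the medical-record-number format are assumptions made for the example and are not taken from any system cited in this review.

```python
# Minimal sketch of a rule-based de-identifier: regular-expression pattern
# matching for a few predictable PHI types, each replaced by a category tag.
import re

RULES = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "DATE":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "MRN":   re.compile(r"\bMRN[:\s]*\d{6,10}\b"),  # hypothetical record-number format
}

def scrub(text: str) -> str:
    """Replace each matched identifier with a category placeholder."""
    for label, pattern in RULES.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Seen on 03/14/2021, MRN: 4821973. Contact: jane.doe@example.org, 204-555-0199."
print(scrub(note))
# -> "Seen on [DATE], [MRN]. Contact: [EMAIL], [PHONE]."
```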
These two methods can be used for de-identification of both structured and free text data. De-identifying structured data is a relatively straightforward process compared to de-identifying free text data, because structured data typically have a limited number of clearly identified fields; in addition, there is some literature to inform and guide the process [7].
De-identifying free text data, however, may necessitate a more sophisticated approach, since identifying information may occur anywhere in the free text and may include either PII or PHII or both. Some researchers have developed hybrid approaches in an attempt to combine the advantages of rule-based and ML methods for de-identification of PHII [11]. The hybrid approaches take advantage of the fact that certain types of PHII exhibit predictable lexical patterns and thus lend themselves well to de-identification via rule-based methods, whereas other frequently encountered PHII types, particularly those with unpredictable lexical variations, are more amenable to machine learning approaches [13]. More recently, the use of deep learning methods has been explored to de-identify electronic health records [14]. These methods have the ability to learn the most relevant features from the raw data, minimising the need for human input and making the pre-processing and feature engineering steps less time consuming [15]. Data de-identification techniques are advancing quickly and have a growing number of applications in research settings. In this scoping review, we provide an overview of what is known about NLP methods used to de-identify free text data.
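As a hedged illustration of such a hybrid pipeline, the sketch below takes the union of spans found by a rule (a phone-number regex) and by a statistical component. The "model" here is a stand-in stub using a toy name list so the example is self-contained; a real system would plug in a trained CRF or neural tagger.

```python
# Minimal sketch of a hybrid de-identifier: union of rule-found spans
# (predictable patterns) and model-found spans (lexically variable PHI).
import re
from typing import List, Tuple

Span = Tuple[int, int, str]  # (start, end, label)

def rule_spans(text: str) -> List[Span]:
    pattern = re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b")  # phone numbers
    return [(m.start(), m.end(), "PHONE") for m in pattern.finditer(text)]

def model_spans(text: str) -> List[Span]:
    # Stub for a trained sequence tagger; flags a known name from a toy list.
    spans = []
    for name in ("Jane Doe",):
        i = text.find(name)
        if i != -1:
            spans.append((i, i + len(name), "NAME"))
    return spans

def deidentify(text: str) -> str:
    spans = sorted(set(rule_spans(text)) | set(model_spans(text)), reverse=True)
    for start, end, label in spans:  # replace right-to-left to keep offsets valid
        text = text[:start] + f"[{label}]" + text[end:]
    return text

print(deidentify("Patient Jane Doe, call 204-555-0199."))
# -> "Patient [NAME], call [PHONE]."
```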
Methods
We based our scoping review approach on Arksey and O'Malley's (2005) well-established scoping review framework, which comprises five stages [16]: identifying the research question; identifying relevant studies; study selection; charting the data; and collating, summarising, and reporting the results. Our research question was: What methods are used to de-identify free text data?

Table 1: Examples of personal identifiable information and personal health identifying information

Personal Identifiable Information (PII):
• Name, contact information
• Age, sex, sexual orientation, marital or family status
• Ancestry, race, colour, nationality, national or ethnic origin
• Religion, creed, religious belief, association, or activity

Personal Health Identifying Information (PHII):
• An individual's health or health care history, including genetic information about the individual
• The provision of health care to the individual
• Payment for health care provided to the individual, including personal health information number (PHIN) and any other identifying number, symbol or particular assigned to an individual
• Any identifying information about the individual collected in the course of, and incidental to, the provision of health care or payment for health care
Article screening and selection
Using the study selection criteria in Table 2, two independent reviewers examined the titles and abstracts of the search results. Articles that were ambiguous were discussed with the research coordinator, and a consensus decision was reached on whether or not to include them in full-text article screening. The two reviewers then completed full-text article screening on the selected articles.
Data extraction and analysis
The data categories we extracted are presented in Table 3. We analysed and summarised the results in accordance with the PRISMA-ScR reporting checklist [18]. The data analysis was designed to provide an overview of methods used for de-identifying free text data.
Article screening and selection
As shown in Figure 1, we identified 3,312 articles in the initial search and removed 329 duplicates. The two reviewers had a 95.5% agreement rate during title and abstract screening; 4.2% (124) of the initial search results were included at this stage. After full-text article screening, 14.5% (18) of the 124 articles met the specified criteria and were included in the scoping review.
Article characteristics
Of the 18 included articles, twelve were from the computer science literature. Most (83%) were literature reviews.
The other information we planned to extract was scantily available - only three of the 18 articles mentioned the databases and registers the authors used for their searches, two articles provided information regarding the year of publication of the primary articles and the number or percentage of articles included in their review, and another three articles indicated what inclusion/exclusion criteria the authors used. Table 4 presents more details on these latter articles. The legal frameworks or guidelines referred to in the articles included those of Australia and New Zealand - see Table 5 for more details.
Types of PII and PHII
Seventeen articles mentioned different types of PII and PHII (Table 6). Eight of these articles identified methods that de-identified protected health information according to all 18 categories of HIPAA [15, 19-21, 23, 24, 26, 29], while others identified some of the HIPAA categories (Table 7).
Methods of de-identification for free text data
The de-identification approaches for free text data we found in the literature can be categorised into four overlapping groups: rule-based methods, ML methods, deep learning (a subset of machine learning) methods, and hybrid methods. The non-automated rule-based learning approaches used are summarised in Table 8, and all other de-identification approaches and system/software packages mentioned are presented in Table 9.

Table 6: Types of PII/PHII referred to in the articles
• Individuals' identifiers (such as credit card records) and interaction privacy (e.g., use of voice/fingerprint) [30]
• Key attributes (e.g., ID, name, social security), quasi-identifiers (e.g., birth date, zip code, position, job, blood type), sensitive attributes (e.g., salary, medical examinations, credit card releases) [12,31-33]
• 7 types of PHII, including personal names, ages, geographical locations, hospitals and healthcare organisations, dates, contact information, IDs [19]
• PHI: patient name, phone number, physician name, medical history. PII: names, addresses, contact numbers [22]
• 18 categories of PHI according to HIPAA, quasi-identifiers, 9 categories of personal information according to the China Civil Code (name, birthday, ID number, biometric information, home address, phone number, email address, health condition information, and personal tracking information) [23]
• PHI according to HIPAA, doctor's name and years extracted from dates [20]
• Direct identifiers (e.g., name, mailing address, email, social security number, phone number or driver's license number) and indirect identifiers (e.g., birth date, postal code, and sex) [27,28]

Table 7: HIPAA categories
• Names
• All geographical subdivisions smaller than a state except the first two digits of the zip code
• All elements of dates (except year)
• Telephone numbers
• Fax numbers
• Electronic mail addresses
• Social security numbers
• Medical record numbers
• Health plan numbers
• Account numbers
• Certificate/license numbers
• Vehicle identifiers or serial numbers, including plate numbers
• Device identifiers or serial numbers
• Web URLs
• Internet protocol addresses
• Biometric identifiers
• Full-face photographs and comparable images
• Any other unique identifying number, characteristic, or code

Nine articles referred to methods like rule-based automated learning, i.e., methods created to de-identify text data automatically, using HMS Scrubber, an open-source de-identification tool that employs a three-step process to remove PHII from medical documents [36], and DE-ID, a rule-based automated system that uses sets of rules, pattern-matching algorithms, and dictionaries to identify PHII in medical documents [19-21]. Machine learning approaches such as MIST (MITRE Identification Scrubber Toolkit, software that uses samples of de-identified text that enable it to learn the contextual features that are necessary for accuracy) were mentioned in four articles [19-21,24]; the Health Information De-identification (HIDE) system was mentioned in two articles [19,20].
System/software packages containing de-identification methods can also be further divided into specific heuristic, pattern-based and statistical learning-based systems. The systems based on deep learning use a combination of specific de-identification approaches. Some articles also mentioned hybrid systems that achieved outstanding results in various natural language processing challenges pertaining to de-identification. For example, the system developed for the 2014 i2b2 challenge is a hybrid system based on machine learning and rule-based methods [13,37-39].
Discussion
Free text data contain a wealth of information that is valuable in research. To take full advantage of this information, de-identification approaches for free text data must ensure the privacy and confidentiality of individuals described in the data. The discussion of de-identification of data in health research previously focused on structured data. The growth and importance of free text data in health records and health research has resulted in the need for advances in de-identification approaches. This scoping review of reviews identifies published de-identification methods for free text data. We have categorized the methods as rule-based methods, machine learning, deep learning and a combination of these and other approaches. Most of the articles we found in our search refer to de-identification methods (primarily rule-based and machine learning methods) that target some or all categories of PHII defined by HIPAA.
In general, experts in the field are using rule-based methods with anonymisation models to de-identify data; in particular, they use k-anonymity, l-diversity and t-closeness. Sakpere et al. (2014) assert that k-anonymity methods are best suited for data stream anonymity, such as phone numbers [31]. However, Senosi et al. (2017) found that researchers only give anonymisation strategies an average rating for protecting privacy [32]. Additionally, Stubbs et al. (2015) observe that even if automated rule-based solutions are beneficial, some PHII is still included in the data, since the success of the de-identification process depends on the dictionaries used [25]. Yogarajan et al. (2020) argue that machine learning methods for de-identification need to improve in areas such as maintaining the correctness and usability of data [26]. Meystre et al. (2010) state that machine learning methods combined with rule-based approaches such as HMS Scrubber perform better than a single method at de-identification of free text data [19].
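For reference, the k-anonymity property mentioned above can be checked mechanically: a release satisfies k-anonymity when every combination of quasi-identifier values occurs in at least k records. A minimal sketch, with illustrative column names and generalised values:

```python
# Minimal sketch of a k-anonymity check over a small table of records.
from collections import Counter

def satisfies_k_anonymity(records, quasi_identifiers, k):
    """True if every quasi-identifier combination occurs at least k times."""
    combos = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in combos.values())

rows = [
    {"birth_year": "196*", "zip": "R3T***", "diagnosis": "asthma"},
    {"birth_year": "196*", "zip": "R3T***", "diagnosis": "diabetes"},
    {"birth_year": "197*", "zip": "R2H***", "diagnosis": "asthma"},
    {"birth_year": "197*", "zip": "R2H***", "diagnosis": "flu"},
]

print(satisfies_k_anonymity(rows, ["birth_year", "zip"], k=2))  # True
```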
Recently published articles reviewed a number of approaches, including systems based on machine learning and hybrid systems that use a combination of different de-identification methods, including deep learning methods (e.g., NeuroNER and Bidirectional Encoder Representations from Transformers (BERT)) [22,26]. Shickel et al. (2018) found that systems based on deep learning performed better than other methods on lexical features [15]. However, deep learning techniques require large datasets to perform effectively [15]. Deep learning methods also make validating accuracy challenging due to the nature of the method. While they do represent significant progress in de-identification, the size of the datasets required for acceptable performance is an important limitation.
Conclusion
This scoping review provides an overview of de-identification methods for free text data. As computation power and the availability of free text from electronic health records have increased, the importance of de-identification methods in advancing the use of text data for research has also grown. While this review sought to classify de-identification techniques, no single approach or rule-based method was found to meet the high standards required to address the needs of research privacy regulators in protecting the privacy of patients, since no single approach could reliably de-identify all PHII in population data records [20]. The combination of multiple tools in a hybrid format appears to be the most promising future direction.
Figure 1: PRISMA diagram - article search and selection process

Some of the included articles reported metrics commonly used in the computer science literature, such as recall and precision, while others used terms that have the same meaning from epidemiology, such as sensitivity and specificity. Additionally, while the articles discuss the same metrics, some of them use different formulas in varying contexts. For instance, in Kushida et al. (2012), the term precision is employed to evaluate the performance of Stat De-id, a statistical learning-based system originally introduced in Uzuner et al. (2008) [20,65]. However, in Meystre et al.
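As a small worked example of these metrics applied to de-identification output, the sketch below computes precision, recall, and F1 over sets of (start, end, label) spans; the spans themselves are invented for illustration.

```python
# Minimal sketch: precision ("positive predictive value"), recall
# ("sensitivity"), and F1 over predicted vs. gold PHI spans.

def span_metrics(gold: set, predicted: set):
    tp = len(gold & predicted)
    fp = len(predicted - gold)
    fn = len(gold - predicted)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold = {(0, 8, "NAME"), (15, 27, "PHONE"), (30, 40, "DATE")}
pred = {(0, 8, "NAME"), (15, 27, "PHONE"), (50, 55, "ID")}
print(span_metrics(gold, pred))  # precision = recall = f1 ~ 0.67
```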
Table 2: Study selection criteria

Table 4: Characteristics of the articles that reported inclusion and exclusion criteria

Table 5: Legal frameworks or guidelines referred to in the articles

Table 8: Rule-based de-identification approaches in the included articles

Table 9: Additional categories of de-identification approaches in the included articles, including systems and software
• System based on the combination of convolutional neural network, Bi-LSTM, and CRF [78]
• System for the 2014 i2b2 de-identification challenge (based on a combination of CRF and rule-based approaches) [13,37-39]
• System for the 2016 i2b2 de-identification challenge [13,20,58]
• Multilevel Hybrid Semi-Supervised Learning Approach (MLHSLA) [62]
• System based on mDEID and CliDEID [79]
• System for the 2016 i2b2 de-identification challenge (based on Bi-LSTM, CRF, and rule-based approaches) [80]
• System based on Bi-LSTM and human-engineered features from EHRs [81]

Table 10: NLP metrics mentioned in the included articles | 2023-12-15T16:14:33.193Z | 2023-12-12T00:00:00.000 | {
"year": 2023,
"sha1": "1b68ed5e3fe8285aa9ef3c12d84b079ba4a15ece",
"oa_license": "CCBYNCND",
"oa_url": "https://ijpds.org/article/download/2153/4975",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "5035ca09e634c4d80906fee0645fd3e879cc91f3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
9261359 | pes2o/s2orc | v3-fos-license | A Study of LoRa: Long Range & Low Power Networks for the Internet of Things
LoRa is a long-range, low-power, low-bitrate, wireless telecommunications system, promoted as an infrastructure solution for the Internet of Things: end-devices use LoRa across a single wireless hop to communicate to gateway(s), connected to the Internet and which act as transparent bridges and relay messages between these end-devices and a central network server. This paper provides an overview of LoRa and an in-depth analysis of its functional components. The physical and data link layer performance is evaluated by field tests and simulations. Based on the analysis and evaluations, some possible solutions for performance enhancements are proposed.
Introduction
The essential difference between "the Internet" and "the Internet of Things" (IoT) [1] is that in the IoT, there is just "less of everything" available in a given device or network device: less memory, less processing power, less bandwidth, etc.; and of course, less available energy. This is either because "things" are battery driven and maximizing lifetime is a priority or because their number is expected to be massive (it is estimated that there will be 50 billion connected devices by 2020 [2]). This drive to "do more with less" leads to constraints that limit the applicability of traditional cellular networks, as well as of technologies, such as WiFi, due to energy and scalability requirements.
Another range of protocols and technologies has emerged to fulfill the communication requirements of the IoT: Low-Power Wide Area Networks (LPWAN). Colloquially speaking, an LPWAN is supposed to be to the IoT what WiFi was to consumer networking: offering radio coverage over a (very) large area by way of base stations and adapting transmission rates, transmission power, modulation, duty cycles, etc., such that end-devices incur a very low energy consumption due to their being connected.
LoRa (LoRa Alliance, https://lora-alliance.org) is one such LPWAN protocol and the subject of study for this paper. LoRa targets deployments where end-devices have limited energy (for example, battery-powered), where end-devices do not need to transmit more than a few bytes at a time [3] and where data traffic can be initiated either by the end-device (such as when the end-device is a sensor) or by an external entity wishing to communicate with the end-device (such as when the end-device is an actuator). The long-range and low-power nature of LoRa makes it an interesting candidate for smart sensing technology in civil infrastructures (such as health monitoring, smart metering, environment monitoring, etc.), as well as in industrial applications.
Bluetooth/LE
Released in 1999 by a consortium led by Ericsson, Nokia and Intel, Bluetooth v1.0 was initially designed to wirelessly replace cables connecting devices typically used together, such as cell phones, laptops, headsets, keyboards, etc., offering a lower data rate (1-Mbps raw data rate, max) and a relatively short range (in theory, officially up to 100 m at maximum transmission power; realistically, 5-10 m), while also offering low power consumption.
Several revisions of Bluetooth later, Bluetooth 4.0 was completed in 2010. Fully compatible with Bluetooth 1.0, this revision supports a higher data rate (24-Mbps raw data rate, based on WiFi) and includes a "low energy" extension (called Bluetooth/LE or "Smart"). As compared with the "non-LE version", Bluetooth/LE provides rapid link establishment functions (simpler pairing) and further trades off the data rate (approximately 200 kbps) for lower energy consumption, with the target to run a wireless sensor for at least one year on a single coin cell (approximately 200 mAHr).

IEEE 802.11ah

IEEE 802.11ah [7,8] provides a wireless LAN standard that operates at sub-1-GHz license-exempt bands. The work is conducted by the IEEE 802.11ah Task Group (TGah). Compared to IEEE 802.11 (operating at 2.4 GHz and 5 GHz), 802.11ah supports a longer transmission range, up to 1 km at the default transmission power of 200 mW. Depending on the bandwidth assigned, 802.11ah can operate at 4 Mbps or 7.8 Mbps. If the channel condition is good enough, 802.11ah can provide a data rate of hundreds of Mbps, thanks to the novel modulation and coding schemes brought from 802.11ac.
Sigfox
Sigfox (http://www.sigfox.com) is a variation of the cellular system that enables remote devices to connect to an access point with Ultra Narrow Band (UNB). It is a proprietary technology, developed and delivered by the French company Sigfox, and no detailed public specification is available. Sigfox operates on the 868-MHz frequency band, with the spectrum divided into 400 channels of 100 Hz [9].
Each end-device can send up to 140 messages per day, with a payload size of 12 octets, at a data rate up to 100 bps. Sigfox claims that each access point can handle up to a million end-devices, with a coverage area of 30-50 km in rural areas and 3-10 km in urban areas. Sigfox's claim to being a low power technology stems, in no small part, from end-devices being heavily duty-cycled due to an assumption of the nature of the data traffic patterns in the IoT: when an end-device has a message to send, the Sigfox interface circuitry wakes up, and the message is transmitted "uplink", from the end-device; then, the end-device listens for a short duration in case there are data being sent "downlink", to the end-device. In other words, downlink traffic is supported by the end-device actively polling, which makes Sigfox an interesting choice for data acquisition, but perhaps less so for command-and-control scenarios.
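The uplink-then-listen behaviour described above can be summarised in a short sketch; this is illustrative pseudo-driver code, not an actual Sigfox API, and the 2 s listen window is an assumed placeholder value.

```python
# Illustrative sketch of Sigfox-style duty cycling: wake, send an uplink of
# at most 12 octets, briefly poll for a downlink, then return to sleep.
import time

MAX_PAYLOAD = 12      # Sigfox uplink payload limit, octets (from the text)
LISTEN_WINDOW_S = 2   # short listen period after uplink (assumed value)

def send_uplink(payload: bytes) -> None:
    assert len(payload) <= MAX_PAYLOAD, "Sigfox uplink is limited to 12 octets"
    print(f"uplink: {payload.hex()}")

def poll_downlink(timeout_s: float):
    time.sleep(timeout_s)  # stand-in for listening on the radio
    return None            # no downlink pending in this sketch

def wake_and_transmit(sensor_reading: bytes) -> None:
    send_uplink(sensor_reading)
    downlink = poll_downlink(LISTEN_WINDOW_S)
    if downlink is not None:
        print(f"downlink: {downlink.hex()}")
    # the device returns to deep sleep here

wake_and_transmit(b"\x01\x02\x03")
```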
DASH7

DASH7 [10] is a wireless sensor and actuator full Open Systems Interconnection (OSI) stack protocol that operates in the 433-MHz, 868-MHz and 915-MHz unlicensed ISM band/SRD band. It originates from the ISO 18000-7 standard [11] for active RFID, intended by the U.S. Department of Defense for container inventory. DASH7 inherits from ISO/IEC 18000-7 the default parameters of the active air interface communication at 433 MHz, an asynchronous MAC and a presentation layer using highly structured data elements. Furthermore, DASH7 extends and defines the protocol stack from the physical layer up to the application layer.
DASH7 aims at providing communication in the range of up to 2 km, low latency, mobility support, multi-year battery life, AES 128-bit shared key encryption support and a data rate up to 167 kbit/s. In [12], a more detailed survey of different technologies, including 3GPP LTE Rel-13, Nokia's narrow-band LTE-M, Neul/Huawei's narrow-band proposal, Sigfox, etc., is provided.
Statement of Purpose
There have been a few articles related to LoRa in the literature. In [13,14], different long-range technologies, including LoRa, are compared. Petajajarvi et al. [15] studied the coverage of LoRa and proposed a channel attenuation model. In [16], the authors analyzed the LoRa capacity and proposed LoRaBlink to support multi-hop communications.
In complementing the work of these articles, the goal of this paper is three-fold: (i) given the semi-proprietary nature of LoRa (parts of the protocol are well documented; other parts are not), to provide an overview and functional description of LoRa and to present as much information as could be (experimentally and otherwise) gathered; (ii) to independently provide a quantification and evaluation of the performance of LoRa and of LoRaWAN, especially the spreading factor; and (iii) based on the analysis and performance evaluation, to propose possible solutions for performance enhancement.
The remainder of this paper is organized as follows: Section 2 provides a functional overview of LoRa, followed by Section 3, which describes and analyzes the LoRa physical layer in detail and provides experimental performance studies hereof. Following, the LoRaWAN MAC protocol is described in Section 4, with Section 5 presenting the evaluation hereof for LoRaWAN. Section 6 concludes this paper.
LoRa Overview
This section gives an overview of the LoRa protocol stack and basic network architecture.
LoRa Protocol Stack
LoRa, which stands for "Long Range", is a long-range wireless communications system, promoted by the LoRa Alliance. This system aims at being usable in long-lived battery-powered devices, where the energy consumption is of paramount importance. LoRa can commonly refer to two distinct layers: (i) a physical layer using the Chirp Spread Spectrum (CSS) [17] radio modulation technique; and (ii) a MAC layer protocol (LoRaWAN), although the LoRa communications system also implies a specific access network architecture.
The LoRa physical layer, developed by Semtech, allows for long-range, low-power and low-throughput communications. It operates on the 433-, 868- or 915-MHz ISM bands, depending on the region in which it is deployed. The payload of each transmission can range from 2 to 255 octets, and the data rate can reach up to 50 kbps when channel aggregation is employed. The modulation technique is a proprietary technology from Semtech.
LoRaWAN provides a medium access control mechanism, enabling many end-devices to communicate with a gateway using the LoRa modulation. While the LoRa modulation is proprietary, LoRaWAN is an open standard being developed by the LoRa Alliance.
LoRa Network Architecture
A typical LoRa network is "a star-of-stars topology", which includes three different types of devices, as shown in Figure 1. The basic architecture of a LoRaWAN network is as follows: end-devices communicate with gateways using LoRa with LoRaWAN. Gateways forward raw LoRaWAN frames from devices to a network server over a backhaul interface with a higher throughput, typically Ethernet or 3G. Consequently, gateways are only bidirectional relays, or protocol converters, with the network server being responsible for decoding the packets sent by the devices and generating the packets that should be sent back to the devices. There are three classes of LoRa end-devices, which differ only with regard to downlink scheduling.
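As a toy illustration of this division of labor (the names and data structures below are invented for exposition, not taken from the specification), a network server might de-duplicate copies of the same frame arriving via several gateways and remember which gateway heard the device best:

```python
class NetworkServer:
    """De-duplicates frames and picks the best gateway for replies."""

    def __init__(self):
        self.best_rx = {}                       # frame bytes -> reception info

    def receive(self, frame: bytes, gateway_id: str, rssi: float):
        # several gateways may forward the same frame; keep the best copy
        if frame not in self.best_rx or rssi > self.best_rx[frame]["rssi"]:
            self.best_rx[frame] = {"gw": gateway_id, "rssi": rssi}

    def reply_route(self, frame: bytes) -> str:
        """Any downlink for this device goes out via the best-hearing gateway."""
        return self.best_rx[frame]["gw"]

def gateway_forward(server: NetworkServer, frame: bytes,
                    gateway_id: str, rssi: float):
    """A gateway is only a relay: it forwards the raw frame plus
    reception-quality metadata and takes no protocol decisions itself."""
    server.receive(frame, gateway_id, rssi)
```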
Overview of the Physical Layer
LoRa is a chirp spread spectrum modulation [18], which uses frequency chirps with a linear variation of frequency over time in order to encode information. Because of the linearity of the chirp pulses, frequency offsets between the receiver and the transmitter are equivalent to timing offsets, easily eliminated in the decoder. This also makes this modulation immune to the Doppler effect, equivalent to a frequency offset. The frequency offset between the transmitter and the receiver can reach 20% of the bandwidth without impacting decoding performance [19]. This helps with reducing the price of LoRa transmitters, as the crystals embedded in the transmitters do not need to be manufactured to extreme accuracy. LoRa receivers are able to lock on to the frequency chirps received, offering a sensitivity of the order of −130 dBm [19,20].
As the LoRa symbol duration is longer than the typical bursts of interference generated by Frequency Hopping Spread Spectrum (FHSS) systems, errors generated by such interference are easily corrected through Forward Error-correction Codes (FECs). The typical out-of-channel selectivity (the maximum ratio of power between an interferer in a neighboring band and the LoRa signal) and co-channel rejection (the maximum ratio of power between an interferer in the same channel and the LoRa signal) of LoRa receivers are 90 dB and 20 dB, respectively [19,20]. This outperforms traditional modulation schemes, such as Frequency-Shift Keying (FSK), and makes LoRa well suited to low-power and long-range transmissions.
Parameters of the Physical Layer
Several parameters are available for the customization of the LoRa modulation: Bandwidth (BW), Spreading Factor (SF) and Code Rate (CR). LoRa uses an unconventional definition of the spreading factor as the logarithm, in base 2, of the number of chirps per symbol. For the sake of simplicity, this article will stick to this definition. These parameters influence the effective bit rate of the modulation, its resistance to interference and noise and its ease of decoding.
The bandwidth is the most important parameter of the LoRa modulation. A LoRa symbol is composed of 2^SF chirps, which together cover the entire frequency band. The symbol starts with a series of upward chirps; when the maximum frequency of the band is reached, the frequency wraps around, and the increase in frequency starts again from the minimum frequency. Figure 2 gives an example of a LoRa transmission as its frequency variation over time [21]; in the figure, f_c is the central frequency of the channel, and BW is the bandwidth. The position of this discontinuity in frequency is what encodes the information transmitted. As there are 2^SF chirps in a symbol, a symbol can effectively encode SF bits of information.
In LoRa, the chirp rate depends only on the bandwidth: the chirp rate is equal to the bandwidth (one chirp per second per Hertz of bandwidth). This has several consequences for the modulation: increasing the spreading factor by one divides the frequency span of a chirp by two (as 2^SF chirps cover the whole bandwidth) and also doubles the duration of a symbol. It does not, however, halve the bit rate, as one more bit is transmitted in each symbol. Moreover, the symbol rate and the bit rate at a given spreading factor are proportional to the frequency bandwidth, so a doubling of the bandwidth effectively doubles the transmission rate. This is expressed in Equation (1), which links the duration of a symbol (T_S) to the bandwidth and the spreading factor:

$$T_S = \frac{2^{SF}}{BW} \qquad (1)$$
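As a sketch of how a symbol value sets the position of the frequency discontinuity, the following generates baseband I/Q samples of a single chirp symbol; the parameter defaults are illustrative choices, not mandated by the modulation:

```python
import numpy as np

def lora_symbol(symbol: int, sf: int = 7, bw: float = 125e3, fs: float = 1e6):
    """Baseband I/Q samples of one LoRa chirp symbol (a sketch).

    The symbol value (0 .. 2**sf - 1) fixes the starting frequency; the
    frequency then climbs linearly and wraps around at the top of the
    band, producing the discontinuity that encodes the data.
    """
    n_chips = 2 ** sf
    t_sym = n_chips / bw                           # symbol duration, Eq. (1)
    t = np.arange(0.0, t_sym, 1.0 / fs)
    # instantaneous frequency, wrapping inside [-bw/2, +bw/2)
    f = ((symbol / n_chips) * bw + bw * t / t_sym) % bw - bw / 2
    phase = 2 * np.pi * np.cumsum(f) / fs          # integrate frequency
    return np.exp(1j * phase)
```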
Moreover, LoRa includes a forward error correction code. The code rate (CR) equals 4/(4 + n), with n ∈ {1, 2, 3, 4}. Taking this into account, as well as the fact that SF bits of information are transmitted per symbol, Equation (2) gives the useful bit rate (R_b):

$$R_b = SF \cdot \frac{BW}{2^{SF}} \cdot CR \qquad (2)$$
For example, a setting with BW = 125 kHz, SF = 7 and CR = 4/5 gives a bit rate of R_b = 5.5 kbps. These parameters also influence decoder sensitivity. Generally speaking, an increase of bandwidth lowers the receiver sensitivity, whereas an increase of the spreading factor increases the receiver sensitivity. Decreasing the code rate helps reduce the Packet Error Rate (PER) in the presence of short bursts of interference, i.e., a packet transmitted with a code rate of 4/8 will be more tolerant to interference than a signal transmitted with a code rate of 4/5. The figures in Table 1, taken from the SX1276 datasheet, are given as an indication.
Table 1. Semtech SX1276 LoRa receiver sensitivity in dBm at different bandwidths and spreading factors, taken from [19].
Another parameter of the LoRa modulation, which is implemented in Semtech's transceivers, is the low data rate optimization. This parameter is mandatory in LoRa when using spreading factors of 11 and 12 with a bandwidth of 125 kHz or lower. The effect of this parameter is not documented; however, Equation (3) shows that it reduces the number of bits transmitted per symbol by two.
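Equation (2), together with the two-bit penalty of the low data rate optimization suggested by Equation (3), condenses into a short helper; this is a sketch following the formulas above, not vendor code:

```python
def lora_bitrate(sf: int, bw: float, cr: float, ldro: bool = False) -> float:
    """Useful bit rate R_b in bps, per Equation (2); `cr` is the code
    rate as a fraction, e.g. 4/5. With the low data rate optimization
    enabled, two fewer bits are carried per symbol."""
    bits_per_symbol = sf - 2 if ldro else sf
    return bits_per_symbol * bw / 2 ** sf * cr

print(lora_bitrate(sf=7, bw=125e3, cr=4 / 5))   # 5468.75 bps, i.e. ~5.5 kbps
```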
Physical Frame Format
Although the LoRa modulation can be used to transmit arbitrary frames, a physical frame format is specified and implemented in Semtech's transmitters and receivers. The bandwidth and spreading factor are constant for a frame.
A LoRa frame begins with a preamble. The preamble starts with a sequence of constant upchirps that cover the whole frequency band. The last two upchirps encode the sync word. The sync word is a one-byte value that is used to differentiate LoRa networks that use the same frequency bands. A device configured with a given sync word will stop listening to a transmission if the decoded sync word does not match its configuration. The sync word is followed by two and a quarter downchirps, for a duration of 2.25 symbols. The total duration of this preamble can be configured between 10.25 and 65,539.25 symbols. The structure of the preamble can be seen in Figure 2.
After the preamble, there is an optional header. When it is present, this header is transmitted with a code rate of 4/8. The header indicates the size of the payload (in bytes), the code rate used for the end of the transmission and whether or not a 16-bit CRC for the payload is present at the end of the frame. The header also includes a CRC to allow the receiver to discard packets with invalid headers. The payload size is stored using one byte, limiting the size of the payload to 255 bytes. The header is optional to allow disabling it in situations where it is not necessary, for instance when the payload length, coding rate and CRC presence are known in advance.
The payload is sent after the header, and at the end of the frame is the optional CRC. A schematic summarizing the frame format can be seen in Figure 3. Equation (3), derived from Semtech's datasheets [19,20], gives the number of symbols required to transmit a payload, n_s, as a function of all of these parameters; this number should be added to the number of symbols of the preamble in order to compute the total size of the packet in symbols:

$$n_s = 8 + \max\left(\left\lceil \frac{8\,PL - 4\,SF + 8 + CRC + H}{4\,(SF - DE)} \right\rceil \cdot \frac{4}{CR},\; 0\right) \qquad (3)$$

In this equation, PL is the payload size in bytes, CRC is 16 if the CRC is enabled and zero otherwise, H is 20 when the header is enabled and zero otherwise and DE is two when the low data rate optimization is enabled and zero otherwise. This equation also shows that the minimum size of a packet is eight symbols.
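Equation (3) translates directly into code; the function below follows the reconstruction above and, being derived from the datasheet formula, should be checked against a current Semtech datasheet before being relied upon:

```python
import math

def payload_symbols(pl: int, sf: int, cr: float,
                    crc: bool = True, header: bool = True,
                    ldro: bool = False) -> int:
    """Symbols needed for the payload part of a LoRa frame, Eq. (3).

    pl: payload size in bytes; cr: code rate as a fraction (e.g. 4/5).
    The preamble symbols must be added separately for the total."""
    numerator = 8 * pl - 4 * sf + 8 + (16 if crc else 0) + (20 if header else 0)
    denominator = 4 * (sf - (2 if ldro else 0))
    n = math.ceil(numerator / denominator) * round(4 / cr)
    return 8 + max(n, 0)                  # never fewer than eight symbols

print(payload_symbols(pl=12, sf=7, cr=4 / 5))   # 28 symbols for this setting
```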
Performance Evaluation
To verify whether the specified performance of LoRa receivers is reached in practice, a LoRa testbed was built. The Freescale FRDM-KL25Z development board with a Semtech SX1276 MBED shield (Figure 4a) is used as the end-device, and a Cisco 910 industrial router is used as the gateway (Figure 4b).
The gateway is connected to the network server provided by Thingpark (https://actility.thingpark.com) through Ethernet, so that the packet received can be monitored on the server side.
Receiver Sensitivity
As there are many models and evaluations of the propagation of radio signals at the frequencies used by LoRa in various environments [22], this experiment is focused on checking the decoding performance of LoRa receivers.
To this end, around 10,000 packets were sent from a LoRa device to the gateway, and the Received Signal Strength Indicators (RSSI) of received packets were recorded while moving the end-device.
The gateway was placed indoors, and the device was outdoors, in an urban environment. All packets were sent with a bandwidth of 125 kHz and a code rate of 4/5. The transmit power of the device was set to the minimum (2 dBm, with a 3-dBi antenna) in order to limit the distance to cover before reaching low RSSIs. The order of magnitude of the distance between the end-device and the gateway at which packets started to get lost was 100 m. The minimal observed RSSIs are depicted in Figure 5. These measured results are slightly above the specified values, and the expected decrease with the increase of the spreading factor is not observed. However, the packets achieving the lowest RSSIs were also received with a high SINR, close to 20 dB. This is likely due to the gateway being indoors, leading to additional shadowing.
It should be noted that the observed RSSIs are already 6 dB lower than the specified RSSIs when using FSK [19].
Network Coverage
This experiment aims at testing the network coverage of LoRa. Tests were conducted in a suburb of Paris, with mainly low-rise residential dwellings. The temperature was 15 °C, and the ambient humidity was 55%. The gateway was located on the second floor of a house, outside the window. Five different test points were chosen, with the distances to the gateway as shown in Figure 6. The end-device was in a car during the tests.
The transmission power of the end-device was set to 14 dBm, which is the default value as specified by [23]. To test the performance of different spreading factors, packet acknowledgment and retransmission were turned off. The link check was also disabled so that the spreading factor would not change even if there was packet loss; by default, LoRa adapts the spreading factor according to the link quality. Spreading factors of 7, 9 and 12 were chosen for the tests.
Figure 7 shows the packet delivery ratio of the different spreading factors at various distances. About 100 sequence-numbered packets were transmitted to the network server in each test. The higher spreading factors have better coverage, as discussed in Section 3.2: for a spreading factor of 12, more than 80% of packets were received at Point D (2,800 m), while no packet was received there when using a spreading factor of seven. It is worth noting that the gateway was located on the second floor, about 5 m above the ground (normally, such a base station would be located at a higher altitude to achieve better coverage), and that test Point D was right behind a building of seven floors. The high delivery ratio obtained with the high spreading factor comes at the cost of a much lower bit rate, as shown in Equation (2); conversely, the network coverage with low spreading factors is much smaller.
It is important to note that the purpose of the tests above is to evaluate the coverage of the LoRa physical layer using different spreading factors. In a real LoRa network with the LoRaWAN protocol, the end-devices are able to automatically increase the spreading factor if transmission with a lower spreading factor fails, and retransmission is also used if necessary. Therefore, in a network with LoRaWAN, a higher delivery ratio can be achieved.
The LoRaWAN Protocol
LoRaWAN is a MAC protocol, built to use the LoRa physical layer. It is designed mainly for sensor networks, wherein sensors exchange packets with the server with a low data rate and relatively long time intervals (one transmission per hour or even days). This section describes the LoRaWAN V1.0 specification [23], as released in January 2015.
Components of a LoRaWAN Network
Several components of the network are defined in the LoRaWAN specification and are required to form a LoRaWAN network: end-devices, gateways (i.e., base stations) and the network server.
• End-device: the low-power-consumption sensors that communicate with gateways using LoRa.
• Gateway: the intermediate devices that forward packets coming from end-devices to a network server over an IP backhaul interface allowing a bigger throughput, such as Ethernet or 3G. There can be multiple gateways in a LoRa deployment, and the same data packet can be received (and forwarded) by more than one gateway.
• Network server: responsible for de-duplicating and decoding the packets sent by the devices and generating the packets that should be sent back to the devices.
Unlike traditional cellular networks, the end-devices are not associated with a particular gateway in order to have access to the network. The gateways serve simply as link-layer relays and forward the packets received from the end-devices to the network server after adding information regarding the reception quality. Thus, an end-device is associated with a network server, which is responsible for detecting duplicate packets, choosing the appropriate gateway for sending a reply (if any) and, consequently, for sending packets back to the end-devices. Logically, gateways are transparent to the end-devices.
LoRaWAN has three different classes of end-devices to address the various needs of applications:
• Class A, bi-directional: Class A end-devices can schedule an uplink transmission based on their own needs, with a small jitter (random variation before transmission). This class of devices allows bi-directional communications, whereby each uplink transmission is followed by two short downlink receive windows. Downlink transmission from the server at any other time has to wait until the next uplink transmission occurs. Class A devices have the lowest power consumption, but also offer less flexibility on downlink transmissions.
• Class B, bi-directional with scheduled receive slots: Class B end-devices open extra receive windows at scheduled times. A synchronized beacon from the gateway is thus required, so that the network server is able to know when the end-device is listening.
• Class C, bi-directional with maximal receive slots: Class C end-devices have almost continuous receive windows. They thus have maximum power consumption.
It should be noted that LoRaWAN does not enable device-to-device communications: packets can only be transmitted from an end-device to the network server, or vice-versa. Device-to-device communication, if required, must thus be sling-shot through the network server (and consequently, by way of two gateway transmissions).
The LoRaWAN specification states that LoRaWAN networks should use ISM frequency bands. These bands are subject to regulations regarding the maximum transmission power and the duty cycle. The duty cycle limitations translate into delays between the successive frames sent by a device: if the limitation is 1%, the device has to wait roughly 100 times (more precisely, 99 times) the duration of the last frame before sending again in the same channel.
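Concretely, the enforced silence can be computed from the airtime and the duty-cycle cap (a sketch; sub-band bookkeeping in a real stack is more involved):

```python
def min_wait_after_tx(airtime_s: float, duty_cycle: float = 0.01) -> float:
    """Silent period required after a transmission so that the long-run
    duty cycle in the channel stays within the regulatory cap."""
    return airtime_s * (1.0 - duty_cycle) / duty_cycle

print(min_wait_after_tx(1.0))   # a 1 s frame at 1% forces 99 s of silence
```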
LoRaWAN Message Format
LoRaWAN uses the physical frame format described in Section 3.3. The header and CRC are mandatory for uplink messages, which makes it impossible to use a spreading factor of six in LoRaWAN. Downlink messages have the header, but not the CRC. The code rate that should be used is not specified and neither is when the end-devices should use the low data rate optimization.
The message format is detailed in Figure 8. The fields are as follows:
• DevAddr: the short address of the device.
• FPort: a multiplexing port field. The value zero means that the payload contains only MAC commands; when this is the case, the FOptsLen field must be zero.
• FCnt: a frame counter.
• MIC: a cryptographic message integrity code, computed over the fields MHDR, FHDR, FPort and the encrypted FRMPayload.
• MType: the message type, indicating among other things whether it is an uplink or a downlink message and whether or not it is a confirmed message. Acknowledgments are requested for confirmed messages.
• Major: the LoRaWAN version; currently, only a value of zero is valid.
• ADR and ADRAckReq: control the data rate adaptation mechanism by the network server.
• ACK: acknowledges the last received frame.
• FPending: indicates that the network server has additional data to send and that the end-device should send another frame as soon as possible so that it opens receive windows.
• FOptsLen: the length of the FOpts field in bytes.
• FOpts: used to piggyback MAC commands on a data message.
• CID: the MAC command identifier; Args are the optional arguments of the command.
• FRMPayload: the payload, which is encrypted using AES with a key length of 128 bits.
The minimal size of the MAC header is 13 bytes; its maximal size is 28 bytes. Knowing this, it is possible to compute the maximum channel capacity available for application data payloads with given modulation parameters thanks to Equations (1) and (3). As packets are sent from a device to the network server and vice versa, there is no destination address on uplink packets, and there is no source address on downlink packets.
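To make the layout of Figure 8 concrete, here is a hedged sketch of decoding the frame header; field widths follow the V1.0 specification as described above, but the code is illustrative rather than a conformant implementation:

```python
from dataclasses import dataclass

@dataclass
class FrameHeader:
    dev_addr: int      # short device address (DevAddr)
    adr: bool          # data rate adaptation controlled by the server
    ack: bool          # acknowledges the last received frame
    f_pending: bool    # more downlink data queued (downlink frames only)
    f_cnt: int         # frame counter (FCnt)
    f_opts: bytes      # piggybacked MAC commands (0-15 bytes)

def parse_fhdr(buf: bytes) -> FrameHeader:
    """Decode a LoRaWAN FHDR following the V1.0 field layout."""
    dev_addr = int.from_bytes(buf[0:4], "little")
    f_ctrl = buf[4]
    f_opts_len = f_ctrl & 0x0F
    f_cnt = int.from_bytes(buf[5:7], "little")
    return FrameHeader(dev_addr,
                       bool(f_ctrl & 0x80),    # ADR bit
                       bool(f_ctrl & 0x20),    # ACK bit
                       bool(f_ctrl & 0x10),    # FPending bit
                       f_cnt,
                       buf[7:7 + f_opts_len])
```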
End-Device Setup
In order to participate in a LoRaWAN network, an end-device must be activated. LoRaWAN provides two ways to activate an end-device: Over-The-Air Activation (OTAA) and Activation By Personalization (ABP).
The activation process should give the following information to an end-device:
• End-device address (DevAddr): a 32-bit identifier of the end-device. Seven bits are used as the network identifier, and 25 bits are used as the network address of the end-device.
• Application identifier (AppEUI): a global application ID in the IEEE EUI64 address space that uniquely identifies the owner of the end-device.
• Network session key (NwkSKey): a key used by the network server and the end-device to calculate and verify the message integrity code of all data messages to ensure data integrity.
• Application session key (AppSKey): a key used by the network server and end-device to encrypt and decrypt the payload field of data messages.
For OTAA, a join procedure with a join-request and a join-accept message exchange is used for each new session. Based on the join-accept message, the end-devices are able to obtain the new session keys (NwkSKey and AppSKey). For ABP, the two session keys are directly stored in the end-devices.
LoRaWAN MAC Commands
LoRaWAN defines many MAC commands that allow customizing end-device parameters [23]. One of them, LinkCheckReq, can be sent by an end-device to test its connectivity. All of the others are sent by the network server. These commands can control the data rate and output power used by the device, as well as the number of times each unconfirmed packet should be sent (LinkADRReq), the global duty cycle of the device (DutyCycleReq), changing parameters of the receive windows (RXTimingSetupReq, RXParamSetupReq) and changing the channels used by the device (NewChannelReq). One command is used to query the battery level and reception quality of a device (DevStatusReq).
LoRaWAN Analysis
This section analyzes and discusses the performance of LoRaWAN by way of experiments and simulations. As in the previous section, all of this study is based on [23].
Single Device Maximal Throughput and MTU
The goal of this experiment is to evaluate the maximal throughput that a single device can obtain. This depends more on the physical layer than on the MAC protocol, but it gives an idea of what is possible when using LoRaWAN. The experiment was conducted by having a device send data as soon as the channel limitations and the protocol allowed it. Tests were conducted with six channels of 125 kHz and using spreading factors from 7 to 12. No MAC commands were sent, so the size of the MAC header was always 13 bytes. The results, measured over about 100 packets transmitted per test, are shown as a function of payload size in Figure 9. Fifty-one bytes is the maximum payload size allowed by the implementation used for the tests. This experiment revealed that at low packet sizes, the limiting factor was not the channel duty cycle limitations, as could have been expected, but the duration of the receive windows. Indeed, the device has to wait for the two downlink receive windows following the transmission to be over before sending another packet. However, this situation is not the use case LoRaWAN was designed for: the goal of LoRaWAN is rather to manage large quantities of devices that send a few bytes of data from time to time.
In the tests above, the MAC header is always 13 bytes. However, in practice, the LoRaWAN header can have a variable size between 13 and 28 bytes. Moreover, the maximum size of the frame depends on the data rate used [23], and LoRaWAN does not have a mechanism to split large payloads over multiple frames. As of the current specification, the application above LoRaWAN has no way of knowing the maximal size of the packet that it will be able to send in the next transmission, which might be problematic. A conservative approach is to never try to send more than the smallest maximum payload size, which is 36 bytes, but this results in a loss of capacity if a large amount of data has to be sent, as well as lower throughput, as shown in the results in Figure 9. This would be relatively easy to address in a future LoRaWAN specification revision, either by adding a fragmentation mechanism or by having the MAC protocol inform the upper layer of the MTU.
Total Capacity and Channel Load
The total capacity of the network is not only related to payload size. As two transmissions on the same frequency, but at different spreading factors, can be decoded simultaneously, in what follows, a logical channel is defined by a pair (frequency band, spreading factor).
The total transmission capacity of a LoRaWAN network is the sum of the capacities of all of the logical channels. In a 125-kHz frequency band, there are six possible spreading factors (from 7 to 12), which brings the total capacity of a 125-kHz channel to 12,025 bps.
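The 12,025 bps figure can be approximated by summing Equation (2) over the six spreading factors; the snippet below assumes a code rate of 4/5 and the low data rate optimization on SF11 and SF12, so it lands near, rather than exactly on, the quoted value:

```python
def lora_bitrate(sf, bw, cr, ldro=False):
    bits = sf - 2 if ldro else sf            # Eq. (2), with the LDRO penalty
    return bits * bw / 2 ** sf * cr

total = sum(lora_bitrate(sf, 125e3, 4 / 5, ldro=sf >= 11)
            for sf in range(7, 13))
print(round(total))   # ~12,012 bps per 125-kHz band, close to 12,025 bps
```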
In the EU frequency band, the set of mandatory channels contains three 125-kHz channels [23], which makes the minimum total capacity of the network 36 kbps. Network operators are free to add more channels (sent to the devices using NewChannelReq commands), thus increasing the capacity of the network.
As the transmission bit rate is dependent on the spreading factor, not all logical channels have the same capacity. In what follows, the load for a logical channel is defined as the time average of the number of LoRa devices trying to send data. This coincides with the natural definition of the load: in optimal conditions, i.e., with a perfect synchronization of the devices, a load of one can be reached, saturating the channel.
Estimation of the Collision Rate
As of the current specification, the devices and the gateways can transmit at any time. There is no listen-before-talk or CSMA mechanism. This makes LoRaWAN very similar to ALOHA [24], albeit, unlike ALOHA, with a variable packet length.
Because of the legal duty-cycle limitations of 1% in the EU region where this analysis took place, 100 devices would have been needed to emulate a load of one, and this number would have grown proportionally to the maximum link load we wanted to test. As there were not this many devices on hand, simulations are used to evaluate LoRaWAN's behavior under load.
A simulator was built to simulate the random process of packet emissions. Five-hundred-thousand packets were simulated for each data point. If the transmission time of two packets overlaps, we consider that a collision happens and that none of the two packets reaches the gateway. The collision rate is the number of packets that collided, divided by the total number of packets sent during the simulation. The channel capacity usage is computed as the amount of data that is successfully transferred during the simulation, divided by the theoretical maximum amount of data that could have been sent in the channel, which is the channel capacity multiplied by the simulation duration. The channel load is as defined in the previous Section 5.2, or equivalently, the sum of the duration of all of the packets sent during the simulation, divided by the duration of the simulation.
The duration of the packets for the different payload sizes was computed using Semtech's LoRa Calculator, for a spreading factor of seven, a bandwidth of 125 kHz, a code rate of 4/5 and six symbols in the preamble.
Assuming the packet arrivals follow a Poisson law and that the payload lengths are uniformly distributed between one and 51 bytes, the expected capacity usage and collision rate depending on the load for one logical channel can be plotted. The result is shown in Figure 10.
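A compact reimplementation of such a simulation is sketched below; the per-packet airtimes are illustrative stand-ins for the LoRa Calculator values rather than exact figures:

```python
import random

def collision_rate(load: float, n_packets: int = 200_000) -> float:
    """Unslotted-ALOHA-style simulation with variable frame lengths."""
    # illustrative airtimes, growing with a payload of 1..51 bytes
    dur = [0.03 + 0.002 * random.randint(1, 51) for _ in range(n_packets)]
    rate = load / (sum(dur) / n_packets)       # arrivals per second
    t, start = 0.0, []
    for _ in range(n_packets):
        t += random.expovariate(rate)          # Poisson arrival process
        start.append(t)
    lost, max_dur = [False] * n_packets, max(dur)
    for i in range(1, n_packets):
        k = i - 1
        while k >= 0 and start[i] - start[k] < max_dur:
            if start[k] + dur[k] > start[i]:   # the two frames overlap in time
                lost[i] = lost[k] = True
            k -= 1
    return sum(lost) / n_packets

print(collision_rate(0.48))   # ~0.6, matching the ~60% loss reported below
```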
The variable packet length does not greatly impact the performance of LoRaWAN, and, all told, the observed behavior is very close to that of pure ALOHA. The maximum capacity usage is 18% of the channel capacity and is reached for a link load of 0.48. However, at this load, around 60% of the packets transmitted are dropped because of collisions.
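For comparison, pure ALOHA with fixed-length frames has a closed-form throughput as a function of the offered load G, and the familiar result is consistent with the simulated 18% peak and the heavy packet loss near peak usage:

$$S = G\,e^{-2G}, \qquad S_{\max} = \frac{1}{2e} \approx 0.184 \ \text{at } G = 0.5$$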
This may be an issue, because if the devices are not using confirmed messages, some messages will be lost (and increasing the number of times each message is sent by the devices is a bad solution, as it will increase the load on the link), and if the devices are using confirmed messages, they will have to retransmit most packets several times, which will in addition impact the battery life of the devices. LoRaWAN confirmed messages sent by the devices must be acknowledged by a packet sent during one of the two receive windows following the transmission, while confirmed messages sent by the gateway will be acknowledged during the next uplink transmission. The acknowledgment is only a flag in the packet header, and the setting of this flag acknowledges the last message received. As such, when using confirmed messages, a new packet should not be sent before the acknowledgment of the previous packet was received; otherwise, it will be impossible to know to which packet the next acknowledgment will be referring.
The drawback of this mechanism is that a confirmed message requires two successive transmissions in order to be successful, thus increasing the collision probability with other messages and the number of retransmissions needed. As above, the probability of success and the link capacity usage when end-devices are sending confirmed messages can be plotted. For this simulation, we consider that the gateway does not send MAC commands to the device, so the acknowledgment message always uses a 13-byte MAC header and no payload. We also take these messages into account in the computation of the load, i.e., when the sum of the durations of all of the messages and their acknowledgments is equal to the duration of the simulation, the value of the load is one. The result is shown in Figure 11 (link capacity usage and packet collision rate for a LoRaWAN network when using confirmed messages; the load is as defined in Section 5.2). As expected, the success rate is significantly lower than without confirmed messages. However, this is a relatively efficient way of implementing this functionality, because two successful transmissions are necessary anyway.
The results show that LoRaWAN is extremely sensitive to the channel load, similar to ALOHA. The solution implemented by usual network protocols, such as 802.11 or cellular networks, to help mitigate this problem is CSMA [25]. In order to ensure the scalability of LoRaWAN, it could be interesting to study the feasibility of implementing a CSMA mechanism in LoRaWAN. A possible issue is the duty cycle limitation that applies to the gateway, which would prevent it from sending messages too often; another is the potential non-transitivity of the channel (i.e., an end-device may or may not be able to "carrier sense" whether another end-device is transmitting to the same gateway). If the current architecture is kept, the CSMA mechanism would have to be controlled by the network server, which would put even more load on it. Additionally, a CSMA mechanism could remove the risk of collision of the acknowledgment for confirmed messages, by making it happen during a contention-free period.
The current LoRaWAN specification does not have any means to enforce quality of service, and thus, it should not be used for critical applications or applications where the delay between the first time at which the device tries to send a message and the time at which it is received is important. Adjusting the number of times a device sends its packets may increase the chance of these packets going through, but it does so at the expense of more collisions with transmissions from other nodes and does not provide any hard guarantee.
LoRaWAN currently uses ISM bands, which have the advantage of being free and not requiring a license. However, these bands are more and more used by LoRaWAN's competitors. Even if LoRa is very resistant to interference, these bands have a finite capacity, and it is not guaranteed that this capacity will be sufficient. Moreover, it is perfectly legal for a malicious individual to emit random LoRa symbols, which will jam LoRa transmissions. Using a licensed frequency band would have the advantage of removing most interference, as well as removing the duty cycle cap, possibly making the implementation of a CSMA mechanism easier.
The Network Server Role
LoRaWAN specifies the behavior of the devices, but not the behavior of the network server. As shown in Section 5.3, it is important to keep the load on the network low, and the network server has to enforce this by sending MAC commands to the devices. However, as this is not part of the specification and as there is no open source reference implementation (as of the writing of this article), it is hard to evaluate whether a network server behaves correctly.
The network server can easily degrade the performance of the network. For instance, it can use the LinkADRReq command to configure the number of times a device will send each data frame. This parameter is advertised as a way to control the quality of service for a device. Setting this parameter to more than one will increase the load on the network, increasing the amount of collisions, and thus, should be done very cautiously.
Moreover, LoRaWAN networks are advertised to be able to handle millions of devices. The network server will be responsible for the optimization of all of these nodes. Even if the event rate in sensor networks is significantly lower than in traditional networks, the performance of the network server should be carefully evaluated by the network operators, to ensure the scaling of the network.
The Gateway Role
The current specification states that the gateway is only a relay. This is linked to the fact that the packets sent by the devices have no destination address (which saves a few bytes) and that there is no association between a device and a gateway. Indeed, as several gateways can receive the same message from a device, only one of them should reply to it. It falls to the network server to choose the best gateway.
The only task that should be handled by the gateways is the timing of the downlink messages. This timing should be accurate so that the device receives the message in its receive window. It is not specified whether the gateways receive a message to send from the server along with the time at which it should be sent or if the gateway sends the message received from the server as soon as it receives it, and it is unclear which solution is implemented in existing gateways. As the round trip time of the backhaul interface of the gateways cannot be controlled, the first solution should be implemented. It would also allow one to synchronize the transmissions of the different gateways, avoiding collisions between them.
In the current specification, each gateway is dedicated to a specific network server, as shown in Figure 1. This means both the gateways and the data collected are "owned" by the entity that runs the only network server. In the future, it would be interesting to extend the function of the gateways so that they can forward the packet to specific network servers, as shown in Figure 12. This may effectively reduce the expense of devices and network deployment.
Conclusions
LoRa is a long-range, low-power telecommunication system for the "Internet of Things". Its physical layer uses the LoRa modulation, a proprietary technology from Semtech, on top of which the LoRaWAN MAC protocol operates.
LoRaWAN is an open standard with the specification available free of charge [23]. This paper gives a comprehensive analysis of the LoRa modulation, including the data rate, frame format, spreading factor, receiver sensitivity, etc. A testbed has been built to experimentally study the network performance, documented in this paper. The results show that the LoRa modulation, thanks to the chirp spread spectrum technique and high receiver sensitivity, offers good resistance to interference. Field tests show that LoRa can offer satisfactory network coverage up to 3 km in a suburban area with dense residential dwellings. The spreading factor has a significant impact on the network coverage, as does the data rate. LoRa is thus well suited to low-power, low-throughput and long-range networks.
This paper has also shown that LoRaWAN is an LPWAN protocol very similar to ALOHA. Its performance thus degrades quickly when the load on the link increases. | 2016-09-10T08:43:00.142Z | 2016-09-01T00:00:00.000 | {
"year": 2016,
"sha1": "7e78f4d96a9c27d0ae6f3685999c3c4470cab1f1",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/16/9/1466/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7e78f4d96a9c27d0ae6f3685999c3c4470cab1f1",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Engineering",
"Computer Science",
"Medicine"
]
} |
246684190 | pes2o/s2orc | v3-fos-license | Social Emotional Learning Competencies in Belize Children: Psychometric Validation Through Exploratory Structural Equation Modeling
In the nation of Belize, and in particular the south side of Belize City, the main metropolitan area of the nation, significant economic disparities have led to child and adolescent exposure to high rates of violent crime, gang activity, unsafe neighborhoods, sexual, and physical violence. Problems associated with poor Social-Emotional Character Development are especially prevalent among boys. Consequently, valid culture-relevant measures are required that identify problematic behavior for policy-based intervention and evaluation of educational programs designed to ameliorate this problem. The present study demonstrates the application of Exploratory Structural Equation Modeling to existing measures through the investigation of structural validity and generalizability of the Social-Emotional and Character Development Scale with a large sample of children from schools in Belize (N = 1,877, Ages 10–13). Exploratory structural equation modeling results demonstrate the original factor correlations were reduced, providing less biased estimates than confirmatory factor analysis (CFA). Moreover, a multi-group Exploratory Structural Equation Modeling analysis illuminates significant differences between latent factor scores of males and females for most factors. Using this newer factor analytic procedure, original factors are reconceptualized to better situate the Social Emotional Character Development Scales into the larger body of Social-Emotional Learning (SEL) competencies literature.
INTRODUCTION
Social-emotional learning (SEL) programs emerged in response to school programs designed to target specific problem youth behaviors such as violence and substance abuse. SEL is defined as "the process through which all young people and adults acquire and apply the knowledge, skills, and attitudes to develop healthy identities, manage emotions and achieve personal and collective goals, feel and show empathy for others, establish and maintain supportive relationships, and make responsible and caring decisions" (CASEL, 2021b).
Instead of focusing on the resulting problem behavior, SEL provides a preventative framework for addressing underlying causes of negative youth behaviors while also supporting academic improvement (Greenberg et al., 2003;Damon, 2004;Weissberg and O'Brien, 2004). Although several frameworks exist in the literature, SEL generally addresses a set of five inter-related cognitive, affective, and behavioral competencies: self-awareness, social awareness, responsible decision making, self-management, and relationship management (Weissberg and O'Brien, 2004;Zins et al., 2004;CASEL, 2017), as described in Table 1.
In an analysis of leading SEL programs, Stephanie Jones said, "We really got to weave in those social and emotional supports early and spend time on it, so kids begin to feel safe, secure, comfortable, excited. And then the learning stuff will happen." With quarantines and with schools opening and closing amid the instability of the recent pandemic years, kids need additional care, whatever SEL is called (CASEL, 2021a).
Although decades of empirical research surrounding the effects of SEL and character development have been published, issues regarding instruments to measure social, emotional, and character development (SECD) skills remain unresolved. In a report issued by the Society for Prevention Research intended to standardize the criteria for identifying prevention programs which have been sufficiently empirically tested, a standard was set to include measures which were psychometrically sound, meaning the measures demonstrate construct validity and reliability (Flay et al., 2005). Greenberg's (2004) suggestions for future research in prevention science called for the construction of easily utilized, valid and reliable assessments of social, emotional, ethical, and health outcomes. More specifically, Greenberg highlighted the need to develop meaningful and easily understood assessments of social and emotional competence. A meta-analysis by Durlak et al. (2011) concluded 24% of the examined empirical studies on SEL programs did not use reliable outcome measures and 50% did not use valid outcome measures. Likewise, Wigelsworth et al. (2010) called for examination of the psychometric properties and application of SEL measures across varying populations and ethnicities. In a systematic review of 187 currently used SEL instruments, Humphrey et al. (2011) concluded the majority of measures have been developed only with American populations and there is little analysis of the applicability of the measures across different groups (e.g., ethnicity, gender). As a further limitation, SEL surveys offer limited evidence of criterion-related validity (Durlak et al., 2011).
Children and Youth in Belize
Belize is a small country of nearly 400,000 inhabitants in Central America, located just south of Mexico's Yucatan peninsula on the Caribbean Sea and bordered to the west and south by Guatemala. The nation, formerly the British colony of British Honduras, gained independence in 1981. In Belize, child neglect is more than twice that of any other country in Latin America and the Caribbean, and Belize ranks among the countries in the region with the highest levels of severe physical discipline (Klevens and Ports, 2017). Acceptance of physical punishment is also common (Cappa and Khan, 2011): more than 50% of households surveyed reported that physical and psychological aggression was part of their disciplinary practices (Beatriz and Salhi, 2019).
In order to address developmental concerns related to child maltreatment and exposure to traumatic childhood experiences, interventions utilizing a positive youth developmental approach in Belize (Hull et al., 2021) utilized the SECDS as a dependent outcome variable to investigate the effects of the intervention.
Exploratory Structural Equation Modeling
Given the limitations of published psychometric evaluations of SEL instruments, there is a need to establish systematic procedures for investigating psychometric properties which also include generalizability, invariance, and criterion-related evidence. Traditional approaches using CFA have been criticized as being too restrictive for more complex multi-faceted constructs (Muthèn and Asparouhov, 2012). An integration of exploratory factor analysis (EFA), CFA, and structural equation modeling (SEM), exploratory structural equation modeling (ESEM) was developed to help alleviate commonly encountered CFA problems associated with goodness of fit, differentiation of factors, measurement invariance across time or groups, and differential item functioning (Asparouhov and Muthèn, 2009; Marsh et al., 2009, 2010). As illustrated in Figure 1, instead of associating each item with only one factor and constraining all other non-target loadings to zero as is typical in the highly restrictive independent clusters model (ICM), ESEM allows for less restrictive models in which all factor loadings are estimated and where items are free to cross-load on other factors within the same set of factors (Asparouhov and Muthèn, 2009; Marsh et al., 2011). Instead of calculating structure coefficients in a separate analysis as authors such as Thompson (1997) demonstrate, ESEM includes the structure coefficient parameter estimation along with the standard errors for the structure coefficients. ESEM retains the capability of rotating factors and of comparing models through model fit statistics. ESEM's more flexible approach to modeling complex structures has been shown to provide better model fit and unbiased inter-factor correlations across a variety of social science measures including personality, motivation, bullying, efficacy, and emotional intelligence scales (e.g., Marsh et al., 2010, 2011; Caro et al., 2014; Guay et al., 2014; Perera, 2015).
The aim of this study is to demonstrate the utility of ESEM to investigate the psychometric properties of a more recently developed SEL instrument in a large Belize schoolage sample following Ji et al. (2013). More specifically, the present study extends the validity literature for SEL measures by investigating the structural validity and generalizability of the Social-Emotional and Character Development Scale (SECDS) by comparing traditional (CFA) and more recently utilized factor analytic tools (ESEM). A demonstration of multi-group and time invariance using ESEM is also provided. In order to achieve both the substantive and methodological purposes of this study, the psychometric investigation serves four purposes: (1) utilize traditional ICM-CFA approach to provide generalizability evidence for use of the SECDS with a Caribbean population; (2) extend the structural evidence of the SECDS through use of ESEM methods; (3) employ ESEM to demonstrate invariance evidence of the SECDS across time and male/female groups; and (4) situate the six SECDS factor constructs into the broader SEL competencies as defined by CASEL (2017).
MATERIALS AND METHODS
Initial psychometric investigation of the SECDS demonstrated structural validity through traditional ICM-CFA model comparisons in a longitudinal sample of U.S. youth from 14 urban elementary schools (Ji et al., 2013). While Ji et al. (2013) demonstrated model fit indices within accepted ranges, the specified models did not account for structure coefficients. The six proposed factors in Ji et al.'s (2013) study exhibit factor correlations as high as 0.74, but as Marsh et al. (2013) propose, misspecification of structural models by not including item cross-loadings can result in upwardly biased inter-factor correlations in ICM-CFA models. The high inter-factor correlations also prevent the SECDS from exhibiting discriminant validity for six distinct factors. Furthermore, in an effort to provide cross-cultural validity evidence for the SECDS and demonstrate the applied use of exploratory structural equation modeling for evaluating SEL data, the present study utilizes data from the developing country of Belize. Situated in Central America and bordered by Mexico, Guatemala, and the Caribbean Sea, Belize has 8,800 square miles of land and a population of 334,060 (United Nations, 2013). With a GDP per capita of $8,900 (2012 U.S. dollars), Belize has the second highest per capita income in Central America; however, 4 out of 10 people still live in poverty.
Sample
Data for the present study were collected from a sample of 24 schools which were randomly selected from the Belize District. At the time of the study, within the Belize District, 54 schools serving primary school students formed the population of available primary schools, including private schools, government schools, and government-aided schools. The sample of 24 schools for the present study was selected using a random number generator in Excel. A full description of random assignment of the sample with details on school demographics is provided in Hull et al. (2018). Students in Standards 4 through 6 (approximate ages 10-13) were administered the SECDS. A total of 1,877 students provided SECD scale data for at least one of two waves of measurement. Of the represented upper elementary students, 36% were Standard 4, 33% were Standard 5, and 31% were Standard 6. The demographics of the students with completed demographic information (n = 1,781) were as follows: 51% male, 49% female; Creole 55%, Mestizo 25%, Garifuna 6%, Maya 2%, and 6% other ethnicity. Students were administered the SECDS at the beginning of the school year, and again at the end of the school year. The data for the present study include only data collected at the beginning of the school year (pre-test).
Measure
Meant to address the need for a multi-dimensional SEL instrument which captures both social and emotional skills, the Social Emotional and Character Development Scale (SECDS) includes 29 Likert scale items designed to assess skills and behaviors with likely relevance to both SEL and character development programs. The six SECDS constructs were intended to capture school-related aspects of the five larger SEL competencies presented in Table 1 (Ji et al., 2013). The SECDS constructs and number of associated items are as follows: Prosocial Behavior (6 items), Self-Control (5 items), Respect at School (5 items), Respect at Home (4 items), Honesty (5 items), and Self-Development (4 items). The SECDS question stem is, "How MUCH OF THE TIME do you do the following things?" Items were rated on a 4-point scale (NONE, SOME, MOST, ALL) and coded such that higher scores indicated higher levels of social-emotional skills and character.
Data Analysis
In order to produce less biased estimates, missing data were handled using multiple imputation (Enders, 2010). Data were considered missing at random (MAR) and 20 item-level imputed datasets were generated at the time of each SEM analysis using MPlus Version 6.12 (Muthèn and Muthèn, 2010). For the purposes of comparing models where the chi-square DIFFTEST function (which does not allow for multiple imputation) was utilized, data were considered MAR and models were estimated using a four-step estimation method which utilizes maximum likelihood estimation for the first two steps (Muthèn and Muthèn, 2010).
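Outside of MPlus, the same item-level strategy can be sketched with scikit-learn; the DataFrame name `df` and the 0-3 response coding are assumptions made for illustration, not details taken from the study:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# `df`: a pandas DataFrame of the 29 item responses, with NaNs marking
# missing answers; a 0-3 coding of the 4-point scale is assumed here.
imputed_sets = []
for m in range(20):                      # 20 imputations, as in the study
    imp = IterativeImputer(sample_posterior=True, random_state=m)
    completed = np.clip(np.rint(imp.fit_transform(df)), 0, 3)
    imputed_sets.append(completed)
```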
MPlus Version 6.12 was used to conduct all CFA and ESEM models. Since responses to the SECDS included ordered categorical data from a 4-point Likert scale, CFAs employed weighted least squares estimation using a diagonal weight matrix with standard errors and a mean and variance adjusted chi-square test statistic using a full weight matrix (WLSMV; Muthèn and Muthèn, 2010). Model fit was evaluated using indices which are adjusted for sample size: Root Mean Square Error of Approximation (RMSEA), Comparative Fit Index (CFI), and the Tucker-Lewis Index (TLI). Criteria for assessing model fit when using categorical data were followed as recommended by Schreiber et al. (2006), where resulting indices falling within recommended ranges are indicators of acceptable model fit: RMSEA 0.06-0.08, CFI 0.90-0.95, and TLI 0.90-0.96. When comparing the fit of nested models, suggestions by Chen (2007) were followed, where a decrease in incremental model fit indices of less than 0.01 (ΔCFI > −0.01) and a RMSEA increase of less than 0.015 (ΔRMSEA < 0.015) support retaining the more parsimonious model. In addition, the Satorra-Bentler scaled chi-square difference (DIFFTEST in MPlus) was used to compare the fit of the hypothesized model to alternative models (Dimitrov, 2010; Muthèn and Muthèn, 2010). A statistically significant DIFFTEST result indicates the more parsimonious (more restrictive) model to be a worse fit for the data (H0 is rejected).
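The Chen (2007) decision rule for nested comparisons reduces to a one-liner (a sketch of the criteria exactly as stated above, not a general-purpose tool):

```python
def retain_parsimonious(delta_cfi: float, delta_rmsea: float) -> bool:
    """Chen (2007): keep the more restrictive model unless incremental
    fit drops by 0.01 or more, or RMSEA rises by 0.015 or more."""
    return delta_cfi > -0.01 and delta_rmsea < 0.015
```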
Phase 1: Generalizability of Structural Validity
CFA was used to evaluate the degree to which the SECDS responses were consistent with the theorized multidimensional, hierarchical conceptualization of social-emotional skills and character. In order to initially test this conceptualization, a hypothesized higher order model and three comparative models were fit to the data (see Figure 2). The hypothesized model included all 29 items assigned to their respective SECD dimension with all six of the dimensions or sub-factors nested within a higher-order SECD factor. The first order factors were not correlated. The first alternative model included all 29 items assigned to a single SECD factor. The second alternative model associated all 29 items with the respective dimensions; however, in lieu of a higher order factor, all factors were specified to correlate. The third alternative model included all items as indicators for a single first order factor.
Phase 2: Extending Structural Validity
In phase two, the factor structure of the SECDS was examined using exploratory structural equation modeling (ESEM). Since previous evaluation of the SECDS indicated some of the SECDS factors were correlated at 0.7 or higher (Ji et al., 2013), the factor structure was examined under an oblique target rotation in which all non-target loadings were targeted toward zero.
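As a rough open-source stand-in for this step (true ESEM with target rotation is native to MPlus), an oblique exploratory factor analysis that estimates all cross-loadings can be run with the factor_analyzer package; the DataFrame `items` holding the 29 SECDS responses is an assumption, and oblimin approximates rather than replicates target rotation:

```python
from factor_analyzer import FactorAnalyzer

# oblimin pulls all loadings toward a simple structure, which loosely
# mimics an oblique target rotation with non-target loadings near zero
fa = FactorAnalyzer(n_factors=6, rotation="oblimin", method="minres")
fa.fit(items)                 # `items`: 29 SECDS item columns (assumed)
pattern = fa.loadings_        # 29 x 6 factor pattern coefficients
factor_corr = fa.phi_         # inter-factor correlations
```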
Phase 3: Generalizability Across Sex and Time
Utilizing the final measurement model retained from Phases One and Two, multi-group factorial invariance and time invariance were assessed using ESEM procedures. Testing factorial invariance followed a sequential constraint imposition procedure comparing a set of partially nested models, ranging from the least restrictive model with no parameters constrained to be invariant to a model with complete factorial invariance with all parameters constrained to be invariant (Dimitrov, 2010; Marsh et al., 2011; Byrne, 2012; Guay et al., 2014). This forward approach to testing factorial invariance provides for examining configural, measurement, and structural invariance. Table 3 provides the taxonomy of the multiple-group confirmatory factor analysis (MGCFA) models included in the factorial invariance analyses. Since the 4-point Likert scale model indicators were considered categorical, the theta parameterization was utilized in order to include uniquenesses as a point of constraint between the two groups. In addition, in lieu of item intercepts, categorical indicators warrant the calculation of item thresholds, i.e., the point at which an individual transitions from a response of 0 to a response of 1 on the categorical outcome.
Similar to testing invariance across groups, the six invariance models can be adapted to evaluate test-retest instrument performance (Marsh et al., 2011). One adaptation is the inclusion of correlated uniquenesses (CU) for the same indicator between time one and time two. Failure to include the correlated uniqueness between the same items in two different testing periods is likely to inflate test-retest correlations (Marsh et al., 2004, 2011); therefore, in addition to the nested time invariance models, a comparison between models estimating CU and not estimating CU was conducted. The DIFFTEST and the ΔCFI (> −0.01) and ΔRMSEA (< 0.015) criteria were used to compare all invariance models (Chen, 2007; Dimitrov, 2010).
Phase 1: Confirmatory Factor Analysis
For the purposes of replicating the construct validity procedures demonstrated by Ji et al. (2013), CFAs comparing the hypothesized higher order model and three comparative models (Figure 2) were fit to the first wave of data. Table 4 presents the model fit indices for the four compared models. While the hypothesized higher order factor model provides reasonably good fit, comparison of model fit indicates Alternative Two, the six-correlated-factor model (ΔCFI = 0.008, ΔRMSEA = −0.008), to be a slightly better fit. The DIFFTEST comparing the hypothesized Higher Order CFA nested within the alternative 6 Correlated Factor CFA suggests the addition of a higher order factor provided a decrement in model fit (H0: Higher Order vs. H1: 6 Correlated Factors; MDχ² = 180.862, df = 9, p < 0.001). Table 2 includes the factor loadings, structure coefficients, and factor correlations for the six-correlated-factors model. The target factor loadings for all factors are substantial (0.51-0.745). However, the structure coefficients for all non-target loadings indicate the factors are not distinct, as is required for the independent cluster model CFA (ICM-CFA) where all non-target cross-loadings are predetermined to be zero. As would be expected, the factor correlations are also high (0.629-0.909), indicating the factors are highly related even though the higher-order factor model does not provide a substantially better fit.
Phase 2: Exploratory Structural Equation Modeling
As emphasized by Marsh et al. (2010, 2011) and Morin et al. (2013), the first step in conducting an ESEM analysis is to test the hypothesis that the ESEM model provides a better fit than the more restrictive a priori ICM-CFA model. Table 4 includes model fit indices for the CFA and ESEM models. As noted in Phase One, the six-factor model provided the most appropriate fit of the ICM-CFA models. Comparison with model fit indices from the six-factor model warrants retention of the less parsimonious ESEM model (ΔCFI = 0.035; Chen, 2007). Additionally, the DIFFTEST indicates the ESEM model fits the responses at least somewhat better (MD χ² = 985.876, df = 115, p < 0.001).
When considering the factor pattern coefficients of the ESEM solution with target rotation (Table 2), the Prosocial Behavior, Respect for Teacher, Respect for Parent, and Self-Development factors show higher coefficients on target loadings (0.229-0.883) with lower loadings on non-target factors. For the Self-Control factor, only two of the target items show their highest factor pattern coefficients on Self-Control: Item 2 (I keep my temper when I have an argument with other kids) and Item 3 (I ignore other children when they tease me or call me bad names). These two items seem to focus on peer relations. The other two target indicators show higher factor patterns on the Respect for Teacher factor: Item 1 (I wait my turn in line patiently) and Item 3 (I follow the rules even when nobody is watching). Both of these items could be associated with school-related tasks. For the Honesty factor, only three of the target items show their highest factor pattern coefficient on the target factor: Item 2 (I tell the truth when I have done something wrong), Item 3 (I tell others the truth), and Item 5 (I admit my mistakes). The other two Honesty target items load higher on other factors. Item 1 (I apologize when I have done something wrong) exhibits a higher pattern coefficient (0.342) on the Self-Control factor, which, as discussed previously, seems to be associated with peer relations. Item 4 (I keep promises I make to others) has a higher pattern coefficient (0.305) on Prosocial Behavior.
When comparing target and non-target loadings of the ICM-CFA and the ESEM models, the profile similarity index (PSI; the correlation between the ICM-CFA loadings, where non-target loadings are constrained to 0, and the ESEM loadings) indicates an overall similarity of 0.698, which illustrates that the factor patterns are somewhat similar. However, when considering only the more distinct Prosocial Behavior, Respect for Teacher, Respect for Parent, and Self-Development factors, the PSI increases to 0.744, indicating higher similarity between loadings after removing the factors with the highest cross-loadings. Examination of the inter-factor correlations indicates a critical advantage of the ESEM model over the ICM-CFA. Although the patterns of loadings are moderately similar, the factor correlations in the ESEM model (−0.024 to 0.433) are much lower than in the ICM-CFA (0.629-0.909). The decrease in factor correlations from the ICM-CFA to the ESEM is indicative of the misspecification introduced by fixing all ICM-CFA non-target loadings to zero, a problem further illustrated by the high ICM-CFA structure coefficients.
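Because the PSI is simply a Pearson correlation between two flattened loading matrices, it can be computed in a few lines. The values below are hypothetical toy numbers for illustration only, not the SECDS loadings.

```python
import numpy as np

def profile_similarity_index(cfa_loadings, esem_loadings):
    """PSI as defined above: the correlation between the full
    (items x factors) loading matrices, flattened, with CFA non-target
    loadings fixed at zero."""
    a = np.asarray(cfa_loadings).ravel()
    b = np.asarray(esem_loadings).ravel()
    return np.corrcoef(a, b)[0, 1]

# Toy 3-item, 2-factor illustration (hypothetical values)
cfa = [[0.70, 0.00],
       [0.65, 0.00],
       [0.00, 0.60]]
esem = [[0.66, 0.12],
        [0.58, 0.25],
        [0.08, 0.55]]
print(round(profile_similarity_index(cfa, esem), 3))
```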
Gender Invariance
Model fit indices for the six gender invariance models are shown in Table 6.
TABLE 5 | Alignment of the SECDS six-factor structure with the CASEL (2013) SEL competencies. Note: the original table marks items with high ESEM cross-loadings in italics; such items appear under more than one factor below.

SECDS factor | SEL competency | Item
Self-control | Self-management-Control | I keep my temper when I have an argument with other kids.
Self-control | Filter negative input | I ignore other children when they tease me or call me bad names.
Self-control | Impulse control | I apologize when I have done something wrong.
Self-control | Regulate emotions and behavior | I play nicely with others.
Pro-social | Peer relationship management and social awareness | I play nicely with others.
Pro-social | Builds relationships | I do things that are good for the group.
Pro-social | Relationships with diverse individuals | I treat my friends the way I like to be treated.
Pro-social | Working cooperatively | I am nice to kids who are different from me.
Pro-social | Respect for others | I try to cheer up other kids if they are feeling sad.
Pro-social | Empathy and perspective taking | I am a good friend to others.
Pro-social | Appreciating diversity | I think about how others feel.
Pro-social | | I keep promises I make to others.
Respect for teacher | Responsible decision making | I speak politely to my teacher.
Respect for teacher | Respectful choices | I obey my teacher.
Respect for teacher | Obey and follow rules | I follow the directions of my teacher.
Respect for teacher | | I listen (without interrupting) to my teacher.
Respect for teacher | | I follow school rules.
Respect for teacher | | I wait my turn in line patiently.
Respect for teacher | | I follow the rules even when nobody is watching.
Respect for parents | Adult relationship management | I speak politely to my parents.
Respect for parents | Respect for others | I obey my parents.
Respect for parents | | I listen (without interrupting) to my parents.
Respect for parents | | I follow the rules at home.
Respect for parents | | I speak politely to my teacher.
Honesty | Moral and ethical decision making | I apologize when I have done something wrong.
Honesty | Moral and ethical responsibility | I tell the truth when I have done something wrong.
Honesty | Evaluation and reflection | I tell others the truth.
Honesty | | I admit my mistakes.
Self-development | Self-management-Improvement | I make myself a better person.
Self-development | Goal setting | I keep trying at something until I succeed.
Self-development | Self-motivation | I set goals for myself (make plans for the future).
Self-development | Improving self | I try to be my best.
Strong Measurement Invariance: Model 2 vs. Model 3
Strong measurement invariance is determined by comparing a model in which, in addition to the pattern coefficients being constrained, the item thresholds are estimated freely (Model 2) with a model in which the item thresholds are constrained to be equal across groups (Model 3). Comparisons between Model 2 and Model 3 support retention of the more parsimonious Model 3 (ΔCFI = −0.001, ΔRMSEA < 0.001). When considering the DIFFTEST and testing at an alpha of 0.01, as is appropriate when dealing with large sample sizes, the more constrained model is not considered a decrement in model fit (MD χ² = 77.233, df = 52, p = 0.013). Support for the more constrained Model 3 provides evidence for a lack of differential item functioning, or strong measurement invariance, which justifies comparison of the latent means across gender.
Strict Measurement Invariance: Model 3 vs. Model 4
Strict measurement invariance is determined by comparing Model 3, where the indicator uniquenesses are freely estimated across groups, with Model 4, where the uniquenesses are constrained to be equal. Comparisons between Model 3 and Model 4 support retention of the more restrictive Model 4 (ΔCFI < 0.001, ΔRMSEA = −0.001). Likewise, the DIFFTEST supports retention of the more constrained Model 4 (MD χ² = 48.685, df = 29, p = 0.013). Support for strict measurement invariance indicates that measurement error is similar across groups and that manifest scores can therefore be reasonably compared.
Factor Variance-Covariance Invariance: Model 4 vs. Model 5
Factor variance-covariance (FVCV) invariance is determined by comparing Model 4, where the FVCV is freely estimated across groups, to Model 5, where the FVCV is constrained to be equal (Guay et al., 2014). Comparisons between Model 4 and Model 5 provide evidence for retaining the more parsimonious constrained Model 5 (ΔCFI = 0.008, ΔRMSEA = −0.008). The DIFFTEST also provides evidence for adopting the more constrained Model 5 (MD χ² = 24.585, df = 21, p = 0.266). Determining FVCV invariance across groups is important for being able to compare correlations between the SECDS and other criteria measures. Based on the evidence of FVCV invariance, comparison of correlations between SECDS manifest variables and other criteria measures is warranted.
Latent Factor Mean Comparison Across Gender: Model 5 vs. Model 6
Invariance across latent means can be determined by comparing Model 5, where the FVCV, thresholds, uniquenesses, and pattern coefficients are constrained but the latent factor means are freely estimated, to Model 6, where all elements are constrained to be equal across groups. Comparison of the model fit indices supports retention of the less parsimonious Model 5 (ΔCFI = −0.022, ΔRMSEA = 0.017). In other words, constraining the latent means to be equal across groups resulted in decreased model fit. Retention of Model 5, where latent factor means are freely estimated, provides evidence for gender differences between the latent means. Since previous multi-group model comparisons provided evidence for strong measurement invariance, the differences indicate latent means vary systematically between boys and girls. Table 7 includes latent means for boys expressed in SD units from girls' means. When compared to the girls' means, which are set at 0 for identification purposes, the boys' means are statistically significantly lower on all factors with the exception of Respect for Parent. The greatest difference in means between girls and boys occurs on the Self-Development factor, where the boys' mean is 0.522 standard deviations lower than the girls' mean (M = −0.522, SE = 0.065, p < 0.001). The Respect for Parent factor showed the smallest gender-based difference (M = −0.108, SE = 0.06, p = 0.069).
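These latent mean comparisons reduce to Wald z-tests of each boys' mean (expressed in SD units) against the girls' reference value of zero. A minimal sketch using the two reported estimates follows; the small discrepancy from the published p = 0.069 for Respect for Parent presumably reflects rounding of the standard error.

```python
from math import erf, sqrt

def wald_z(mean_diff, se):
    """Two-tailed z-test of a latent mean difference against 0
    (girls' means fixed at 0 for identification)."""
    z = mean_diff / se
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))
    return round(z, 2), round(p, 4)

print(wald_z(-0.522, 0.065))  # Self-Development: z ~ -8.03, p < 0.001
print(wald_z(-0.108, 0.060))  # Respect for Parent: z ~ -1.80, p ~ 0.072
```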
Time Invariance
In order to evaluate the potential impact of omitting correlated uniquenesses between time periods, two configural models were compared. Model 1 included estimating the correlated uniquenesses, while Model 1a did not. Comparisons of the model fit indices shown in Table 8 indicate that, although model fit does not decrease substantially (Chen, 2007), the RMSEA confidence intervals do not overlap, which suggests there are indeed at least some identifiable differences between the two models. Table 9 compares factor correlations in Models 1a and 1. Although there appears to be no systematic decrease in factor correlations across all factors, the mean of all correlations does decrease slightly (M = 0.330, SD = 0.287 vs. M = 0.266, SD = 0.213), and the factor correlations differ greatly in some comparisons. For example, under Model 1a the test-re-test correlation for Respect Teacher is 0.590, while under Model 1 it is only 0.121. Because of the potential impact on future test-re-test analyses, the a priori correlated uniquenesses were included in all further time invariance models, even though these additional parameters increase model complexity. Similar to the protocol for testing multigroup invariance, the time invariance models evaluate the stability of components over waves of data instead of groups. Model fit indices for the time invariance models are shown in Table 8. Weak factorial invariance is evidenced by comparison of fit indices for Models 1 and 2. Comparison of Models 2 and 3 provides evidence of strong measurement invariance, which in turn justifies comparison of latent means over time. Strict measurement invariance, where uniqueness is held constant, is demonstrated by the Model 3 and 4 comparison. Invariance of the factor variance-covariance matrix is supported by the Model 4 and 5 comparison. Comparison of Model 5, where latent means are freely estimated, vs. Model 6, where latent means are constrained to be equal, indicates the more parsimonious constrained model provides an equivalent fit to the data. This can be further interpreted to indicate factor means do not differ systematically over time. It is interesting to note that the DIFFTEST probability values indicated differences between all model comparisons except when comparing Models 2 and 3 (MD χ² = 76.772, df = 52, p = 0.014). However, evaluation of the RMSEA CIs between models shows clear overlap, and in the instance of Models 2 and 3, complete overlap. In the absence of any published simulation studies investigating the sensitivity of DIFFTEST, it is assumed the discrepancy between interpretations based on model fit indices and DIFFTEST significance could be attributed to the large sample size.
DISCUSSION
In the present study, the validity of the SECDS was examined through a three-phase investigation. Phase I examined the structural evidence of construct validity by replicating the CFA procedures demonstrated by Ji et al. (2013). Phase II extended the structural evidence of construct validity by examining the SECDS measurement model under the ESEM framework. Phase III sought to extend the generalizability evidence of the SECDS construct validity through multi-group and time invariance ESEM models. In Phase 1, the structural model demonstrated by Ji et al. (2013) seemed to fit the Belize sample data. Although the hypothesized higher-order factor model met acceptable fit standards where model fit indices are concerned, the Belize data were slightly better fitted by the six-correlated-factor model. Since recent SEL and Character Development reviews call for instruments which measure multiple distinguishable facets of the SEL constructs, retention and further examination of the six-factor model were substantively warranted (Wigelsworth et al., 2010; Humphrey et al., 2011). Similar to Ji et al.'s (2013) findings, examination of the ICM-CFA six-factor structure revealed high factor correlations as well as high structure coefficients. As Asparouhov and Muthén (2009), Marsh et al. (2010), Morin et al. (2013), and others point out, misspecification of non-target zero loadings in ICM-CFA models can lead to overinflation of factor correlations, which in turn can lead to biased estimates in subsequently examined SEM models. In addition, high factor correlations are indicative of low discriminant validity, rendering the SECDS factors virtually indistinguishable as separate constructs. The ICM-CFA's high factor correlations and high structure coefficients provide substantive cause for further investigation of the SECDS under the ESEM framework.
In Phase 2, the structural evidence of construct validity was extended through evaluation of the SECDS under the ESEM framework. Consistent with demonstrations in recently published ESEM literature, the ESEM six-factor structure of the SECDS provided a slightly better fit and markedly lower inter-factor correlations (Marsh et al., 2011; Guay et al., 2014). Substantively speaking, the reduction in factor correlations greatly improves the viability of the SECDS by helping distinguish between factors associated with different SEL programming components. While in many instances the factor loadings show similar patterning to the ICM-CFA loadings, the ESEM model allowed for the expression of some very notable cross-loadings.
In addition to the methodological advantages of the ESEM model, inclusion of non-target loadings indicates the need for a substantive change in how the SECDS factors are defined. Table 5 shows the alignment of the SECDS six-factor structure with the generalized SEL competencies as defined by CASEL (2013). As noted in the table, the items in italics are those with high cross-loadings discovered through the ESEM model and were not included in the original SECDS structural configuration. The SEL competencies of Social Awareness, Responsible Decision Making, Self-Management, and Relationship Management seem to be reflected in the manifestation of the original SECDS factors when considering the prominent cross-loadings among the SECDS factors. As such, the SECDS factors could be reinterpreted or redefined to reflect core SEL competencies.
Self-Management Competency
The Self-Management competency appears to manifest in the SECDS as having two facets: Self-Improvement and Self-Control. The SECDS Self-Development factor aligns well with the SEL Self-Management-Improvement facet, encompassing goal setting, motivation, and improvement of self. No additional indicators loaded heavily on the SECDS Self-Development construct, which would seem to indicate a certain degree of discriminant validity. Two items from the SECDS Self-Control factor, along with high cross-loading items from Honesty and Pro-Social, are relatively analogous to the SEL Self-Management-Control facet in that the indicators involve regulating emotions, filtering negative input, and impulse control.
Decision Making Competency
Instead of retaining only a single SEL Decision Making competency, evaluation of the items loading on Respect for Teacher and Honesty seems to key in on two facets: Rule-Following and Morality. The SECDS Honesty factor aligns with the SEL Decision Making competency, but more specifically concerns moral and ethical decision making, or Responsible Decision Making-Morality. Items which loaded on the original SECDS Respect Teacher factor congregate around the theme of following rules and making respectful choices, or rather Responsible Decision Making-Rule-Following.
Relationship Management Competency
Similarly, instead of a single SEL Relationship Management competency, the cross-loadings on the SECDS Pro-Social and Respect Parents factors provide for interpretation of separate facets: Peer and Adult. The high cross-loadings of Respect Teacher indicators on the Respect Parent items point specifically to a Relationship Management-Adult facet, as the indicators pertain to interactions with "parents" and "teachers." The highly loaded items on the SECDS Pro-Social factor, by contrast, are specific to a Relationship Management-Peer facet, since the indicators relate to "friends" and "kids." Likewise, the SECDS Pro-Social items seem to reflect characteristics associated with the SEL Social Awareness competency, implying a distinct association of social awareness with peer interactions.
Considering the re-conceptualization of the SECDS factor structure under the ESEM framework, the six-factor structure can be considered to fit more generally into the larger conceptualization of the SEL competencies while also retaining its applicability to the specific Positive Action program components (Zins et al., 2004; Positive Action, 2013). Retaining the original six factors, yet redefining them in light of the ESEM findings, increases the utility of the SECDS and helps meet a noted need in the SEL literature for instruments designed to measure unified concepts across multiple programs (Humphrey et al., 2011). Further psychometric investigation could justify applicability of the SECDS data for its original purpose of capturing six factors associated with school-related characteristics tied to a particular program, or for responses to be recalculated to make scores more relatable to the broader SEL competencies. In other words, these preliminary analyses point to the potential for a dual-purpose, flexible factor structure, depending on the need either to relate to units of a specific program or to relate to the broader definition of SEL competencies.
Phase 3 extended the generalizability evidence of the SECDS over time and gender. The series of ESEM models examining the invariance of components across gender indicates the SECDS held up to strict measurement invariance as well as factor variance-covariance invariance. As a result, the latent mean differences discovered in the final model comparison can be interpreted as systematic differences in the latent mean scores of boys and girls. Similar results, where males exhibit lower SEL and Character Development manifest mean scores, have been noted by other authors (e.g., Taylor et al., 2002; Endrulat et al., 2010).
The occurrence of varied gender-based latent mean differences on the six factors provides additional evidence of the discriminant validity afforded by examining the SECDS under the ESEM framework. By contrast, under the ICM-CFA model, with its high correlations between factors, variations in the latent mean differences for the different SECDS factors would likely not be noticed, since the high correlations render the factors essentially identical mathematically. Being able to detect the variation in gender-based latent mean differences across constructs is an additional benefit of examining the SECDS under the ESEM framework. Following a similar protocol to evaluating group differences, the time invariance models demonstrate the SECDS to exhibit strict invariance across time, in addition to indicating there are no systematic latent mean differences between time one and time two.
CONCLUSION
The SECDS exhibits structural and generalizability evidence of construct validity when examined under the ESEM framework. While the initial higher order SECD factor with six secondary factors provided acceptable fit to the Belize sample data, the ESEM six factor structure provided both substantive and methodological advantages. The ESEM six-factor structure decreased the high factor correlations as seen under the ICM-CFA model and allowed for the expression of high cross-factor loadings. The lower factor correlations provide at least some level of discriminant validity, which renders the six factors usable in larger SEM models designed to compare the SEL facets to other purported criteria-related constructs. Interpretation of the SECDS factors under the ESEM framework allows for fitting of the SECDS into the larger body of SEL literature. In addition, the ESEM SECDS six-factor structure exhibits generalizability evidence over both gender and time.
While evaluation of the SECDS under the ESEM framework poses significant substantive advantages and exhibits structural and generalizability evidence of construct validity, this initial investigation utilizing a Belizean sample does not warrant cessation of further examination of the SECDS under the ICM-CFA framework. Instead, the current findings demonstrate the need to expand the construct validation of the SECDS and other similar SEL instruments to include evaluation under both ICM-CFA and ESEM frameworks. As shown with the SECDS, examination under the more flexible ESEM framework could allow previously developed SEL instruments to be redefined or expanded to include the more generally accepted SEL competency constructs.
LIMITATIONS AND FUTURE WORK
Because ESEM is a more recently utilized method in the construct validity literature, the methodological limitations surrounding its use are numerous. One of the more obvious areas for future work in comparing ESEM models is further investigation of best practice concerning model fit indices. For example, while previous studies have established general guidelines for comparison of model fit indices for nested models with continuous indicators, no published literature establishes guidelines for use of model fit comparisons in models with categorical indicators. In addition, no model fit indices have been developed for comparison over multiply imputed datasets. Other limitations include the lack of capability in the Mplus software to evaluate ESEM measurement models under multilevel designs or to include the ESEM measurement model in higher-order factor models.
The present investigation examined the structure of the SECDS under the ESEM framework using only data gathered from a sample of Belizean children ages 9-13; therefore, the results cannot be generalized to other populations. The currently assessed self-report SECDS version could also be impacted by students engaging in socially desirable response patterns. A multigroup analysis evaluating model fit over both Belizean and U.S. samples should be conducted under the ESEM framework. In addition, further investigation surrounding the SECDS's discriminant validity is needed. For example, an ESEM-MTMM as outlined by Morin et al. (2013) would further elucidate the differences between SECDS factors and other related constructs, as called for by Wigelsworth et al. (2010). Since the SECDS also includes a yet unexamined teacher-report version, efforts should be made to establish the SECDS as a multiple-reporter, cross-validated instrument, another need noted in Wigelsworth et al.'s (2010) review of current SEL measures. Although the SECDS has been subjected to brief evaluation of reliability under classical test theory applications, no published literature has included an examination of SECDS indicators' performance under modern test theory using a structural equation modeling framework (e.g., IRT applications). Since SEL instruments seek to measure levels of SEL construct competencies across all levels (as opposed to establishing a cutoff score), it is important to take IRT indicator performance into consideration when establishing reliabilities, instead of relying solely on the omnibus alpha coefficient.
AUTHOR'S NOTE
This study is registered at ClinicalTrials.gov NCT03026335.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author/s.
ETHICS STATEMENT
Ethical approval was obtained from the second author's institutional review board at the University of North Texas and from the Ministry of Education in Belize prior to the start of the study. All protocols were followed in accordance with the ethical proposals submitted to and approved by the two review agencies. Written informed consent from the participants' legal guardian/next of kin was not required to participate in this study, in accordance with the national legislation and the institutional requirements.
AUTHOR CONTRIBUTIONS
KH contributed to data analysis and initial manuscript draft. DH contributed to research design, data collection, conceptualization, and manuscript editing. EN-H contributed to manuscript development on policy implications and facilitation of data collection. MM contributed to final editing and revisions. All authors contributed to the article and approved the submitted version.
"year": 2021,
"sha1": "4bad999ad23d486254d93f8d85300553c2bafd90",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyg.2021.770501/pdf",
"oa_status": "GOLD",
"pdf_src": "Frontier",
"pdf_hash": "4bad999ad23d486254d93f8d85300553c2bafd90",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Hunting for the cause: Evidence for prion-like mechanisms in Huntington's disease
The hypothesis that pathogenic protein aggregates associated with neurodegenerative diseases spread from cell-to-cell in the brain in a manner akin to infectious prions has gained substantial momentum due to an explosion of research in the past 10–15 years. Here, we review current evidence supporting the existence of prion-like mechanisms in Huntington’s disease (HD), an autosomal dominant neurodegenerative disease caused by expansion of a CAG repeat tract in exon 1 of the huntingtin (HTT) gene. We summarize information gained from human studies and in vivo and in vitro models of HD that strongly support prion-like features of the mutant HTT (mHTT) protein, including potential involvement of molecular features of mHTT seeds, synaptic structures and connectivity, endocytic and exocytic mechanisms, tunneling nanotubes, and nonneuronal cells in mHTT propagation in the brain. We discuss mechanisms by which mHTT aggregate spreading and neurotoxicity could be causally linked and the potential benefits of targeting prion-like mechanisms in the search for new disease-modifying therapies for HD and other fatal neurodegenerative diseases.
Introduction
Huntington's disease (HD) is a rare monogenic neurodegenerative disease characterized by motor, cognitive, and psychiatric deficits that typically develop in patients 30-50 years old and progress until death 10-15 years after clinical symptom onset. HD belongs to a family of nine dominantly-inherited neurodegenerative disorders collectively known as polyglutamine (polyQ) diseases, each caused by expansion of a CAG triplet repeat region that encodes a polyQ tract in a specific gene. HD is caused by expansion of a CAG repeat region in exon 1 of the huntingtin (HTT) gene located on chromosome 4 beyond a pathogenic threshold of at least 37 CAGs (Figure 1A), with inheritance of 40 or more CAGs in this stretch associated with 100% disease penetrance (Ross and Tabrizi, 2011; Bates et al., 2015; Saudou and Humbert, 2016). HD exhibits genetic anticipation due to increased instability of expanded CAG repeats, and there exists a strong inverse relationship between CAG repeat length and age of symptom onset, with inheritance of >60 CAG repeats associated with highly-aggressive, juvenile-onset HD (Duyao et al., 1993; Wexler, 2004). Treatments currently available to HD patients can temporarily relieve motor or psychiatric symptoms, but effective disease-modifying therapies have yet to be developed. Although HD is caused by inheritance of at least one mutant HTT allele, additional genetic factors that modify HD age-of-onset and severity are emerging (Gusella et al., 1983; Lee et al., 2015; Genetic Modifiers of Huntington's Disease Consortium, 2019) and are being explored as potential therapeutic targets.
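As a plain illustration of the repeat-length thresholds just described (and emphatically not a clinical tool), the bands can be encoded as follows; the handling of the 37-39 incomplete-penetrance range and the 40-60 adult-onset band is our simplification of the cited ranges.

```python
def classify_cag_repeats(n_cag: int) -> str:
    """Simplified banding of HTT exon 1 CAG repeat lengths based on the
    thresholds cited above; boundary handling is an illustrative
    assumption, not diagnostic guidance."""
    if n_cag < 37:
        return "non-pathogenic (wtHTT range)"
    if n_cag < 40:
        return "pathogenic threshold range (incomplete penetrance)"
    if n_cag <= 60:
        return "fully penetrant adult-onset HD"
    return "highly aggressive, juvenile-onset HD (>60 CAGs)"

for n in (22, 38, 45, 72):
    print(n, "->", classify_cag_repeats(n))
```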
The mammalian HTT protein is large (>3,000 amino acid residues and ∼350 kDa; Figure 1A) and ubiquitously expressed, with highest levels detected in the central nervous system (CNS) and testes (Li et al., 1993; Guo et al., 2003). The CAG repeat expansion mutation that causes HD leads to expression of mutant HTT (mHTT) proteins containing expanded N-terminal polyQ tracts, which directly cause mHTT misfolding. Misfolded mHTT proteins accumulate in amyloid aggregates and appear as intranuclear inclusion bodies that are a defining diagnostic feature in HD brains. HTT misfolding and aggregation disrupt many cellular processes, and there is substantial evidence to suggest that mHTT oligomers and/or insoluble fibrils directly cause neurodegeneration in HD (Takahashi et al., 2008; Lajoie and Snapp, 2010; Leitman et al., 2013; Ramdzan et al., 2017). HTT knockout is embryonic lethal (Zeitlin et al., 1995), and conditional knockout leads to progressive neurodegenerative phenotypes in adult mice (Dragatsis et al., 2000), suggesting that HTT expression is required for normal development of the CNS. The majority of the large HTT protein consists of ten regions containing 36 HEAT repeats, named for HTT, Elongation factor 3, protein phosphatase 2A, and TOR1 (Figure 1A). Each HEAT repeat is ∼40 residues long and has a characteristic structure of two helical domains flanking a non-helical region. HEAT repeat-containing proteins adopt an overall solenoid-like structure that can accommodate dynamic interactions with many different proteins, suggesting primary functions as scaffolding proteins (Kobe et al., 1999). Compelling evidence points to a role for wild-type HTT (wtHTT) as a scaffold in selective autophagy (Ochaba et al., 2014; Rui et al., 2015), possibly accounting for accumulation of cytotoxic material due to loss of normal HTT function in HD. HTT has also been reported to play roles in transcriptional regulation, intracellular signal transduction, endocytosis, vesicle transport, and apoptotic signaling pathways by binding to over 350 different proteins (Saudou and Humbert, 2016). At least 45 amino acids, including >20 residues within the N-terminal domain encoded by exon 1, can be post-translationally modified by phosphorylation, acetylation, ubiquitination, sumoylation, fatty acylation, and/or proteolytic cleavage, suggesting complex and dynamic regulation of HTT functions in the cell (Saudou and Humbert, 2016). mHTT-induced neurodegeneration in HD likely involves loss-of-function phenotypes associated with HTT protein misfolding and sequestration in insoluble aggregates, and gain-of-toxic functions caused by dysregulation of protein-protein interactions or post-translational modifications, and/or disruption of key pathways that regulate cell homeostasis and survival (Ratovitski et al., 2012).
HTT proteins undergo post-translational processing by caspases, calpains, cathepsins, and matrix metalloproteinases; this processing dramatically alters HTT function and/or subcellular localization (Figure 1A) (Weber et al., 2014). N-terminal fragments of mHTT generated by caspase cleavage (Graham et al., 2006) or aberrant splicing (Sathasivam et al., 2013; Neueder et al., 2017) are highly prone to oligomerizing, are cytotoxic, and have been identified in insoluble protein aggregates in in vivo HD models and HD patient brains, suggesting an active role for these fragments in disease pathogenesis (Myers et al., 1991; DiFiglia et al., 1997). Expression of, or exposure to, aggregated N-terminal HTT fragments encoded by exon 1 (HTT ex1) or caspase-6 cleavage products (HTT ex1−12) is sufficient to induce neurotoxicity in cell, invertebrate, and vertebrate models (Wellington et al., 2000; Lunkes et al., 2002; Warby et al., 2008; Landles et al., 2010; El-Daher et al., 2015). Animal models expressing these N-terminal mHTT fragments recapitulate many of the behavioral and cellular pathologies seen in HD patients and have been invaluable for expanding our understanding of the molecular mechanisms that drive HD pathogenesis.
Pathological aggregation of mutant huntingtin
HTT protein aggregation follows an amyloid nucleation mechanism, with oligomeric "seeds" that form during an extended lag phase, followed by rapid recruitment of HTT monomers into insoluble, β-sheet-rich fibrils during an exponential growth phase (Figure 1B) (Wetzel, 2020). The kinetics of mHTT amyloid formation are highly dependent on the length of the polyQ tract and flanking sequences contained within HTT ex1, especially the 17 residues N-terminal to the polyQ tract (Cooper et al., 1998; Thakur et al., 2009; Lakhani et al., 2010). The length of the aggregation lag phase can be substantially shortened by the addition of preformed HTT "seeds" or through secondary nucleation events, suggesting that mHTT aggregation is propagated through templated conversion of soluble monomers (Figure 1B). As for most neurodegenerative diseases, the identity of the toxic aggregate species in HD remains largely elusive. However, emerging evidence points to a strong correlation between soluble oligomers and increased pathogenicity (Takahashi et al., 2008; Leitman et al., 2013; Kim et al., 2016). Furthermore, formation of insoluble mHTT inclusions may actually serve a neuroprotective role by sequestering toxic mHTT proteins from key cell survival pathways (Arrasate et al., 2004; Slow et al., 2005).

FIGURE 1 | HTT structure and aggregation mechanism. (A) Primary protein structure of full-length human HTT highlighting the N-terminal polyQ tract encoded by exon 1, calpain (clp) and caspase (casp) 3 and 6 cleavage sites, and 7 regions containing 36 HEAT repeats (HR). PolyQ tract lengths associated with wtHTT (n ≤ 36) or mHTT (n ≥ 37) proteins are indicated by green and red, respectively, in and below the protein structure. (B) mHTT aggregation occurs via nucleated growth polymerization. wtHTT proteins achieve their native, functional fold, whereas expanded polyQ tracts cause mHTT proteins to misfold and stabilize once a critical nucleus is achieved. This rate-limiting step is followed by rapid addition of mHTT monomers via templated misfolding to form soluble oligomers and, ultimately, insoluble, β-sheet-rich amyloid fibrils. Prion-like conversion of wtHTT also occurs via templated conformational stabilization of natively-folded wtHTT proteins by mHTT aggregate seeds.
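The nucleation-elongation scheme in Figure 1B can be made concrete with a deliberately minimal toy simulation in the spirit of a two-step Finke-Watzky model (spontaneous nucleation plus autocatalytic elongation). All rate constants and concentrations below are arbitrary illustrative values rather than measured HTT parameters; the point is only to show that preformed seeds shorten the lag phase.

```python
import numpy as np

def nucleated_growth(a0=1.0, seeds=0.0, k_nuc=1e-6, k_grow=5.0,
                     dt=0.001, t_max=10.0):
    """Toy two-step aggregation model: spontaneous nucleation (A -> B,
    rate k_nuc) plus autocatalytic elongation (A + B -> 2B, rate k_grow).
    Returns time points and the aggregated fraction B/a0."""
    n = int(t_max / dt)
    t = np.arange(n) * dt
    b = np.zeros(n)
    b[0] = seeds
    for i in range(1, n):
        a = a0 - b[i - 1]  # remaining soluble monomer
        b[i] = min(b[i - 1] + (k_nuc * a + k_grow * a * b[i - 1]) * dt, a0)
    return t, b / a0

t, unseeded = nucleated_growth(seeds=0.0)
_, seeded = nucleated_growth(seeds=0.01)  # 1% preformed seeds
half = lambda y: t[np.argmax(y >= 0.5)]
print(f"t50 unseeded: {half(unseeded):.2f}, t50 seeded: {half(seeded):.2f}")
```

Running this sketch, the half-aggregation time drops roughly fourfold when 1% preformed seeds are supplied, mirroring the seeded-aggregation behavior described above.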
Despite ubiquitous expression of HTT, there is striking regional development of neuropathology in the brains of HD patients. HD pathology is most prominent in the basal ganglia, where GABAergic medium spiny neurons (MSNs) in the striatum undergo massive degeneration, followed by degeneration of neurons in the cortex and other brain regions (Plotkin and Surmeier, 2015; McColgan et al., 2017). Intriguingly, striatal interneurons remain relatively spared as HD progresses, suggesting an intrinsic susceptibility of MSNs to mHTT-induced toxicity (Galvan et al., 2012; Morigaki and Goto, 2017). This selective vulnerability of MSNs likely results from a combination of mHTT-induced cell autonomous and non-cell autonomous toxicity, especially considering that mHTT expression is lower in the striatum than in brain regions that experience less degeneration in HD (Gourfinkel-An et al., 1997). In the striatum, mHTT expression and aggregation are associated with reduced brain-derived neurotrophic factor (BDNF) signaling and glutamate excitotoxicity, leading to loss of neurotrophic support and cortical input to MSNs (Deyts et al., 2009; McColgan and Tabrizi, 2018). Cell type-specific gene expression signatures and proteostasis regulation may also underlie the enhanced susceptibility of MSNs and other neuronal populations in HD. In support of this, MSNs express high levels of the GTPase and SUMO E3-like protein Rhes, which binds and sumoylates mHTT and is associated with increased cell death (Subramaniam et al., 2009). Thus, gene expression profiles and cell communication pathways that are differentially regulated in the brain may contribute to selective vulnerability of striatal and cortical neurons to HD pathogenesis.
Prion-like transmission in neurodegenerative diseases
Deposition of amyloid aggregates in the brain is a pathological hallmark of all age-related neurodegenerative diseases, including Alzheimer's disease (AD), Parkinson's disease (PD), frontotemporal dementia (FTD), amyotrophic lateral sclerosis (ALS), and polyQ disorders such as HD and spinocerebellar ataxias. Misfolded proteins that form the core of these insoluble deposits are unique to each disease and share no obvious homology with one another, but aggregation in each case occurs by propagation of a disease-associated, amyloidogenic protein conformation via a self-templating mechanism (Dobson et al., 2020), similar to the process illustrated for HTT in Figure 1B. Initially, it was thought that templated aggregation occurs between homotypic molecules within individual cells; however, abundant evidence now strongly supports a unifying mechanism whereby pathogenic protein assemblies transmit the aggregated state between cells in a manner similar to infectious prions (Brettschneider et al., 2015; Jucker and Walker, 2018) (Figures 1B, 2). Prions are protein-only infectious agents that form due to a conformational change in the cellular prion protein (PrP C) from its native state to the more stable scrapie form (PrP Sc) and cause "prion diseases," rare but fatal neurodegenerative disorders that include scrapie in sheep, chronic wasting disease in deer, bovine spongiform encephalopathy in cows, and Creutzfeldt-Jakob disease in humans (Prusiner, 2013). Most cases of prion disease occur sporadically, but PrP Sc molecules can also be acquired (e.g., through ingestion of prion-containing material) or stabilized by autosomal dominant mutations in the PRNP gene (Colby and Prusiner, 2011; Peggion et al., 2020). In all cases, prions spread by nucleated aggregation of PrP C monomers by PrP Sc "seeds" originating in other cells or even other organisms (Colby and Prusiner, 2011; Prusiner, 2013).
The idea that protein assemblies associated with neurodegenerative disorders have "prion-like" properties, i.e., nucleated aggregation of pathological proteins within and between cells, provides a convincing mechanism for the spatiotemporal patterns of aggregate pathology observed in AD, PD, and ALS patient brains (Braak and Braak, 1991; Braak et al., 2003; Brettschneider et al., 2013) and for the accelerated development of aggregate pathology in fetal neuron grafts transplanted into the brains of PD and HD patients (Kordower et al., 2008; Li et al., 2008; Cicchetti et al., 2014). Studies focused on expanding our understanding of the molecular mechanisms underlying prion-like propagation of pathogenic protein aggregates formed by mHTT, tau, α-synuclein, TDP-43, and SOD1, and how aggregate spread is causally linked to neuronal loss, will lend insight into new therapeutic strategies that can impede progression of these fatal neurodegenerative diseases.
Evidence for prion-like transmission in Huntington's disease
Vast experimental evidence from the last ∼15 years supports the prion-like hypothesis for nearly all neurodegenerative diseases, but how aggregate spreading relates to neuropathogenesis in inherited disorders like HD, other polyQ diseases, and familial forms of AD, FTD, PD, and ALS is still unclear. As noted earlier, selective vulnerability of certain neuron populations to protein aggregate pathology is a defining feature of all neurodegenerative diseases, despite widespread expression of pathological proteins in the brain (Fu et al., 2018). Protein aggregate burden is strongly correlated with loss of neurons in affected brain regions, but there are exceptions to this rule, suggesting that multiple pathogenic mechanisms are at play. For example, in HD, the striatum is not the predominant site for either mHTT expression or inclusion formation, and thus mHTT protein enrichment alone cannot account for the selective degeneration of MSNs in the brain (Landwehrmeyer et al., 1995; Sapp et al., 1997). Enhanced susceptibility of MSNs to HD neuropathology must therefore involve other cell-intrinsic or non-cell autonomous factors. Intriguingly, network modeling of neuroanatomical changes in HD patient brains recapitulates the observed regional atrophy and supports a model of pathology spread along the structural connectome (Poudel et al., 2019; Raj and Powell, 2021). Prion-like transmission could therefore be an important contributor to HD progression by driving movement of neurotoxic mHTT seeds between anatomically-connected brain regions, possibly from cells that are less tolerant of mHTT-induced neurotoxicity to those that are more tolerant. Evidence obtained from post-mortem and in vivo human studies indicates that early disruption of cortical neuron structure and function precedes loss of MSNs, suggesting progression of toxic factors along vulnerable cortico-striatal connections (Rosas et al., 2005; Unschuld et al., 2012; Reiner and Deng, 2018). In BACHD mice, selective reduction of full-length mHTT protein expression in the cortex improves motor and psychiatric functions, whereas reducing mHTT expression in both the cortex and striatum rescues behavioral, tissue atrophy, and cortico-striatal synaptic defects (Wang et al., 2014). These findings raise the intriguing possibility that prion-like mHTT aggregate transmission along susceptible cortico-striatal synaptic paths in the brain may underlie HD neuropathogenesis.
Several lines of evidence support the idea that mHTT aggregates exhibit prion-like properties in HD patients. First, mHTT aggregates appeared in fetal striatal tissue grafts transplanted into three patients with manifest HD between 9 and 12 years prior to their death (Cicchetti et al., 2014). In post-mortem brain analyses, mHTT aggregates were found to associate with markers of the extracellular matrix in the grafted tissue as well as the host tissue, suggesting that mHTT secretion into the extracellular space could mediate its spread and/or toxicity. This finding was reproduced in wild-type mice that received xenografts derived from HD patient fibroblasts, where the transmitted mHTT aggregates led to HD-like neurodegenerative phenotypes in the mice (Jeon et al., 2016; Maxan et al., 2018). Second, pathological mHTT proteins are found in the cerebrospinal fluid (CSF) and blood of HD patients, at levels that correlate strongly with worsening motor and behavioral symptoms (Wild et al., 2015; Rieux et al., 2020). mHTT isolated from the CSF of HD subjects or BACHD rats is seeding-competent (Tan et al., 2015; Lee et al., 2020), suggesting that mHTT aggregates may propagate to peripheral tissues and mediate systemic HD pathologies (Chuang and Demontis, 2021). Heterogeneity in the structure and toxicity of mHTT aggregates (Shen et al., 2016; Ko et al., 2018; Mario Isas et al., 2021) implies the existence of mHTT strains that could be linked to variability in seeding ability or clinical HD phenotypes, as seen for other amyloid aggregates (Tarutani et al., 2022). Interestingly, lesions formed by other pathogenic proteins, such as tau and α-synuclein, are also present in HD patient brains (Jellinger, 1998; Charles et al., 2000) and appear in fetal graft tissue (Cisbani et al., 2017; Ornelas et al., 2020), suggesting common pathways driving protein aggregation in these diseases. Anti-tau antibody treatment improves motor and cognitive performance in HD mice expressing mHTT ex1 (Alpaugh et al., 2022), suggesting active involvement of tau pathology in HD neuropathogenesis. Evidence for co-aggregation of mHTT proteins with other amyloidogenic proteins such as Aβ (Hartlage-Rübsamen et al., 2019) and TDP-43 (Coudert et al., 2019) suggests that heterotypic cross-seeding might also contribute to disease pathogenesis. Together, these findings point to the existence of shared molecular mechanisms and potential cross-talk between pathogenic proteins underlying these complex pathologies, raising the possibility that therapeutic approaches targeting protein aggregation could be effective against multiple proteopathies.
Mechanisms underlying cell-to-cell transmission of mutant huntingtin
Experimental models for prion-like spreading of mutant huntingtin

Numerous approaches have been used to monitor prion-like spreading of mHTT proteins in in vitro and in vivo models of HD and are summarized in Figure 2 and Table 1. Methods that report pathological protein transfer from "donor" cells to "acceptor" cells commonly use fluorescent labeling or fluorescent protein (FP) fusions of HTT to track aggregate movement between cells in vitro or in vivo. The simplest of these approaches involves addition of fluorescently-labeled, exogenous aggregates formed from recombinant mHTT or polyQ peptides to unlabeled cells or tissues (Figure 2A) (Yang et al., 2002; Ren et al., 2009; Jeon et al., 2016; Ruiz-Arlandis et al., 2016; Masnata et al., 2019). Aggregate internalization is measured by detecting cell-associated mHTT signal using light microscopy or flow cytometry methods; however, these may lack sufficient resolution to distinguish intracellular vs. surface-bound HTT proteins. A modification of this approach improves on this by detecting mHTT co-occurrence with a soluble, cytoplasmic protein marker such as GFP or mCherry expressed in the cytoplasm of donor (Figure 2A) (Pecho-Vrieseling et al., 2014; Babcock and Ganetzky, 2015) or acceptor cells (Figure 2B) (Costanzo et al., 2013; Pecho-Vrieseling et al., 2014; Sharma and Subramaniam, 2019). These approaches have been used in many experimental models to demonstrate mHTT aggregate entry into acceptor cells from the extracellular space or a donor cell cytoplasm. However, they fail to consider another key feature of prion-like proteins: the ability to recruit soluble versions of the same protein into aggregates. Expression of soluble wtHTT-FPs can address this and serves two purposes: (1) wtHTT marks the nucleocytoplasmic boundaries of acceptor cells, and (2) because wtHTT remains soluble at physiological concentrations unless nucleated by mHTT (Preisinger et al., 1999; Chen et al., 2001), induced aggregation of wtHTT-FPs reports the seeding capacity of prion-like mHTT species (Figures 2C-F) (Ren et al., 2009; Trevino et al., 2012; Pearce et al., 2015; Tan et al., 2015; Donnelly et al., 2020). Use of FP fusions for both mHTT and wtHTT proteins enhances spatial information by detecting co-localization between converted wtHTT proteins and their mHTT seeds (Figure 2D). Differentially tagging mHTT and wtHTT proteins in donor and acceptor cells, respectively, has the added benefit of enabling bimolecular fluorescence complementation (BiFC; Figure 2E) (Lajoie and Snapp, 2010; Herrera et al., 2011; Kim et al., 2017) or fluorescence resonance energy transfer (FRET; Figure 2F) (Holmes et al., 2013; Pearce et al., 2015; Ast et al., 2018; Donnelly et al., 2020) approaches to measure direct molecular interaction of HTT proteins originating in different cells. Fusion of the FP fragments commonly used in BiFC/split-FP approaches (Figure 2E) to HTT may also reduce interference of the tag with aggregation kinetics. Labeling multiple cell populations with different HTT proteins can be readily achieved in cultured cells by separately transfecting cell populations, but is more difficult in in vivo models. However, independent in vivo labeling of two or more cell types in the same tissue can be easily achieved in genetically tractable invertebrate models such as Drosophila melanogaster or Caenorhabditis elegans.
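For the FRET-based readouts described above (Figure 2F), the simplest quantification is donor-quenching efficiency, E = 1 − F_DA/F_D. The sketch below uses hypothetical intensities; real analyses additionally require background subtraction and spectral bleed-through correction, which are omitted here.

```python
def fret_efficiency(donor_with_acceptor: float, donor_alone: float) -> float:
    """Donor-quenching estimate of FRET efficiency, E = 1 - F_DA / F_D;
    a standard first-pass readout for donor/acceptor FP pairs."""
    return 1.0 - donor_with_acceptor / donor_alone

# Hypothetical fluorescence intensities (arbitrary units)
print(fret_efficiency(donor_with_acceptor=620.0, donor_alone=1000.0))  # 0.38
```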
A combination of these strategies has been employed to investigate prion-like spreading of mHTT in multiple model systems, including cultured mammalian cells, ex vivo mouse brain slices, and in vivo approaches in Caenorhabditis elegans, Drosophila, mice, and non-human primates (Table 1). In vitro studies have revealed important information about the kinetics of mHTT aggregate transfer and subcellular localization of mHTT as it transits between individual cells. On the other hand, transgenic animal models of HD may accurately model mHTT transmission in tissues, where multiple cell types interact and communicate in complex ways that are not easily replicated in cultured cells. Together, these models have lent abundant insight into the molecular mechanisms that underlie prion-like mHTT spreading, including the molecular properties of seeding-competent mHTT aggregate species and the genes and pathways that mediate cell-to-cell aggregate transfer. The remainder of this review will summarize the mechanisms identified for inter-cellular mHTT aggregate transmission (summarized in Figure 3) and ways in which these findings can be utilized to improve therapeutic options for HD patients.

FIGURE 2 | Experimental approaches to monitor prion-like behavior of mHTT proteins. (A) Spreading of mHTT can be reported as a time-dependent loss of co-localization between mHTT-FP expressed in "donor" neurons and a co-expressed non-transmissible cytoplasmic protein marker such as synaptophysin- or synaptotagmin-GFP. This approach has been used to monitor the transmissibility of mHTT proteins in mouse and fly brains (see Table 1). A similar approach in vitro monitors internalization of exogenous fluorescently-labeled mHTT or polyQ aggregates by several unlabeled "acceptor" cell types, including neuronal cell lines (e.g., SH-SY5Y, Neuro2A, and PC12), COS-7 fibroblast-like cells, and THP1 macrophages. (B) Transfer of mHTT-FP proteins from donor cells can also be detected by monitoring acquisition of mHTT-GFP signal within the cytoplasm of acceptor cells labeled by a soluble FP such as GFP, BFP, or mCherry. This approach has been used to demonstrate cell-to-cell mHTT spreading between neurons in mouse brain slices and cultured neuronal or primary neuron cells. (C) Entry of extracellular polyQ or mHTT fibrils into the cytoplasm of numerous cell types (e.g., HEK, HeLa, and PC12) causes templated aggregation of wtHTT-FP proteins, detected by a phenotypic change in wtHTT-FP expression pattern (e.g., diffuse → punctate) or decreased solubility measured by biochemical methods. (D) The seeding capacity of mHTT-FP aggregates can be measured by examining the aggregation of cytoplasmic wtHTT-FP proteins in acceptor cells templated by mHTT-FP aggregate seeds from donor cells. This approach has been applied to in vitro (e.g., in co-cultured HEK cells) and in vivo (e.g., adult Drosophila brains) experimental systems. (E,F) Physical interaction between mHTT seeds and monomeric HTT proteins originating in donor and acceptor cells, respectively, has been reported using bimolecular fluorescence complementation (BiFC; E), where each HTT protein is fused to a non-fluorescent GFP fragment, or by fluorescence resonance energy transfer (FRET; F), where HTT is fused to FP FRET pairs (e.g., CFP/YFP or GFP/mCherry).

TABLE 1 | Experimental approaches and findings for prion-like spreading of mHTT (grouped by the approaches illustrated in Figure 2).

Co-expression of mHTT-FP with a non-transmissible cytoplasmic marker in donor neurons (Figure 2A)
Babcock and Ganetzky, 2015: mHTT ex1−12 Q138-RFP aggregates spread from Drosophila ORNs labeled by synaptotagmin-GFP to large posterior neurons; patterns of spread depend on the site of mHTT expression.

Addition of extracellular fluorescently-labeled mHTT aggregates to unlabeled acceptor cells (Figure 2A)
Yang et al., 2002: FITC-Q42 aggregates were internalized by PC-12 and COS7 cells by flow cytometry and confocal microscopy; addition of a nuclear localization sequence caused nuclear accumulation of the aggregates and increased cytotoxicity.
Ren et al., 2009: Extracellular fibrils formed by FITC-K2Q44K2 peptides internalized by COS7 cells co-localize with intracellular markers of protein quality control systems (e.g., ubiquitin, proteasome subunits, and Hsp70).
Pecho-Vrieseling et al., 2014: mHTT ex1 Q150 aggregates transferred from R6/2 mouse organotypic brain slices to synaptically-connected human neurons derived from embryonic or induced pluripotent stem cells; mHTT ex1 Q150 aggregates transferred from R6/2 cortex to wild-type striatum in mixed-genotype brain slice co-cultures.
Sharma and Subramaniam, 2019: Full-length mHTTQ111 and mCherry-mHTT N171 Q89 aggregates transferred from GFP-Rhes-expressing striatal cells to co-cultured striatal neuron acceptor cells labeled with BFP through TNTs positive for Rhes-GFP; HTT transfer depended on its SUMOylation state.

Addition of unlabeled polyQ or mHTT fibrils or seeds to nucleate the aggregation of wtHTT-FP expressed in cells (Figure 2C)
Ren et al., 2009: Fibrils formed by K2Q44K2 peptides or recombinant mHTT ex1 Q51 proteins added to the extracellular media induced aggregation of cytoplasmic wtHTT ex1 Q25-mCherry in HEK cells.
Lee et al., 2020: mHTT ex1 Q46-GFP aggregation in HEK cells was accelerated by addition of mHTT ex1 Q51 fibrils and sonicated seeds; mouse and human HD brain lysates and CSF samples induced aggregation of mHTT ex1 Q46-GFP with the first 17 residues deleted (ΔN17).

Expression of split-GFP/Venus or FRET-pair mHTT constructs in BiFC- and FRET-based approaches (Figures 2E,F)
Lajoie and Snapp, 2010: BiFC occurred between mHTT ex1 Q145-GFP N (or C) and wtHTT ex1 Q23-GFP C (or N) proteins after co-transfection of Neuro2A cells.
Herrera et al., 2011: BiFC occurred between mHTT ex1 Q103-Venus N and mHTT ex1 Q103-Venus C proteins in co-cultures of H4 or HEK cells separately transfected with each construct.
Holmes et al., 2013: Aggregation of co-expressed wtHTT ex1 Q25 proteins fused to CFP or YFP induces FRET in C17.2 neural precursor cells treated with FITC-mHTT ex1 Q50 fibrils; unlike tau and α-synuclein, mHTT fibril internalization was independent of heparan sulfate proteoglycans.
Kim et al., 2017: BiFC detected between wtHTT ex1 Q25 and mHTT ex1 Q97 constructs fused to Venus FP fragments and expressed in pharyngeal muscle cells and connected neurons.
Pearce et al., 2015; Donnelly et al., 2020: FRET detected between FP tags on mHTT ex1 Q91-mCherry and wtHTT ex1 Q25-GFP expressed in donor and acceptor cells in a Drosophila HD model.
Ast et al., 2018: Measurement of seeding-competent HTT species in biological samples using a FRET-based mHTT aggregate seeding (FRASE) assay; seeding activity was detected for small mHTT structures in presymptomatic HD mice; seeds are toxic in a Drosophila HD model.

Host-to-graft spread of mHTT in HD patient brains
Cicchetti et al., 2014; Maxan et al., 2018: mHTT aggregates co-localized with the extracellular matrix marker phosphocan in grafted solid or suspension fetal tissues that had survived ∼10-15 years following transplantation in HD patient brains.

Mammalian focal injection models
Jeon et al., 2016: Appearance of mHTT aggregates in mouse striatum and cortex after injection of HD patient-derived fibroblasts (Q72, 143, or 180) or iPSCs (Q143) into the lateral ventricles; injection of exosomes derived from HD fibroblasts led to neurological deficits and appearance of mHTT pathology in DARPP-32+ MSNs.
Gosset et al., 2020: mHTT from HD-derived brain homogenates injected into wild-type or BACHD mouse cortex spread to sites distant from the injection site and worsened behavioral phenotypes only in BACHD mice; injection of brain homogenates from a juvenile HD patient into the striatum of non-human primates caused persistence of mHTT near the injection site, but was not associated with neurological impairments.
Trans-synaptic transmission of mutant huntingtin
Synaptic dysfunction and loss are early features of all neurodegenerative diseases and typically precede detectable tissue atrophy or clinical manifestations (Schirinzi et al., 2016; Jackson et al., 2019; Smith-Dijak et al., 2019). Pathogenic protein aggregates directly interfere with many key synaptic functions, and prion-like mechanisms could be a primary driver of synaptotoxicity, particularly for vulnerable neurons and their pre- and postsynaptic partners. There is accumulating evidence to support trans-synaptic spread of many protein aggregates, including mHTT, Aβ, tau, and α-synuclein (Freundt et al., 2012; Dujardin et al., 2014; Brahic et al., 2016; Donnelly et al., 2020), and a role for neural activity in aggregate spreading (Pecho-Vrieseling et al., 2014; Babcock and Ganetzky, 2015; Wu et al., 2016, 2020; Donnelly et al., 2020). These findings suggest that synaptopathology, regional vulnerability, and aggregate transmissibility are intimately linked. Several research groups have explored the hypothesis that mHTT propagation is linked to neural connectivity in experimental HD models, with varying results that could reflect time-dependent, cell type-, or region-specific effects. mHTT ex1 proteins transfer from R6/2 HD cortico-striatal slices to ectopically-connected human neurons, and this was inhibited by botulinum toxin treatment (Pecho-Vrieseling et al., 2014), suggesting a role for SNARE-dependent synaptic functions in mHTT spreading. In the same study, spreading of mHTT was also observed from R6/2 cortical to wild-type striatal neurons in mixed-genotype cultures and following injection of wild-type mouse brain cortex with viruses encoding FP-tagged mHTT ex1, with increased appearance of mHTT ex1 protein in striatal neurons over time (Pecho-Vrieseling et al., 2014). More distant mHTT spreading was seen after focal injection of mouse striatum with purified polyQ fibrils or HD brain homogenates and was associated with worsening symptoms in both wild-type and BACHD mice (Masnata et al., 2019; Gosset et al., 2020). However, inoculation of the striatum of non-human primates with brain homogenates from a juvenile HD patient led to persistence of mHTT pathology near the injection site, but there was no evidence for mHTT spreading or behavioral abnormalities associated with the injected material (Gosset et al., 2020). Synaptosomal mHTT assemblies isolated from HD mouse brains possess higher seeding capacity than mHTT in ER/Golgi fractions (Chongtham et al., 2021), suggesting that the synaptic environment is favorable for mHTT propagation.
Taken together, these studies indicate that genetic background, molecular features of mHTT aggregates, and/or context-specific sensitivities to mHTT toxicity influence progression of HD pathology in the brain.
Invertebrate HD models also support a role for synaptic activity in the spread of mHTT aggregates (Figure 3A). Two Drosophila models of HD support spreading of FP-tagged mHTTex1 and mHTTex1-12 proteins from olfactory receptor neurons (ORNs) to their postsynaptic partners, projection neurons (PNs) (Donnelly et al., 2020), or to neurons in more distant regions of the adult fly brain (Babcock and Ganetzky, 2015). Interestingly, inhibition of synaptic activity in mHTT-expressing ORNs had different effects on mHTT protein spreading in each model. In one study, stalling synaptic vesicle recycling in ORNs expressing RFP-mHTTex1-12 proteins by expressing temperature-sensitive dynamin mutants decreased the appearance of transgenic mHTT proteins in neurons in other regions of the fly brain (Babcock and Ganetzky, 2015). In another study, inhibiting synaptic activity of mHTTex1-expressing ORNs using either temperature-sensitive dynamin mutants or tetanus toxin subunits to block SNARE-mediated vesicle fusion enhanced the spread of mHTTex1 aggregates to postsynaptic PNs (Donnelly et al., 2020). The molecular basis for these differing results remains to be determined, but could be due to unique molecular properties of pathogenic HTTex1 vs. HTTex1-12 protein fragments or roles for neural activity in tissue environments that support trans-synaptic vs. non-synaptic aggregate transmission. Together, these studies indicate that relationships between neural activity and mHTT aggregate spreading are complex and likely to be protein-, cell-, and context-specific.
Cellular uptake and release of mutant huntingtin
Cells interact in dynamic ways with their extracellular environments, including actively internalizing and/or releasing material via diverse endocytic and exocytic processes. During endocytosis and the analogous process of phagocytosis, extracellular material is internalized via non-specific, "bulk-phase" processes or by directly activating cell surface receptors that initiate engulfment (Hu et al., 2015; Levin et al., 2016; Borchers et al., 2021). Exocytic processes mediate the release of cellular material to initiate cell-cell communication or eliminate toxic material from damaged cells (Quek and Hill, 2017). Proper functioning of endocytosis, phagocytosis, and exocytosis pathways is critical for brain homeostasis during development and adulthood, and thus it is not surprising that defects in many endocytic and exocytic pathways are implicated in the pathogenesis of many neurological disorders, including neurodegenerative diseases (Giovedì et al., 2020).
FIGURE 3 | Mechanisms for cell-to-cell transmission of mHTT. Pathways reported to mediate inter-cellular transmission of mHTT aggregates are illustrated here and described in more detail in the text. mHTT aggregate release from donor cells (red cell; left) may be coupled to synaptic activity in presynaptic neurons (A), could occur within exosomes released from a multivesicular body (MVB) (B), or could be passively released from dying cells (C). Entry of prion-like mHTT aggregates into acceptor cells has been reported to occur via bulk-phase or receptor-mediated endocytosis (D), direct penetration of the plasma membrane (E), or, alternatively, aggregates may transfer directly from one cell cytoplasm to another via membrane-enclosed tunneling nanotubes (F). Phagocytic glia may play double-edged roles in HD through receptor-mediated engulfment of aggregates from neurons (G), which can lead to either clearance in the lysosome or aggregate "escape" from the glial phagolysosomal system prior to degradation (H). Disruption of normal endosomal or autophagosomal pathways may also underlie mHTT aggregate transmission to the cytoplasm of non-phagocytic cells (I). mHTT aggregates that evade lysosomal degradation as a result of endo/phagolysosomal defects could generate cytoplasmic reservoirs of prion-like mHTT species in "intermediate acceptor cells" (e.g., glia) and enhance aggregate seed transmission to other cells, such as post-synaptic neurons (J).
Entry of oligomeric and amyloid assemblies of mHTT into a cell's cytoplasmic compartment from the extracellular space or from a different cell cytoplasm has been reported to occur via multiple endocytic pathways (Figure 3D). Expansion of the polyQ tract and protein aggregation can alter the interaction of HTT with endocytic machinery components and lead to endocytic dysfunction (Parker et al., 2007; Davranche et al., 2011; Borgonovo et al., 2013). Genetic or pharmacological inhibition of receptor-mediated endocytosis, macropinocytosis, or the GTPase activity of dynamin reduces the ability of purified polyQ or mHTTex1 fibrils to be internalized by multiple cultured mammalian cell types (Holmes et al., 2013; Ruiz-Arlandis et al., 2016), suggesting that mHTT can enter cells from the extracellular space via multiple endocytic routes. Treatment of cells with exogenous fibrils formed by mHTTex1, SOD1, α-synuclein, or TDP-43 proteins triggers membrane ruffling, a phenomenon associated with Rac1-dependent macropinocytosis (Zeineddine et al., 2015), suggesting that extracellular aggregates activate endocytic pathways to stimulate internalization. Interestingly, aggregates formed by pathogenic polyQ44 peptides were observed by deep-etch transmission electron microscopy to be directly associated with the actin cytoskeleton in HEK cells, suggesting that fibrils have the ability to directly penetrate the plasma membrane bilayer (Ren et al., 2009) (Figure 3E). This property may also extend to intracellular membranes, as extracellular fibrils formed by mHTTex1, tau, and α-synuclein damage intracellular vesicle membranes, indicated by colocalization with Galectin-3, in cultured neuroblastoma cells (Flavin et al., 2017). These findings suggest that a combination of active and passive mechanisms permits mHTT protein assemblies to gain entry to a cell's cytoplasm, where they can effect prion-like conversion of soluble HTT monomers.
Aggregation of mHTT and other pathological, disease-associated proteins is associated with proteostasis impairment in cells. The ability of cells to clear and post-translationally regulate mHTT proteins declines with age, accelerating aggregate formation and associated toxicity (Tsvetkov et al., 2013). HTT's interactions with components of the autophagy pathway (Petersén et al., 2001; Atwal and Truant, 2008; Greco et al., 2022) and its role in selective autophagy (Ochaba et al., 2014; Rui et al., 2015) suggest that HTT loss-of-function could decrease clearance of aggregates and other toxic debris. Disruption of endolysosomal and autophagolysosomal functioning by mHTT could generate partially-degraded mHTT aggregates (Trajkovic et al., 2017), possibly directly contributing to dissemination of prion-like mHTT aggregates in the brain (Figures 3H,I). Interestingly, several studies have reported an inverse relationship between the size of mHTT seeds and their seeding capacity (Ast et al., 2018; Donnelly et al., 2020; Lee et al., 2020; Schindler et al., 2021). It is thus possible that fragmentation or incomplete degradation of larger mHTT aggregates could lead to the formation of small, highly transmissible seeds.
Release of mHTT from "donor" cells could occur via active mechanisms such as secretion or packaging in extracellular vesicles (EVs; Figure 3B), or passively during cell death (Figure 3C) (Brahic et al., 2016; Trajkovic et al., 2017; Wang et al., 2017; Caron et al., 2021). mHTT secretion is postulated to be neuroprotective, by reducing the load of mHTT aggregates in cells (Deng et al., 2017; Hong et al., 2017), or neurotoxic, as a mode of aggregate spreading (Jeon et al., 2016; Zhang et al., 2016). Exosomes have recently emerged as a focus of attention in many fields of research as "shuttles" for transferring proteins, lipids, and RNA between cells. Exosomes are small EVs (∼50-100 nm in diameter) that form following invagination of a larger endosomal structure known as the multi-vesicular body, which fuses with the plasma membrane to shed exosomes (Kalani and Tyagi, 2015; van Niel et al., 2018). Exosomes package and transport signaling molecules such as proteins, mRNAs, and miRNAs between cells and across the blood-brain barrier as a key mode of cell-cell communication between diverse cells and tissues. Pathogenic proteins associated with most neurodegenerative diseases have been detected in exosomes secreted into the extracellular space by cultured cells or in vivo, including mHTT (Jeon et al., 2016; Zhang et al., 2016; Deng et al., 2017), Aβ (Rajendran et al., 2006), tau (Simón et al., 2012; Asai et al., 2015), α-synuclein (Emmanouilidou et al., 2010), SOD1 (Grad et al., 2014), and TDP-43 (Iguchi et al., 2016). EV fractions isolated from HD brain tissue and injected into wild-type mouse brains induce mHTT pathology and HD-like behavioral phenotypes (Jeon et al., 2016), suggesting that exosome-mediated transmission could be an important pathogenic mechanism that supports long-range propagation.
Tunneling nanotubes
Tunneling nanotubes (TNTs) are thin, membrane-enclosed extensions that form cytoplasmic bridges between cells across short or long (up to 200 µm) distances (Onfelt et al., 2005). TNTs can be most readily observed in cell monolayers and are defined based on their transient nature, presence of F-actin, and formation above the substratum. TNTs transport diverse intracellular signaling molecules and cargos between cells, including Ca2+, RNAs, lipids, proteins, viruses, endosomes and lysosomes, and even large organelles such as endoplasmic reticulum, Golgi, and mitochondria. TNTs generate a direct line of communication between cells under physiological conditions and can be induced by a variety of cell stressors or pathological states (Victoria and Zurzolo, 2017). TNTs allow cells to exchange material that can promote survival, especially in suboptimal or stressful environments. For example, TNTs formed by cells under stressful conditions permit healthier cells to donate functional organelles such as mitochondria, thereby increasing survival in a population of cells (Cho et al., 2012; Islam et al., 2012; Ahmad et al., 2014; Rostami et al., 2017). Similarly, TNTs can transfer damaged proteins and organelles from unhealthy to healthy cells, or from more vulnerable to more resistant cells, in an effort to expedite debris clearance (Rostami et al., 2017; Scheiblich et al., 2021).
The first hint that TNTs could play a role in propagating neurodegenerative disease pathology came from the Zurzolo lab in 2009, in a study describing TNT-mediated transfer of infectious prions between catecholaminergic neuronal cells in cultured monolayers (Gousset et al., 2009). PrPSc aggregates were detected within TNTs connecting infected and uninfected neuronal cells, bone marrow-derived dendritic cells, and primary neurons and astrocytes. Interestingly, TNT-mediated transfer of PrPSc between primary astrocytes occurred more efficiently than via secretion, and PrPSc aggregates detected within TNTs were associated with markers of endocytic vesicles (Zhu et al., 2015). Treatment of neuronal cells with fibrils formed by the N-terminal 480 amino acid residues of mHTT (mHTTN480), Aβ, tau, or α-synuclein induced TNT formation, and FP-tagged mHTTN480 proteins were detected inside some of the TNTs (Costanzo et al., 2013; Tardivel et al., 2016; Rostami et al., 2017; Chastagner et al., 2020). Interestingly, transfer of mHTT proteins between cultured neuronal cells was found to occur simultaneously with the MSN-enriched, SUMO E3-like protein Rhes in a SUMO-dependent manner (Sharma and Subramaniam, 2019), suggesting that TNT-like structures facilitate mHTT transfer between the most vulnerable neurons in HD (Figure 3F). Though the hypothesis that pathogenic mHTT aggregates could be disseminated between cells or brain regions via TNTs is an attractive one, identification and characterization of TNT-like structures in the brain and of the molecular events that lead to TNT formation and cargo transport will help to determine the clinical relevance of these structures in HD and other neurodegenerative diseases.
Roles for non-neuronal cells
While much HD research to date has focused on the impacts of mHTT aggregation on neuron structure and function, emerging evidence suggests that non-neuronal cells, particularly astrocytes and microglia, are key players in HD neuropathogenesis (Wilton and Stevens, 2020). Reactive gliosis is one of the earliest and most prominent findings in HD patient brains and is recapitulated in many mouse models of neurological disease (Sapp et al., 2001;Faideau et al., 2010;Franciosi et al., 2012;Kraft et al., 2012), suggesting that glial cells play roles at every stage of disease. Reactive astrocytes and microglia alter their morphology, proliferate, and activate gene expression programs that promote inter-cellular communication, removal of toxic debris, and neuronal survival (Burda and Sofroniew, 2014;Hammond et al., 2019). Failure to extinguish these responses once homeostasis is achieved and dysregulation of immune signaling pathways leads to chronic neuroinflammation and development of neurotoxic phenotypes in astrocytes and microglia (Chung et al., 2015;Liddelow et al., 2020).
Glial dysfunction in HD arises directly from mHTT expression or indirectly due to interactions with damaged neurons and other extracellular debris, possibly including mHTT aggregates themselves. Though mHTT inclusions are much more prominent in neurons, mHTT expression has been detected in glia in HD brains (Myers et al., 1991; Sapp et al., 2001; Shin et al., 2005; Benraiss et al., 2021). Reduced appearance of aggregated mHTT in glia may result from increased proteostatic capacity and/or enhanced clearance of intracellular mHTT aggregates via ubiquitin-dependent or autophagic pathways in these cell types (Zhao et al., 2016; Jansen et al., 2017). Targeted expression of mHTT in astrocytes and microglia is sufficient to decrease lifespan and cause HD-like behavioral phenotypes in mice (Bradford et al., 2009, 2010), whereas silencing mHTT expression in astrocytes slows disease onset and rescues neurological phenotypes, striatal atrophy, and synaptic dysfunction in vivo (Stanek et al., 2019; Wood et al., 2019). Strategies that lower mHTT expression at the genetic level and improve cellular and behavioral phenotypes are a primary focus of disease-targeted therapeutic development, and have the potential to rescue both neuronal and glial cell defects (Bradford et al., 2009). The ability of astrocytes and microglia to clear mHTT aggregates and dead or dying neurons suggests that these cells may also be viable drug targets (Cho et al., 2019).
A growing body of evidence supports the idea that glia play a central role in spreading mHTT and other pathogenic aggregates in the brain. In Drosophila, aggregates formed by mHTTex1 proteins transfer from presynaptic ORNs to postsynaptic PNs in the fly olfactory system only after passage through the cytoplasm of phagocytic glial cell intermediates (Figure 3J) (Donnelly et al., 2020). This circuitous route for mHTT spreading requires expression of Draper (Pearce et al., 2015; Donnelly et al., 2020), a scavenger receptor that regulates key phagocytic pathways in fly glia and other cell types (Figure 3G) (MacDonald et al., 2006; Etchegaray et al., 2016; Ray et al., 2017). The mammalian homolog of Draper, MEGF10, is highly expressed in astrocytes and mediates phagocytic clearance of synapses in healthy and diseased adult mouse brains (Chung et al., 2013; Iram et al., 2016; Shi et al., 2021). Interestingly, MEGF10 binds to complement cascade factor C1q (Iram et al., 2016) and mediates endocytic uptake of Aβ (Singh et al., 2010; Fujita et al., 2020), suggesting a conserved role for this phagocytic receptor in engulfing aggregates. Roles for glia in aggregate transmission have also been described for mutant tau (Asai et al., 2015; Chastagner et al., 2020) and α-synuclein proteins (Lee et al., 2010; George et al., 2019; Dutta et al., 2021; Scheiblich et al., 2021) and could involve dissemination of prion-like aggregates due to inefficient clearance of engulfed aggregates by the phagolysosomal system (Brelstaff et al., 2021).
Accumulation of mHTT in the CSF correlates with mHTT load in the brain and clinical symptoms in HD patients and is currently used in clinical trials to measure effects of HTT-lowering strategies in the brain (Tabrizi et al., 2019). mHTT release into the CSF via active or passive secretion leads to aggregate clearance by the glymphatic system (Caron et al., 2021), but has the potential to lead to mHTT spreading into the periphery. While relatively understudied compared to mHTT spread in the CNS, there is some evidence to support cell-to-cell transmission of mHTT and other aggregates in non-neuronal tissues. mHTTex1 aggregates are detectable in plasma and circulating blood cells, liver, kidney, muscle, and brain tissues of wild-type mice following parabiosis with HD mice expressing mHTTex1 (Rieux et al., 2020). Furthermore, in a C. elegans model of HD, HTTex1 proteins spread bidirectionally between pharyngeal muscle cells and neurons in a process accelerated by increased polyQ length and age (Kim et al., 2017). These findings raise the intriguing possibility that spread of mHTT aggregates outside of neuronal tissues may drive the systemic HD pathologies often experienced in later stages of HD (Chuang and Demontis, 2021).
Prion-like disease mechanisms as a therapeutic target
Currently available treatments for HD patients can temporarily improve quality of life by managing motor, cognitive, and psychiatric symptoms, but to date, no therapy can stop or slow HD progression. Therapies currently approved by the FDA for the treatment of HD include vesicular monoamine transporter 2 inhibitors [tetrabenazine (Xenazine®) and deutetrabenazine (Austedo®)] that reduce chorea, a symptom that affects ∼90% of HD patients, and antipsychotics and other pharmacological agents that help manage cognitive and behavioral manifestations of HD. Unfortunately, these treatments do not modify the course of HD, often produce adverse side effects, and in some cases even exacerbate HD symptoms. The HD therapeutic pipeline continues to expand as a result of basic and pre-clinical research from the last ∼20 years, with numerous potentially disease-modifying approaches under evaluation in clinical trials (Tabrizi et al., 2020).
Therapeutic avenues with the potential to target prion-like mechanisms of HD include HTT-lowering strategies that prevent mHTT aggregate formation and toxicity, approaches that increase clearance of mHTT aggregates, and interventions that target glial inflammatory and/or phagocytic responses in the degenerating brain. Additional HD therapeutic strategies involve cell reprogramming or replacement therapies to rescue the effects of neuronal loss (Connor, 2018). A limited number of studies suggest some improvement in motor and cognitive functions following intrastriatal grafts of fetal neural stem cells in HD patients (Bachoud-Lévi et al., 2000); however, evidence for host-to-graft spreading of pathological proteins in HD and PD patient brains (Kordower et al., 2008; Li et al., 2008; Cicchetti et al., 2014) raises significant concerns about the utility of cell replacement therapies in prion-like diseases.
HTT-lowering therapeutic strategies are built upon the idea that reducing mHTT expression can prevent all downstream pathogenic events in HD (Leavitt et al., 2020). Approaches that selectively silence mutant HTT alleles without affecting wtHTT expression are especially attractive in avoiding potential adverse effects associated with loss of normal HTT functions (Murthy et al., 2019). There are currently three major HTT-lowering approaches under clinical development: viral delivery of short-interfering RNA (siRNA) molecules, infusion of allele-specific antisense oligonucleotides (ASOs), and gene editing strategies such as DNA-targeting zinc finger nucleases or CRISPR/Cas9 (Tabrizi et al., 2019; Leavitt et al., 2020). Alternative strategies that could selectively target mHTT and not wtHTT include amplification of proteostatic systems to enhance clearance of toxic mHTT proteins from cells, including the UPS, autophagy, and phagocytosis (Harding and Tong, 2018). The potential for TNTs to deliver protective materials (e.g., functional mitochondria) or eliminate damaged or toxic materials (e.g., aggregates) suggests that these structures could be targeted to promote survival of dysfunctional neurons and/or block aggregate spread (Han and Wang, 2021). Despite recent setbacks in clinical trials, perhaps due to limited knowledge about the optimal timing of intervention (Kingwell, 2021), there remains much hope for HTT-lowering therapies as a viable disease-modifying approach for HD.
The early responses of microglia and astrocytes to neuronal cell damage in all neurodegenerative diseases and traumatic brain injuries suggest that glial cells may be promising therapeutic targets to treat many neurological disorders. Remarkably, many genetic factors associated with increased risk of non-familial forms of AD, PD, and/or ALS are predominantly expressed in astrocytes (e.g., the APOE4 allele) and microglia (e.g., rare variants of TREM2), underscoring the idea that glial cell dysfunction plays a critical role in neurodegeneration. Neuroinflammation is a key driver of pathogenesis, and thus targeting pathways that mediate pro-inflammatory signaling is a major focus of drug development. Immunotherapies that inhibit toxic effects and/or stimulate microglial clearance of aggregates are under evaluation for the treatment of AD, PD, and ALS (Kwon et al., 2020), and could directly interfere with prion-like pathogenic mechanisms. Interest in passive immunization as a therapeutic strategy for neurodegeneration has grown following accelerated FDA approval of the anti-Aβ antibody aducanumab for treatment of AD in 2021, although the effectiveness of this treatment in improving disease outcomes remains controversial (Karran and De Strooper, 2022). Preclinical studies suggest that immunotherapies have the potential to block protein aggregation, facilitate clearance by phagocytic microglia and astrocytes, and prevent cell-to-cell spreading by sequestering pathological proteins associated with nearly every neurodegenerative disease, including mHTT (Snyder-Keller et al., 2010;Butler and Messer, 2011). Interestingly, glial cells could also be employed in cell replacement therapies; indeed, transplantation of healthy astrocytes in the brain is currently undergoing clinical testing as a treatment for ALS (Glass et al., 2016). Intrastriatal transplants of glial progenitor cells improve motor coordination and lifespan in HD mice (Benraiss et al., 2016), suggesting that a similar approach may be useful for HD patients. Though many questions remain to be answered, interventions that can rebalance the beneficial vs. neurotoxic effects of glial cells in the degenerating brain have immense potential in the treatment of HD and other neurodegenerative diseases.
Concluding remarks
Here, we have provided a comprehensive summary of substantial progress that has been made in the last 10-15 years in identifying prion-like characteristics of mHTT proteins.
Though not initially thought of as an important component in the development of monogenic disorders such as HD, many studies now provide compelling evidence to support prion-like behavior of mHTT aggregates and potential links to pathological changes observed in HD patients. Key questions that remain include identifying roles for selectively-vulnerable neuronal and non-neuronal cell populations in aggregate spreading, subcellular structures or organelles that could accommodate aggregate entry or escape, active and passive modes of aggregate transmission, and roles for mHTT aggregate polymorphism and genetic risk factors in the transmissibility and toxicity of pathological mHTT proteins. Further elucidating the molecular mechanisms that mediate inter-cellular and inter-regional spreading of mHTT is critical to expanding our understanding of HD neuropathogenesis and identifying novel targets for treatments that can directly modify the course of HD.
"year": 2022,
"sha1": "e1ce3ae8efc1e21129b67aba8b48daf4b9d1b6cc",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "e1ce3ae8efc1e21129b67aba8b48daf4b9d1b6cc",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Automatic Estimation of Coronary Blood Flow Velocity: Step 1 for Developing a Tool to Diagnose Patients With Micro-Vascular Angina Pectoris
Aim: Our aim was to automatically estimate the blood velocity in coronary arteries using cine X-ray angiographic sequences. Estimating the coronary blood velocity is a key approach in investigating patients with angina pectoris and no significant coronary artery disease. Blood velocity estimation is central in assessing coronary flow reserve. Methods and Results: A multi-step automatic method for blood flow velocity estimation based on the information extracted solely from the cine X-ray coronary angiography sequence obtained by invasive selective coronary catheterization was developed. The method includes (1) an iterative process of segmenting the coronary arteries, modeling and removing the heart motion using non-rigid registration, (2) measuring the area of the segmented arteries in each frame, (3) fitting the measured sequence of areas with a 7th-degree polynomial to find the start and stop times of dye propagation, and (4) estimating the blood flow velocity based on the time of dye propagation and the length of the artery-tree. To evaluate the method, coronary angiography recordings from 21 patients with no obstructive coronary artery disease were used. In addition, coronary flow velocity was measured in the same patients using a modified transthoracic Doppler assessment of the left anterior descending artery. We found a moderate but statistically significant correlation between flow velocity assessed by transthoracic Doppler and the proposed method, applying both Spearman and Pearson tests. Conclusion: Measures of coronary flow velocity using a novel fully automatic method that utilizes the information from the X-ray coronary angiographic sequence were statistically significantly correlated with measurements obtained with transthoracic Doppler recordings.
INTRODUCTION
Myocardial ischemia is due to an imbalance between myocardial metabolic demand and coronary blood supply. This is mainly related to epicardial atherosclerotic coronary artery disease (CAD) (1). However, the angiographic evidence of a "normal" or mildly diseased epicardial coronary tree, usually defined as the absence of a luminal diameter reduction of ≥50% (or >70% of the luminal area reduction) (2), is a common finding, as it is documented in ∼25% of patients undergoing coronary angiography (3). This condition is usually defined as angina with "normal" coronary arteries or, more correctly, angina in the absence of obstructive CAD.
Psychological morbidity with great impact on daily living is well known in both patients with cardiovascular disease and in patients with angina in the absence of obstructive CAD. These patients constitute a therapeutic problem with considerable residual morbidity associated with functional limitation and reduced quality of life (4). In addition, a relatively large proportion of these patients are taken care of by the health authority system indicating that this issue has economic consequences for the society that is not negligible.
Our aim was to estimate the blood velocity in coronary arteries using a novel automatic algorithm employing X-ray angiographic sequences obtained by selective invasive coronary catheterization. Blood velocity can later be used to assess coronary flow reserve (CFR) by estimating the ratio between blood flow during full hyperemia using adenosine infusion and blood flow velocity at rest. Impaired CFR is associated with increased morbidity and mortality in this population (3,5).
Several methods have been utilized to indirectly assess micro vascular function including intracoronary Doppler measurements. CFR can now be calculated using transthoracic Doppler registration that makes it independent of an invasive procedure (6). Non-invasive measurement of coronary flow velocity (CFV) and coronary flow velocity reserve (CFVR) in the distal left anterior descending artery (LAD) using transthoracic Doppler echocardiography (TTDE) accurately reflects invasive measurement of CFV and CFVR by Doppler guide wire (DGW) method (7,8).
An automatic method of estimating CFR using non-invasive techniques has been of great interest for the cardiology community for years, and recently new techniques have been developed to assess the coronary flow reserve by means of positron emission tomography (PET) (9-11), contrast stress echocardiography, and cardiac magnetic resonance (CMR) imaging (2,12). Moreover, blood velocity has been calculated from volumetric dynamic computed tomography angiography (13). In addition, arterial flow has been quantified by using 3-dimensional (3D) rotational X-ray angiography (14). However, the use of these imaging modalities is limited due to excessive costs and inaccessibility in small hospitals. Moreover, some of the described techniques are computationally heavy and time consuming. On the contrary, cine X-ray coronary angiography sequences obtained by invasive coronary catheterization are a routine imaging procedure normally used to assess suspected coronary artery disease in patients with documented ischemia or classical symptoms of angina pectoris. Despite the availability of numerous non-invasive tests for the detection of coronary-artery stenoses, coronary angiography remains the common diagnostic procedure for stenosis evaluation, with the immediate possibility to perform percutaneous coronary intervention if necessary (15).
Thus, the ability to estimate the blood velocity in coronary arteries using only the coronary angiography sequence can form the basis for developing an alternative method for assessing CFR without using intracoronary Doppler wires during the first standard invasive angiography.
The goal for the current study was to develop a mathematical model to automatically estimate how fast blood propagates in coronary arteries using X-ray coronary angiographic sequences and to compare these estimates with transthoracic Doppler measurements of coronary flow velocity in patients with chest pain and normal coronary arteries (CPNCA).
Patient Enrollment
Patients with a history of repeated episodes of exercise-induced chest pain and normal or near-normal coronary angiography were screened for inclusion in "The Syndrome X-ercise study" (SYNDEX; clinicaltrials.gov identifier: NCT02905630) at the department of cardiology, Stavanger University Hospital. The patients had to be 18 years or older and able to participate in training groups 3 times a week. Patients were excluded if they had other serious cardiac illness, cancer, or contrast agent allergy. Twenty-one patients were included in the study. The initial aim of the study was to identify possible effects of high-intensity exercise training on coronary flow reserve and its relationship to experienced angina. In addition, peak oxygen consumption (peak VO2), measured with breath-to-breath ergospirometry during a graded treadmill exercise test, and endothelial function were assessed. This study was carried out in accordance with the recommendations of the Helsinki declaration (2013/98-8), Norwegian Regional Committee for Medical and Health Research Ethics. The protocol was approved by the Norwegian Regional Committee for Medical and Health Research Ethics, and all subjects gave written informed consent in accordance with the Declaration of Helsinki.
Image Acquisition
For all patients, cine X-ray coronary angiography sequences were obtained by invasive coronary catheterization. Later in the manuscript this is simplified to coronary angiography, and the time-sequence of images is referred to as the angiographic sequence. Standard selective coronary artery angiography with 6 Fr catheters using a GE coronary angio-laboratory and X-ray contrast medium (Iomeron 350) was performed. Contrast agent was injected manually by a well-trained interventional cardiologist using a 10 cc syringe, at an approximate flow rate of 1 to 2 cc/s and not exceeding 10 mL for each standard view. All patients had normal coronary arteries with no proximal stenosis that would make selective catheterization difficult. All angles used for angiography and the height of the table above the radiation source were recorded. The sequences were acquired at 15 frames per second, with a pixel resolution of 0.2 mm per pixel and a bit-depth of 8 bits per pixel.
Coronary flow velocity was measured using a modified transthoracic Doppler at the mid part of the left anterior descending artery (LAD) in accordance with current standards (6). Patients were examined using GE ultrasound systems (Vivid 5, Horten, Norway) with a coronary flow probe, without using a contrast agent. The velocity was mainly measured in the distal-to-mid LAD. Alternatively, flow velocities were measured in marginal branches from the left circumflex coronary artery (CMB) or the posterior descending coronary artery (PDA) if flow velocities in the LAD could not be satisfactorily measured.
Blood flow velocities were measured using pulsed-wave Doppler with 1.75 to 3.5 MHz frequencies.
Proposed Technical Method
In this work, we propose a method for estimating the blood velocity utilizing the movement of the contrast fluid as it fills the coronary arteries during invasively obtained angiographic sequences. A robust and accurate automatic segmentation technique for the coronary arteries during dye propagation is required, and we use an upgraded version of our recently developed method (16), presented in the following. First, the artifacts of other chest cavity organs present in the images are suppressed and the edges of the arteries are sharpened by preprocessing. Thereafter, the first segmentation is done using Frangi vesselness Hessian filters (17-19), followed by morphological operations. Furthermore, the heart motion is modeled and removed by employing a non-rigid spline registration technique on the segmented vessel images. The aligned images are evaluated to determine whether the vessel segmentation is satisfactory. We propose an iterative method that finds the misaligned segmented images, then enhances the segmentations of both images, using anatomical and geometrical information of the coronary arteries, and updates the input of the registration step. The changes in the length of the segmented artery-tree over time give an estimation of the velocity of the propagation of the contrast fluid through the coronary arteries. Based on this we estimate the velocity of the blood in angiograms captured from coronary arteries. A flow chart illustrating the overview of the proposed method is shown in Figure 1. In the following, more details are provided on the different parts of the proposed method.
Preprocessing
The imaging method introduces noise and artifacts, which must be reliably suppressed or avoided to gain consistent information. The observed noise and artifact sources comprise (1) the shadow of other organs present in the chest cavity (bones and lungs), (2) the heart motion that complicates the contrast agent movements, (3) movements due to breathing, and (4) the appearance of external objects in some of the images for some patients.
An unsharp masking technique is used to sharpen the edges of the coronary vessels and to suppress the shadow of organs other than the heart artery-tree (20). The unsharp contrast enhancement filter used in this study is of size 3-by-3 and is derived from the negative of the Laplacian filter. Applying this filter to the original image produces an estimation of a blurred version of the image. This estimation is then subtracted from the original image, resulting in a sharp image with enhanced edges. Let I{k} be the k-th image frame of the sequence; following this description, the sharpened frame is Î{k} = I{k} − (f ∗ I{k}), where f denotes the 3-by-3 unsharp filter kernel and ∗ denotes 2D convolution.
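As an illustration, a 3-by-3 negative-Laplacian sharpening kernel can be applied per frame as below; this is a minimal sketch in Python, and the weight alpha is an assumption, since the paper does not report the kernel coefficients:

```python
import numpy as np
from scipy.ndimage import convolve

# 3x3 sharpening kernel built from the negative of the Laplacian, as the
# text describes; the strength "alpha" is an assumption (not reported).
alpha = 0.2
laplacian = np.array([[0.0,  1.0, 0.0],
                      [1.0, -4.0, 1.0],
                      [0.0,  1.0, 0.0]])
identity = np.zeros((3, 3)); identity[1, 1] = 1.0
kernel = identity - alpha * laplacian

def sharpen(frame):
    """Return an edge-enhanced version of one angiographic frame (2D array)."""
    return convolve(frame.astype(float), kernel, mode="nearest")
```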
Initial Segmentation of the Coronary Artery Tree
The coronary arteries, as all blood vessels, have shape as tubes or pipes in the angiograms, and they appear dark in the X-ray angiographic sequences. The first step of the vessel segmentation is to exploit this knowledge. An image processing filter technique is utilized to emphasize such shapes, using 2D Frangi vesselness Hessian filter (17), to initially detect vessels in each angiography video frame independently. The Frangi vesselness filter is well used and studied, and examples and details can be found in the sample references (18,19). The method outputs an image frame, I f , where the pixels are interpreted as probabilities for the corresponding pixel of the input image frame to be part of a vessel or not. Subsequently the probabilities given in I f are thresholded with a parameter th, generating a binary image with potential vessels as segments. After keeping the segmented areas corresponding to the dark tubular structures in the image sequence, morphological image processing operations such as opening and closing were applied to detect the largest component representing the main coronary artery-tree in the image. The resulting segmented frames compromise this initial segmented coronary artery tree found from each frame independently from the previous or next in time.
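A per-frame version of this step might look as follows with scikit-image; this is an illustrative sketch, and the sigma range and threshold value are our assumptions, not values from the paper:

```python
import numpy as np
from skimage.filters import frangi
from skimage.morphology import binary_opening, binary_closing, disk
from skimage.measure import label

def initial_vessel_mask(frame, th=0.05):
    """Rough per-frame segmentation of the contrast-filled artery-tree.
    `th` is an illustrative threshold; the paper lowers it iteratively."""
    # Vessels appear as dark tubes on a brighter background -> black_ridges.
    vesselness = frangi(frame, sigmas=range(1, 8), black_ridges=True)
    mask = vesselness > th
    # Remove speckle and close small gaps with opening/closing.
    mask = binary_closing(binary_opening(mask, disk(1)), disk(2))
    # Keep the largest connected component as the main artery-tree.
    lab = label(mask)
    if lab.max() == 0:
        return mask
    sizes = np.bincount(lab.ravel()); sizes[0] = 0
    return lab == sizes.argmax()
```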
Removing Heart Motion Using Non-rigid Registration
Using the Frangi vesselness filter and 2D image information gives a first approximation to the segmentation of the artery-tree frame by frame. However, with use of the image sequence, the similarity between consecutive frames can be utilized to improve the segmentation. The changes in the shape and the length of the visible artery-tree in the image sequence are due to two major factors: (1) the beating of the heart (contractions and expansions); and (2) the increase in the length of the visible artery-tree as the dye propagates further into the arteries. If the heart motion can be modeled and removed from the image sequence, the only strong remaining motion of interest is that of the contrast agent moving through the arteries.
The beating heart has a non-rigid motion, which can be well modeled by an affine transformation plus a free-form deformation (FFD) based on B-splines. Therefore, we tailored the algorithm developed by Rueckert et al. (21), which combines global and local transformations, providing a high degree of flexibility to model a 3D deformable object. Spline-based FFD has also shown good results in tracking and analyzing the motion of cardiac images using positron emission tomography (22). The global motion describes the overall motion of the heart, which we model with an affine transformation, a general class of transformations with 12 degrees of freedom to describe rotation, translation, scaling, and shearing. In addition, FFD was used to model the local deformation of the heart and consequently the artery-tree. The local deformations have a variable nature, can vary between patients and with age, and are not well modeled through parameterized transformations.
As a step of registration, a similarity measure is required to relate the two images and measure the degree of alignment between them. If the two images are aligned the similarity measure is maximized. Usually a direct comparison of image intensities, for example sum of squared differences or correlation, is used to measure the similarity between the two images. However, the propagation of the contrast agent during the sequence induces a change in image intensities from frame to frame. Thus, a direct comparison of image intensities is not an accurate similarity measure for this application. Therefore, normalized mutual information was used as a similarity criterion, which is based on information theory and expresses the amount of information that one of the images contains about the other (23,24).
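A compact sketch of such a registration in Python with SimpleITK is shown below; Mattes mutual information is used as a readily available stand-in for the normalized mutual information described in the paper, and the mesh size, optimizer settings, and omission of the global affine stage are our simplifications:

```python
import SimpleITK as sitk

def register_frames(fixed_arr, moving_arr, mesh_size=(8, 8)):
    """Align two consecutive (binary) vessel frames with a B-spline FFD,
    optimizing a mutual-information metric. Returns the fitted transform."""
    fixed = sitk.GetImageFromArray(fixed_arr.astype("float32"))
    moving = sitk.GetImageFromArray(moving_arr.astype("float32"))

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=32)
    reg.SetOptimizerAsGradientDescent(learningRate=1.0,
                                      numberOfIterations=100)
    reg.SetInterpolator(sitk.sitkLinear)

    # Free-form deformation parameterized by a B-spline control-point grid.
    tx = sitk.BSplineTransformInitializer(fixed, list(mesh_size))
    reg.SetInitialTransform(tx, inPlace=True)
    return reg.Execute(fixed, moving)
```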
Enhancing the Segmented Artery-Tree Based on Registration Results
The inputs to the registration step are binary frames corresponding to the segmented artery-tree. Thus, the accuracy of the registration is dependent on the quality of the segmentations. Missing parts of the artery-tree in the segmentations from one frame to the next can either be the result of poor segmentation or of contrast agent propagation over time. However, we assume that the vessel shape changes due to the propagation of the contrast agent are considerably smaller than the disturbances caused by the heart movement in the two consecutive images. Therefore, an adequately large misalignment in a pair of frames after registration is presumably caused by poor segmentation.
If poor segmentation is detected, the probability output from the Frangi filter, If, is thresholded again with a lower threshold th. This will typically include more segmented areas, and the additional areas are examined to decide whether they should be included in the artery-tree segmentation or not. This decision is based on their distance to the main artery-tree, both the distance to the center and to the edge of the previously segmented area. After enhancing the segmentations, the segmented images were saved into the sequence and underwent another round of registration. This procedure was repeated until the registration results were satisfactory. The satisfaction criterion was defined based on the difference in the number of segmented pixels in the two aligned images. For example, the number of segmented pixels in frame number two is always larger than the number of segmented pixels in frame number one due to the propagation of the dye; at the same time the difference is not expected to be very large, since it should only correspond to the dye movement. Therefore, the satisfaction criterion was defined so that the difference in the number of segmented pixels, Σ B{k} − Σ B{k−1}, lies in the interval [500, 1500] pixels, where B{k} is the k-th binary image in the sequence, after segmentation and registration, with ones at the positions of the segmented arteries. The limits were chosen empirically.
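A simplified single-pass version of this refinement loop is sketched below; the starting threshold and step size are assumptions, and the paper's re-registration and distance-to-tree gating steps are only noted in comments:

```python
def refine_masks(vesselness, masks, th0=0.05, step=0.005):
    """When a frame adds implausibly few pixels relative to its predecessor,
    lower the Frangi threshold to admit more candidate vessel segments.
    `vesselness` and `masks` are per-frame lists of 2D arrays."""
    for k in range(1, len(masks)):
        th = th0
        growth = int(masks[k].sum()) - int(masks[k - 1].sum())
        while growth < 500 and th > step:
            th -= step  # include more candidate vessel pixels
            masks[k] = vesselness[k] > th
            # (The full procedure also re-registers the pair and gates new
            #  segments by their distance to the main artery-tree.)
            growth = int(masks[k].sum()) - int(masks[k - 1].sum())
        # growth > 1500 would instead trigger re-examination of frame k-1.
    return masks
```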
Velocity Estimation
In hemodynamics it is commonly assumed that in medium to large arteries the blood can be modeled as an incompressible Newtonian fluid. The coronary arteries are considered to be medium-sized arteries. The flow rate, Q, of an incompressible fluid is constant even if the cross-sectional area, A, changes, or if the pipe branches. This gives the relationship Q = A1 · v1 = A2 · v2 for a single pipe and Q = n1 · A1 · v1 = n2 · A2 · v2 for branching pipes (or arteries), when assuming that the cross sections of the different branches are all A1 before the branching and A2 after the branching. Here ni denotes the number of branches, and vi denotes the average velocity over the cross section. The velocity is defined as the length that a blood particle has moved over a period of time, v = L/t. The angiographic sequence is acquired at a frame rate of 15 frames per second (fps); the change from one frame to the next thus corresponds to Δt = 1/15 ≈ 0.066 s. In this work some simplifications are made to estimate the blood velocity from the propagating edge of the contrast agent.
Assumption (Ass1)
We assume that all vessels in the area of interest, i.e., medium coronary arteries, have a constant cross-sectional area, Ai = A. This is a limitation because the vessels become thinner down in the branches, but we are not concerned with the smallest arteries and capillaries. This assumption gives n1 · v1 = n2 · v2 after a branch, and consequently (Equation 1): v2 = (n1/n2) · v1. Let the time t1 correspond to the time it takes the blood to travel a distance L1 with one branch, n1 = 1. After branching into two, i.e., n2 = 2, let the time t2 correspond to the time it takes the blood to travel a distance L2 in one of the new branches (or L3 in the other branch, where L2 = L3 since Ai = A). The total time is t = t1 + t2 = L1/v1 + L2/v2. From Equation 1 we can write t2 = L2/v2 = (n2 · L2)/(n1 · v1). This gives an estimation for the velocity before the branching as (Equation 2): v1 = (n1 · L1 + n2 · L2)/(n1 · t). From the previous steps, the segmentation of the coronary tree during the propagation of the contrast fluid was found for all image frames throughout the time sequence. A skeletonization of the segmented tree in each frame is now performed to estimate the difference in the length of the arteries from one frame to the next, corresponding to n · L if all arteries were seen in the tangential direction of the X-ray projection. Of course this is not the case, and this also imposes a limitation of the method, but the angle of the arteries at the edge of the contrast fluid is considered constant from one frame to the next. An estimated velocity can then be found as (Equation 3): v_est = L_T / T, where T corresponds to a time interval during propagation of the contrast fluid, and L_T corresponds to the total difference of the summed artery length during the time period T, found by the skeletonization of the segmented arteries. To find the appropriate frames of propagating contrast fluid, we wish to estimate the required time for the contrast agent from the start of the injection to full propagation through the coronary arteries, and look at the L_T of the corresponding frames. An angiographic sequence lasts for several heart beat cycles, but full propagation of the contrast agent requires less than a couple of heart cycles.
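As a quick sanity check of Equation 2, with illustrative numbers that are not data from the study:

```python
def velocity_before_branch(n1, L1, n2, L2, t):
    """Equation 2: average velocity proximal to a branch point, assuming
    equal cross-sectional areas (Ass1). Lengths in meters, time in seconds."""
    return (n1 * L1 + n2 * L2) / (n1 * t)

# Hypothetical example: the dye front advances 30 mm before a bifurcation
# and 15 mm into each of the two daughter branches over 0.4 s in total.
v1 = velocity_before_branch(1, 0.030, 2, 0.015, 0.4)
print(f"v1 = {v1:.3f} m/s")  # 0.150 m/s
```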
The segmented area of the projected artery-tree in the XY plane for all frames of the video sequence is found for each patient. It is known that the heart contraction/expansion affects the shape and the size of the coronary artery-tree semi-periodically with the heart cycle. This affects the 2D projections of the artery-tree visible in the angiograms and consequently results in smaller area measures. The change in the shape of the arteries due to the heart motion is rather complex, but it has a beneficial characteristic due to the repetitiveness of the heart cycle. Figure 2 shows plots of the coronary artery area measurements found from the segmented areas (blue circles) against the frame number for a sample video sequence. This is derived from one sequence per patient. The sequence of area measurements is somewhat noisy, and fitting the data to an appropriate model is desirable. A polynomial model was chosen due to its simplicity and because it was expected to be able to model the increase of the area during propagation of the dye, which was assumed to be near linear. However, because the heart is moving with every heart beat, there is an overlying fluctuation of the vessel area, caused by the stretching of the vessels rather than the propagation of contrast fluid. A low-order model will not be able to capture this additional fluctuation on top of the propagation of the fluid, and will typically result in a too low slope angle, illustrated in Figure 2 where 3rd-, 5th-, and 7th-degree polynomials are fitted and displayed with red curves. A 7th-degree polynomial was chosen as a compromise between accurate modeling of a complex motion and having a simple model, based on studying the resulting time sequences. A new and larger dataset should be used to verify this and the remaining parts of the proposed method. The time corresponding to the first maximum of the polynomial function is a good estimate of the first time the contrast dye has fully propagated into the segmented part of the coronary artery-tree.
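The fit-and-find-maximum step can be sketched as follows with numpy; the function name and fallback behavior are our assumptions:

```python
import numpy as np

def first_full_propagation_frame(areas):
    """Fit the per-frame segmented-area curve with a 7th-degree polynomial
    and return the (possibly fractional) frame index of its first local
    maximum, taken as the time of full dye propagation."""
    frames = np.arange(len(areas), dtype=float)
    poly = np.poly1d(np.polyfit(frames, areas, deg=7))
    roots = poly.deriv().r                      # critical points of the fit
    real = np.sort(roots[np.isreal(roots)].real)
    real = real[(real >= 0) & (real < len(areas))]
    for r in real:
        if poly.deriv(2)(r) < 0:                # concave down -> maximum
            return float(r)
    return float(len(areas) - 1)                # fallback: end of sequence
```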
As illustrated in Figure 2, right, some of the captured video frames begin before the contrast dye injection starts, and several frames of the video show only the inserted catheter wire. The start time of the contrast dye injection, t_s, and the time of the first maximum of the polynomial function, t_max, are found automatically, and the duration between them is T. The L_T of the corresponding frames is found from the skeletonization, and the velocity is estimated as v_est from Equation 3. The method is denoted Mlength in the results, and it provides an estimation of the blood flow velocity in m/s.
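A minimal sketch of the Mlength computation, assuming per-frame binary masks and using skeleton pixel count as a crude proxy for summed artery length (the paper does not specify its length measure beyond the skeletonization step):

```python
from skimage.morphology import skeletonize

PIXEL_MM = 0.2        # reported pixel resolution (mm per pixel)
FRAME_DT = 1 / 15.0   # seconds per frame at 15 fps

def mlength_velocity(masks, t_s, t_max):
    """Summed skeleton-length gain between injection start (frame t_s) and
    first full propagation (frame t_max), divided by the elapsed time.
    Returns an estimate in m/s; diagonal skeleton steps are not corrected."""
    L_start = skeletonize(masks[t_s]).sum() * PIXEL_MM / 1000.0   # meters
    L_stop = skeletonize(masks[t_max]).sum() * PIXEL_MM / 1000.0
    T = (t_max - t_s) * FRAME_DT
    return (L_stop - L_start) / T
```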
An alternative approach, Mslope, is to estimate the slope of the polynomial function, as this provides an estimate of the change in segmented (projected) area over time. Using the same assumption (Ass1), where all arteries in the region of interest are considered to have the same cross-sectional area, A, the segmented artery-tree area corresponds to the length times the diameter of the arteries. Thus, the slope provides a measure correlated with the blood velocity. The unit is (pixel area)/(frame time interval), which is equivalent to (0.2 mm)²/0.066 s. To interpret this as a velocity, we would have to estimate the typical diameter of the arteries, but in this work we solely investigate whether it is significantly correlated with the Doppler velocities. The 2D projection imposes an inaccuracy and is thus a limitation.
Statistical Methods
The estimated velocity should correlate with the measured velocity of the left anterior descending artery from transthoracic Doppler recordings. The Doppler recordings are from the medium-sized arteries, not the microvessels, corresponding to …
SEGMENTATION ASSESSMENT
The presented segmentation algorithm was applied to 1,428 image frames randomly chosen from 11 patients, from each of which the coronary artery-tree was extracted. Assessing the segmentation algorithm is notoriously hard because of the difficult and time-consuming process of manually segmenting the artery-tree. Therefore, we proposed the following alternative method: The cardiologists randomly chose one video sequence per patient, from which five images were randomly selected. Then, two types of markers were manually located: (1) artery-tree markers and (2) background markers. The artery-tree markers were put inside the visible arteries and the background markers were put in the close vicinity of the visible arteries. The background markers comprise a higher number of locations in comparison to the artery-tree markers. In total, 300 marker locations were used to construct the manual annotations of the artery-tree and the background in the selected five images of a sequence.
Considering the location of these markers in the automatic segmentation of the artery-tree: if vessel markers were inside the segmentations, these were True Positives (TP); if background markers were inside the segmentations, these were False Positives (FP); background markers outside the segmentation were True Negatives (TN); vessel markers outside the segmentation were False Negatives (FN). Furthermore, sensitivity, specificity, and accuracy were estimated as follows: Sensitivity = TP/(TP + FN), Specificity = TN/(TN + FP), and Accuracy = (TP + TN)/(TP + FP + TN + FN). The approach for evaluation of the automatic segmentation algorithm is illustrated in Figure 3. In this figure the binary results of automatic segmentation of the coronary artery-tree are shown with the manual annotations of the artery-tree and the background superimposed. In accordance with the automatic segmentation results, the manual annotations are divided into (1) True Positives (blue stars enclosed with red circles), (2) True Negatives (green stars), (3) False Negatives (red stars), and (4) False Positives (red stars enclosed with green circles).
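A direct transcription of this marker-based evaluation, assuming the marker positions are given as (row, column) pixel coordinates; the function and variable names are illustrative:

```python
def marker_metrics(mask, vessel_pts, background_pts):
    """Evaluate a binary segmentation `mask` against manually placed markers."""
    tp = sum(bool(mask[r, c]) for r, c in vessel_pts)
    fn = len(vessel_pts) - tp
    fp = sum(bool(mask[r, c]) for r, c in background_pts)
    tn = len(background_pts) - fp
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }
```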
RESULTS
The characteristics of the patients enrolled in this study are summarized in Table 1. An example illustrating the effect of the registration approach is shown in Figure 4 using the chessboard visualization technique, where the black pieces of the chessboard show the first image and the white pieces show the second image. This shows that the artery-trees are better aligned and change more smoothly after the registration. Figure 5 shows the results of segmentation algorithm in three sample images from different patients. In this figure the original images are shown with the outer boundary of the artery-tree (green curves) superimposed. The results of subjective evaluation of the processed images for 11 randomly selected patients show accuracy of 97%, specificity of 99%, and sensitivity of 93%.
The estimated velocity for all the patients using the proposed multi-step algorithm on the X-ray images is illustrated in Figure 6, showing a good relation between the two measured velocities. Table 2 shows the velocity estimation from Mlength, as well as the numbers from Mslope, in comparison to the measured velocity in the LAD using transthoracic Doppler imaging, in addition to the correlation results using Spearman and Pearson tests. The one-sample Kolmogorov-Smirnov test verified normal distribution of the velocity measurements (p < 0.05). We found a moderate but significant correlation between flow velocity assessed by Doppler and the proposed Mlength method (r_s = 0.55, p < 0.008, Spearman; r_p = 0.58, p < 0.005, Pearson). Similar correlation and significance were found for the Mslope method (r_s = 0.50, p < 0.02, Spearman; r_p = 0.55, p < 0.009, Pearson).
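For readers who wish to reproduce this style of analysis on any pair of velocity series, the sketch below computes the same statistics with scipy; the function and variable names are ours, not the paper's:

```python
from scipy.stats import spearmanr, pearsonr, kstest, zscore

def correlate(doppler_v, estimated_v):
    """Spearman (rank) and Pearson (linear) correlation between the two
    velocity series, plus a one-sample K-S normality check on the
    standardized estimates."""
    rs, ps = spearmanr(doppler_v, estimated_v)
    rp, pp = pearsonr(doppler_v, estimated_v)
    _, p_norm = kstest(zscore(estimated_v), "norm")
    return {"spearman": (rs, ps), "pearson": (rp, pp), "ks_p": p_norm}
```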
DISCUSSION AND CONCLUSION
The main finding in the current study is that coronary blood flow velocity estimated from cine X-ray angiographic sequences using a fully automatic novel method correlates moderately with velocities measured using the more conventional method of transthoracic Doppler.
Image processing techniques are used to initially segment the artery-tree from each frame in an X-ray angiographic sequence; thereafter the heart motion is modeled and removed using non-rigid image registration. Suppressing the effect of the heart motion provides the ability to use the information from the previous and following frames in the time sequence to improve the segmentation of the artery-tree. With the availability of the segmented artery-tree for each frame in the time sequence during contrast fluid propagation, the blood flow velocity is estimated from the velocity of the contrast fluid propagation (Mlength method). Flow velocity measurement from contrast angiograms has previously been performed in anesthetized cats and rabbits to describe the physiology of the pulmonary circulation, using a specially designed X-ray apparatus (25). As far as we know, calculation of coronary flow velocity based on contrast fluid propagation employing standard coronary cine X-ray angiographic sequences in humans has not been done before.
The most important advantage of our proposed method is that it is based on selective coronary angiographic sequences (2D frames in a time sequence). Selective coronary angiography is still the routine procedure for obtaining anatomical information for clinical decision-making in patients presenting with suspected coronary artery disease and it is cost effective compared to other techniques.
This is a prospective observational study in patients admitted for coronary angiography due to angina pectoris. All patients had normal or near normal coronary arteries. The coronary flow velocity was assessed under controlled circumstances with Doppler. The angiography was performed in a clinical setting with all the angles and heights registered, but without standardized injection velocity and volumes. However, this might strengthen the method as a clinical approach. The contrast dye is injected by a well-trained interventional cardiologist and is expected not to seriously affect blood-flow velocity.
Limitations
There are some limitations of the study.
1. The number of study samples is limited to 21 patients. The method is developed by studying and experimenting on the cine X-ray angiographic sequences of these 21 patients; thus it is necessary to validate the findings on a new and larger dataset.
2. Some assumptions are made when estimating the velocity. From Ass1 in the method section: we assume that all vessels in the area of interest, i.e., medium coronary arteries, have a constant cross-sectional area, Ai = A. This is a limitation where the vessels become thinner further into the branches, but we are not concerned with these smallest arteries and capillaries.
3. The different views are chosen by the clinician during the procedure to maximize the view of the artery under consideration. This means that the view is chosen to get the upper and mid part of the artery as tangential to the X-ray projection as possible. The method estimating the length of the vessels, n · L, based on the skeletonization of the segmented vessels assumes the vessels to be tangential with the view, which is not always true, thus imposing a limitation. The velocity estimate is made over a number of frames from the start of the contrast propagation, t_s, to the time of the first maximum of the polynomial function, t_max, hoping that this would provide an averaging effect removing some of the noise caused both by the projection not being tangential and by inaccuracies in the segmentation.
4. The velocity estimation relies heavily on the segmentation; thus mistakes and inaccuracies in the segmentation might lead to wrongly estimated velocities.
5. The segmentation results rely on the registration and modeling. Modeling the non-rigid movement of the heart in the presence of other rigid and non-rigid movements, such as patient breathing and sudden body movement due to probable pain, is not an easy task and sometimes its accuracy is affected. Moreover, non-rigid registration is a time-demanding procedure, and on-site estimation of blood flow velocity in real time is not yet an option.
Conclusion and Further Work
The main finding of the current study is that coronary blood flow velocities estimated from cine X-ray angiographic sequences using a fully automatic method are moderately correlated with velocities measured using the conventional method of transthoracic Doppler. The method should be verified on a larger dataset.
The results show a moderate correlation, but the estimates are not as close to the measured velocities as one could hope. This can partly be attributed to limitations 2, 3, 4, and 5. We will continue working on improving segmentation and registration to deal with the effects of limitations 4 and 5. There are no simple solutions to limitations 2 and 3, but we are currently looking into the possibility of developing an approximate 3D reconstruction based on three or more selective coronary angiographic sequences (2D frames in a time sequence). For such a 3D reconstruction to be possible, landmark points have to be identified in the different sequences, and synchronization with the heartbeat is necessary. Thus, the heart motion needs to be adequately modeled.
In future work we want to calculate the coronary flow reserve (CFR) for assessment of microcirculation as the etiology of angina in patients with normal coronary arteries. Further research will include X-ray angiography sequences after infusion of a vasodilator (adenosine), in addition to the baseline sequences used in the present study. Adenosine is a naturally occurring substance in the body, and interventional cardiologists use it during such procedures to maximally increase the blood flow by reducing the resistance in the microvessels. There is potential for assessing microvascular function by using the ratio of basal to adenosine-stimulated blood velocity estimates as an estimate of CFR. A few approaches have been presented to assess CFR using CT angiography, (12) and (11); however, information concerning flow changes with the cardiac cycle is lacking. In the proposed method, the area of the coronary artery tree covered by the contrast dye is modeled by a 7th-degree polynomial, capturing the cardiac cycles with the periodic changes in the shape of the coronary arteries over time. This information helps estimate the time duration of the first start-to-complete cycle of propagation of the injected contrast dye.
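A minimal sketch of this modeling step, assuming per-frame contrast-covered area values are already available: fit a degree-7 polynomial, locate its first local maximum, and form a CFR-like ratio from two velocity estimates. Function names and the frame rate are illustrative assumptions, not the authors' code.

```python
import numpy as np

def first_polynomial_maximum(area_per_frame, frame_rate_hz=15.0, degree=7):
    """Time (s) of the first local maximum of a degree-7 polynomial fitted to
    the contrast-covered area over time."""
    t = np.arange(len(area_per_frame)) / frame_rate_hz
    coeffs = np.polyfit(t, area_per_frame, degree)
    # stationary points: real roots of the derivative inside the time window
    roots = np.roots(np.polyder(coeffs))
    real = roots[np.isreal(roots)].real
    candidates = np.sort(real[(real > t[0]) & (real < t[-1])])
    second = np.polyder(coeffs, 2)
    for r in candidates:
        if np.polyval(second, r) < 0:      # negative curvature -> local maximum
            return r
    return t[np.argmax(area_per_frame)]    # fall back to the discrete maximum

def cfr_estimate(v_basal, v_adenosine):
    """CFR-like ratio of adenosine-stimulated to basal velocity estimates."""
    return v_adenosine / v_basal
```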
For potential real-time purposes in the future, powerful computational tools are needed for the means of non-rigid image registration.
AUTHOR CONTRIBUTIONS
MK proposed the methodology, carried out all analyses, and wrote the first draft of the manuscript. AIL formulated the problem of estimating the blood flow velocity from X-ray angiograms. KE and TE provided input regarding the image analyses and velocity estimation methodology. KE provided major input on the revision of the manuscript. AIL and CS provided the datasets and input on data acquisition and patient enrollment. MK, KE, AIL, TE and CS jointly wrote the final version of the manuscript.
"year": 2019,
"sha1": "613693ca235da6962ae0a63559a1563abefb3051",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fcvm.2019.00001/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "613693ca235da6962ae0a63559a1563abefb3051",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Courses timetabling problem by minimizing the number of less preferable time slots
In an organization with a large number of resources, timetabling is one of the most important elements of management strategy and one of those most prone to errors. Constructing a good timetable is a demanding task, so the aid of operations research approaches is essential. Timetabling in educational institutions can roughly be categorized into school timetabling, course timetabling, and examination timetabling, which differ from each other in the entities involved, such as the type of events, the kind of institution, and the type and relative influence of constraints. The educational timetabling problem is generally a complex combinatorial problem consisting of NP-complete sub-problems. The requested timetable must fulfill a set of hard and soft constraints of various types. In this paper we consider a university course timetabling problem whose objective is to minimize the number of sessions in less preferable time slots. By less preferable time slots we mean those in the early morning (07.00-07.50 AM) or in the late afternoon (17.00-17.50 PM), which in fact lie outside regular working hours, those scheduled during the lunch break (12.00-12.50 PM), those on Wednesday 10.00-11.50 AM, which coincide with the Department Meeting, and those on Saturday, which should in fact be a day off. In practice, timetables with a number of activities scheduled in the abovementioned time slots are commonly encountered. The course timetabling for the Educational Program of General Competence (PPKU) students in the odd semester at Bogor Agricultural University (IPB) has been modelled in the framework of integer linear programming. We solved the optimization problem heuristically by categorizing all the groups into seven clusters.
Introduction
Timetabling is one of the problems frequently encountered in arranging assignments in organisations, including educational institutions. When we want to solve the problem using a mathematical model, we need to consider in depth the various kinds of constraints related to institutional policies or regulations, so that a timetable that is as close to ideal as possible can be created.
Even though they are not as complicated as those at universities, timetabling problems at the school level are still interesting to study because better-quality solutions can be provided. For example, Birbas et al. [1] solved the problem of assigning optimal working shifts to each teacher; the problem is modelled in an integer programming framework. Cangaiovic and Schreuder [9] proposed a special case of a teacher-class timetabling problem that considers partial orderings between topics in the curriculum and specific requirements on the daily lectures. The problem is modelled as a discrete lexicographic model with a heuristic procedure combining two different approaches: a specific heuristic at the global level and a graph-colouring method at the daily level. Another timetabling problem at the high-school level was covered by Saviniec et al. [9], who applied three Iterated Local Search (ILS) algorithms, including two new neighbourhood operators, to solve problems from the literature heuristically. This study examined seven cases, and the results showed that the method is effective and efficient, as it was able to find the optimal solution in all cases. A 0-1 integer programming model of timetabling problems at the university level was presented by Daskalaki et al. [2]; the objective of the formulation is to minimize a cost function, and the model was used to solve timetabling cases in a Department of Engineering for five years with a large number of lectures and lecturers. Dimopoulou and Miliotis [3] reported the design and implementation of a PC-based computer system to help construct a combined schedule of lectures and exams at a university. The difficulties encountered were the limited availability of classroom space and the need to increase the flexibility of the courses that can be elected by students, which make the problem very tight. The system uses an integer programming (IP) model that assigns each course to a specific time slot and room, and it has been successfully applied at the Athens University of Economics and Business. In another case, Schimmelpfeng and Helber [7] solved timetabling problems using integer programming implemented at the Faculty of Economics and Management of Hannover University. The problem of determining course time slots in a faculty was studied by Ismayilova et al. [5], who developed course timetabling as an integer programming problem with multiple objective functions. The model is designed considering both the administration's and the instructors' preferences. Both modelling and solving such a problem are difficult tasks due to its size, varied nature, and conflicting objectives; the Analytic Hierarchy Process (AHP) and Analytic Network Process (ANP) are used to handle the different and contradictory objectives.
The course timetabling problem at a university with a large number of students is a difficult task, particularly ensuring that no courses overlap. Most timetables are prepared manually, or at least with the aid of a spreadsheet program. However, the manual process requires verification by a number of experts (e.g., lecturers and supervisors) before it can be approved. Thus, several researchers have solved the problem heuristically. One such study proposed a solution to a university course timetabling problem in which various soft and hard constraints were considered, such as the number of subjects, tutorials, classrooms, teachers, students, and workloads; a simulated annealing method was used to obtain optimal and sub-optimal solutions, illustrated on the timetabling problem of the Tamhidi Program at Universiti Sains Islam Malaysia as a case study [12]. Hertz [8] defined the daily quantum as the total number of periods scheduled into consecutive periods in a day for each chapter; the daily quantum has a minimum and a maximum number of courses per day, so no chapter is scheduled arbitrarily on any single day. The problem is solved using the Tabu Search method to obtain a feasible lecture schedule. Studenovsky [11] addressed the University Course Timetabling Problem (UCTP), presenting it as two sub-problems, the timetabling of time slots and the timetabling of rooms, and proved that in this case the UCTP can be polynomially reduced.
Hanum et al. [4] considered a capacitated exam-invigilator timetabling problem, formulated in a goal programming framework in which a number of constraints associated with the exam type, the type of exam controller, time availability, and several other preferences are classified into primary and secondary constraints. The model was applied to a simple case of exam invigilation in the Department of Mathematics, Bogor Agricultural University. A study at Universiti Malaysia Pahang by Kahar and Kendall [6] modelled capacitated examination problems, calculating the distance between rooms and splitting an exam over several rooms; a constructive heuristic is built to produce good-quality solutions for the real-world problems encountered. Fahrion and Dollansky [10] assigned both rooms and lecturers based on the faculty's educational plan and a selection of fixed assignments; to speed up the search for a solution, a simple heuristic priority scheme is assumed.
In this paper, we consider the course timetabling of the Educational Program of General Competence (PPKU), which is carried out by the Directorate of PPKU using a manual timetabling system. A difficulty that often hampers the timetabling process is that Microsoft Excel gives no warning if course timetables overlap, so the schedule-makers must construct the timetable carefully and check it manually in order to obtain a schedule that can be applied without any colliding courses. Because the current timetabling method is still manual, it hinders the directorate in producing the desired course timetable.
Therefore, the authors designed the course timetabling of PPKU for the first semester in an integer linear programming framework to minimize the number of courses in less preferred time slots. With this model, we hope to eliminate some less effective schedule patterns, such as a tutorial day scheduled before the lecture day of the corresponding subject, and subjects whose two tutorial classes are held on different days or time slots.
The outline of the paper is as follows: The problem formulation and the corresponding mathematical model are presented in Section 2. Section 3 provides an illustrative example. Section 4 develops the solution approach and numerical results. Some conclusions drawn from the study are presented in Section 5.
Problem formulation
The timetabling model involves about 3600 PPKU students organized into 34 groups, where each group takes a different set of courses. The groups are classified into seven classes; all groups within the same class take the same set of courses. Among the 34 groups, there are 9 groups in the agriculture class, 9 groups in the science and technology class, 6 groups in the economy and management class, 2 groups in the social and humanity class, 5 groups in the flora, fauna, and human class, 2 groups in the chemical and biochemical class, and 1 group in the international class. Overall, 35 different courses are offered, organized in 26 large rooms of capacity 130, 7 small rooms of capacity 80, 1 Laboratory of Biology, 1 Laboratory of Physics, 1 Laboratory of Chemistry, 1 mosque for the Islamic Religious Education tutorial, and 1 gymnasium for sport. The courses are scheduled over six working days (Monday-Saturday) with eleven time slots a day; each time slot lasts fifty minutes.
Each group is scheduled to follow a number of courses, and groups in the same class take the same courses. Each course has its own amount of contact time: some courses occupy only a single time slot, while others must be scheduled in several consecutive time slots. Some courses are accompanied by a tutorial or practicum. For subjects accompanied by a tutorial, the lecture day must come before the tutorial day of the corresponding subject. For the same subject, the two tutorial classes are scheduled on the same day and time slot but in different rooms. The timetable is designed so that each course taken by a group is scheduled once a week. Learning activities are not recommended on Friday between 11.00 and 13.00. Courses in the less preferred time slots should be avoided or minimized, and the number of courses per day should be limited so that students can absorb the learning material optimally.
The problem is modelled by first defining a number of indices, parameters, and variables. The constraints are then described as mathematical equalities or inequalities that capture the rules and regulations applicable at the institution, and finally the objective function states the optimization goal. Let y^min be the minimum number of courses that may be assigned to each group per day, y^max the maximum number of courses per group per day, ses1^min the minimum number of courses in the first time slot that may be assigned to each group per week, ses1^max the corresponding maximum, ses11^min the minimum number of courses in the last (eleventh) time slot per group per week, ses11^max the corresponding maximum, and d_l the contact time (face-to-face hours) of course l.
Decision variables
x_{ijkln}: 1 if course l is scheduled for group k on day i, time slot j, in room n, and 0 otherwise.
y_{ikln}: 1 if course l is scheduled for group k on day i in room n, and 0 otherwise.
Objective functions
Minimize S1, where S1 is the number of course sessions scheduled in the early morning slot (07.00-07.50 AM).
Constraints
Based on the terms and conditions of the timetabling, the constraints are as follows.
Σ_{k,l} x_{ijkln} ≤ 1, ∀ i, j, n.
The constraint set (7) ensures that each group is scheduled to follow at most one course in each time slot on every day, and the constraint set (8) guarantees that each room is used by at most one group in each time slot on every day. Based on capacity and utility, the rooms can be categorized into several types, as follows.
- Large rooms: usable for lectures (the main courses, delivered by lecturers rather than by assistants, who are second-, third-, or fourth-year students) and for tutorials (the additional courses, delivered by assistants, although a few tutorials are delivered by lecturers). Each tutorial has one or two classes.
- Small rooms: usable for tutorials that have two classes.
The constraint sets (9) and (10) guarantee, respectively, that each course is scheduled according to its contact time for each group and that each course is scheduled exactly once a day. The constraint sets (11.1), (11.2), and (11.3) guarantee that each course with more than one contact hour is scheduled in consecutive time slots. There are three possible cases for scheduling a course with more than one contact hour:
- the course may be scheduled in the first slot,
- the course may be scheduled in neither the first nor the last slot, or
- the course may be scheduled in the last slot.
The constraint set (12.1) ensures that each course taken is scheduled exactly once a week, and the constraint set (12.2) guarantees that, for each course accompanied by a tutorial, the lecture day is scheduled before the tutorial day of the corresponding subject. For example, the Biology lecture taken by group P1 is scheduled on Monday in one of the large rooms; we define this as constraint (12.2). Other courses are scheduled at the fixed times listed in Table 5. The constraint set (13) ensures that tutorial one and tutorial two are held on the same day and time slot. No courses take place on Friday between 11.00 and 13.00, which is stated in the constraint set (14). The constraint set (15) ensures that subjects that should not be taken by a group are not scheduled for it. The constraint sets (16.1), (16.2), (16.3), and (16.4) guarantee, respectively, that main courses are not scheduled in the small rooms and laboratories, that tutorials with one class are not scheduled in the small rooms and laboratories, that tutorials with two classes are not scheduled in the laboratories, and that practicums are not scheduled in the large and small rooms. The constraint sets (17), (18), and (19) guarantee, respectively, that each student attends a minimum of two and a maximum of four courses every day, a minimum of two and a maximum of four courses in the first time slot, and a minimum of two and a maximum of four courses in the last time slot. The constraint set (20) ensures that each laboratory starts at 07.00 AM from Monday to Friday; this constraint applies to Clusters One through Three. The constraint sets (21.1), (21.2), and (21.3) guarantee, respectively, that no group uses the Laboratory of Biology, the Laboratory of Physics, or the Laboratory of Chemistry on days and time slots already scheduled for previous clusters. The constraint set (22) ensures that each main course of the same subject is scheduled at most four times in the same day and time slot, which is associated with the availability of the lecturers. The constraint sets (23.1) and (23.2) ensure that the decision variables x and y are binary (zero-one).
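To make the structure of the formulation concrete, here is a minimal sketch of the binary ILP in Python with PuLP. The index sets, the set BAD of less-preferable (day, slot) pairs, and the toy data are illustrative assumptions, not the actual PPKU instance or the authors' code; only a few representative constraint families are shown.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

DAYS = range(6)          # Monday .. Saturday
SLOTS = range(11)        # eleven 50-minute slots per day
GROUPS = ["P1", "P2"]    # toy subset of the 34 groups
COURSES = ["Biology", "Physics"]
ROOMS = ["R1", "R2"]
# Less-preferable (day, slot) pairs; here just the first and last slot of
# every day, standing in for the S1-style penalty set.
BAD = {(i, 0) for i in DAYS} | {(i, 10) for i in DAYS}

x = LpVariable.dicts("x", (DAYS, SLOTS, GROUPS, COURSES, ROOMS), cat=LpBinary)

model = LpProblem("PPKU_timetabling_sketch", LpMinimize)

# Objective: minimise sessions placed in less-preferable slots.
model += lpSum(x[i][j][k][l][n]
               for (i, j) in BAD for k in GROUPS for l in COURSES for n in ROOMS)

for i in DAYS:
    for j in SLOTS:
        # (7)-style: each group attends at most one course per slot.
        for k in GROUPS:
            model += lpSum(x[i][j][k][l][n] for l in COURSES for n in ROOMS) <= 1
        # (8)-style: each room hosts at most one group per slot.
        for n in ROOMS:
            model += lpSum(x[i][j][k][l][n] for k in GROUPS for l in COURSES) <= 1

# (12.1)-style: each course is taken exactly once a week by each group.
for k in GROUPS:
    for l in COURSES:
        model += lpSum(x[i][j][k][l][n]
                       for i in DAYS for j in SLOTS for n in ROOMS) == 1

model.solve()
print("sessions in less-preferable slots:", int(model.objective.value()))
```

In the same spirit, the cluster heuristic of Section 4 would solve one such model per cluster in turn, carrying the laboratory bookings of earlier clusters into the (21.x)-style constraints of later ones.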
An application: a demonstrative example
We consider a particular problem with 35 courses, 34 groups, 6 days, 11 time slots, and 6 lecturers. The courses that must be taken by each group are shown in Table 1.
The solution
Because the desired result could not be obtained with a single model, we break the problem down into seven clusters, where each cluster consists of several groups and rooms such that no room is shared between clusters. The program for each cluster is run in a stepwise process. The partition of the group members and the rooms usable by each cluster is shown in Table 2 and Table 3, respectively. However, every laboratory can be used by all clusters, on different days and time slots. In practice, there still exist tutorial days that fall before the lecture day of the corresponding subject, as listed in Table 4; with operations research modelling we expect to avoid such situations. Therefore, the authors scheduled the lecture and tutorial times so that the lecture is held on a day before the tutorial day of the corresponding subject, as shown in Table 5. In the mathematical model, this is expressed in constraint (12).
Conclusions
Our results, summarized in Table 6, reveal that timetabling based on operations research can reduce the number of course sessions scheduled in less preferable time slots. For instance, only 19 course sessions are now scheduled in the early morning slot, equivalent to a 68 percent reduction compared with the manually produced timetable. Moreover, a hundred percent reduction was attained for the late afternoon sessions.
"year": 2017,
"sha1": "b5d6e157005e73d2af2bc22ea7eacc6b028d71fa",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/166/1/012025",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "89f0e29d220601b9dc4238ac33be410029fe0c2e",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
Micro‐ and macroclimate interactively shape diversity, niches and traits of Orthoptera communities along elevational gradients
Temperature is one of the main drivers shaping species diversity and assembly processes. Yet, site‐specific effects of the local microclimate on species and trait compositions of insect communities have rarely been assessed along macroclimatic temperature clines.
INTRODUCTION
Climate change is causing a restructuring and reassembly of biotic communities worldwide (Habel et al., 2016; Halsch et al., 2021; Hill et al., 2021). Mountain regions are particularly at risk as temperatures rise rapidly there (Nigrelli & Chiarle, 2021; Pepin et al., 2015), while they serve as exceptionally important refugia for cold-adapted and often endemic species (Berger et al., 2010; Laiolo et al., 2018; Trew & Maclean, 2021). Since air temperature declines steadily with elevation (lapse rate of ~−5.5 K/1000 m), gradients along mountain slopes can be used as space-for-time surrogates to study community assembly processes and anticipate species' responses to climate warming (Körner, 2007; Rahbek et al., 2019). However, the temperature conditions species are exposed to within their habitats are not only determined by the elevational macroclimatic gradient per se but are also significantly modified by local conditions, the microclimate (Kankaanpää et al., 2021; Ohler et al., 2020; Scherrer et al., 2011). As the microclimate can buffer changes in the macroclimate (Bennie et al., 2013; Senf & Seidl, 2018; Stark & Fridley, 2022), neglecting it may lead to an overestimation of species' responses to climate warming (Scherrer et al., 2011). Yet, the scale at which species respond to climatic variation (micro vs. macro) depends on the size, area requirements and dispersal distances of species (Poggiato et al., 2023), and studies on how the interplay of both facets of climate shapes biological communities in mountain regions remain inconclusive (Potter et al., 2013).
Despite facing above-average temperature increases (Pepin et al., 2015), mountains provide a variety of microclimatic conditions in close proximity, based on differences in aspect/orientation/exposition and slope (topographic heterogeneity), radiation, wind speed, substrate and vegetation structure at the meso- and microscale (Albrich et al., 2020; Körner & Hiltbrunner, 2021; Ohler et al., 2020; Rita et al., 2021). Especially on sunny days, soil temperatures of north- and south-facing slopes can differ starkly at similar elevations, comparable to differences expected from about 500 m of elevational difference for seasonal average soil temperatures (Ohler et al., 2020; Scherrer et al., 2011), with the hottest conditions reached at steep sun-facing slopes with low vegetation cover (Maclean et al., 2019). This climatic heterogeneity facilitates the persistence of species with diverging niches in proximity, resulting in high rates of β-diversity in mountains (Fontana et al., 2020; Sponsler et al., 2022; Tello et al., 2015; Zografou et al., 2017). Within such a mosaic of climatic conditions, species are known to match their climatic requirements through macroclimate-dependent preferences for specific suitable microclimates (relative niche consistency; Dobrowski, 2011; Feldmeier et al., 2020), but this has rarely been linked to species traits.
Rapid range shifts of insect communities to higher elevations in mountain regions due to temperature increase have been shown (Kerner et al., 2023; Maihoff et al., 2023; Ogan et al., 2022).
However, responses to increasing temperatures are species-specific (Engelhardt et al., 2022; Hickling et al., 2006; Neff et al., 2022; Poniatowski et al., 2020). Since certain ecological and morphological traits can be beneficial under either warm or cold climatic conditions, they determine distribution patterns of species along climatic gradients (Chichorro et al., 2022; Classen et al., 2017; Hoiss et al., 2012; Leingärtner et al., 2014; Peters et al., 2016). At high elevations, short growing seasons and cold temperatures shorten the time available to complete a life cycle, demanding thermoregulatory adaptations (e.g. faster-heating colour phenotypes; Dieker et al., 2018; Fernandez et al., 2023; Harris et al., 2013; Köhler & Schielzeth, 2020) and rapid development, which is associated with small adult body size (Berner et al., 2004; Levy & Nufio, 2015; Tiede et al., 2018), or early hatching phenology to prolong the season (Ingrisch & Köhler, 1998; Kankaanpää et al., 2021). Additionally, the reduction of wing length independent of body size is a common adaptation in cold environments (Laiolo et al., 2023; Leihy & Chown, 2020; Tiede et al., 2018), as it may be advantageous to allocate resources to reproduction rather than wing development (energy trade-off; Hodkinson, 2005; Laiolo et al., 2023; Tiede et al., 2018). Cold habitats may also require utilizing a broad range of food items, thus favouring less specialized species (König et al., 2022; Pitteloud et al., 2021; Rasmann et al., 2014). Despite the urge to understand the position of species' climatic niches to estimate potential threats in the context of climate change, it is largely neglected how microclimate interacts with macroclimate to form the climatic niches of species, and how trait combinations promote or constrain the use of microclimatic refugia under a warmer macroclimate.
Evidence for climatic filtering processes has been demonstrated by approaches based on mean trait values without considering intraspecific variability, assuming that the difference in functional trait values between species is larger than within species (Jung et al., 2010). However, a growing number of studies have suggested that intraspecific variability can hint at underlying filtering mechanisms (Classen et al., 2017; Jung et al., 2010; Laiolo et al., 2023; Tiede et al., 2018).
Hence, the distribution of Orthoptera species in Europe is predominantly determined by climatic conditions, leading to diversity decreases towards northern latitudes and high elevations, which are characterized by less favourable climatic conditions and, therefore, shorter seasons (Geppert et al., 2021; Hochkirch & Nieto, 2016). The specific demands of Orthoptera on the microclimate of their habitat make them suitable indicators of environmental change (Bazelet & Samways, 2011; Fartmann et al., 2012).
Here, we ask: how does the interplay of macro- and microclimate drive diversity patterns and the assembly of insect herbivore communities? To answer this question, we studied orthopteran assemblages along an elevational gradient in a topographically heterogeneous mountain region (Table S1). (4) Intraspecifically, body sizes, wing lengths and coloration should follow the same clines as the community-level trait patterns.
Study region & study sites
We studied Orthoptera communities at grassland sites on calcareous bedrock along elevational gradients in southern Germany (Bavaria), spanning roughly 600-2150 m above sea level (m a.s.l.) (Figure 1; Hoiss et al., 2013). The tree line is on average just above 1500 m in this region (Sponsler et al., 2022). When selecting the sites, we took special care to ensure a balanced and even distribution of orientation and elevation spanning five elevational zones (submontane: 600-825 m a.s.l., montane: 825-1200 m a.s.l., high montane: 1200-1500 m a.s.l., subalpine: 1500-1825 m a.s.l. and alpine zone: 1825-2150 m a.s.l.). By focusing on a single mountain region, we attenuate large-scale spatial variation in the species pool resulting from historical or biogeographical circumstances, which allows direct inference on assembly processes caused by local climatic variation. All grassland sites were either extensively managed (n = 48; one cut per year on meadows, extensive cattle or sheep grazing on pastures with 0.5-1.5 livestock units per ha) or unmanaged (n = 45). Each established grassland study site covered 60 × 60 m.
Climatic variables
We used the mean elevation of the study sites as a proxy for macroclimatic temperature variation along the gradient (hereafter referred to as macroclimate/elevation; correlation with summer-seasonal mean temperature derived from a climate model based on neighbouring local climate station temperature data: r = −.98, df = 1,92, p < .001; Kerner et al., 2023). At each of the study sites, we additionally recorded temperature in 2-h intervals from June to October 2020 with three covered temperature loggers (iButtons, Maxim Integrated) installed 2 cm above soil level in the vegetation to account for average near-ground temperature deviations from the macroclimate resulting from vegetation structure, aspect/orientation, inclination/slope, exposition, topography, wind speed, solar radiation, atmospheric moisture and cloud cover (Hodkinson, 2005; Hoiss et al., 2013; König et al., 2022).
From the logged temperatures, we calculated the mean of daytime and nighttime temperatures during the sampling period from June to October. The local microclimate was defined as the residuals of a regression of the in-field measured average temperatures on the modelled macroclimatic temperatures at the site, and therefore represents local near-ground temperature deviations. Positive values of microclimate indicate, on average, warmer microclimatic conditions at the study site than expected based on its elevation (Figure S1).
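A minimal Python sketch of this residual-based microclimate index, assuming site elevations and measured mean temperatures are available as arrays (variable names are illustrative, not the authors' code):

```python
import numpy as np

def microclimate_residuals(elevation_m, mean_site_temp_c):
    """Positive residual = site warmer near the ground than expected from its elevation."""
    e = np.asarray(elevation_m, dtype=float)
    t = np.asarray(mean_site_temp_c, dtype=float)
    X = np.column_stack([np.ones_like(e), e])     # intercept + elevation
    beta, *_ = np.linalg.lstsq(X, t, rcond=None)  # OLS fit of temperature ~ elevation
    return t - X @ beta                           # microclimate index per site
```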
Orthoptera field surveys
To record orthopteran communities, we conducted two surveys at all study sites. To ensure comprehensive sampling of species across seasons, we pooled the results of the two surveys, one early (beginning of July 2020 to end of August 2020) and one later in the season (end of August 2020 to end of October 2020). The order of sampling at the sites followed the phenology from the valleys to the summits. Each of the two variable transect counts per site consisted of five subtransects of 10 min within the plot margins to cover most of the microhabitat variation (König et al., 2022). We carried out the surveys on warm, sunny days, representing maximum activity conditions for most Orthoptera in the region (König & Krauss, 2019).
Orthoptera species were identified by their stridulation and visually in the field. Additional late-afternoon surveys with bat detectors and vegetation beating were performed to also record species with late and high-pitched song activity as well as tree-dwelling species.
Grylloids and Tetrigids were additionally recorded in May and June by listening to their songs (evening/night) or by specific searches in suitable microhabitats, respectively. Due to low detectability, we excluded the soil-dwelling species Myrmecophilus acervorum from the sampling results, leading to near-complete assemblage assessments.
Recorded abundances of the species from the two surveys and the additional assessments were aggregated at site level to focus on community patterns along the climatic gradients. We restricted our analyses to data gathered on adult specimens, as identification of nymphs in the field is difficult for several species.
Species-level morphological traits included mean female body size, relative wing length of females and the predominant body coloration of the species (brown/green). We decided to include morphometric measurements (Detzel, 1998; Harz, 1969, 1975) of females rather than males due to the sexual dimorphism of many Orthoptera species (Laiolo et al., 2013, 2023). Larger species are often capable of producing more offspring than smaller species (Ingrisch & Köhler, 1998). Relative wing length is considered a measure of resource allocation, where short-winged species may be worse dispersers due to reduced flight ability but invest more into reproduction (Laiolo et al., 2023; Tiede et al., 2018).
Intraspecifically, we measured pronotum lengths as a proxy for body size, and wing lengths, of two grasshopper species that occur along a broad elevational range, using digital callipers to the closest 0.1 mm in the field. We selected the Common Green Grasshopper Omocestus viridulus (L., 1758), a graminivorous, long-winged species with a broad elevational distribution and no clear preference for warmer sites, and the graminivorous, long-winged Rufous Grasshopper Gomphocerippus rufus (L., 1758), which prefers warm microclimates all along the elevational gradient. Wherever possible, we caught 10 individuals (males and females) of each of the two species at every site where it was present, measuring each parameter twice to reduce measurement error. Additionally, we scored the body coloration and colour morphs of O. viridulus (green, dorsal green-lateral brown, brown) in the field to calculate colour morph frequencies. This green/non-green polymorphism is common in Orthoptera, and similar ratios between the sexes in local populations suggest a shared genetic or environmental control (Dieker et al., 2018).
We started our analyses by assessing the impact of climatic variation on Orthoptera assemblages at the community level. First, we employed permutational multivariate analysis of variance (PERMANOVA, adonis2 function in the 'vegan' package) based on Bray-Curtis distances between Orthoptera communities, including elevation, microclimate and their interaction term as fixed effects, to study the community compositional dissimilarity (β-diversity). To plot the ordination based on non-metric multidimensional scaling (NMDS) of the Bray-Curtis dissimilarity matrix, we used the metaMDS function in the 'vegan' package. We then computed β-diversity rates as the abundance-based Bray-Curtis dissimilarities between all pairs of communities within a moving elevational-distance window of 200 m to examine at which part of the elevational gradient community composition differences peaked (Descombes, Vittoz, et al., 2017; König et al., 2022). For this, we partitioned the total differences into balanced variation in abundances (the turnover equivalent of incidence-based β-diversity) and abundance gradients, in which one community is a subset of another (the nestedness-resultant equivalent of incidence-based β-diversity), with the package 'betapart' (Baselga, 2017; König et al., 2022). Low values of balanced variation indicate a greater proportion of shared species abundances between site pairs, while high values of abundance gradients indicate that communities with low abundances are subsets of communities with high abundances and similar composition (Baselga, 2017; König et al., 2022). We related all β-diversity indices to the mean elevation of each pair of sites as well as to the corresponding microclimatic differences and their interaction with beta-regressions (logit link) using generalized additive models (Wood, 2023), constraining the number of basis functions to three, as we expected a priori low complexity of the functions underlying the β-diversity patterns (Pedersen et al., 2019).
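For a single site pair, this abundance-based decomposition can be restated directly; the following Python sketch mirrors the balanced-variation and abundance-gradient components computed by the 'betapart' R package in the study (variable names are illustrative):

```python
import numpy as np

def bray_curtis_partition(u, v):
    """Decompose abundance-based Bray-Curtis dissimilarity for two sites."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    shared = np.minimum(u, v)
    A = shared.sum()              # abundance shared by both sites
    B = (u - shared).sum()        # abundance unique to site 1
    C = (v - shared).sum()        # abundance unique to site 2
    d_bc = (B + C) / (2 * A + B + C)         # total dissimilarity
    d_bal = min(B, C) / (A + min(B, C))      # balanced variation in abundances
    d_gra = d_bc - d_bal                     # abundance-gradient component
    return d_bc, d_bal, d_gra
```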
Deviations from an intercept-only model indicate non-constant Bray-Curtis dissimilarity, balanced-variation or abundance-gradient rates with temperature and microclimate (Descombes, Marchon, et al., 2017; König et al., 2022). Additionally, we tested the effect of elevational differences and microclimatic differences on compositional dissimilarities with permutational Mantel tests based on the Pearson product-moment correlation.
Second, we used a multivariate hierarchical generalized linear mixed modelling approach (latent variable model) fitted with Bayesian inference to jointly model species elevational and microclimatic niches and to assess the impact of climatic variation on species richness, abundance and species-specific responses (Hmsc; Drag et al., 2023; Ovaskainen et al., 2017; Tikhonov et al., 2020). When assessing the impact of the environment on traits, it is necessary to control for the tendency of related species to resemble each other more than species drawn at random from the same tree (phylogenetic independence; Abrego et al., 2017; Münkemüller et al., 2012; Ovaskainen et al., 2017). Therefore, we reconstructed a phylogeny of the occurring Orthoptera species (Appendix S1).
We excluded eight species with low prevalence (occurrence ≤ 10 sites) from the recorded communities, as statistical inference may not be trustworthy, resulting in a data set of occurrences and abundances of 32 Orthoptera species at 93 study sites.
As sampling units, we aggregated the abundances observed at the individual visits to the study sites to yield one abundance estimation per species and study site.Due to zero inflation of our count data, we applied a hurdle approach, that is, one model for presence-absence (probit regression) and another one for abundance conditional on presence (abundance COP model, linear regression of abundances with log-normal error distribution, declaring zeros as missing data, Whalen et al., 2023).
We included the mean elevation of the study sites (linear and quadratic effect) and the sites' microclimatic temperature deviations (microclimate) as focal fixed effects. We allowed microclimate to interact with elevation to capture elevation-dependent differences in microclimatic niches. The site-level random effect controls for additional unexplained variation at the site level on top of the explicitly modelled, uncorrelated climatic covariates (Figure S2A).
Hierarchical modelling of species communities includes a hierarchical structure assessing how species' responses to environmental covariates depend on species traits and phylogenetic relationships (Abrego et al., 2017). Thus, we examined if species with a similar set of traits had more similar climatic niches than species with diverging trait expressions. As uncorrelated traits, we included body size, relative wing length, coloration, moisture preference, dietary specialization and hatching phenology (Figure S2B). After determining the phylogenetic signal in the traits (Figure S3), we examined if the variation in species niches after accounting for the species' traits was phylogenetically structured, that is, if closely related species had more similar climatic niches than distantly related species.
We fitted the HMSC hurdle model with the R package 'Hmsc' (Tikhonov et al., 2020). To address our main study question of how and which traits modulate species responses to elevation and microclimate, we first examined (1) the peak elevations of all species' abundances (elevational/macroclimatic optima). As we included the first- and second-order polynomial terms of elevation, we did not directly infer elevational patterns from the β- and γ-parameters of the single models but derived the predicted elevational peak within the range of sampled elevations for each species from the combined models' full posterior predictive distribution (total effect). (2) We further assessed which species showed a positive or negative response to microclimate (microclimate slope) with at least 0.95 posterior probability (linear effect across the range of elevations, weighted by the sample frequency within the five elevation bins, resulting in 1000 slopes summarized as medians and 0.95 credible intervals (CIs)). (3) We calculated the elevational abundance peak shift due to microclimate, addressing the interaction between macro- and microclimate, as a third climatic niche parameter from the models' posterior distribution. For this, we derived the differences between predicted peak elevations under a warm microclimate (+1 SD) and under a cold microclimate (−1 SD).
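A sketch of how such posterior summaries can be derived, assuming draws of the linear and quadratic elevation coefficients are available as arrays; this is a simplified grid-search stand-in, not the Hmsc workflow itself, and all names are illustrative:

```python
import numpy as np

def peak_elevation(b1_draws, b2_draws, e_min=600.0, e_max=2150.0, n_grid=200):
    """Posterior median and 0.95 interval of the elevation where the quadratic
    response b1*e + b2*e^2 peaks, restricted to the sampled elevational range."""
    b1 = np.asarray(b1_draws, dtype=float)
    b2 = np.asarray(b2_draws, dtype=float)
    grid = np.linspace(e_min, e_max, n_grid)
    resp = b1[:, None] * grid + b2[:, None] * grid ** 2   # draws x grid points
    peak = grid[resp.argmax(axis=1)]                      # per-draw peak elevation
    return np.median(peak), np.quantile(peak, [0.025, 0.975])
```

The peak shift due to microclimate then follows by evaluating the same response with the microclimate covariate fixed at +1 SD and at −1 SD and differencing the resulting per-draw peaks.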
Then, we asked if the elevational distribution of median values of species' abundance peaks, their peak shifts and their microclimate slopes (posterior median) could be explained by their traits using phylogenetic generalized least squares regression with maximum likelihood estimation of the phylogenetic signal λ (Orme et al., 2018), since the residual errors are not independent.
To examine the effect of elevation and microclimate on intraspecific trait distributions, we used Bayesian generalized mixed-effects models fitted with 'brms' (Bürkner, 2018). We used the empirically measured body sizes, relative wing lengths (tegmen length divided by pronotum length; Gaussian regression with identity link) and body coloration score frequencies (logistic regression with logit link) as responses, and the mean elevation, microclimatic temperatures and the corresponding three-way interaction with sex as explanatory variables. Furthermore, we included the sampling site as a random factor, as well as the species identity as a random effect in the models where necessary.
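As a rough, simplified analogue of the colour-morph model (fixed effects only, standing in for the Bayesian mixed model fitted with 'brms'; the data frame and column names are assumptions for illustration):

```python
import pandas as pd
import statsmodels.formula.api as smf

def fit_colour_model(df: pd.DataFrame):
    """Logistic regression of brown (1/0) colour morph on the three-way
    interaction of elevation, microclimate and sex."""
    return smf.logit("brown ~ elevation * microclimate * sex", data=df).fit()
```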
Orthopteran species richness decreased with decreasing macroclimatic temperature along the elevational gradient, from 16 to three species (Figure 3a), and abundances decreased from more than 500 to five individuals (Figure 3b). At similar elevations, species richness and abundance were higher at sites with warmer microclimates than at sites with colder microclimates, implying additive effects of micro- and macroclimate on α-diversity (Figure 3). Thus, species richness and abundances approached zero faster at cold sites at high elevations than at sun/south-exposed sites.
Community composition changed along the elevational gradient, with sites characterized by a warm microclimate harbouring different Orthoptera communities compared to those from cold sites, especially in the submontane and montane zone (Figure 3c).
Therefore, Orthoptera β-diversity rates were pronounced at mid-elevations (where balanced variation of abundances between sites peaked) and decreased towards high elevations (with increasing abundance gradients; Figure 3d, Figure S4, Table S3). The balanced-variation component of β-diversity increased with elevational distance and with microclimatic differences between the study sites (Figure S5).
Species climatic niches
The diversity patterns emerged from underlying species-level climatic niches. Accounting only for positive and negative effects with 0.95 posterior probability, 94% of the species were more abundant at the low than at the high range limit of the elevational gradient. However, 12 of the 32 species analysed had broad elevational ranges and occurred in all elevational zones along the entire 1.5 km gradient. On average, species abundances showed a hump-shaped pattern and peaked between the submontane and montane elevational zones at around 824 m a.s.l. (range: <600-1410 m a.s.l.; Figure 4, Figures S6 and S7, Table S4).
Warmer microclimates were favourable for a high proportion (41%) of the species across the entire elevational gradient (e.g. Chorthippus eisentrauti, Decticus verrucivorus, Gryllus campestris, Psophus stridulus, Stenobothrus lineatus, Tetrix tenuicornis), while the remaining 59% did not differ in abundance between warm and cold microclimates with high statistical support, which could result either from a preference for intermediate microclimates, from indifferent behaviour or from a change in preference with elevation (Figures S6, S8 and S9, Table S5).
Despite relatively broad elevational ranges, our assessment revealed narrow temperature niches for some species, which they find either at sites with cooler conditions at low elevations or at higher, more sun-exposed sites (e.g. Miramella alpina, Omocestus viridulus, Pholidoptera aptera, Pseudochorthippus montanus, Tettigonia cantans). Hence, microclimate effects depended on elevation for those species. Elevational distributions along gradients with warm microclimatic conditions reached higher up than those along gradients with cold conditions for most species (Figures S6 and S10). No species had a higher occurrence probability with 0.95 posterior probability, nor a higher abundance, under cold microclimatic conditions at high elevations in the subalpine and alpine zones (Figures S6 and S10, Table S6).
Trait-environment interactions
Species' ecological traits influenced their climatic niche parameters. In particular, moisture preferences and hatching phenology were important predictors of species responses, since brown-coloured, xerophilic and late-hatching species were likely to increase in occurrence probability and abundance at sites with warm microclimates (Table 1). Less xerophilic and mesophilic species peaked in abundance further up the elevational gradient at sites with warm microclimates than at sites with cold microclimates (Table 1).
Akin to the effect of microclimate, the warm macroclimate at low elevations supported late-hatching species. Likewise, long-winged species mainly occurred in the valleys (Table 1). Although average body sizes at the community level decreased with increasing elevation, the morphological trait body size did not systematically affect the responses to any of the environmental covariates with high statistical support at the species level (Table 1).
Furthermore, the predicted community-weighted mean traits changed along the elevational gradient, revealing a consistently higher share of xerophilic, large and late-hatching individuals within the communities at warm-microclimate sites (Figure 5). Above the tree-line belt around 1500 m a.s.l., predicted communities mainly consisted of the alpine specialist Miramella alpina and two species with broad environmental niches, Gomphocerippus rufus and, prominently, Omocestus viridulus, which shaped the communities' traits (Figure 5).
Empirical morphometric measurements of pronotum lengths and relative wing lengths of two Orthoptera species revealed changes for females, but not males, along the elevational and microclimatic gradients with high statistical support (Figure 6, Table S7). Females were generally larger than males and tended to be smaller and to have shorter wings (only Gomphocerippus rufus) at high elevations than at low elevations, particularly when microclimatic conditions were cold, whereas male size varied systematically along neither the macro- nor the microclimatic gradient. The proportion of brown colour morphs in Omocestus viridulus populations did not vary systematically with elevation but was higher when the microclimate was warmer (Figure S11, Table S8).
Concerning trait-environment interactions, we detected a moderate phylogenetic signal only for the effect of species moisture preferences on the microclimate slopes and for the effect of diet breadth on the elevational distribution (Table 1).
FIGURE 3: Effect of microclimatic variation on community-level patterns of Orthoptera communities along an elevational macroclimatic gradient. Predicted species richness (a) and abundance (b) decreased with elevation and were both consistently higher under warm (red lines) than under intermediate (purple lines) and cold microclimatic conditions (blue lines) at similar elevations (numbers indicate the posterior probability of a positive impact of microclimate within each elevational zone). Both elevation (F = 22.97, p < .001, R² = .19) and microclimate (F = 6.42, p < .001, R² = .06), as well as their interaction (F = 2.86, p = .010, R² = .02), influenced the composition of Orthoptera assemblages (c). Compositional dissimilarity between Orthoptera communities was high at mid-elevations, reflected in the abundance-based β-diversity rate (d), which peaked at intermediate elevations and was higher when microclimatic conditions between sites differed, except at high elevations (Table S3). Point colours represent microclimatic conditions at the sites (red = warm, purple = intermediate, blue = cold). Vertical dashed lines separate the submontane, montane, high-montane, subalpine and alpine elevational zones from left to right in (a, b and d).
There is growing evidence that microclimatic conditions modulate the response of species to macroclimatic variation and, therefore, to climate change (Bennie et al., 2013; Mammola et al., 2019, 2021; Montejo-Kovacevich et al., 2020; Pincebourde & Woods, 2020; Suggitt et al., 2018). Our study revealed strong patterns of elevational structure in the richness, abundance and β-diversity of mountain Orthoptera communities. Although richness and abundance of Orthoptera peaked in the valleys, community dissimilarity was highest in the montane and high-montane zones. By extending our macroclimatic analysis with microclimatic contrasts between sites, we were able to empirically disentangle the effects of local microclimatic and macroclimatic variation not only on the distribution but also on the abundance of a functionally important insect group. Especially sites with a warm microclimate supported almost the full spectrum of species, while some species were regularly absent from sites with a colder microclimate. Thus, we found additive effects of macro- and microclimate on diversity, but many species experienced interactive effects, highlighting an elevation-dependent effect of microclimate, which suggests narrower temperature niches than the elevational distributions indicate. In particular, moisture preferences and hatching phenology were linked to the differentiation of climatic niches. While both traits explained the response to microclimate, phenology and wing length also determined the position of the species' macroclimatic niches.
Orthoptera diversity thrives under warm climatic conditions: Additive and interactive effects of the local microclimate and macroclimate
Mountains are ecological theatres where the interplay of orientation and slope affects the local temperature and water balance, leading to heterogeneous microclimates at small spatial scales (Scherrer et al., 2011). Such topography-based combinations of micro- and macroclimates in mountain areas enable species to track thermally optimal habitats within short distances (Rebaudo et al., 2016). Temperature had the expected strong impact on Orthoptera communities in our study system. As in many other taxa (Kerner et al., 2023; Maihoff et al., 2023), Orthoptera richness and abundance exhibited an almost monotonic decline with elevation (Descombes, Marchon, et al., 2017; Geppert et al., 2021; Pitteloud et al., 2020), decelerating in the valleys. As mostly thermophilic insects, they are favoured by the rising mean annual temperature towards the valleys, since low ambient temperatures limit available biomass and physiological processes, such as metabolism or enzyme activity, leading to reduced performance and fitness (Berner et al., 2004; Ingrisch & Köhler, 1998; Willott, 1997; Willott & Hassall, 1998). Meeting our expectations, our results imply that higher temperatures due to climate warming result in a diversification of temperature-limited mountain communities. Since most species peaked in abundance between the submontane and montane zones, richness and abundance did not continue to increase in the valleys, which could hint at a lack of more thermophilic (stenothermal) species in the regional species pool or a lack of suitable microhabitats at the lowest elevations. Although most species were more abundant at lower elevations, many had broad elevational distribution ranges (thermal generalists) spanning the entire 1.5 km gradient, which highlights their ability to survive in colder macroclimates by utilizing sun-exposed sites with a warm microclimate.
Local microclimatic conditions close to the ground varied considerably (up to 5°C) at similar elevations across the entire elevational gradient, equalling several hundred metres of elevational difference in atmospheric temperature. Such variation can buffer against the effects of regional warming, as species that evade unsuitably warm macroclimatic conditions can survive at colder sites within short distances (e.g. north-facing slopes in the northern hemisphere), making these sites potential stepping stones or recolonization nuclei (Albrich et al., 2020; Bennie et al., 2013; Körner & Hiltbrunner, 2021; Scherrer et al., 2011; Senf & Seidl, 2018; Stark & Fridley, 2022; Suggitt et al., 2018). As for macroclimate, we expected consistent effects of microclimate on the Orthoptera species. As predicted, we found more individuals and species where microclimatic conditions were warmer, throughout the entire gradient (Weiss et al., 2013).
However, not all taxa reacted equally to the microclimate along the gradient. Especially species that are vertically oriented and usually dwell in longer swards were more abundant at sites with a colder microclimate or did not profit from a warm microclimate, particularly at low elevations. If the climatic niches of species are narrow and stable, this would imply that their microclimate preference changes with elevation. While for some species microclimate had no impact, or colder sites were favoured under warm macroclimatic conditions, this effect vanished at the high-elevation tail of the species' distributions or even changed into a positive impact of a warm microclimate, a phenomenon referred to as elevation-dependent microclimate preference (Dobrowski, 2011; Feldmeier et al., 2020).

TABLE 1: Effects of traits on median values of climatic niche parameters derived from joint species distribution modelling. Note: Peak elevation describes the median elevation where the predicted abundances of a species peaked, microclimate slope represents the median estimate of a species' response to a warming microclimate, and peak shift is the median of the predicted difference between abundance peak elevations if the microclimate is either warm or cold. We highlight effects of the phylogenetic generalized least squares regression that are significantly positive in red or negative in blue, and marginally significant slopes (p < 0.1) in pale (light red/blue). Significance levels: *p < 0.1, **p < 0.01, ***p < 0.001.
High community dissimilarity in the montane and high-montane elevational zones
The peak of β-diversity of Orthoptera communities at intermediate elevations probably reflects the fading dominance of typical low-elevation species, such as most grasshoppers and crickets, and simultaneously a highly diverse mountain community at mid-elevations, which could result from the interplay of macro- and microclimate in this transition zone. Decreasing richness and abundance with elevation suggest that harsh abiotic conditions close to the summits formed specific communities out of a small species pool, leading to more similar communities at high elevations (Fontana et al., 2020; Laiolo et al., 2023; Tello et al., 2015).
However, we also found an impact of the microclimate on the dissimilarity of site pairs in the low- and mid-elevation zones, probably reflecting reciprocal abundance patterns of thermophilic and thermophobic species in a zone where ambient temperatures are suitable to facilitate the development of many different species. As thermophilic species require a sufficient amount of external heat, they are more strongly restricted to warm microclimate sites (Geppert et al., 2021). Warm microclimatic conditions facilitate their presence at higher elevations, such as the montane and high-montane zone, where cold sites harbour only a subset of the species in the pool, which increases the dissimilarity between sites. Interestingly, the tree line did not constitute a discrete transition to a new equilibrium of species composition, but rather the beginning of an accelerating decline in abundance, as found for bumblebees (Sponsler et al., 2022). This implies that communities decrease in the total number of individuals but are of similar composition, probably because the elevational distributions of many species were rather broad in contrast to other taxonomic groups (Fontana et al., 2020).
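A hedged sketch of the kind of pairwise dissimilarity comparison discussed in this section is given below. Site abundances and zone assignments are simulated stand-ins, and the use of the Bray-Curtis index is an assumption for illustration, since this excerpt does not name the dissimilarity measure used:

```python
# Pairwise Bray-Curtis dissimilarity among site-level abundance vectors,
# averaged within elevational zones (all data simulated for illustration).
import numpy as np

rng = np.random.default_rng(0)
n_sites, n_species = 12, 20
abundance = rng.poisson(lam=rng.uniform(0.2, 3.0, n_species), size=(n_sites, n_species))
zone = np.repeat(["submontane", "montane", "high-montane"], 4)

def bray_curtis(x, y):
    """Bray-Curtis dissimilarity between two abundance vectors."""
    denom = x.sum() + y.sum()
    return np.abs(x - y).sum() / denom if denom > 0 else 0.0

# Mean dissimilarity among all site pairs within each zone.
for z in np.unique(zone):
    idx = np.flatnonzero(zone == z)
    pairs = [(i, j) for k, i in enumerate(idx) for j in idx[k + 1:]]
    d = np.mean([bray_curtis(abundance[i], abundance[j]) for i, j in pairs])
    print(f"{z:13s} mean pairwise dissimilarity: {d:.2f}")
```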
| Eco-morphological trait interactions with climatic niches
As season length declines along elevational or latitudinal gradients, univoltine insects with a long generation time are expected to follow the converse of Bergmann's rule (Classen et al., 2017). Thus, the predominantly positive effects of larger body sizes on fecundity, thermoregulatory ability and desiccation resistance prevail under warm conditions (Schellenberger Costa et al., 2018; Tiede et al., 2018).
While there was no clear evidence that species inhabiting high elevations or cold microclimates have smaller body sizes than their low-elevation relatives, we found a higher share of large individuals within communities at warm, low-elevation sites, and similar intraspecific body size clines for female grasshoppers. This matches the findings of Levy and Nufio (2015) that larger females react more strongly than males to climatic variation, because their fitness may be more sensitive to changes in season length and climatic conditions (Laiolo et al., 2013), or to decreases in the nutritional quality of food plants resulting from elevational turnover. The mechanism behind the body size reduction at high elevations was shown to be a local adaptation in the form of lower size thresholds for adult moulting (Berner et al., 2004).
Since high temperatures facilitate insect flight (Prinster et al., 2020), we expected wing lengths to decrease with elevation and with colder microclimate. Indeed, long-winged species occurred more often at low elevations, corroborating findings of Tiede et al. (2018) and Laiolo et al. (2023), but not at sites with warmer microclimate. As reported in other studies, however, relative wing lengths of long-winged species did not vary with elevation intraspecifically. There it was argued that only species with low dispersal ability are locally adapted and show reductions in wing lengths with increasing elevation, highlighting the impact of dispersal potential on size clines (Levy & Nufio, 2015). While there may be frequent genetic exchange among long-winged, dispersing species along elevational gradients (Levy & Nufio, 2015), it is especially species of low mobility that contribute to β-diversity patterns (Marini et al., 2012).
We expected a higher share of darker animals in cold environments based on thermoregulatory benefits (Köhler & Schielzeth, 2020). However, no consistent effect of macro- and microclimate on body coloration and colour morph frequencies was detected, neither for species distributions nor for community traits.
Intra- and interspecifically, warm microclimates were associated with a higher proportion of brown individuals, contrary to our expectation. This does not necessarily exclude the proposed impact of local microclimate and macroclimate on coloration found in other studies (Köhler et al., 2017), but suggests that other effects, such as the advantage of matching background/vegetation colour features to avoid predators, UV protection or precipitation differences, interfere with temperature effects (Dieker et al., 2018). This phenomenon is referred to as the crypsis-thermoregulation trade-off (camouflage, background-matching, predator avoidance) (Dieker et al., 2018; Köhler & Schielzeth, 2020), underlining the multifaceted nature of colour patterns and morph frequencies.
We found a strong impact of hatching phenology on the climatic niche parameters and community patterns. Later hatching may be risky for univoltine species, as the summer season may be too short to complete their development and reproduce successfully in cold, high-elevation habitats. Much of the difference in hatching phenology could be explained by differences in post-diapause development, development speed and oviposition sites (Ingrisch & Köhler, 1998; Kankaanpää et al., 2021).
Since the study area is humid, with high levels of summer and winter precipitation, xerophilic species had higher abundances at sites with a warm microclimate. This is in line with our prediction and could be caused by drier conditions at microclimatically warm sites due to run-off dynamics, increased evaporation or lower vegetation cover (e.g. at sun-exposed, steep sites; Häring et al., 2013), or because xerophilic species are often also thermophilic (Ingrisch & Köhler, 1998). Following the same line of argumentation, hygrophilic species were more restricted to north-exposed, cool sites at low elevations, but the effect of microclimate changed with increasing elevation, leading to upslope shifts in the distributions from cold to warm microclimate sites. This elevation-dependent preference for microclimate or aspect was especially evident for less xerophilic species.
The elevational niche-breadth hypothesis suggests broader dietary spectra for species that occur further up the mountain (Rasmann et al., 2014). However, a recent empirical study on dietary specialization, which includes a broader climatic range and the phylogenetic relationships of food plants, suggests a unimodal relationship with the most pronounced dietary specialization at intermediate elevations (König et al., 2022), offering an explanation for the lack of such a pattern. For several traits, species niches and community mean patterns differed.
Such differences may result from intraspecific trait variation (Classen et al., 2017) or from variation in the elevational niche breadths of different species: some species, such as Omocestus viridulus, displayed broader climatic niches than others, such as the specialist Miramella alpina, yet dominate the communities in terms of numbers of individuals, potentially blurring the detection of environmental filters.
Trait expressions are often correlated with the evolutionary relationships between species, as also demonstrated in our study, since closely related species often share similar characteristics. However, we also found evidence for phylogenetic signals in trait-environment interactions, highlighting that traits beyond those we focused on (e.g. thermal tolerances, thermoregulatory capacities) also contribute to species' climatic niches.
| Caveats: The scale of microclimate and associated covariates
We found the highest number of species at sun-exposed extensive pastures, in line with other studies (Chisté et al., 2016; Gardiner & Dover, 2008; Klein et al., 2020; Marini et al., 2009; Weiss et al., 2013); such conditions offer a mosaic of warm microclimate but also support structurally rich vegetation that can be used as shelter. Within-site microclimatic variation at even smaller scales than measured in our study (0.01-1 m) could also be crucial for the persistence of certain species, as shown for plants in alpine habitats (Ohler et al., 2020; Scherrer et al., 2011), possibly dampening the microclimate response we measured in this study. Likewise, the species responses derived here may be confounded by factors interacting with climate, such as moisture (Dvořák et al., 2022; Powell et al., 2007), management (Humbert et al., 2021; Marini et al., 2009), vegetation structure (Gardiner, 2022; Löffler & Fartmann, 2017; Schirmel et al., 2019), composition (Tobisch et al., 2023) and diversity (Fournier et al., 2017; Ramos et al., 2021).
| CONCLUSIONS
The limited potential of montane assemblages to respond to climate change is of major concern to conservationists. Our nuanced findings imply that macroclimatic as well as microclimatic changes in temperature have the potential to restructure, reassemble, and replace Orthoptera communities in temperate mountain grasslands.
Here, we demonstrate additive effects on diversity, but community composition and functional traits are also affected, as the interaction of elevation and microclimate shapes species niches. Since species can shift their elevational distribution not only upward but also to north-exposed sites with cooler microclimates, climate change impacts might be mitigated by the complex topography in mountain areas (Feldmeier et al., 2020; Suggitt et al., 2018). Our results suggest that this turnover is the result of differences in abiotic conditions at similar elevations, highlighting the importance of mountains as climatic refugia, which support species with diverging preferences or requirements in close proximity. Under future climate warming, we expect a less pronounced dissimilarity pattern at low elevations, as thermophobic species retreat and thermophilic species spread evenly; this is referred to as biotic homogenization (Thorn et al., 2022). At higher elevations, the arrival of thermophilic species at warm-microclimate sites and the retreat of thermophobic species to cold-microclimate sites is expected, increasing dissimilarity in the high-montane zone.
Furthermore, our results underline the extraordinary value of traditional extensive pastoral systems, which span different slope exposures and therefore contrasting microclimatic conditions, for conserving biodiversity in mountains.
Our results suggest that the microclimate preferences of a species in its core distribution are not always reflected at the range edges, where the species may be more specialized. For example, species that are less demanding concerning temperature conditions in their core distribution may be more restricted at the edges. Possible shifts in microclimate preference should therefore be acknowledged so as not to overestimate range reductions or expansions. Since microclimate data and small-scale modelling approaches are becoming available (Maclean et al., 2019; Senior et al., 2019; Zellweger et al., 2019), and local deviations from downscaled macroclimate are often high (±2°C), there is an urgent need to incorporate high-resolution microclimate data into species distribution models for an accurate estimation of the availability of suitable conditions for future species distributions (Stark & Fridley, 2022).
Combinations of traits help explain species' complex ecological niches and thus should prove useful in predicting their responses to future climatic changes in their habitats. Increasing temperatures in combination with drought events will likely increase the diversity and the fraction of xerophilic Orthoptera species, but may force moisture-dependent and heat-sensitive species to retreat to higher elevations and/or north-facing slopes. As macroclimatic average temperatures increase with climate change, so does the frequency of extreme weather events, which can differentially affect the future distributions of species (Feldmeier et al., 2018). In the course of climate change, upslope shifts and population growth of thermophilic species at higher elevations are likely, but can also be hampered or reversed by late snowfall or other unsuitable extreme events, which regularly occur in mountain systems.
mountain region in southern Germany to test the following expectations: (1) The diversity and abundance of Orthoptera increase with micro- and macroclimatic temperature, since species' climatic niches are constrained by harsh temperature conditions at cold sites or high elevations. Due to the overall cold and humid macroclimate and the complex topography in the northern Alps, we expect microclimatic effects to be particularly evident. (2) Differences in community composition between sites peak at mid-elevations, where lowland species are still fostered by warm microclimate and overlap with mountain species. (3) Climatic niche parameters are related to species traits: cold micro- and macroclimatic conditions filter the species pool towards smaller (body size), short-winged (wing length), less specialized (dietary breadth), early-hatching (phenology), darker (coloration) and less xerophilic (moisture preference) species (predictions Table
Within a region of heterogeneous landscapes in the Northern Limestone Alps (Berchtesgaden Alps), characterized by tessellated mountain pastures in a matrix of (mainly) coniferous forest and bare rock, we selected 93 study sites along the slopes of several mountains, covering a gradient of 7-0°C mean annual temperature and 1500-2600 mm annual precipitation, and ranging from 600 to 2150 m a.s.l.
FIGURE 1 Location of the 93 study sites along elevational gradients ranging from 600 to 2150 m a.s.l. (greyscale) in the Northern Limestone Alps (Berchtesgaden, Bavaria, Germany). All study sites were either extensively managed (grazed/mown) or unmanaged open grassland sites. Point colour scale corresponds to measured local microclimatic conditions (red = warm, purple = intermediate, blue = cold). Example images of study sites from the five sampled elevational zones were added from low to high elevation (left to right: submontane, montane, high-montane, subalpine and alpine zone).
assuming default prior distributions and generating a total of 1000 posterior samples after thinning (model fitting and validation details in the Appendix S1; Ovaskainen & Abrego, 2020). Combining both statistically independent parts of the hurdle model, we predicted Orthoptera species abundances from the models' β-parameters, as well as species richness, cumulative abundance and community-weighted mean trait patterns along the elevational gradient for cold (−1 standard deviation (SD) of microclimate), intermediate and warm (+1 SD) microclimate, by multiplying the predictions for occurrence probabilities of each species from the presence-absence model with the conditional abundance predictions from the abundance COP model, using the full 1000 posterior samples.
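The combination step described above can be written compactly in array form. The sketch below is not the authors' model code; it simulates posterior arrays with the stated dimensions (1000 samples, 93 sites, 32 species) and shows how occurrence probabilities and conditional abundance predictions multiply into unconditional abundance predictions and community summaries:

```python
# Hurdle-model combination: P(presence) * E[abundance | presence],
# carried across posterior samples (all arrays simulated stand-ins).
import numpy as np

rng = np.random.default_rng(1)
n_post, n_sites, n_species = 1000, 93, 32   # posterior samples, sites, species

occ_prob = rng.uniform(0.0, 1.0, size=(n_post, n_sites, n_species))   # presence-absence part
cond_abund = rng.gamma(2.0, 2.0, size=(n_post, n_sites, n_species))   # abundance given presence

# Unconditional abundance per posterior sample.
expected_abund = occ_prob * cond_abund

# Community summaries per sample, then posterior medians per site.
expected_richness = occ_prob.sum(axis=2)        # expected species richness
cumulative_abund = expected_abund.sum(axis=2)   # cumulative abundance

print("posterior median richness at site 0:",
      round(float(np.median(expected_richness[:, 0])), 1))
print("posterior median cumulative abundance at site 0:",
      round(float(np.median(cumulative_abund[:, 0])), 1))
```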
Average community hatching phenology and diet breadth decreased along the elevational gradient, while moisture preference and wing length increased (Figure 5). Body sizes and wing lengths of individuals within communities were larger only at warm sites at low elevations, but not at high elevations. A distinct change in community-level trait patterns became evident in the subalpine zone.
FIGURE 2 Orthoptera communities recorded at the study sites. Circle size is proportional to the abundance of the species recorded during surveys. Sites are ordered vertically according to their mean elevation from valleys (bottom) to summits (top), where dashed horizontal lines delimit the submontane, montane, high-montane, subalpine and alpine elevational zones. Species richness is shown as bars on the left and summed abundances on the right. Orthoptera species are ordered and coloured according to their phylogeny, with representative species images (from left to right: Gomphocerinae, Oedipodinae, Melanoplinae, Tetriginae, Gryllotalpinae, Gryllinae, Phaneropterinae, Tettigoniinae, Conocephalinae and Meconematinae).
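For readers unfamiliar with community-weighted means, here is a minimal sketch of the abundance-weighted trait averaging behind the community-level patterns reported above, using fabricated abundances and a single illustrative trait (body size):

```python
# Community-weighted mean (CWM) trait: sum of relative abundances times
# trait values at each site (all numbers fabricated for illustration).
import numpy as np

rng = np.random.default_rng(2)
n_sites, n_species = 5, 10
abundance = rng.poisson(2.0, size=(n_sites, n_species)).astype(float)
body_size = rng.uniform(8.0, 30.0, n_species)     # e.g. body length in mm

totals = abundance.sum(axis=1, keepdims=True)
rel_abund = np.divide(abundance, totals, out=np.zeros_like(abundance), where=totals > 0)
cwm_body_size = rel_abund @ body_size             # CWM per site

for s, v in enumerate(cwm_body_size):
    print(f"site {s}: CWM body size = {v:.1f} mm")
```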
FIGURE 4 Effect of microclimatic variation on four representative Orthoptera species along an elevational macroclimatic gradient. (a) The Field Grasshopper (Chorthippus brunneus) exhibited a broad elevational range without a specific microclimatic preference, (b) the Woodland Grasshopper (Omocestus rufipes) was found exclusively at warmer low-elevation sites, (c) the Common Green Grasshopper (Omocestus viridulus) avoided warmer sites at low elevations and (d) the Common Mountain Grasshopper (Podisma pedestris) was prevalent at mid-elevation sites with warm microclimate. Shown are model predictions for warm (red line), intermediate (purple line) and cold (blue line) microclimatic conditions along the elevational gradient (numbers indicate the posterior probability of a positive impact of microclimate within each elevational zone). Point colours represent microclimatic conditions at the sites (red = warm, purple = intermediate, blue = cold). Vertical dashed lines separate the submontane, montane, high-montane, subalpine and alpine elevational zones from left to right in each panel. Individual responses of all 32 species are shown in Figure S6.
FIGURE 5 Effect of microclimatic variation on community-level abundance-weighted Orthoptera traits along an elevational macroclimatic gradient. Shown are model predictions for warm (red line), intermediate (purple line) and cold (blue line) microclimatic conditions along the elevational gradient (numbers indicate the posterior probability of a positive impact of microclimate within each elevational zone). Xerophilic, large and late-hatching individuals had a higher share of the communities at sites with warm microclimate. Point colours represent microclimatic conditions at the sites (red = warm, purple = intermediate, blue = cold). Vertical dashed lines separate the submontane, montane, high-montane, subalpine and alpine elevational zones from left to right in each panel.
FIGURE 6 Empirical morphometric measurements of pronotum lengths (a), tegmen lengths (b) and their index, relative wing length (c), of two grasshopper species along an elevational macroclimatic gradient. The left panels show Omocestus viridulus and the right panels show Gomphocerippus rufus. Females were on average smaller at higher elevations than at low elevations (Table S7). In G. rufus females, wing length decreased with elevation, more strongly when the microclimate was cold. Solid lines were used in cases where the 0.95 credible intervals of the elevation slope estimates did not include zero. | 2024-01-12T16:20:25.430Z | 2024-01-09T00:00:00.000 | {
"year": 2024,
"sha1": "cc4c1ce0c67b952eade79fe78ab44a0f5032fe24",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/ddi.13810",
"oa_status": "GOLD",
"pdf_src": "Wiley",
"pdf_hash": "bbc0b13fb7c880605cf6bdbd23622bb291890dd8",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": []
} |
233965141 | pes2o/s2orc | v3-fos-license | ‘Desire to learn, learn to shine’: Idolizing motivation in enhancing speaking skill among L2 learners
This paper aims to analyze the effect of motivation on the development of English speaking skills in second language acquisition. There are many excellent teaching methods that vary in effectiveness. Motivation is the driving force for learning another language and for integrating that language into the person's identity. Motivation is the practical reason for learning a second language. Data were collected from two Indian universities, Patna University and Patliputra University; from each university, 50 students were randomly selected. Speaking English is one of the foremost needs of individuals in both academic and professional fields. Second language students should be encouraged to speak in English not only in the classroom but also outside it. In achieving this goal, motivation can be used as a magical catalyst in learning the L2. With this viewpoint, this research focuses on three main motivational aspects to analyze the role of motivation in developing speaking skills: identifying the function of motivation in promoting speaking skills; reviewing research on motivational factors for English as a second language; and suggesting impacts and strategies for stimulating learners to develop speaking skills. Close-ended questionnaires using relevant types of questions were chosen for data collection. Descriptive statistical analysis was performed using SPSS software, and the mean value was used to represent the analysis results. Within this paper, the researcher identified several factors that affect students' motivation to learn a second language. Motivation is surely of great importance in this phenomenon and is often critical in its development.
Introduction
According to Jorge Cela (2009), "The secret of learning is the desire to learn." English language education has become more and more important as English has emerged as the world's foremost lingua franca. Numerous studies have shown that in recent years English language learning has grown rapidly in India. "The importance of the English language cannot be ignored in any field, whether it is science and technology, entertainment or business" (Kumar, 2020a). The upsurge of English language learning is undoubtedly a fast-growing trend, probably inspired by some motivating force. Motivation is important for the development and course of human activity, as well as for explaining why people do things of their own accord. The case for motivation in L2 learning is genuine and practical, for it pertains to integrative motivation, which implies a natural affinity for learning and mastering a language.
Second language learning outside the classroom has always remained a major research issue for learners as well as teachers. There are many creative ways of teaching, but their outcomes differ. Learning theories have been reworked on the basis of learners' requirements from time to time, but all the theoretical models focus on improving language skills only. This research examines the impact of teachers' motivation in improving the speaking skills of graduate-level students. The teacher certainly occupies the main role in the present study, as the teacher can stimulate the attention and mindset of L2 learners in using the English language.
Many researchers, including Kumar (2020b) and Alam and Farid (2011), have investigated the role of teachers as important for students, who idealise and copy their teachers in the learning and teaching process. The ability to communicate effectively is a must-attain skill for L2 students in the present situation. A competent speaker succeeds in his/her goals through powerful speaking because he/she knows proper diction and use of words. On the contrary, a poor speaker faces a number of challenges due to ignorance of the norms of speech. Some speakers have such a wide array of contextual vocabulary and style that they can influence others. A successful speaker knows how to address his/her audience in order to gain social, professional, and educational reputation in society. In addition, EFL teachers can design tasks to improve the speaking skills of L2 learners, such as debate, free speech, listening comprehension activities, and role playing (Wardhaugh, 2006).
Objectives of the study
The present research aimed at the following: • To explore and study the impact of motivation on students' speaking skills and how it influences students' academic development. • To examine the connection between motivation and academic factors and their impact on speaking skill.
Research questions
• How does motivation work for L2 learners in speaking English inside and outside the class?
• What are the significant motivational factors and physical environments responsible for enhancing English speaking in large classrooms and public speaking? • What are the strategies that a teacher can use to motivate L2 learners to speak fluent English?
Literature Review
This research is the outcome of student-centered and teacher-centered opinions gathered through a questionnaire-administered survey. Many studies have been conducted in the past enumerating the role of L2 learner motivation and its relation with English achievement (Al-Qahtani, 2013; Long, Ming, & Chen, 2013; Hong & Ganapathy, 2017; Kumar, 2020b); however, this study was conducted in one of the states of India where speaking English is less emphasized. A task is an "activity which required learners to arrive at an outcome from given information through some process of thought" (Prabhu, 1987); likewise, "activity based learning is a classroom work, which involves learners in comprehending, manipulating, producing or interacting in the target language" (Nunan, 2003). Cummins (1998) suggested that if students are actively engaged with the target language and its surrounding community, they will likely be good at interacting in the second language. Describing the importance of speaking English for professionals, Fisher et al. (2003) reported that speaking skills are the passport to success in a job, and a professional will have a strong ability to communicate (Baublitz, 2010). Riemer (2002) also asserted that linguistic competence, along with sound language skills, is the key factor in academic and professional success.
A number of studies by Muyskens (1998), Warschauer and Kern (2000), Kirkwood (2005), and Alotaibi and Kumar (2019) show that in recent years the influx of new technology has brought a sea change in education and improved learning outcomes in L2 learning. Haddad and Jurich (2002) suggested that the use of educational technology enhances the ability of learners to overcome challenges, such as updating knowledge on recent and relevant issues. Foulger and Jimenez-Silva (2007) assert that in recent times technology has been a contributing factor towards self-motivation and the attainment of L2 learning. Experimental research was done by Meenakshi (2016) to see whether better teaching techniques could really strengthen EFL teaching.
The term motivation and its concepts were primarily given by two well-known academicians, Robert Gardner and Wallace Lambert. They proposed a model of motivation and identified its two types, integrative motivation and instrumental motivation (Gardner & Lambert, 1972). Gardner and Lambert are famous for L2 motivation research. Their model suggests that the learner's motivation and willingness can have a substantial impact on L2 learning. Effort, affect and eagerness are three components of motivation. Instrumental motivation is a catalyst that inspires students to learn the L2 for practical purposes. Again, according to Gardner (1985), when learners see no practical value in learning the intended language but learn it to show their liking or passion for it, this is known as integrative motivation.
Figure 1. The Socio-Educational Model (Gardner, 2005, p. 6)
An analysis of Gardner and Lambert's model of motivation indicates that instructional content and teaching methods influence the performance of L2 learners in acquiring communication skills. The most fundamental elements connected with language proficiency in the socio-educational model are thus motivation and attitudes. The model stresses that performance and emotions affect each other, and outcomes in turn influence motivation and attitudes for learning the language. Motivation for second language learning is an increasingly important feature of applied linguistics. In conclusion, motivation is assumed to be the most important force that stimulates second language learning.
When attempting to learn a foreign language, researchers argue, the strongest key to learning the target language is motivation. Motivation fosters an intangible driving force that inspires learners to be steadfast in the process of teaching and learning. Further, effort, desire to achieve the goal, and attitude are three significant intangible factors in L2 learners' motivation (Gardner, 1985). Motivation is one of the most important factors influencing the extent of second language learning. Whether it is intrinsic or extrinsic, it works as a magical drive that induces someone to do whatever they wish. According to Crookes and Schmidt (1991), motivation is language learners' orientation regarding how they learn the language.
Motivation can be interpreted in two ways: instrumental versus integrative, and intrinsic versus extrinsic. Instrumental motivation relates to learning a target language as a way to achieve certain materialistic objectives; the instrument may be career growth, developing professional communication, placement, business growth, social identity, or others. Contrary to this, integrative motivation inspires those who want to acquire native-like language proficiency and integrate with the target community, its culture and way of life.
Researchers have varying opinions on the uses of these motivation types. According to Lukmani (1972), instrumental motivation tends to be more effective than integrative motivation. L2 learners in India are also motivated to learn the English language because of some instrumental gain. Language teachers should motivate their students from time to time towards such instrumental gain. If the students do not show interest in second language learning, the teacher should find a way to speak to the learners in English that will hopefully shift their attitude toward the language, keeping in mind that motivation is crucial to learning. In improving the fluency and confidence of L2 students, continuous motivation is quite effective. When learners feel unmotivated or uninspired, or feel their emotional resources are depleted, the teacher can greatly support them by telling them that speaking mistakes during the learning stage are very common. Harmer (1991) also held the view that the language teacher should handle his students tactfully and intelligently when they commit mistakes; the teacher should not correct grammatical errors while the students are speaking in the target language. Teachers must also support and encourage students to communicate in English so that the learners do not lose interest, and intrinsic motivation should be employed in the class side by side.
According to Moiinvaziri (2009), both instrumental and integrative motivations are equally important; together they make an uncanny impact on language learning. In light of his study, intrinsic motivation should be given priority, since it is more effective in the long run. It was found that some students initially take up L2 learning because of intrinsic motivation, but over time they lose their interest, reflecting a lack of external motivation.
A study by Piniel and Csizér (2013) reveals that motivation is a key factor in learning and that it helps boost high performers. Tuan and Mai (2015) revealed that motivation to speak English is assumed to be one of the factors influencing speaking skills. Ghanbarpour (2016) also stated that one's motivation makes it easier for interlocutors to interact. Motivation is considered to be important to students' success in language learning.
According to Astuti (2012), motivation is an important factor in psychology and in learning. When learners are highly excited about learning, they will learn more. The teacher is therefore advised to increase the students' excitement for learning English. As teachers, we sometimes forget that students' learning practices are guided by the way we motivate them. Students enjoy the movement of the classroom in this way; without the students' inspiration, there is no pulse or life in the classroom. A teacher who incorporates motivation-based methods into his teaching proves to be a better teacher. Motivation thus leads to good learning of the second language (Anjomshoa & Sadighi, 2015).
Research Design
The researcher has used both statistical and verbal descriptions to analyze and interpret the collected data. Mixed methods seem most appropriate for this research, as the topic is concerned with inner drive. With this view, this research used a questionnaire of 15 statements on speaking motivation, divided uniformly across the three research problems. The main objective of the chosen data collection method is to determine the number of learners who are fluent and confident, willing to commit errors, and well inspired by their teachers.
Research Population
A total of 100 first-year graduate students from different subjects were randomly selected from two Indian universities, namely Patna University and Patliputra University; 50 students were chosen from each university to take part in the research.
Data Collection Tools and Process
The present research uses both quantitative and qualitative methods. A set of close-ended questionnaires comprising 15 statements was designed for this research to extract accurate information from the collected data. The data were collected through Google Drive electronic media during the year 2018-2019. To analyze the collected data, the SPSS programme was used for statistical analysis, and the standard deviation was computed to substantiate the findings of the research.
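The descriptive statistics reported in the next section were computed in SPSS; a minimal Python equivalent for a single Likert item is sketched below. The 1-5 coding and the response counts are illustrative assumptions, not values taken from the study's data file:

```python
# Mean and SD for one hypothetical Likert item, coded 1-5
# (strongly disagree, disagree, neutral, agree, strongly agree).
from statistics import mean, stdev

counts = {1: 1, 2: 2, 3: 1, 4: 42, 5: 54}   # invented response counts

responses = [code for code, n in counts.items() for _ in range(n)]
print(f"n = {len(responses)}, mean = {mean(responses):.2f}, SD = {stdev(responses):.2f}")
```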
Results and Findings
This research focuses on learner-centred perceptions of how motivation shapes speaking skills. No doubt every student reinforces the instructions and viewpoint of his/her teachers. The validity and credibility of the study are listed in Table 1, which provides descriptive statistics of the L2 learners' motivation to enhance their spoken English.
In response to the first statement, 'I like English and can speak fluent', out of the 100 respondents, 54 strongly agreed, 42 agreed, 1 was neutral, 2 disagreed, and 1 strongly disagreed. With 54 (54%) of respondents strongly agreeing, the main finding is that the majority of the population reports being able to speak English fluently.
The second statement concerns speaking English in clear and precise language. The responses to this statement are positive: of the 100 respondents, the majority, 91 (91%), agreed. This indicates that most of the respondents desire to speak English in clear and precise language.
The third statement under the first research question concerns students' confidence in speaking English. In response to this statement, 'I can communicate with great confidence to inside and outside the class', on a scale from strongly disagree to strongly agree, the majority, 38 (38%) of respondents, strongly agreed. A considerable number of respondents thus believe that they are able to communicate with confidence both inside and outside of class.
The fourth statement is: 'I am able to imagine and think in English while speaking'. The responses to this statement are encouraging, as 48 respondents agreed and 41 strongly agreed, a total of 89 (89%). According to the survey, a large number of respondents claimed that they can conceptualize and think in English.
The fifth statement is again student-centered. After data collection, it was found that 14 respondents each strongly agreed and agreed, while 6, 38, and 28 respondents were neutral, disagreed, and strongly disagreed, respectively. The largest group of respondents (38%) disagreed with the statement, so the conclusion is that the majority of respondents indicated that they were not anxious. Table 2 presents descriptive statistics analysing the major factors and the classroom environment underlying L2 learners' motivation to develop speaking skills.
The first statement in this group is 'I get nervous and forget things in English classes'. After data collection, it was found that 49 students strongly agreed, 37 agreed, 6 were neutral, 4 disagreed, and 2 strongly disagreed. The result shows that 49% of students strongly agree that they consider English a tough subject and consequently get nervous and forget things while speaking.
One important aspect of the English language class is that it needs the active participation of the students; it is the teacher's role to engage students through interaction. The second statement, 'I get confuse and embarrass to answer in English class', analyses students' comfort in answering questions. After collecting data, it was discovered that a total of 49 students strongly agreed, 38 agreed, 4 were neutral, 5 disagreed, and 4 strongly disagreed. There is no doubt that this lack of confidence in English learning needs to be taken seriously; the teacher should therefore provide vocabulary and a glossary of terms during teaching. Of course, this is a teacher-centered question.
The third statement is again teacher-centered and analyses the uneasiness of students in large classes and public speaking. Thirty-three students strongly agreed, 31 agreed, 12 were neutral, 20 disagreed, and 4 strongly disagreed. The analysis shows that most of the respondents felt nervous in large classrooms and in public speaking.
The fourth statement concerns a factor common to most students: 'I can get a job if I have good communication in English.' The analysis shows that 45 students strongly agreed, 46 agreed, 8 did not react to the statement, 2 disagreed, and none strongly disagreed.
In the fifth, learner-centered question, 40 students strongly agreed, 51 agreed, 3 were neutral, 12 disagreed, and none strongly disagreed. This analysis concerns the motivational factor of students' perception of studying, working and living overseas. Table 3 discusses the strategies a teacher can use to motivate L2 learners to speak fluent English, through the analysis of five well-drafted statements.
The first statement reads: 'I fail to use appropriate vocabulary to the context'. Of the 100 respondents, 29 strongly agreed, 39 agreed, 11 were neutral, 11 disagreed, and 10 strongly disagreed. This implies that many students are not able to speak English because of poor vocabulary.
The second, student-centered statement reads: 'I can't follow grammar rules while speaking in English'. The analysis of the 100 respondents through SPSS reflects that 36 students strongly agreed, 34 agreed, 7 were neutral, 21 disagreed, and 2 strongly disagreed. The analysis indicates that most of the students do not follow grammatical rules while speaking English and consequently commit errors.
The third statement is teacher-centered and states: 'Teacher uses various motives and activities for motivation in speaking English.' Data analysis indicates that 2 strongly disagreed, 12 disagreed, 7 were neutral, and 37 and 42 agreed and strongly agreed, respectively. The analysis implies that teachers motivate their students in English class; motivated students thus look forward to learning and participating. The fourth statement concerns the teacher who corrects the student's mistakes while speaking, and reads: 'It makes me afraid that my English teacher is correcting my all mistakes'. Out of 100 students, 39 strongly agreed, 42 agreed, 7 were neutral, 13 disagreed, and 1 strongly disagreed. The result shows that most of the students are afraid of teachers intervening while they are speaking English; that is why a teacher should be tactful in correcting errors.
The fifth and last statement concerns learners. Upon analysis, it was found that 50 students strongly agreed, 40 agreed, 5 were neutral, 4 disagreed, and 1 strongly disagreed. The analysis shows that learners should learn rules critically and logically to keep learning from becoming a boring experience.
Discussion
In the above research analysis, the mean value represents the average result of the research. The results show that English learning rests on motivation, which means that motivating strategies need to be implemented in language teaching. One of the challenging aspects of teaching practice is how to motivate students. Students lacking motivation cannot learn the language effectively: they will not retain knowledge or engage enthusiastically, and some may even be disruptive. Motivated students seem to outshine less motivated or unmotivated students in learning the English language.
Students who have been motivated are more willing to learn. Teaching a great variety of motivated students in the classroom is an exciting experience for both the mentor and the mentee. Some learners are natural learners, self-motivated and willing to learn, but a great teacher can also make learning enjoyable and motivating for students so that they reach their full attention and potential. Kitjaroonchai and Kitjaroonchai (2012) found that student motivation and academic achievement have a considerable positive relationship. Motivation acts as a 'smart processor' that helps learners increase learning performance and achieve the aims of the curriculum. It also helps students determine what they have stored in the target language, what they can do, and what they are able to gain. Similarly, the researcher finds that motivation is characterized by a person's attitudes, desires, and effort; in all its aspects, it is a constructive energy and a realistic path for fostering learning achievement. A motivated person is driven to participate in the related activities, expends effort, persists in the activities, attends to the tasks, demonstrates a willingness to accomplish the aim, and enjoys the activities; such behaviours clearly mark their motivation for English learning. This research is limited to a specific space, number and setting. The statements in the survey questionnaire are directly focused on motivation; they do not evaluate students' attitudes or cognitive and behavioral aspects in detail, as the questionnaire was designed using close-ended questions. Furthermore, variables such as classroom size, physical environment, time provided for practice, and the correlation between motivation and language performance were not taken into account in this analysis.
Conclusion
In this paper, the researcher examined a number of factors that influence students' motivation when speaking a foreign language. Other issues are left for further investigation: skills such as reading and writing will be studied to determine the influence of academic, linguistic and socio-cultural factors on them; demotivation will be researched; emotional intelligence will be examined in relation to the outcome variables; and finally, a diversified sample across universities will be recruited for the study. However, the primary concern of this research was to investigate the role of second language learners' motivation in improving speaking skills.
In both the academic and professional careers of L2 learners, the relevance of oral skills cannot be sidestepped. Speaking is one of the essential skills in ESL contexts; however, in ESL teaching, speaking has not been given the same priority as reading and writing skills. The results of the study are therefore very significant, because the students of the researched universities reported that they are motivated by their English teachers to speak the second language. This study therefore states that language teachers motivate the large number of second language students in the above-mentioned universities, and these practices have been found to engage students in their learning, thus increasing their motivation and success at university. In support of the first research question, the teacher provides timely encouragement. Students greatly desire their work to be approved and appreciated, and they are more excited about learning when they have confidence in their abilities. As their teacher, one should promote open communication and listen to their opinions and ideas. A teacher should treat students positively and compliment and acknowledge them for their achievements. They will be more motivated to learn if the classroom is a congenial place where students feel recognized and appreciated. Thus, learning attitude matters a lot.
Motivation is a significant factor among the many factors that influence second language learning. The second research question pertained to the factors responsible for second language learners' motivation in language learning. Beyond examining the types of motivation, there is an immense need for research to further investigate what sort of relationship persists between the various kinds of motivation and students' learning outcomes. The language teacher can offer incentives to incline students to learn the target language. Providing realistic goals and logical rewards helps students become active participants in the class, though they often need to be pushed in the right direction. Language learning will be fun and will inspire students to want to learn if teachers offer prizes and incentives.
The third and last research question concerns the motivational strategies used in the language classroom for developing skills. For this, teachers can get creative and get students involved. Teachers can save the class from monotony by changing the norms of the classroom: instead of traditional lectures, they can use activity-based teaching, games, discussions, debates, and visual aids such as colorful charts, diagrams and videos.
Table 1. The way motivation affects L2 learners' effort in speaking English | 2021-05-08T00:02:51.840Z | 2021-02-25T00:00:00.000 | {
"year": 2021,
"sha1": "1cfe5d5d2731e9fbdca287981839b54891e16c2d",
"oa_license": null,
"oa_url": "https://un-pub.eu/ojs/index.php/cjes/article/download/5542/7437",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "e1b5d339aa59c841094d76e727b0f80439bf939c",
"s2fieldsofstudy": [
"Education",
"Linguistics"
],
"extfieldsofstudy": [
"Psychology"
]
} |
249645647 | pes2o/s2orc | v3-fos-license | Sarcopenia as potential biological substrate of long COVID‐19 syndrome: prevalence, clinical features, and risk factors
Abstract Background Severe clinical pictures and sequelae of COVID‐19 disease are immune mediated and characterized by a ‘cytokine storm’. Skeletal muscle has emerged as a potent regulator of immune system function. The aim of the present study is to define the prevalence of sarcopenia among COVID‐19 survivors and the negative impact of sarcopenia on the post‐acute COVID‐19 syndrome and its related risk factors. Methods A total of 541 subjects recovered from COVID‐19 disease were enrolled in the Gemelli Against COVID‐19 Post‐Acute Care between April 2020 and February 2021. They underwent a multidisciplinary clinical evaluation and muscle strength and physical performance assessment. Results Mean age was 53.1 years (SD 15.2, range from 18 to 86 years), and 274 (51%) were women. The prevalence of sarcopenia was 19.5%, and it was higher in patients with a longer hospital stay and lower in patients who were more physically active and had higher levels of serum albumin. Patients with sarcopenia had a higher number of persistent symptoms than non‐sarcopenic patients (3.8 ± 2.9 vs. 3.2 ± 2.8, respectively; P = 0.06), in particular fatigue, dyspnoea, and joint pain. Conclusions Sarcopenia identified according to the EWGSOP2 criteria is high in patients recovered from COVID‐19 acute illness, particularly in those who had experienced the worst clinical picture reporting the persistence of fatigue and dyspnoea. Our data suggest that sarcopenia, through the persistence of inflammation, could be the biological substrate of long COVID‐19 syndrome. Physical activity, especially if associated with adequate nutrition, seems to be an important protective factor.
Introduction
One year after the appearance of the SARS-CoV-2 infection on the world scene, there is increasing evidence that it has a systemic, inflammatory pathogenesis. 1 The biological and clinical course of COVID-19 is characterized by three phases: at the onset, the virus shows an early tropism for the upper respiratory system, which results at most in a few symptoms, such as mild fever and fatigue. Thereafter, it replicates in the lower respiratory tract, and the patient complains of cough and dyspnoea. Finally, around the 10th-14th day from the onset of symptoms, it causes viraemia with a subsequent attack on all organs that express angiotensin-converting enzyme-2 receptors, such as the heart, kidney, gastrointestinal tract, and blood vessels, with variable clinical manifestations in terms of site and severity. 2,3 The distinctive feature of those subjects who develop severe disease manifestations is not the extent of the viral damage, but the immune injury mediated by an exaggerated inflammation supported by the so-called 'cytokine storm'. 4 The progression of COVID-19 is associated with a continuous decrease in lymphocyte count and a significant elevation of neutrophils and inflammatory markers including C-reactive protein, IP-10, MCP1, MIP1A, TNF-α, interleukin-6, and ferritin. 2,5,6 Cytokines and chemokines attract many inflammatory cells, such as neutrophils and monocytes, resulting in excessive infiltration of these cells into tissues. This dysregulated and/or exaggerated cytokine and chemokine response by infected cells plays a key role in the pathogenesis of SARS-CoV-2, and it is responsible for the massive prevalence of catabolic pathways that is observed during the acute phase of illness and its sequelae. In fact, at the biochemical level, many authors have reported up-regulation of apoptosis, autophagy, and p53 pathways in peripheral blood mononuclear cells of COVID-19 patients. 7 Skeletal muscle is the most important 'metabolic controller' of our body. It is well known that muscle is the main site of glucose and fatty acid metabolism, through peroxisome proliferator-activated receptors, and of thermoregulation. Furthermore, it is important to highlight that muscle has emerged as a potent regulator of immune system function. 8 Sarcopenia is a progressive and generalized skeletal muscle disorder (which includes altered muscle strength and function) that is associated with an increased likelihood of adverse outcomes. 9,10 When it occurs, in addition to complications such as falls and disability, increased infections and significant alterations in the immune system are observed. While increasing evidence has shown that lower muscle mass is independently associated with intensive care unit admission and hospital mortality, 11,12 there are no studies on the impact of COVID-19 on muscle and on the incidence of sarcopenia after COVID-19.
The aim of the present study is to provide better insight into the prevalence of sarcopenia (according to the new EWGSOP2 definition) among COVID-19 survivors, and into the negative impact of sarcopenia on long COVID-19 syndrome and its related risk factors.
Materials and methods
The Gemelli Against COVID-19 Post-Acute Care (GAC19-PAC) project was an initiative developed by the Department of Geriatrics, Neuroscience and Orthopedics of the Catholic University of the Sacred Heart (Rome, Italy) to better understand what happens in surviving COVID-19 patients and how the virus has impacted their health and quality of life. Beginning on 21 April 2020, the Fondazione Policlinico Universitario Agostino Gemelli IRCCS (Rome, Italy) established an outpatient service for individuals who had suffered SARS-CoV-2 infection. This outpatient service, called 'Day Hospital Post-COVID-19', is currently ongoing, with the aims of expanding knowledge of COVID-19 and its impact on health status and care needs, and of promoting healthcare strategies to treat and prevent the clinical consequences of SARS-CoV-2 infection across different organs and systems. Further details about the post-acute outpatient service and the evaluation of the patients have been described elsewhere. [13][14][15]
Study sample
Between 21 April 2020 and 28 February 2021, 623 individuals officially recovered from COVID-19 were followed in the Day Hospital Post-COVID-19. For the present study, 82 subjects were excluded for missing values in the variables of interest; as a consequence, a sample of 541 subjects was considered.
At the follow-up visit, all these patients met the World Health Organization (WHO) criteria for discontinuation of quarantine: no fever for three consecutive days, improvement in other COVID-19-related symptoms, and two negative tests for the SARS-CoV-2 virus 24 h apart.
Data collection
Patients were offered a comprehensive medical assessment with detailed COVID-19-related history and physical examination. A multidisciplinary approach, including internal medicine, geriatric, ophthalmological, otolaryngologic, pneumological, cardiological, neurological, immunological, and rheumatological evaluations, was put in place for a comprehensive assessment of all the possible damages caused by the SARS-CoV-2 virus. 16 All clinical parameters, including clinical and pharmacological history, lifestyle including smoking status and physical activity, and anthropometric measures, were collected in a structured electronic data collection system. Smoking habit was categorized as current or never/former smoker. Body weight was measured through an analogue medical scale. Body height was measured using a standard stadiometer. Body mass index was defined as weight (kg) divided by the square of height (m). Regular participation in physical activity was considered as involvement in exercise training at least twice a week.
The specific symptoms potentially correlated to COVID-19 were obtained using a standardized questionnaire in which the patient was asked about the presence or absence of the symptom and more than one symptom could be reported. 17 Patients were asked to recount symptoms retrospectively during the clinic visit and to confirm the persistence of them. A specific focus has been paid to collect information and data about signs and symptoms COVID-19 related: cough, fatigue, diarrhoea, headache, smell disorders, dysgeusia, red eyes, joint pain, short of breath, loss of appetite, sore throat, and rhinitis.
According to the WHO classification, 18 the COVID-19 severity has been defined as (i) patient at home and no hospitalization, (ii) patient hospitalized without oxygen support, (iii) patient hospitalized with oxygen support by Venturi mask, (iv) patient hospitalized with oxygen support by non-invasive ventilation or continuous positive airway pressure, and (v) patient hospitalized in intensive care unit with invasive ventilation.
Muscle strength and physical performance assessment
Muscle strength was assessed by handgrip strength, which was measured by using a dynamometer (North Coast Hydraulic Hand Dynamometer, North Coast Medical, Inc, Morgan Hill, CA). Participants performed one familiarization trial and one measurement trial with each hand, and the result from the stronger side was used for the analyses. 11 Participants' physical performances and oxygen saturation were evaluated by the chair stand test and the 6 min walking test. Subjects were asked to stand up from a chair with their arms folded across the chest for one minute as quickly as possible. A standard armless chair was used, usually 43-47 cm in height. The back of the chair was stabilized against a wall to ensure safety and stability. The number of times the patient completed the stand and sit cycle was recorded; higher number reflected better performance. 19 The 6 min walking test was performed along a distance of 20 m, and the distance covered in the given time was recorded in metres; greater number of metres reflected better performance.
Sarcopenia definition
According to the most recent EWGSOP2 consensus definition, 9 low muscle strength is considered as the primary parameter of sarcopenia. Sarcopenia is probable when low muscle strength is detected. The EWGSOP2 sarcopenia cut-off points for low strength by grip strength were considered. Hence, subjects over 65 years of age were defined to be affected by probable sarcopenia when handgrip strength was <27 kg in male and <16 kg in female, respectively. 9 For the subjects in the lower age groups, the cut-off values for sex and age previously identified in a large sample of non-hospitalized subjects living in the community (Lookup 7 + sample) were used. 12
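A small helper expressing the over-65 handgrip cut-offs stated above is sketched below. It encodes only the classification rule for this age group; the sex- and age-specific cut-offs used for younger participants (taken from the Lookup 7+ sample) are not reproduced here:

```python
# EWGSOP2 probable-sarcopenia rule for subjects over 65, as stated in
# the text: handgrip strength <27 kg in men, <16 kg in women.
def probable_sarcopenia_over65(grip_kg: float, sex: str) -> bool:
    """Return True if handgrip strength falls below the EWGSOP2 cut-off."""
    cutoff = {"male": 27.0, "female": 16.0}[sex.lower()]
    return grip_kg < cutoff

print(probable_sarcopenia_over65(25.0, "male"))    # True  (25 < 27)
print(probable_sarcopenia_over65(18.0, "female"))  # False (18 >= 16)
```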
Ethical approval and manuscript preparation
This study has been approved by the Catholic University/ Fondazione Policlinico Gemelli IRCCS Institutional Ethics Committee (protocol ID number: 0013008/20). 17 Written informed consent has been obtained from the participants. The manuscript was prepared in compliance with the STrengthening the Reporting of OBservational studies in Epidemiology (STROBE) reporting guidelines for observational studies.
Statistical analyses
Continuous variables were expressed as mean ± standard deviation (SD) and categorical variables as absolute frequencies and percentages (%) of the total. Descriptive statistics were used to describe the clinical characteristics of the study population according to sarcopenia status. Differences in proportions and in the means of covariates between subjects with and without sarcopenia were assessed using Fisher's exact test and the t-test, respectively.
Cox proportional hazard models with robust variance estimates were used to assess the association between clinical and functional characteristics and sarcopenia prevalence. Candidate variables for the Cox model were selected on the basis of their biological and clinical plausibility as risk factors for sarcopenia. To identify factors independently associated with prevalent sarcopenia, we first estimated the crude odds ratio (OR) and its 95% confidence interval (CI). A multivariable Cox model was then computed including all variables associated with the outcome at an α level of 0.05, after adjustment for age and gender. Model 1 included all the variables of interest and the WHO severity score of COVID-19, comparing the risk associated with hospitalization against staying at home. Model 2 included the same variables as Model 1 but removed the COVID-19 severity score and included the length of hospital stay. Consequently, all subjects were included in Model 1 (n = 541), while Model 2 considered only hospitalized subjects (n = 332).
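The analyses themselves were run in SPSS (next paragraph). As an open-source illustration of the same approach, the sketch below fits a Cox model with a robust (sandwich) variance estimator to cross-sectional data by assigning every subject the same follow-up time, a common way to estimate prevalence ratios directly; the file name and column names are hypothetical, not those of the study database.

```python
# Sketch of the prevalence-ratio modelling described above, under the
# assumption of a pandas DataFrame with hypothetical column names. For
# cross-sectional data, a Cox model with constant follow-up time and a
# robust sandwich variance is one standard way to estimate prevalence
# ratios; exp(coef) is then read as a prevalence ratio.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("covid_survivors.csv")   # hypothetical file
df["time"] = 1.0                          # constant follow-up: cross-sectional
covars = ["age", "female", "diabetes", "albumin",
          "regular_physical_activity", "invasive_ventilation"]

cph = CoxPHFitter()
cph.fit(df[["time", "sarcopenia"] + covars],   # 'sarcopenia' = 1/0 case flag
        duration_col="time", event_col="sarcopenia",
        robust=True)                           # sandwich variance estimator
cph.print_summary()
```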
All analyses were performed using SPSS software (Version 11.0, SPSS Inc., Chicago, IL).
Results
The mean age of the 541 subjects included in the present study was 53.1 years (SD 15.2, range 18 to 86 years), and 274 (51%) were women. The prevalence of sarcopenia was 19.5%, with a significant difference between men and women (15.7% vs. 23.3%, respectively; P = 0.01). The average number of days from the onset of COVID-19 to the follow-up visit was similar between subjects with and without sarcopenia (87.5 ± 47.4 vs. 95.3 ± 51.7, respectively; P = 0.1). In COVID-19 survivors under 45 years of age, sarcopenia was present in 14.5%; in subjects between 45 and 65 years, in 14.6%; and in subjects over 65, in 38.3%. Figure 1 shows the prevalence of sarcopenia in different age groups, comparing the rates observed in COVID-19 survivors with those of the Lookup 7+ project, which comprises more than 11,000 non-hospitalized subjects living in the community. The sarcopenia rate in subjects affected by COVID-19 was significantly higher than in subjects enrolled in the Lookup 7+ project at all ages.
Characteristics of the study population according to sarcopenia status are summarized in Table 1. Compared with participants without sarcopenia, those diagnosed with sarcopenia were significantly older and had a greater prevalence of hypertension, diabetes, and chronic obstructive pulmonary disease. Subjects without sarcopenia at the follow-up visit were more physically active and had higher levels of serum albumin and haemoglobin. Finally, the prevalence of sarcopenia was significantly higher in hospitalized subjects, particularly among those who needed oxygen therapy (i.e. non-invasive ventilation and continuous positive airway pressure) or invasive ventilation. Among subjects who had been hospitalized, the length of hospital stay was significantly longer in those with sarcopenia at follow-up than in those without.
The presence of sarcopenia was associated with reduced physical performance. In particular, subjects with sarcopenia walked about 50 m less during the 6 min walking test than non-sarcopenic subjects (494 vs. 546 m, respectively; P < 0.001). Similarly, sarcopenic subjects completed fewer repetitions in the chair stand test (24 vs. 26, respectively; P = 0.05). Figure 2 shows the prevalence of persistent COVID-19-related symptoms according to the presence of sarcopenia. Overall, patients with sarcopenia had on average a higher number of persistent symptoms than non-sarcopenic patients (3.8 ± 2.9 vs. 3.2 ± 2.8, respectively; P = 0.06). In particular, significantly higher percentages of fatigue (65% vs. 56%; P = 0.04), dyspnoea (57% vs. 48%; P = 0.05), and joint pain (36% vs. 24%; P = 0.01) were observed among sarcopenic subjects. Although not statistically significant, loss of appetite during the acute phase of COVID-19 was also more frequent among sarcopenic subjects than among subjects without sarcopenia.
Finally, Cox proportional hazard models were used to assess the association between clinical and functional characteristics and sarcopenia prevalence. After multivariable adjustment (Model 1), as expected, the likelihood of being sarcopenic increased progressively and independently with advancing age [prevalence ratio (OR) 1.02; 95% CI 1.01-1.04], and the risk was significantly higher among female participants (OR 1.88; 95% CI 1.06-3.33). Sarcopenia was associated with diabetes (OR 2.34; 95% CI 1.10-4.96) and with the severity of COVID-19 as expressed by the need for invasive ventilation (OR 2.78; 95% CI 1.04-7.43). Conversely, a decreased probability of being sarcopenic at the follow-up visit was detected among subjects with higher levels of serum albumin (OR 0.90; 95% CI 0.83-0.98) and among those involved in regular physical activity (OR 0.64; 95% CI 0.39-0.99). Furthermore, when the multivariable analysis was restricted to hospitalized patients, a longer length of hospital stay was significantly associated with an increased risk of developing sarcopenia (OR 1.05; 95% CI 1.02-1.07) (Table 2).
Discussion
In the present study, we explored the prevalence of sarcopenia, defined using the new EWGSOP2 operational definition, 9 in a large sample of COVID-19 survivors. The prevalence of sarcopenia was 19.5%, a particularly relevant figure compared with that in the general population. After an average of 3 months from the onset of COVID-19, a large number of patients still had sarcopenia and had not fully recovered, compared with community-dwelling subjects who did not have COVID-19. In fact, in the largest and most recent Italian database of muscle values collected from an unselected sample of community-dwelling subjects, the prevalence of sarcopenia was 8.6%. 11 It is important to highlight that this study involved more than 11,000 subjects and the mean age of participants was 55.6 years, very similar to the present sample of COVID-19 survivors. 11 The differences in sarcopenia prevalence between the general population and patients recovered from COVID-19 are striking at all ages. However, the high prevalence among older patients may become an emergency issue if we consider that sarcopenia is regarded as the biological substrate of physical frailty and the pathway whereby the consequences of physical frailty develop. In fact, physical frailty is associated with negative outcomes such as falls, mobility disability, loss of independence, and death. 20,21 In this respect, it is important to underline that, compared with participants without sarcopenia, those diagnosed with sarcopenia had a greater prevalence of hypertension, diabetes, and chronic obstructive pulmonary disease; all these diseases, like sarcopenia, have an inflammatory pathogenesis.
We also explored the association of sarcopenia with long COVID-19-related symptoms. A significantly higher percentage of fatigue and dyspnoea was observed in sarcopenic than in non-sarcopenic subjects. This is one of the most significant results of the present research considering that, as recently reported in our previous study, 17 among patients who had recovered from COVID-19, more than 85% reported the persistence of at least one symptom, particularly fatigue, dyspnoea, and joint pain. These symptoms, alone or variously combined with others such as 'brain fog', sleep disturbances, attention deficit, and generalized and discontinuous muscle pain, configure the so-called 'long post-COVID-19 syndrome', 17 which is emerging more and more as one of the most important challenges for healthcare systems. 22 This cluster of clinical features closely resembles the typical features of fibromyalgia and chronic fatigue syndrome and, most surprisingly, is not associated with psychiatric disorders or with residual functional and/or structural deficits in the pulmonary parenchyma. 23 These two conditions share a common pathophysiological aetiology identified as central sensitization, 24 whose pathogenesis is precisely inflammation mediated. Central sensitization occurs when the peripheral response of the spinal neuron becomes independent of the injurious insult received. The state of sensitization is maintained and enhanced by the release of pro-inflammatory cytokines by glial cells, which, when hyperactivated, cause a true neuroinflammation. 25,26 Given these considerations, we can hypothesize that sarcopenia, which in turn is mediated by persistent interleukin-driven systemic inflammation, may be the biological substrate of fatigue, the main symptom of the long COVID-19 syndrome, as well as of dyspnoea, which can be considered a fatigue affecting the respiratory muscles (Figure 3).

Finally, we identified potential risk factors for developing sarcopenia. Diabetes, severity of COVID-19 as expressed by the need for invasive ventilation, and a longer length of hospital stay were all significantly associated with an increased risk of developing sarcopenia; on the other hand, higher serum albumin levels and regular physical activity appeared to be protective factors. A good nutritional status, in particular an adequate protein intake reflected by normal albumin levels, and regular physical exercise are currently the most effective therapeutic measures to counteract sarcopenia. 27,28 The nutrients that have been most consistently linked to an improvement in sarcopenia and frailty are protein, vitamin D, essential amino acids, and their metabolites, such as β-hydroxy β-methylbutyrate. 28 The current literature shows how physical exercise positively affects muscle physiology through systemic and local effects. From a biochemical point of view, resistance exercise increases the oxidative capacity of the muscle through the expression of genes involved in mitochondrial function, activates satellite cells, and increases the size of type II fibres, 29 while aerobic exercise exerts systemic effects by reducing inflammation. In fact, it is widely described in the literature that in adults enrolled in a physical exercise programme, an increase in muscle mass and function is associated with a strong reduction of inflammatory cytokines. 29
The present work is the first large-scale study to investigate the prevalence of sarcopenia using the new EWGSOP2 diagnostic criteria in an unselected study sample of COVID-19 survivors. It is important to highlight that our sample includes all COVID-19 severity degrees. 18 It is also the first study in which sarcopenia is indicated as a possible biological substrate of the long COVID-19 syndrome.
Limitations of the study include the lack of information on sarcopenia before acute COVID-19 and the lack of details on sarcopenia severity. Furthermore, this is a single-centre study with a relatively large number of patients but without a control group of patients discharged for other reasons. Patients with community-acquired pneumonia or with other viral diseases, such as herpes or chickenpox, can also have high rates of sarcopenia and persistent symptoms, suggesting that these findings may not be unique to COVID-19. At the same time, it is difficult to distinguish between symptoms related to long COVID-19 and symptoms related to pre-existing chronic diseases. However, the clinical characteristics of the participants make it possible to exclude that acute illnesses were present at the time of the follow-up evaluation. Furthermore, many participants complained of myalgia and/or joint pain, and the presence of these symptoms may have influenced the data. Finally, we were not able to measure a larger range of inflammatory biomolecules, and so we could not depict the full inflammatory frame in which sarcopenia might have developed in the study population. However, the C-reactive protein levels reported in the present investigation are consistent with those obtained in a cohort of older adults with physical frailty and sarcopenia. 30 A role for the background inflammatory milieu, a remnant of SARS-CoV-2 infection, may also be postulated to explain the C-reactive protein levels reported in the study. Of course, the cross-sectional nature of the present study limits the inference of temporal/causal relationships between any inflammatory mediator and the development of sarcopenia. Apart from these limitations, this study offers a unique opportunity to investigate the prevalence of and the risk factors for sarcopenia in an unselected population of COVID-19 survivors using the new EWGSOP2 criteria. In particular, given the health implications of sarcopenia, timely detection of low handgrip strength may be useful in the assessment of potential physical function impairment and long COVID-19 symptoms. The cytokine storm is the key pathogenetic factor of the most severe COVID-19 cases, which are characterized by an important catabolic component. Therefore, greater COVID-19 severity will correspond to a greater risk of sarcopenia and of long-term effects of COVID-19. Regardless of COVID-19, the length of hospital stay is always an important risk factor for the onset of sarcopenia. 31,32 In this context, physical activity, especially when associated with adequate nutritional support, seems to be an important protective factor.
In conclusion, long COVID-19 syndrome, except for some specific cases (i.e. post-viral pericarditis and the appearance of immune disorders triggered by the viral disease), shows significant overlaps with the sarcopenia syndrome. Once again, the identification of subjects with long COVID-19 syndrome in whom sarcopenia is the cause of the clinical phenotype of fatigue and dyspnoea becomes crucial, because these subjects can gain significant benefits from interventions addressing muscle health, namely nutrition and exercise. Such interventions target a function rather than a pathology, revolutionizing the paradigm adopted up to now in clinical practice. Early identification of sarcopenia appears essential to prevent and to treat long COVID-19, a real current challenge for our health systems. 33,34 | 2022-06-15T06:17:46.029Z | 2022-06-14T00:00:00.000 | {
"year": 2022,
"sha1": "07f8fa60cddbe3ce9892260d7a8738bdc750ec68",
"oa_license": "CCBY",
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9349974",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "7691e55112175c212d519347b708f9a170c71096",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
146029293 | pes2o/s2orc | v3-fos-license | Effects of phytonutrients on growth performance, antioxidative status, and energy utilization of broilers fed low energy diets
Two experiments were conducted to investigate the effects of phytonutrients (PN) on growth performance, antioxidant status, intestinal morphology, and nutrient utilization of birds fed low energy diets. In Exp. 1, a total of 1,440 one-day-old Ross 308 male broiler chickens were randomly divided into 3 treatment groups, with 16 replicates per treatment (48 pens; 30 birds per pen). Birds in treatment 1 were fed diets with normal energy content (NE). Birds in treatment 2 were fed the NE diet but with 60 kcal/kg removed (LE). Birds in treatment 3 were assigned to the LE diet supplemented with PN (LE + PN). Results indicated that the LE diet increased feed conversion ratio (FCR) compared with NE from d 1 to 38, while the LE + PN diet prevented this response (P = 0.02). At d 26, birds in the LE + PN group had the highest ileal and jejunal villus height to crypt depth (VH:CD) ratios. At d 39, PN supplementation improved the ileal and jejunal VH:CD ratios compared with the LE group. Moreover, birds fed PN diets returned a better economic profit. In Exp. 2, 360 one-day-old Ross 308 male broiler chickens were used in a metabolism study. The treatments used in Exp. 2 were the same as those in Exp. 1, with 4 replicates (pens) of 30 birds each. Dietary apparent metabolizable energy (AME), energy digestibility and protein digestibility were determined between 21 and 28 d of age. Results showed that chickens fed the LE + PN diet tended to have greater AME (P = 0.02) and nitrogen-corrected apparent metabolizable energy (AMEn) (P = 0.03) than birds fed the LE diet. It was concluded that the LE + PN diet showed a potential advantage in improving feed conversion and gut health of broilers, as well as economic profit.
Introduction
Phytonutrients (PN), as secondary plant metabolites, have been shown to affect animal growth performance (Windisch et al., 2008; Wallace et al., 2010) and immune status (Amerah et al., 2011; Karadas et al., 2013), and also to have antioxidative or antiviral effects (Sökmen et al., 2004). Karadas et al. (2014) found that a combination of PN improved growth and feed efficiency of broilers. Pirgozliev et al. (2015) stated that supplementary PN improved body weight (BW) and feed efficiency of birds, but did not affect dietary metabolizable energy (ME). It has been proposed that although dietary PN do not affect dietary ME, they improve the utilization of energy for growth (Bravo et al., 2014).
In China, dietary energy represents up to 70% of the feed cost for broilers, and experiments investigating the available energy concentration of poultry feedstuffs use the metabolizable energy system. Therefore, it is important to study the change in dietary ME in response to PN, especially the response in low energy diets (Bravo et al., 2011). The objective of the current study was to evaluate the effects of a blend of PN on broiler growth performance, antioxidative status, intestinal morphology, and the apparent metabolizable energy (AME) of broiler diets.
Materials and methods
The study was approved by the Animal Care and Experiment Committee of New Hope Liuhe Corporation.
Experiment one
The objective of this experiment was to evaluate the effects of a mixture of PN on growth performance, antioxidative status and intestinal tract morphology. A total of 1,440 one-day-old Ross 308 male broilers were obtained from a commercial hatchery, individually weighed, and assigned to 48 floor pens of 2 m × 1.5 m, with 30 chicks in each pen. The chickens were reared on litter obtained from a commercial chick ranch, which had previously been used to raise one batch of broilers. The brooding temperature was maintained at 33 °C for the first day and was gradually decreased by 2 °C per week until reaching 21 °C, and maintained at that level thereafter. During the whole experimental period, chickens had free access to feed and water. Birds were reared on the following lighting program: 22 h light (22 L):2 h dark (2 D) for the first 3 d, 19 L:5 D from d 4 to d 7, and 16 L:8 D from d 8 to 40. Birds were vaccinated for Newcastle disease and infectious bronchitis via injection at d 7, and via water at d 21. The infectious bursal disease vaccination was given via water at d 14.
Birds in treatment 1 were fed normal energy diets (NE group). In treatment 2, the energy level was reduced by 60 kcal/kg relative to NE (LE group). Birds in treatment 3 were fed LE diets supplemented with a PN blend (LE + PN group). Each treatment had 16 replicates of 30 chicks. The broilers were fed on a 4-phase feeding program (Table 1). With the exception of the decreased energy, all diets were formulated to provide similar nutrients according to the broiler requirements of NRC (1994). The active ingredients of the PN product used in this experiment were 5% carvacrol, 3% cinnamaldehyde and 2% capsaicin.
Body weights and feed intake by pen were recorded at 14, 22, 30 and 38 d of age, and mortality was recorded daily. The European production efficiency factor (EPEF) was calculated as follows: EPEF = ADG × [(100% − mortality rate)/FCR]/10, where ADG is the average daily gain (g/d).
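As a quick check of this formula, the calculation can be scripted; the input values below are hypothetical examples, not data from this trial.

```python
def epef(adg_g_per_day: float, mortality_pct: float, fcr: float) -> float:
    """European production efficiency factor, as defined above:
    EPEF = ADG * [(100 - mortality %) / FCR] / 10, with ADG in g/d."""
    return adg_g_per_day * ((100.0 - mortality_pct) / fcr) / 10.0

# Hypothetical example: 60 g/d gain, 3% mortality, FCR of 1.65
print(round(epef(60.0, 3.0, 1.65), 1))  # ~352.7
```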
At 26 and 39 d of age, 12 chickens of near-average BW were selected from each treatment, weighed and killed by exsanguination after CO2 stunning. After an abdominal incision, middle sections of the duodenum, jejunum and ileum were collected. The contents of each gut section were gently flushed with a saline solution (NaCl, 0.9%), and an intestinal section sample (2 cm in length) was fixed in 10% formalin solution for further analysis. The tissue samples were processed, embedded, sectioned, stained with hematoxylin-eosin and mounted; intestinal slides were then examined by optical microscopy, and images were captured by a camera attached to the microscope and transferred to image analyzer software. Villus height (VH), in micrometres, was measured from the tip of the villus to the villus-crypt junction. Crypt depth (CD), in micrometres, was defined as the depth of the invagination between adjacent villi. The VH:CD ratio was then calculated.
At 39 d of age, 12 chickens of near-average BW were selected from each treatment, and blood samples were aseptically collected from the wing vein into vacutainers and centrifuged at 3,600 × g for 10 min at 4 °C. The serum was collected and stored at −20 °C until analysis. The activities of superoxide dismutase (SOD) and total antioxidant capacity (T-AOC), and the malondialdehyde (MDA) content in the serum, were measured using SOD, T-AOC and MDA assay kits (Nanjing Jiancheng Bioengineering Institute, Nanjing, China), and then read on an automated spectrophotometric analyzer (Shimadzu, Model UVmini-1240, Tokyo, Japan).
Feed cost for weight gain and profit per chick were calculated at the end of the experiment: Feed cost (CNY/kg) = sum of feed cost for each phase/BW gain for the whole experiment; Profit (CNY/bird) = price of live bird − feed cost − production cost − labor cost − cost of 1-day-old chick.
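The two economic formulas translate directly into code; all prices below are hypothetical placeholders, since the trial's actual costs appear only in Table 5.

```python
# Sketch of the two economic formulas above, with HYPOTHETICAL prices.

def feed_cost_per_kg_gain(phase_feed_costs_cny, bw_gain_kg):
    """Feed cost (CNY/kg) = sum of per-phase feed costs / total BW gain."""
    return sum(phase_feed_costs_cny) / bw_gain_kg

def profit_per_bird(live_bird_price, feed_cost, production_cost,
                    labor_cost, chick_cost):
    """Profit (CNY/bird) = live bird price - feed - production - labor
    - day-old chick cost."""
    return (live_bird_price - feed_cost - production_cost
            - labor_cost - chick_cost)

print(feed_cost_per_kg_gain([1.2, 2.9, 3.4, 1.8], 2.6))  # CNY per kg gain
print(profit_per_bird(24.0, 9.3, 3.0, 1.5, 3.2))         # CNY per bird
```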
Experiment two
A total of 360 one-day-old Ross 308 male broiler chickens were allocated to 3 treatment groups of 4 replicates (pens), with 30 chickens in each pen, from 1 to 21 d of age. The diets were allocated to pens in a randomized complete block design, and feed was offered to chickens ad libitum. Treatment assignments, diets and bird husbandry were the same as those described in Exp. 1.
At 21 d of age, 6 chickens with BW nearest the pen average were selected from each pen and transferred to metabolism cages, following the same randomization and dietary treatments as in the floor pen phase. The adaptation period for cage housing was 3 d. Feed and water were provided ad libitum. The selected chickens were kept in the cages for approximately 72 h, until 27 d of age, and total excreta were collected each day. Feed intake for the same period was recorded for the determination of dietary energy and protein digestibility coefficients.
The experimental diets and the excreta were analyzed for combustion energy content to determine dietary ME. Combustion energy was determined using a bomb calorimeter (IKA C5003 Calorimeter System, IKA Co., IL). Crude protein (CP) values were obtained as N × 6.25 (AOAC, 2016). Dietary ME was calculated as follows: AME (kcal/kg DM) = (energy intake − energy output)/feed intake, in which energy intake is the energy (kcal/kg) intake of the chickens from d 24 to 27, and energy output is the energy (kcal/kg) output of the chickens from d 24 to 27 (Bravo et al., 2011).
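The study also reports AMEn values. As a sketch of both calculations: the AME formula above is implemented directly, and the nitrogen correction shown uses the classical Hill and Anderson factor of 8.22 kcal per gram of nitrogen retained, which is an assumption on our part, since the paper does not state how its AMEn values were corrected. All input numbers are hypothetical.

```python
# Sketch of the AME calculation above plus a conventional N correction.
# The 8.22 kcal/g-N factor is the classical Hill-and-Anderson correction,
# used here as an ASSUMPTION; the paper does not spell out its correction.

def ame_kcal_per_kg(energy_intake_kcal, energy_excreted_kcal, feed_intake_kg):
    """AME = (gross energy in - gross energy out) / feed intake."""
    return (energy_intake_kcal - energy_excreted_kcal) / feed_intake_kg

def amen_kcal_per_kg(ame, n_retained_g_per_kg_feed, k=8.22):
    """Nitrogen-corrected AME: subtract k kcal per g of N retained."""
    return ame - k * n_retained_g_per_kg_feed

ame = ame_kcal_per_kg(6000.0, 3150.0, 1.0)  # hypothetical: per 1 kg of feed
print(ame)                                   # 2850.0 kcal/kg
print(amen_kcal_per_kg(ame, 5.0))            # 2808.9 kcal/kg
```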
Statistical analyses
Data in Exp. 1 and 2 were analyzed as a completely randomized block design using the GLM procedure of SPSS 16.0. The data were analyzed by one-way ANOVA. For growth performance, the experimental unit was the floor pen, and for dietary ME, the experimental unit was the metabolism cage. If the test showed significant differences (P < 0.05), means were separated by the least significant difference procedure. Results in tables are reported as means ± SD.

Results

Growth performance

Table 2 shows the results on growth performance of the chickens (Exp. 1). From 1 to 14 d of age, birds in the LE and LE + PN groups had lower ADG and EPEF, and greater FCR, than chickens fed the normal energy diet (P < 0.05). During the d 23 to 30 and d 31 to 38 periods, birds in the LE + PN group had numerically the lowest FCR, though the difference was not significant (P > 0.05). In general, the low energy diet caused a greater FCR from d 1 to 38 (P < 0.05). When PN were added to the low energy diet, birds had an FCR similar to that of birds fed the normal energy diet.
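The pen-level one-way ANOVA described under Statistical analyses can be sketched as follows; scipy stands in for SPSS's GLM procedure, and the pen-level FCR values are hypothetical, not the trial's data.

```python
# Sketch of the per-pen one-way ANOVA described under "Statistical analyses".
# The paper used SPSS; scipy computes the same F test. Pen-level FCR values
# below are HYPOTHETICAL, not the trial's data.
from scipy import stats

ne_fcr   = [1.62, 1.60, 1.65, 1.63, 1.61, 1.64]   # hypothetical pens
le_fcr   = [1.70, 1.68, 1.72, 1.69, 1.71, 1.73]
lepn_fcr = [1.63, 1.66, 1.62, 1.64, 1.65, 1.61]

f, p = stats.f_oneway(ne_fcr, le_fcr, lepn_fcr)
print(f"F = {f:.2f}, P = {p:.4f}")  # if P < 0.05, follow up with LSD tests
```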
Serum anti-oxidation and intestinal morphology
The effects of PN on serum antioxidative status are shown in Table 3 (Exp. 1). There was no significant difference in serum antioxidation among the three groups. The malondialdehyde value in the low energy group was lower than in the other two groups (P = 0.09).

Table 4 shows the effect of dietary PN supplementation on the intestinal morphology of broilers at d 26 and 39 (Exp. 1). No statistical significance was observed for intestinal VH and VH:CD ratio in our study. Birds in the LE + PN group had a higher ileal VH (P = 0.13) and jejunal VH:CD ratio (P = 0.74) than the other two groups at d 26. At d 39, PN inclusion improved the jejunal and ileal VH:CD ratios by 2.5% and 8.5%, respectively, compared with the low energy group. Images of intestinal morphology are shown in Figs. 1 to 6. Compared with the LE group, the villi from birds in the LE + PN group were longer, organized, and generally neater, which indicates better gut health. This may explain the improved FCR and survival rate of these animals.

Table 5 shows the economic benefit of energy reduction and PN addition (PN cost included). Compared with the NE group, including the PN product in the low energy diet cut the feed cost by 0.07 CNY/kg BW. The LE + PN diet also increased the profit by 0.09 and 0.05 CNY/bird compared with the NE and LE diets, respectively. In addition, PN supplementation reduced the birds' feed intake, which brought a lower feed cost and better profit.

Table 6 shows the data from the energy and nutrient metabolism experiment (Exp. 2). The AMEn values of the NE and LE groups were consistent with our experimental design (−60 kcal/kg). Compared with the NE group, the AME and AMEn of birds fed the LE diet were lower by 1.9% (P < 0.05) and 1.8% (P < 0.05), respectively. The LE + PN treatment had no significant effect on AME and AMEn values compared with the LE group (2,863 vs. 2,852 kcal/kg). Moreover, dietary PN had no impact on dietary energy digestibility (72.9% vs. 73.1%; P = 0.29) or apparent protein digestibility (61.2% vs. 61.1%; P = 0.26).
Discussion

Growth performance
The results from this study showed that when dietary energy content was decreased by 60 kcal/kg, there was a negative effect on broiler growth and AME value. Many studies have been conducted to test the effect of dietary energy level on broiler growth performance and health. High-energy diets promoted the efficiency of feed utilization and maximized the growth rate of broilers (Leeson and Summers, 1991). It was also reported that restricting energy intake had a direct negative effect on growth rate (Leeson and Summers, 1997). These previous findings partly agree with our results.
One experimental diet was formulated to be relatively lower in ME in order to test the responses to PN supplementation. The improvement in performance observed when PN are included in low-energy diets has been reported previously (Amerah et al., 2009; Cowieson et al., 2010). To our knowledge, this is the first report of such effects in a large commercial setting. It has also been indicated that dietary supplementation with 100 mg/kg of a mixture of PN increased dietary AMEn (Bravo et al., 2011). In the present study, birds fed the low energy diet with PN showed no significant difference in BW gain, but had a lower FCR compared with birds fed the low energy diet alone. The results obtained in the present study confirm the improvement in feed efficiency by the mixture of PN (Bravo et al., 2011; Jamroz et al., 2005). This positive effect on feed efficiency is probably due to the ability of spices to increase bile secretion and the activity of pancreatic and brush border enzymes (Platel and Srinivasan, 2001). Botsoglou et al. (2002) revealed that oregano oil can increase the antioxidative status of broiler meat by reducing MDA values. Terenina et al. (2011) reported that PN could increase antioxidative enzyme activities and the concentration of the non-enzymatic antioxidant glutathione (GSH), and decrease the concentration of the lipid peroxidation product MDA, in the intestinal mucosa of broilers. According to Hsu et al. (2011), Portulaca extracts can improve liver GSH, SOD and hydrogen peroxidase activities and protect against induced oxidative stress in mice. In this study, however, no significant effect was observed between the LE and LE + PN groups; the different result may be due to the different PN or animal species used.
Serum anti-oxidative activity and intestinal morphology
Both VH and CD are important indicators of digestive health and are directly related to the absorptive capacity of the mucous membrane (Buddle and Bolton, 1992). However, the literature is equivocal regarding the use of PN as feed additives in relation to gut morphology (Zeng et al., 2015). In the current study, there was a tendency for PN-fed birds to have a greater VH and VH:CD ratio, which may explain their better feed conversion. The images also indicated that villi in the PN group were neatly arranged and the mucous membrane was thicker. These results are consistent with a previous study, in which PN increased villus height, reduced CD in the ileum, and also decreased the energy required for intestinal maintenance (Bravo et al., 2011).
Energy content and nutrient digestibility
It has been reported that PN improved nutrient digestibility and group uniformity (Sökmen et al., 2004). Pirgozliev et al. (2015) reported that PN did not affect dietary ME, but caused a significant improvement in the utilization of dietary energy, which did not always relate to growth performance. Bravo et al. (2011) indicated that PN combinations improved dietary energy content by 50 kcal/kg; this enhancement may be caused by a direct improvement in dietary energy digestibility or absorption, and by a reduction in the energy required for the maintenance of the digestive tract. It is thus expected that better results may be obtained when combinations of PN are used (Zeng et al., 2015). Our results showed that including the PN product in the low energy diet increased the AMEn content by 11 kcal/kg, but no improvement in energy or protein digestibility was observed; the different results may be due to different PN products and their activities. Meanwhile, rearing conditions should be taken into account in experiments involving the use of PN (Pirgozliev et al., 2015).
Conclusion
A low energy content in the diet can decrease broiler performance and lower the AME value and nutrient digestibility. Supplementing a low energy diet with PN can maintain FCR and thus increase the economic profit of broilers, apparently via improved gut health.
Conflicts of interest
We declare that we have no financial or personal relationships with other people or organizations that could inappropriately influence our work; there is no professional or other personal interest of any nature or kind in any product, service and/or company that could be construed as influencing the content of this paper.

Table abbreviations: AME = apparent metabolizable energy; AMEn = nitrogen-corrected AME; NE = diet with normal energy content; LE = NE diet with 60 kcal/kg removed. | 2019-05-07T13:49:56.017Z | 2019-04-19T00:00:00.000 | {
"year": 2019,
"sha1": "78164328ae0b65216451cfc1302e018008af390c",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.aninu.2019.03.004",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "006a4cf4c21409753062a2c8d9401f804cc8fe11",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
2532716 | pes2o/s2orc | v3-fos-license | International Journal of Modern Physics A, © World Scientific Publishing Company RECENT RESULTS USING THE OVERLAP DIRAC OPERATOR
I derive the overlap Dirac operator starting from the overlap formalism, discuss the numerical hurdles in dealing with this operator and present ways to overcome them.
Introduction
The overlap Dirac operator 1, derived from the overlap formalism 2 for the special case of vector gauge theories, is a way to realize exact chiral symmetry on the lattice. Exact chiral symmetry on the lattice does come at a price: numerical implementation of the overlap Dirac operator is significantly more expensive than for the Wilson or staggered operators. In spite of this numerical hurdle, we already have several physics results in quenched gauge theories using the overlap Dirac operator: (i) Evidence for spontaneous chiral symmetry breaking at zero temperature 3. (ii) Evidence for chiral symmetry breaking in the deconfined phase, possibly due to a dilute gas of instantons and anti-instantons 4. (iii) Evidence for a diverging chiral condensate in the two-dimensional U(1) case 5. (iv) A study of exact zero modes of overlap fermions in the adjoint representation, lending some support to the existence of fractional topological charge 6.
In this talk, I shall derive the overlap Dirac operator starting from the overlap formalism, discuss the numerical hurdles in dealing with this operator and present ways to overcome them.
The Overlap formalism
The determinant of the chiral Dirac operator C = σ_µ(∂_µ + iA_µ) can be realized on the lattice as an overlap of two many-body states 7,2, namely det C = ⟨0−|0+⟩, where |0±⟩ are the many-body ground states of a†H(m)a and a†γ5a, respectively. The a† and a are canonical fermion creation and destruction operators, and γ5H(m) is a massive Dirac operator on the lattice with the mass set to a value less than zero. One choice is the Wilson Dirac operator, H(m) = H_w(m). This realization of the chiral Dirac operator is natural, since C is an operator that maps between two different spaces, namely spinors in the (0,1/2) representation to spinors in the (1/2,0) representation. 1 Therefore C does not have an eigenvalue problem, and the determinant of C is a map between the highest forms in the two spaces connected by the operator C. Clearly, the overlap formula does not fix the phase of |0+⟩, since it is only defined as an eigenvector of a Hamiltonian, and this is how it should be, since the chiral determinant is a map between two different spaces. The details involved in the phase choice and possible gauge breaking are the subject of chiral gauge theories. For vector gauge theories, we want det CC† = |⟨0−|0+⟩|², and the phase choice does not matter, indicating a trivial cancellation of anomalies.
Computing the overlap of two many-body states seems like an insurmountable numerical task in four-dimensional theories, since one has to diagonalize H_w, form the many-body state from the negative-energy single-particle states, and compute the overlap as a determinant of a dense matrix half the size of H_w. But there is an elegant way to circumvent these steps by dealing directly with the many-body states, and this is the overlap Dirac operator 1.
The massless overlap Dirac operator is derived from the overlap formalism as follows. Let U be the unitary matrix that diagonalizes H_w, written in block form with respect to γ5 chirality as U = (α β; γ δ), so that ⟨0−|0+⟩ = det α up to a phase. Using det U = det α / det δ†, we derive |⟨0−|0+⟩|² = det D_o, with the massless overlap Dirac operator D_o = ½ [1 + γ5 ǫ(H_w)], where ǫ denotes the sign function. It is not immediately clear how this helps numerically, since one still has to deal with ǫ(H_w) without diagonalizing H_w. There are two possible approaches. One approach is to use Gegenbauer polynomials to represent the inverse square root of H_w², since ǫ(H_w) = H_w (H_w²)^(−1/2) 8,9. Typically one needs to go to a high-order polynomial, and this method is not expected to be efficient. The other approach is to use the rational approximation 10, where one approximates ǫ(H_w) as a sum of poles, ǫ(H_w) ≈ H_w Σ_k c_k/(H_w² + d_k). Using the method of multiple masses, one action of ǫ(H_w) on a vector can then be realized by a single conjugate gradient algorithm, independent of the number of poles. This makes the approach numerically quite attractive.
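To make the pole-sum idea concrete, the following sketch approximates the matrix sign function of a small random Hermitian matrix with the polar-type rational form ǫ_n(x) = (x/n) Σ_s 1/(x² cos²θ_s + sin²θ_s), θ_s = π(s − 1/2)/(2n), solving the shifted systems directly; in production code a single multi-shift conjugate gradient would handle all shifts at once. This is an illustrative stand-in under those assumptions, not the code used in the work described here.

```python
# Illustrative sketch (not the production code of this work): approximate
# eps(H) = sign(H) acting on a vector with the sum-of-poles (rational) form
#   eps_n(x) = (x/n) * sum_{s=1..n} 1 / (x^2 cos^2(t_s) + sin^2(t_s)),
#   t_s = pi (s - 1/2) / (2n),
# a standard polar-type approximation. Each pole costs one shifted solve of
# (cos^2(t_s) H^2 + sin^2(t_s)) x = v; in practice one multi-shift conjugate
# gradient handles all shifts at once.
import numpy as np

rng = np.random.default_rng(7)
dim = 40
A = rng.standard_normal((dim, dim))
H = (A + A.T) / 2
H /= np.linalg.norm(H, 2)          # scale the spectrum into [-1, 1]
v = rng.standard_normal(dim)

def sign_apply(H, v, n):
    """eps_n(H) v via n shifted solves (one multi-mass CG in practice)."""
    H2 = H @ H
    eye = np.eye(len(v))
    acc = np.zeros_like(v)
    for s in range(1, n + 1):
        t = np.pi * (s - 0.5) / (2 * n)
        acc += np.linalg.solve(np.cos(t)**2 * H2 + np.sin(t)**2 * eye, v)
    return (H @ acc) / n

# Exact eps(H) v from the eigendecomposition, for comparison only.
w, Q = np.linalg.eigh(H)
exact = Q @ (np.sign(w) * (Q.T @ v))

for n in (4, 8, 16, 32):
    err = np.linalg.norm(sign_apply(H, v, n) - exact) / np.linalg.norm(exact)
    print(n, err)  # error falls with n, but slowly near small eigenvalues
```

Note how convergence is slowest for eigenvalues near zero, which is exactly the issue taken up in the next section.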
Spectrum of the quenched H_w

ǫ(H_w) is discontinuous at the zeros of H_w. Approximations have to be accurate down to the lowest eigenvalue of H_w, and this can be a problem if H_w has very small eigenvalues. The density of the spectrum of H_w(m), ρ(λ), in a quenched ensemble has a non-zero ρ(0) at any fixed lattice coupling at the values of m that are relevant (m < m_c) 11. This numerical result has support from an analytical argument showing that small defects can already give rise to a gapless spectrum 12.
One can also show that a change in gauge field topology necessitates zero eigenvalues at any mass. To see this, assume we have a gauge field configuration with zero topology. Then H_w(m) has an equal number of positive and negative eigenvalues. Consider evolving from this configuration to another gauge field configuration with non-zero topology, whose spectrum has unequal numbers of positive and negative eigenvalues of H_w(m). As a function of the evolution, there is one configuration in the path where H_w(m) has an exact zero eigenvalue. In a discrete evolution scheme the exact zero will be avoided, but one can have arbitrarily small eigenvalues. Therefore one will have to live with very small eigenvalues of H_w(m) or its variants. Numerical techniques that deal with ǫ(H_w) will have to project out a few small eigenvectors and treat them exactly. On a finite lattice and at a fixed lattice spacing, the number of eigenvalues below a fixed number λ_min will grow with the volume, since ρ(0) is finite. This means that one has to project out more eigenvalues as one increases the volume, and/or go to a larger number of poles in the rational approximation. It is useful to compare the overlap formalism with the related method used to realize chiral symmetry on the lattice, namely domain wall fermions 13. This is a five-dimensional realization, and the effective overlap Dirac operator is obtained by setting H = H_d = log(T_w), where T_w is the transfer matrix in the fifth direction 14. The low-lying spectrum of H_d is completely governed by the low-lying spectrum of H_w, and hence the problems caused by a finite ρ(0) exist for domain wall fermions as well 15. In practice one works with a finite extent in the fifth direction (L_s), and this amounts to approximating ǫ(H_w) by tanh(½ L_s H_d). Clearly, small eigenvalues are not taken care of properly at finite L_s, and one will have to go to larger L_s as one increases the lattice volume at fixed lattice spacing. Current simulations using domain wall fermions 16 seem to indicate a significant effect due to finite L_s. One can avoid this by projecting out small eigenvalues and treating them exactly in the domain wall formalism 17.
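A minimal sketch of the projection idea just mentioned, reusing the sign_apply function from the earlier example: the few smallest-|λ| modes of H are handled exactly with their sign, and the rational approximation acts only on the deflated remainder, where it is accurate. The dense eigensolver here is purely for illustration; lattice codes use Lanczos- or Ritz-type methods for the low modes.

```python
# Sketch of low-mode projection (deflation) for eps(H): treat the n_low
# smallest-|lambda| eigenmodes exactly and apply the rational approximation
# only to the deflated remainder. Dense eigensolver = illustration only.
import numpy as np

def sign_apply_deflated(H, v, n_poles, n_low, sign_apply):
    w, Q = np.linalg.eigh(H)                 # Lanczos/Ritz in practice
    low = np.argsort(np.abs(w))[:n_low]      # smallest |eigenvalue| modes
    Qlow, wlow = Q[:, low], w[low]
    c = Qlow.T @ v
    exact_part = Qlow @ (np.sign(wlow) * c)  # exact sign on low modes
    v_defl = v - Qlow @ c                    # orthogonal complement
    return exact_part + sign_apply(H, v_defl, n_poles)
```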
Each action of ǫ(H_w) requires a conjugate gradient type algorithm, and therefore the solution of an equation of the form D_o(m)ψ = b requires a nested conjugate gradient. Is this numerically much more involved than domain wall fermions, which involve only one inversion of a higher-dimensional operator? One can write down a five-dimensional operator from which the required four-dimensional overlap Dirac operator is obtained by integrating out all but one fermion degree of freedom. An analysis of the condition numbers shows that the five-dimensional inversion is no less expensive than two nested conjugate gradients 18. In the nested case, it is easy to see that the condition number is proportional to the product of the condition number of H_w and the fermion mass, µ. This also turns out to be the case for the five-dimensional version and for conventional domain wall fermions. This shows that it is practical to work directly with the four-dimensional operator. | 2014-10-01T00:00:00.000Z | 2000-01-01T00:00:00.000 | {
"year": 2000,
"sha1": "7940d9a5abaee026bfc02aa3b41fece3dd084123",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-lat/0011017",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "295d490f085eda4d910544a37356319c7eb5bdd4",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
255984818 | pes2o/s2orc | v3-fos-license | Prevention of feline leishmaniosis with an imidacloprid 10%/flumethrin 4.5% polymer matrix collar
Leishmaniosis caused by Leishmania infantum is one of the most important vector-borne diseases affecting animals and humans worldwide. Dogs are considered the main reservoirs of the zoonotic forms, though in recent years the role of cats as reservoirs has been increasingly investigated. Feline leishmaniosis (FeL) occurs in endemic areas, and no specific preventive measures have been investigated so far. In this study, the efficacy of a 10% imidacloprid/4.5% flumethrin polymer matrix collar, licensed for tick and flea prevention, was assessed against FeL in a longitudinal study on 204 privately owned cats from the Aeolian islands (Sicily), an area highly endemic for the disease. From March to May 2015 [Study Day 0 (SD 0)], cats negative for FeL were collared (G1, n = 104) or left untreated (G2, n = 100). Diagnosis consisted of serology and qPCR on blood and conjunctival swabs, which were collected at baseline (SD 0) and at the end of the study (SD 360). Interim clinical examinations were performed on SD 210 (when collars were replaced in G1) and SD 270. Of the 159 cats which completed the study, 5 in G1 and 20 in G2 were positive for L. infantum infection in at least one of the diagnostic tests, leading to a yearly crude incidence of 6.3% and 25.0% in G1 and G2, respectively (P = 0.0026). This translates into an efficacy of the collar of 75.0% in preventing feline Leishmania infection. The collar was generally well tolerated, with no systemic adverse reactions; only a few local skin reactions were observed in the application area, in four out of 104 treated cats (3.8%). The 10% imidacloprid/4.5% flumethrin collar significantly reduced the risk of L. infantum infection in cats. To our knowledge, this is the first study in which a preventative strategy against feline Leishmania infection is assessed under natural conditions. These findings close a gap in veterinary medicine, in that they confirm this collar as a tool for reducing the risk of Leishmania infection in cats. Such a preventative tool could contribute to reducing the risk of the disease in animal and human populations when included in integrated leishmaniosis control programmes.
Background
Leishmaniosis caused by Leishmania infantum (Kinetoplastida: Trypanosomatidae) is a vector-borne parasitic disease affecting animals and humans worldwide. The disease in humans is included amongst the most important neglected tropical diseases, with up to 0.4 and 1.2 million cases per year for the visceral and cutaneous forms, respectively [1], and it has been the only tropical vector-borne disease endemic to southern Europe for decades [2]. Although dogs are regarded as the primary reservoirs of L. infantum in many endemic areas, other domestic and wild animal species have been implicated in the epidemiology of the infection as secondary reservoirs [3,4]. Since the first report of feline leishmaniosis (FeL) [5], the cat has been regarded as a resistant species, and its involvement has been considered negligible in the epidemiology of the infection [6]. The main reason for this assumption was the low number of clinical cases in cats, especially when compared with that of dogs living in the same endemic areas [7][8][9][10][11][12]. In recent years, the development of feline medicine, coupled with the employment of more refined serological and molecular protocols to diagnose the infection in cats, has provided clues for a better understanding of FeL [13,14]. Accordingly, cases of FeL have been increasingly reported in areas endemic for canine leishmaniosis, with prevalence rates up to 68.5% depending on the cat population studied and the diagnostic methodology [14]. Also, although the number of clinical cases has always been considered marginal, reports of clinical conditions due to FeL are increasing, either in cats suffering from immunosuppressive concurrent infections such as feline immunodeficiency virus (FIV) and feline leukemia virus (FeLV) or neoplastic diseases, or in animals without any evidence of co-infection [14]. Remarkably, the signs of FeL partially overlap those observed in diseased dogs, with skin lesions and lymph node enlargement being the most frequently reported [14][15][16]. Phlebotomine sand flies, the natural vectors of L. infantum, are generalist feeders and may take their blood meals from a variety of wild and domestic animals, including cats [17]. The infectiousness of L. infantum-infected cats has been demonstrated in xenodiagnosis studies for Phlebotomus perniciosus [18] and Lutzomyia longipalpis [19], two competent vectors. These data have ultimately provided further evidence of the possible role of cats as reservoirs of L. infantum. A recent study on vector-borne diseases (VBDs) of cats and dogs of the Aeolian Islands (Sicily, southern Italy), an endemic area for L. infantum, reported prevalences of 26% and 42% in cats and dogs, respectively, by serological and molecular methods [20]. In addition, a yearly incidence of L. infantum infection of up to 15% was assessed in cats exposed to one transmission season, indicating that, like dogs, cats living in endemic areas are exposed to the infection [20]. Cats are now recognized as a potential domestic reservoir of L. infantum, and strategies to prevent infection in this animal species have been advocated [14,16].
Currently, the most promising strategy for the prevention of Leishmania infection in dogs is the use of synthetic pyrethroids in different formulations (e.g. spot-on, collar and spray) with repellent properties against sand flies [4]. However, most pyrethroids, except for flumethrin, are toxic to cats [21], hampering studies on the prevention of Leishmania infection in this animal species [14,16]. A polymer matrix collar containing a combination of 10% imidacloprid and 4.5% flumethrin (Seresto® collar, Bayer Animal Health GmbH, Monheim, Germany), hereafter referred to as the collar, has recently been registered for use in cats for the prevention of flea and tick infestations, with repellent (anti-feeding) activity [22]. The same collar is also available for the control, for up to 8 months, of ticks and fleas in dogs [23]; though not registered with a claim against sand flies, the collar proved highly effective (i.e. efficacy from 88.3 to 100%) in reducing the risk of L. infantum infection in dogs living in endemic areas [24][25][26].
In the present study, we investigated the efficacy of the collar in the prevention of feline Leishmania infection in a cohort of privately owned cats living in the Aeolian archipelago, where FeL caused by L. infantum is highly endemic.
Study site and animals
The study was conducted from March 2015 to April 2016 in Lipari and Vulcano, two of the main islands of the Aeolian archipelago (Tyrrhenian Sea, Sicily, Italy, 38.4724°N, 14.9541°E), a geographical area recognized as endemic for canine and feline VBDs, where an overall prevalence of 26% and an incidence of 15% of L. infantum infection were recorded in cats [20]. Animals were enrolled in the study from March to May 2015, before the beginning of the sand fly season, and did not leave the study area or travel elsewhere. Cats enrolled in the study were 10 weeks of age or older, in satisfactory general health condition, with constant access to or living outdoors, and negative for L. infantum infection by serology, quantitative real-time PCR (qPCR) and cytology (see below).
Study design
This study was a Good Clinical Practice (VICH GL9 GCP) (http://www.vichsec.org) negatively controlled, partly blinded and randomised field study conducted on privately owned cats. The study protocol was approved by the Italian Ministry of Health, and animals were included only after the signature of an informed consent by the owner. At inclusion [Study Day 0 (SD 0)], cats were identified, physically examined, weighed and allocated to treatment groups (G1 = Seresto® collar for cats; G2 = untreated control) following a "per household" random allocation plan, in order to avoid contact between cats wearing the collar and untreated ones. Animals were sampled for blood and conjunctival swabs, and those assigned to G1 were treated with the collar according to the package leaflet. Briefly, the collar was fastened around the cat's neck and adjusted according to label instructions until a comfortable fit was achieved, such that two fingers could be inserted between the collar and the neck when fastened. Animals assigned to the G2 group were left untreated and served as negative controls.
All the included cats were clinically examined and weighed at SDs 210, 270 and 360 (Fig. 1). In addition, at SD 360 (study closure) cats were sampled again for blood and conjunctival swabs. Collars in cats of the G1 group were replaced at SD 210, and at any time during the study in case of collar loss or damage. During the study, cats remained with their owners and were managed as per normal routine, without any containment measure or restriction. The owners were asked to observe their animals daily and to report, as soon as noticed, any abnormality in the general health of the animals, as well as losses of or damage to the collar in cats of the G1 group. No treatments with products of known efficacy against L. infantum vectors or ectoparasites were allowed throughout the study. For animals in the G2 group, in the case of severe flea infestation, rescue treatment with Advantage® for cats (imidacloprid, Bayer Animal Health GmbH, Monheim, Germany) was allowed for animal welfare reasons.
Sample collection and laboratory procedures
Blood samples of about 5 ml were collected from the jugular vein, of which 2 ml were split into two anticoagulant (K3EDTA) tubes. From the first tube, two capillary tubes were filled and centrifuged for buffy coat extraction and the preparation of smears on glass slides. The remaining blood was processed for a complete blood count using an automated blood cell counter (ProCyte Dx®, IDEXX Laboratories, Westbrook, Maine, USA). The blood in the second EDTA tube was processed and analysed for the molecular diagnosis of L. infantum. Three millilitres of blood were stored in a tube with clot activator, from which serum was obtained by centrifugation (1,800 × g for 10 min) and stored frozen (−20 °C) until analysis. Conjunctival swabs were collected for the diagnosis of L. infantum infection using sterile cotton swabs manufactured for bacteriological isolation. One sample per eye was collected by rubbing the swab against the surface of the lower eyelid to collect exfoliating cells. Conjunctival swabs were kept in sterile tubes and stored frozen (−20 °C) until analysis. Serum samples collected on SD 0 and SD 360 were tested for anti-L. infantum antibodies using an immunofluorescence antibody test (IFAT) protocol, as described elsewhere [27]. The IFAT assay was prepared using a cat-specific conjugate (anti-cat IgG; Sigma-Aldrich, St. Louis, Missouri, USA), and a positive control, obtained from the serum of an L. infantum-diseased cat, was included on each slide. Samples were scored as positive when they produced a clear cytoplasmic and membrane fluorescence of promastigotes at a cut-off dilution of 1:80 for SD 360 (study closure); animals whose SD 0 (inclusion) sera were reactive at a 1:40 dilution were excluded from the study. Positive sera were titrated by serial dilutions until a negative result. Blood and conjunctival swab samples collected on SD 0 and SD 360 were analysed for L. infantum by qPCR.

(Fig. 1: Time points of the study and scheduled activities.)

Briefly, genomic DNA was extracted from blood and conjunctival swabs using the QIAamp DNA Micro Kit (Qiagen, Milan, Italy), following the producer's recommendations. Thereafter, a fragment (120 bp) of the L. infantum minicircle kinetoplast DNA (kDNA) was amplified by qPCR using a protocol described elsewhere [28]. For all PCR tests, positive (DNA of pathogen-positive blood samples) and negative (no DNA) controls were included.
Smears of buffy coat were prepared as described above and stained using May-Grünwald-Giemsa quick stain (Bio-Optica, Milan, Italy). Intracellular inclusions or free amastigote forms of L. infantum were searched for in each smear by examining the entire stained area at low magnification (×100) and representative areas at high magnification (×1,000) for 10 min. All samples and smears were identified using a unique alphanumerical code, and the laboratory personnel conducting the analyses were blinded to the treatment groups.
Entomological survey
Light and sticky traps were used to monitor the presence and activity of sand flies during the study period. From May to December 2015, traps were placed monthly in eight different sites (five in Lipari and three in Vulcano). Traps were placed near some of the households whose cats were included in the study (Fig. 2). In each site and for each trapping session, one light trap and sticky traps totalling 2 m² were set and left working for 2 consecutive days (sticky traps) or 2 consecutive nights, i.e. from 6.00 pm to 7.00 am (light traps). Trapping activity was concluded at each site after two consecutive negative trapping sessions. The sand flies collected were separated from other insects with the aid of a stereomicroscope, differentiated by sex, and stored in vials containing 70% ethanol according to site and date of capture. Sand fly specimens were prepared for microscopic observation as described elsewhere [29] and identified to species level using morphological keys [30].
Data management and statistical analyses
A minimum sample size of 80 cats per group was estimated based on the following assumptions: confidence level of 95%, power of 80%, and expected incidences of L. infantum infection of 2% and 12% in treated and untreated cats, respectively. In order to allow for a drop-out of about 20% during the study period, a minimum of 100 cats were included in each group. A cat was considered infected by L. infantum if it tested positive in at least one of the diagnostic tests employed (IFAT, qPCR on blood or conjunctival swabs, or buffy coat cytology). The efficacy in preventing L. infantum infection was based on the yearly crude incidence (YCI), the percentage of infected cats in each group on SD 360, calculated in each group as follows: YCI = number of infected animals/(number of negative animals included − number of animals not completing the study) × 100.
The difference between the YCI in G1 and G2 was tested for statistical significance using the Chi-square test. Efficacy in preventing Leishmania infection was calculated using the following formula: Efficacy = [(A − B)/A] × 100, where A is the percentage of infected animals in the control group and B is the percentage of infected animals in the treated group.
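For illustration, both calculations can be reproduced from the counts reported in the Results below (5/79 infected in G1, 20/80 in G2); scipy's default Yates-corrected chi-square for a 2x2 table gives a value matching the one reported in this paper.

```python
# Recomputing YCI, efficacy, and the chi-square test from the counts in the
# Results (5/79 infected in G1, 20/80 in G2). scipy's chi2_contingency
# applies the Yates continuity correction for 2x2 tables by default, which
# is consistent with the chi-square values reported here.
from scipy.stats import chi2_contingency

g1_pos, g1_n = 5, 79    # treated (collar)
g2_pos, g2_n = 20, 80   # untreated controls

yci_g1 = 100 * g1_pos / g1_n                  # 6.3%
yci_g2 = 100 * g2_pos / g2_n                  # 25.0%
efficacy = 100 * (yci_g2 - yci_g1) / yci_g2   # ~74.7%, reported as 75%

table = [[g1_pos, g1_n - g1_pos], [g2_pos, g2_n - g2_pos]]
chi2, p, dof, _ = chi2_contingency(table)
print(f"YCI G1 = {yci_g1:.1f}%, G2 = {yci_g2:.1f}%, efficacy = {efficacy:.1f}%")
print(f"chi2 = {chi2:.3f}, df = {dof}, P = {p:.4f}")  # ~9.10, P ~ 0.0026
```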
Results
A total of 204 cats (104 in G1 and 100 in G2), belonging to 80 owners, were enrolled in the study on SD 0. The study population was composed of 111 females (54.4%) and 93 males (45.6%), with ages ranging from 6 months to 15 years. During the study, 45 cats (25 from G1 and 20 from G2) were removed or lost to follow-up for different reasons (e.g. animal lost, collar lost and not replaced within two days, adverse events or suspected adverse drug reaction), whereas 159 animals (79 from G1 and 80 from G2) completed the study (Table 1). Amongst the excluded cats, 18 (8 from G1 and 10 from G2) were removed after enrolment because they were found to be infected by L. infantum in samples collected on SD 0. In samples collected at the study closure (SD 360), 5 out of 79 cats in G1 and 20 out of 80 in G2 scored positive for L. infantum infection in at least one of the diagnostic tests (Table 2). The majority of infected animals tested positive by IFAT (15/25; 60%), whereas the remaining 10 cats (40%) were positive only by qPCR. Only three cats (3/25; 12%) tested positive to IFAT and qPCR on blood and/or conjunctival swab simultaneously. None of the cats tested positive by cytology on buffy-coat smears, either at inclusion (SD 0) or at study closure (SD 360). The YCI was 6.3% in G1 and 25.0% in G2 (χ² = 9.095, df = 1, P = 0.0026), leading to 75% efficacy of the collar in preventing FeL infection. At the study closure all cats were in good general health; however, some of them showed systemic signs such as peripheral lymph node enlargement (G1 = 15.2%; G2 = 35.0%) and splenomegaly (G1 = 5.1%; G2 = 21.3%). Clinical signs were more frequent in animals of the G2 group than in those of G1 (χ² = 7.266, df = 1, P = 0.0070).
During the study, 18 cats lost the collar once and one cat twice; collars were replaced within 2 days, except in two cases in which the loss was not reported by the owner, resulting in the exclusion of those animals from the study (Table 1). The collar was well tolerated, and only a few local skin reactions were observed at the application area, in four out of the 104 treated cats (3.8%). Of these, one showed mild alopecia, two mild dermatitis and pruritus, and one an ulcerative dermatitis. Except for the latter case, in which the collar was removed, the animal excluded from the study and treated topically (i.e. with antibiotic and anti-inflammatory drugs), all the other cases recovered in a few days without the need to remove the collar. Heavy flea infestations and the associated itchy dermatitis were recorded in 16 cats of the G2 group; for these animals, rescue treatments with a commercial spot-on product containing imidacloprid (Advantage® for cats, Bayer Animal Health GmbH, Monheim, Germany) were authorized on a welfare basis.
Discussion
The Seresto® collar, containing a combination of 10% imidacloprid and 4.5% flumethrin, proved effective in reducing the risk of infection by L. infantum in cats, thus representing a tool for controlling FeL in endemic areas. The YCI recorded here in G2, i.e. 25%, was higher than that previously reported in cats (15%) in the same areas [20], but similar to that of dogs (i.e. 27%). Cats included in this trial were at high risk of L. infantum infection, the study being carried out in an area highly endemic for FeL. The vast majority of cats lived constantly outdoors in suburban or rural areas; in addition, animals of the control group were not treated with any insecticide, except in cases of rescue treatment against heavy flea infestation. Although cats appear to be more resistant to L. infantum than dogs [6], the present data suggest that the likelihood of infection is at least similar in these two hosts, as it relies on the risk of being exposed to sand fly bites, also considering that some vectors display a catholic (i.e. broad) feeding behaviour [17].
Diagnosis of Leishmania infection in cats is challenging [14]. The majority of infected cats scored positive to IFAT, but it should be noted that 10 out of 25 infected animals tested positive only by qPCR, with blood being positive more frequently (9/10) than conjunctival swabs (5/10) (χ² = 2.143, df = 1, P = 0.1432).

[Table 2. Results of serology (IFAT) and qPCR on blood and conjunctival swabs for Leishmania infantum in cats treated with the Seresto® collar (G1) or in untreated controls (G2) after being exposed to one transmission season in a highly endemic area.]

Although the comparison of results among different studies is not always possible, our findings are in overall agreement with those reported in previous surveys that combined serological and molecular tests to investigate the prevalence of feline L. infantum infection [20,31,32]. On the other hand, conjunctival swabs have recently been considered a sensitive, non-invasive technique for the molecular diagnosis of L. infantum infection in both dogs and cats [33-35], displaying positive predictive value in animals with active infection or disease, and a substantial agreement between serological and molecular tests [34]. In the present study, the purpose of diagnosis was to detect either exposure to infective sand fly bites or active infections for which seroconversion had already occurred. Therefore, the variety of serological and molecular results observed reflects the different infection stages in which exposed animals may be found. Given this variety of patterns, it is strongly advisable to combine serological and molecular diagnostic tests when the purpose of diagnosis is to ascertain exposure to L. infantum infection. In many cases, L. infantum-infected cats remain apparently healthy, and progression to clinical illness may be associated with immunosuppressive conditions caused by concurrent diseases. A natural predisposition for a protective cell-mediated immune response to Leishmania infection has also been hypothesized for cats [16]. Retroviral infections or other debilitating diseases (e.g. neoplastic diseases) have sometimes been associated with clinical FeL or subclinical L. infantum infection [14], but not in a previous study in the Aeolian archipelago, where these infections are rare among the examined cat populations [20]. It should also be noted that the mean age of enrolled animals was less than three years, and those testing positive at study closure were most likely infected for the first time. These findings may account for the absence of clinical cases of FeL among positive cats in this study, given that leishmaniosis usually evolves as a chronic disease with a long incubation period [16].
The collar proved to be safe and, with the exception of a few local reactions at the application site, no adverse events were evaluated as being product-related. Local reactions were mainly dermal irritations, likely caused by mechanical rubbing of the collar over the fur and skin, and were similar in frequency and type to those observed in previous studies [22,23]. All the skin reactions occurred in the first weeks (1-4) after collar application and healed spontaneously after a slight loosening of the collar, with the exception of one case in which the collar was removed to allow better topical treatment of the lesion. The slow-release formulation makes the collar an ideal device for the drug-sensitive feline species and allows the use of flumethrin, a potent, fast-acting acaricide with repellent properties, in a species whose metabolic pathways preclude the application of any of the other current pyrethroids [21]. An additional safety feature is the collar's release system, which makes it particularly secure for free-roaming cats. Indeed, although all the collared cats enrolled in the study had access to the outdoors, not a single case of hooking or strangling caused by the collar was observed.
The entomological survey confirmed the presence of competent vectors of L. infantum at all the monitored sites, namely P. perniciosus and P. neglectus, both regarded as among the most important vectors of L. infantum in the Mediterranean basin. This finding is in agreement with previous surveys conducted recently at the same latitude [29]. Few studies have investigated the sand fly fauna of the Aeolian islands, and in the sole survey previously carried out in the archipelago (Lipari and Filicudi islands) using sticky traps, P. perniciosus was the only species captured [36]. The present study therefore adds two further species to those reported in the archipelago, one of which (P. neglectus) is a proven vector of L. infantum. Interestingly, during the survey P. perniciosus was found in sites characterized by different environments, i.e. urban, peri-urban and rural (Table 3).

[Table 3. Sites and months of capture of Phlebotomus perniciosus in the study area. At each site, one light trap and sticky traps covering a total of 2 m² were used.]

The longest presence and activity of P. perniciosus was, however, recorded at rural sites on both Lipari and Vulcano, with constant activity from late May to October and peaks in July and August. This may represent the period of highest risk of exposure to L. infantum infection, especially in mid-summer, when tourists and their animals arrive in large numbers to spend their holidays on these islands.
Conclusions
This study shows that the Seresto® collar, containing a combination of imidacloprid and flumethrin, is safe and effective in reducing the risk of feline L. infantum infection. The collar currently represents the only available preventive measure for FeL. Treatment should be strategically adopted either to provide individual protection to cats living in, or travelling to, L. infantum-endemic areas, or to reduce the potential of infected cats to act as reservoirs of the pathogen. | 2023-01-19T22:12:55.750Z | 2017-07-14T00:00:00.000 | {
"year": 2017,
"sha1": "f62826ff2d1627727f3b6d496fc51070b548c54c",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s13071-017-2258-6",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "f62826ff2d1627727f3b6d496fc51070b548c54c",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": []
} |
201165857 | pes2o/s2orc | v3-fos-license | Interleukin-13, Interleukin-10, Interferon-γ and IDO Production in Response to Home Dust Mites in Allergic Asthma
1Magister Program in Biomedical Sciences, Universitas Indonesia, Jl. Salemba Raya No.5, Jakarta, Indonesia 2Department of Parasitology, Faculty of Medicine, Universitas Indonesia, Jl. Salemba Raya No.5, Jakarta, Indonesia 3Faculty of Medicine, Universitas Indonesia, Jl. Salemba Raya No.5, Jakarta, Indonesia 4Division of Allergy Immunology, Department of Internal Medicine, Universitas Indonesia, Dr. Cipto Mangunkusumo Central General Hospital, Jl. Diponogoro No.71, Jakarta, Indonesia
Introduction
Allergic diseases are the most common immune disorders worldwide. More than 30% of the world population has allergic symptoms, the underlying abnormality being categorized as type-1 hypersensitivity.(1) In the last decade, diseases based on allergic mechanisms, such as rhinitis, bronchial asthma, and dermatitis, have continued to increase in prevalence. The prevalence of asthma worldwide has risen over the past few decades and is estimated to reach 400 million by 2025.(2,3) Approximately 50-80% of atopic asthma worldwide is triggered by a hypersensitivity response to allergens from house dust mites (HDMs). Meanwhile, 90% of asthma patients in Indonesia are vulnerable to exposure to house dust and HDMs. HDMs exist in almost all regions globally and are a significant factor underlying allergic bronchial asthma, making them the largest source of indoor allergens. HDMs affect 2% of the world population; in Indonesia, with its high humidity, the species with the greatest influence are Dermatophagoides pteronyssinus (Der p, 85%) and Dermatophagoides farinae (Der f, 47%).(2,3) The balance between T helper 1 (type-1 proinflammatory) cells, T helper 2 (type-2 proinflammatory) cells, and the anti-inflammatory control function of regulatory T cells is regarded as closely related to pathogenesis. The difference in immune sensitization to allergic symptoms after HDM exposure between atopic asthma and non-atopic (normal) subjects has been related to reduced function of regulatory T cells acting through anti-inflammatory pathways. Atopic subjects display a contrasting allergen-specific memory Th cell pattern.(3) In HDM allergy, Th2 cells play an important role in allergic inflammatory responses, including production of immunoglobulin E (IgE), recruitment of eosinophils into tissues, mucus production, facilitation of endothelial recruitment of inflammatory cells into affected lungs, and modulation of respiratory smooth muscle contraction. One regulatory role of regulatory T cells is mediated by interaction with dendritic cells; this interaction produces indoleamine 2,3-dioxygenase (IDO), an enzyme that controls lymphocyte growth.(3)
It is expected that the results of this study can contribute to the development of approaches for managing allergy-related bronchial asthma.
Methods
The study population comprised subjects with moderate persistent bronchial asthma and non-atopic (non-asthmatic) subjects. The inclusion criteria were age 30-59 years, residence in Purwokerto for at least 3 years, and a history of moderate persistent bronchial asthma (for atopic asthma subjects) or no history of asthma and atopy (for non-atopic subjects). The study protocol was approved by the Research Ethics Committee of the Faculty of Medicine, Universitas Indonesia (No. 1251/UN2.F1/ETIK/2018).
Subjects underwent a skin prick test (SPT) with HDM Der p allergen (Stallergenes Company, Kamp-Lintfort, France). The test was performed on the volar surface of the forearm, at least 2 cm from the elbow and wrist creases. The superficial layer of the skin was pierced using a special needle. The reaction was considered positive if itching and erythema were confirmed by a distinctive induration that could be seen and palpated. The largest diameter (D) and the smallest diameter (d) of the reaction were measured, and wheal size was expressed as (D+d)/2. Measurements were made by circling the induration with a pen, transferring the outline onto paper, and then measuring the diameters.(4,5) Venous blood was drawn from each subject to isolate peripheral blood mononuclear cells (PBMCs), i.e. nucleated blood cells such as lymphocytes, monocytes and macrophages. PBMC isolation followed the Ficoll-Histopaque (Sigma-Aldrich, St. Louis, USA) density-gradient method, in which the density difference between the cells and the medium separates the blood into several layers during centrifugation.(6) PBMCs were given 3 stimulants: phytohemagglutinin (PHA) (Sigma-Aldrich) as a positive control, Roswell Park Memorial Institute (RPMI) medium (Sigma-Aldrich) as a negative control, and Der p allergen extract (Stallergenes Company). Approximately 500,000 cells/mL of PBMCs were cultured in 96-well microplates and incubated at 37°C in a 5% CO₂ incubator for 72 hours. After 72 hours, the supernatant was collected from each PBMC culture by centrifugation.
Levels of interleukin (IL)-13, IL-10 and interferon (IFN)-γ in the supernatants were measured using a Luminex multiplex immunoassay (Thermo Fisher Scientific, Waltham, USA); net median fluorescence intensity (MFI) values were converted to concentrations in pg/mL via a regression curve for each cytokine. IDO levels in the supernatants were measured using a sandwich enzyme-linked immunosorbent assay (ELISA) (Wuhan USCN Business Co, Houston, USA); optical density (OD) readings were converted to concentrations in ng/mL via a standard curve.
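The conversion from raw readout to concentration deserves a brief illustration. The text specifies only that regression/standard curves were used; the sketch below assumes a four-parameter logistic (4PL) fit, a common choice for Luminex and ELISA data, and the standard concentrations and readouts are hypothetical placeholders rather than values from this study.

```python
# Hedged sketch of readout-to-concentration conversion via a standard
# curve. The 4-parameter logistic (4PL) model is an assumption (the
# paper says only "regression curve"), and all numbers are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    # a: response at zero dose; d: response at infinite dose;
    # c: inflection point (EC50); b: slope factor
    return d + (a - d) / (1.0 + (x / c) ** b)

# Hypothetical standards: known concentrations (pg/mL) and net MFI readouts
std_conc = np.array([2.4, 9.8, 39.1, 156.3, 625.0, 2500.0])
std_mfi = np.array([55.0, 180.0, 620.0, 2100.0, 6400.0, 14500.0])

params, _ = curve_fit(four_pl, std_conc, std_mfi,
                      p0=[50.0, 1.0, 300.0, 16000.0], maxfev=10000)

def mfi_to_conc(y, a, b, c, d):
    # Invert the 4PL to recover concentration from a net MFI value
    return c * (((a - d) / (y - d)) - 1.0) ** (1.0 / b)

sample_mfi = 3500.0                      # net MFI of an unknown sample
print(mfi_to_conc(sample_mfi, *params))  # interpolated concentration, pg/mL
```

The same inversion applies to the ELISA OD values, with the standard curve fitted per plate.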
Data were analyzed using SPSS version 22 for Windows 8 (IBM, New York, USA) to examine the relationships between the activation patterns of IL-13, IL-10, IFN-γ and IDO in subjects with HDM-related asthma and in non-asthmatic subjects. To compare the cellular responses to HDM allergens between atopic asthma subjects and non-atopic controls, the nonparametric Mann-Whitney test was used, while the nonparametric Spearman rho correlation test was used to assess the association between allergic status and cellular responses.
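As a minimal illustration of the two nonparametric tests named above, the sketch below uses scipy in place of SPSS; all values are hypothetical placeholders, not data from this study.

```python
# Hypothetical sketch of the statistical comparisons described above.
from scipy.stats import mannwhitneyu, spearmanr

# Hypothetical per-subject cytokine levels (pg/mL) in each group
atopic = [12.1, 30.5, 8.7, 44.2, 19.9]
nonatopic = [25.3, 51.0, 33.8, 60.4, 41.7]

# Between-group comparison of cellular responses (two-sided)
u_stat, p_value = mannwhitneyu(atopic, nonatopic, alternative="two-sided")

# Spearman rho: association between allergy severity and a cellular
# response; both arrays are hypothetical paired per-subject values
spt_wheal = [3.0, 5.5, 4.0, 6.5, 2.5]   # e.g. SPT wheal size (mm)
il13 = [12.1, 30.5, 8.7, 44.2, 19.9]
rho, p_rho = spearmanr(spt_wheal, il13)
print(f"Mann-Whitney P = {p_value:.3f}; Spearman rho = {rho:.2f} (P = {p_rho:.3f})")
```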
Results

This in vitro study was conducted on PBMC cultures. The total number of subjects was 46: 25 subjects with moderate persistent asthma and a positive SPT to allergens (atopic asthma), and 21 subjects without asthma and with a negative SPT to allergens. The characteristics of the study subjects are shown in Table 1.
The IFN-γ levels measured in PBMC cultures stimulated with PHA (positive control), RPMI (negative control), and Der p allergen in the atopic and non-atopic groups are shown in Figure 1. Mean IFN-γ production in the positive control did not differ significantly between groups (p=0.903). In the negative control, IFN-γ production was higher in the non-atopic group (p=0.08). Likewise, in the allergen-stimulated supernatants, IFN-γ production was higher in the non-atopic group, although not statistically significantly so (p=0.078). The comparison across the stimulants PHA, RPMI and Der p within the atopic and non-atopic asthma groups showed highly significant differences (Kruskal-Wallis test, p<0.001).
The IL-13 levels measured in PBMC cultures stimulated with PHA, RPMI and Der p allergen in the atopic and non-atopic asthma groups are shown in Figure 2. Mean IL-13 production in the positive controls (p=0.522), negative controls (p=0.475) and allergen-stimulated supernatants (p=0.523) did not differ significantly between the atopic and non-atopic groups. Nevertheless, judging by the mean values, the atopic asthma group produced more IL-13 than the non-atopic group.
The IL-10 levels measured in PBMC cultures stimulated with PHA, RPMI and Der p allergen in the atopic and non-atopic asthma groups are shown in Figure 3. Der p allergen-stimulated supernatants showed higher IL-10 production in the non-atopic group, a statistically significant difference (p=0.015). The IDO levels measured in PBMC cultures stimulated with PHA, RPMI and Der p allergen in the two groups are shown in Figure 4. Mean IDO production in the positive controls (p=0.462) and negative controls (p=0.549) did not differ significantly between the atopic and non-atopic groups. However, Der p allergen-stimulated supernatants showed higher IDO production in the non-atopic group, a statistically significant difference (p=0.007). The comparison across the stimulants PHA, RPMI and Der p within the atopic and non-atopic groups showed mean differences that tended to be higher for Der p, although not statistically significant (Kruskal-Wallis test, p=0.084).
Discussion
The IFN-γ levels measured in PBMC cultures stimulated with PHA, RPMI and Der p allergen showed that levels in the positive controls, negative controls and allergen-stimulated supernatants differed significantly within both the atopic and non-atopic asthma groups, indicating that the PBMCs responded well to stimulation with PHA and Der p allergen. The response to PHA as positive control is higher because PHA is a nonselective mitogen (affecting various lymphocyte subpopulations), whereas allergens act selectively on T lymphocytes.(7,8) Although the between-group comparison of levels did not show a statistically significant difference, the trend in IFN-γ production was higher in the non-atopic group than in the atopic asthma group.
Der p allergens, which are cysteine proteases, have been shown in a previous study to disrupt tight junctions in epithelial cells of healthy subjects and to activate protease-activated receptor (PAR)2, thereby inducing E-cadherin disruption at intercellular contacts.(9) This facilitates allergen access to submucosal cells and increases allergic inflammation. The decrease in E-cadherin in the bronchial epithelium increases the expression of proinflammatory factors and promotes Th1 cell differentiation. In addition, activation of PAR2 by proteases induces intracellular signaling activity, including the nuclear factor-κB (NF-κB) and extracellular signal-regulated kinase (ERK) pathways, with an increase in proinflammatory IFN-γ as the main effector.(9-11) In the atopic asthma group, levels of IL-13, a Th2 proinflammatory cytokine, appeared higher than in the non-atopic group; although the difference was not statistically significant, a tendency toward different IL-13 activation patterns in the two groups could be seen. This is in line with previous research showing that Der p can activate toll-like receptor (TLR)4 signal transduction, increasing the expression of IL-6 and major histocompatibility complex (MHC) II, which supports the differentiation of naïve T cells into Th2 cells.(10,12) This is corroborated by a study stating that Der p, which carries cysteine protease allergens, can increase the polarization of naïve T cells into Th2 cells and the production of Th2 proinflammatory cytokines.(9,13) The secretion of Th2 cytokines has the clinical effect of increasing eosinophil recruitment, mast cell activation, and B-cell differentiation with class-switching to IgE. IL-13 levels also mark the entry of the inflammatory reaction into its chronic phase.(9,11) The lack of significance in IL-13 levels may be attributable to the asthma exercise programme performed by the study subjects for 4 months; physical exercise can reduce Th2 proinflammatory cytokines and increase anti-inflammatory cytokines, as previously reported.(14,15) Analysis of IL-10 activation patterns in the two groups revealed a greater increase in IL-10 levels in the non-atopic group than in the atopic asthma group. Increased levels of the anti-inflammatory cytokine IL-10 can inhibit proinflammatory cytokine production through a direct mechanism targeting immune effectors and an indirect mechanism modulating immune function, for example by preventing dendritic cell differentiation; this inhibits co-stimulation and antigen presentation and decreases chemokine secretion. In addition, increased production of anti-inflammatory cytokines can also be driven by sustained proinflammatory production.(16) It has previously been shown that low IL-10 levels correlate strongly with asthma pathogenesis.(17) The same pattern was seen in the IDO measurements from PBMC cultures stimulated with RPMI, PHA and Der p allergen: IDO levels in the non-atopic group were significantly higher than in the atopic group.
PBMCs cultured with stimulant antigens derived from Der p allergens can respond with IDO expression, but the results may differ with other antigens, because the response is a memory PBMC response to secondary exposure and is therefore specific.(8,11) IDO is an enzyme that limits tryptophan (Trp) levels along the kynurenine (Kyn) pathway. IDO is widely expressed in various cell types, including leukocytes and antigen-presenting cells (APCs).(18) IDO is induced in dendritic cells (DCs), limiting inflammation and preventing excessive host responses. Low IDO activity has previously been observed in asthmatic and atopic non-asthmatic patients.(19,20) Several studies have indicated that IDO inhibits Th2-mediated airway proinflammation but does not appear to influence the proinflammatory Th1 immune response.(20-22)

Conclusion

The cellular immune profile of subjects with allergic asthma to Dermatophagoides pteronyssinus (Der p) is characterized by a type-2 inflammatory response that is dominant relative to type-1 inflammation (a higher IL-13 to IFN-γ ratio) and to anti-inflammatory activity (a higher IL-13 to IL-10 ratio). The decline in IDO production in subjects with allergic asthma to Der p is thought to be related to the low cellular immune response in expressing IFN-γ compared with IL-13. | 2019-08-23T02:03:42.886Z | 2019-08-01T00:00:00.000 | {
"year": 2019,
"sha1": "7a291e65626cf4bf61c23d170b033fcb5e9566b0",
"oa_license": "CCBYNC",
"oa_url": "https://inabj.org/index.php/ibj/article/download/714/434",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "8a16958e894c12e87e9b59b8fafc8c1bf88e97b8",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
17802150 | pes2o/s2orc | v3-fos-license | Beyond directed evolution: Darwinian selection as a tool for synthetic biology
Synthetic biology is an engineering approach that seeks to design and construct new biological parts, devices and systems, as well as to re-design existing components. However, rationally designed synthetic circuits may not work as expected due to the context-dependence of biological parts. Darwinian selection, the main mechanism through which evolution works, is a major force in creating biodiversity and may be a powerful tool for synthetic biology. This article reviews selection-based techniques and proposes strict Darwinian selection as an alternative approach for the identification and characterization of parts. Additionally, a strategy for fine-tuning of relatively complex circuits by coupling them to a master standard circuit is discussed.
Natural selection is the blind watchmaker, blind because it does not see ahead, does not plan consequences, has no purpose in view. Yet the living results of natural selection overwhelmingly impress us with the appearance of design as if by a master watchmaker, impress us with the illusion of design and planning.
(Richard Dawkins, The Blind Watchmaker)

Under changing conditions of life, there is no logical impossibility in the acquirement of any conceivable degree of perfection through natural selection.
(Charles Darwin, On the Origin of Species)
Synthetic biology: features and tools
Synthetic Biology (SB) is more an approach than a discipline: a framework that includes bio-engineering, systems biology, metabolic engineering and many other disciplines, encompassing the design and construction of new biological parts, devices and systems, as well as the re-design of existing components. SB has also been defined by its potential to embrace the emerging field of designing, synthesizing and evolving new genomes or biomimetic systems. The fundamental principle behind SB is that, analogous to artificial objects, any biological system can be considered a combination of individual functional elements (de Lorenzo and Danchin 2008). SB approaches are based on three main principles: part-by-part construction of functional elements or biological parts, standardization of these parts, and abstraction of the complex underlying information (e.g. the particular DNA sequence).
SB has a very short history; less than a decade, in fact. However, the novelty of the discipline contrasts with the relative antiquity of the tools it uses. The idea of living organisms as cellular robots might be new, but the techniques used to select, re-design and combine biological parts, in addition to automated sequencing, are standard biotechnological protocols. Unfortunately, experience demonstrates that rational, robot-like, part-by-part approaches often simply do not work. For example, Chan et al. (2005) reported that bacteriophages re-designed to behave in a more 'logical' way in fact made smaller lysis plaques than their wild-type precursors and might even evolve to get rid of the man-made components. Unlike robots, all living beings tend to perpetuate themselves reproductively over time; also unlike robots, genetically engineered organisms are prone to die. Mutations, changes in the environment and interactions with cellular components thus make synthetic components context-dependent: they might work in one context but not in others. The ultimate factor responsible for this disparity between theory and practice is natural selection.
Creating diversity, selecting fitness
The identification, characterization and optimization of biological parts to be used in SB are often carried out by selection-based approaches. Many screenings aimed at identifying sequences suitable as parts for SB combine a method to create a library with high genetic diversity with a Darwinian selection step. Alper et al. (2005) quantitatively characterized a promoter library by mutating a constitutive promoter through error-prone PCR and constructing a library with the mutant sequences cloned upstream of a GFP gene. The authors carried out a "pick and test" screening of the resulting library in E. coli on the basis of the fluorescence of the clones, and an additional "dead or alive" confirmation of the constitutive nature of the promoters was performed by cloning them upstream of a chloramphenicol acetyltransferase (cat) gene with chloramphenicol as the selection agent.
In another work, Alper et al. (2006) reported a very interesting approach based on what the authors call global transcription machinery engineering (gTME) on yeast. This is an error-prone PCR-based method in which mutations of a key protein regulating the global transcriptome are produced and a library with the mutants, exhibiting a wide range of diversity at the transcriptional level, is screened. The cited work carried out selection in a medium with high ethanol and glucose concentrations, allowing only tolerant clones to survive. Following this approach, the authors were able to identify several mutants with enhanced ethanol and glucose tolerance, one of which was analysed in detail and found to exhibit differential expression of hundreds of genes compared to the wild type.
These works focused on the selection of particular genetic variants for useful purposes. However, it has to be noted that differences in biological fitness on a particular trait depend, particularly in microorganisms, not only on genetic variations in terms of homology but also on the copy number. Thus, copy number manipulation has been proposed as a potentially powerful strategy to engineer microorganisms displaying new phenotypes: Christ and Chin (2008) reported that evgA gene amplification allowed E. coli to survive at extreme and otherwise lethal temperatures.
Besides error-prone PCR, there are also non-recombinant alternatives for the rapid production of genetic variants. For example, DNA shuffling is known to mimic evolutionary processes. Whole-genome shuffling, a process combining multi-parental crossing obtained by DNA shuffling with standard breeding, has been successfully implemented in bacteria (Zhang et al. 2002). This method has been found to be faster than sequential random mutagenesis and screening for the production of improved organisms.
Protein engineering and directed evolution
The linkage between genotype and phenotype is the basis of selection-based evolution, and it is also the basis of protein engineering approaches for the identification and characterization of gene products. Persson et al. (2008) combined random mutagenesis and phage display selection strategies of various stringencies, which gave a considerable increase in apparent affinities for several of the selected populations. Phage display has also been combined with proteolysis selection in order to generate novel proteins with stable folds (Riechmann and Winter 2000). Ribosome display (Hanes and Plückthun 1997) and in vitro compartmentalization (Tawfik and Griffiths 1998) can also be used to evolve proteins for their binding interactions. For example, an approach has been developed that aims to mimic natural selection (fitter genes having more "offspring") by coupling the amplification of a gene to the formation of product by the enzyme it encodes (Kelly and Griffiths 2007). All these works suggest that directed evolution strategies could successfully complement in vitro selection.
Improving the fittest: adaptive evolution
Adaptation is the evolutionary process whereby an organism becomes better able to live in its habitat. An experimental strategy based on adaptive evolution has been developed as a tool for improving genetic circuits. The experimental procedure basically involves maintaining exponential growth by daily passage of cultures into fresh medium under the selection pressure. It has been reported that this simple strategy can be used to achieve in silico-predicted biological functions. For example, Ibarra et al. (2002) showed that placing E. coli under growth selection pressure by using glycerol as the sole carbon source led to an increase in the growth rate from a sub-optimal to the optimal rate predicted from a whole-cell in silico model. The compatibility of selection-based approaches with in silico design was also reported by Fong et al. (2005a), who used an integrative in silico plus adaptive evolution approach in order to select for lactic acid production in E. coli. The authors succeeded in constructing highly productive strains based on the computationally predicted designs, and the production of lactic acid was further increased after adaptive evolution was carried out, mainly because growth rate was coupled to lactate secretion rate.
Adaptive evolution approaches such as those cited above often use adaptation as a final improvement step, the main source of genetic variants being obtained by mutating an in silico design, with further selection implemented to perfect the fitness of the design. However, adaptive evolution might also be used as a simple and rapid tool for the selection of desired phenotypes (Fong et al. 2005b). Although the mechanism underlying this fast adaptation is not fully understood, it is known that compensatory gene expression changes occur as part of the initial adaptive response, followed by further positive selection of beneficial gene expression changes.
Learning from Darwin: further simple selection approaches
It is well known that Darwin compiled a huge amount of data on biological diversity and adaptations during the Beagle's voyage, and also during his fieldwork in the United Kingdom. However, the strength of Darwin's argument lies mainly in artificial, rather than natural, selection. On the Origin of Species by Means of Natural Selection (Darwin 1859) is, in fact, full of dozens of examples of artificial selection of cattle and crops, and the power of artificial selection to produce biological diversity was described in detail 9 years later in a book dedicated to the topic (Darwin 1868). The mechanism behind both natural and artificial selection is the same: the fittest (with respect to the environment in natural selection and with respect to human requirements in artificial selection) survive. Domestication confirms the tremendous power of selection to fix genetic variants and produce phenotypic traits that are mainly (but not only) quantitative and often astoundingly distant from those of the original natural populations they derive from. A classic example is the dairy cow, many breeds of which can produce 10,000 l of milk or more per cow every year.
The simple mechanism of natural and artificial selection has only been partially mimicked in SB, which often uses a directed evolution approach. By this approach, natural intraspecific variation is substituted by random mutations and selection is usually performed on the basis of screenings for a desired trait (e.g. enzymatic activity). Often, hundreds of bacterial colonies are individually picked from the pool of mutants, the desired trait analysed and clones are selected or discarded depending on the existence and/or intensity of the desired trait.
It is tempting to envisage variations of this strategy applying strict Darwinian selection, in which a large pool of natural genetic variants would be selected on the basis of their biological fitness in a given environment. The use of naturally occurring DNA sequences would imply a first round of selection, since wild-type coding sequences have already been shaped by natural selection. In fact, natural variants that are selected are a priori superior to randomly produced mutants, as demonstrated by the success of pharmaceutical screenings of natural compounds to be used as antibiotics, anti-tumorals or for many other therapeutic applications (Li and Vederas 2009). Ideally, the initial genetic pool might be genomic, meta-genomic or even a combination of several meta-genomic DNA libraries, although experimental handling limitations would define the size of the starting pool. Figure 1 shows four examples of a strategy based on Darwinian selection applied to the identification of biological parts to be used in SB, such as strong promoters (Fig. 1a), protein-coding sequences (Fig. 1b) and sequences coding for transcription regulators (Fig. 1c, d).
Interestingly, there are many simple biotechnological screenings that are very similar to the Darwinian strategy proposed here for SB. In classical biotechnology, screening for enzymes and, less frequently, promoters or other regulatory sequences often follows procedures utilizing the deleterious effect of a selective medium on the vast majority of the screened clones. For example, screening for cellulases is usually carried out on CMC (carboxymethylcellulose)-based media, which results in the selection of cellulolytic isolates (Yan et al. 2001). A particularly interesting selection-based strategy is that reported by Kubota et al. (1991), where E. coli promoters were screened by cloning hundreds of DNA fragments from a genomic library upstream of the ampC gene into a promoter-probe plasmid. By selecting with antibiotic so that only clones with sequences promoting ampC expression survived, the authors were able to identify and characterize several naturally occurring strong promoters. This strategy and the aforementioned screening for cellulases are in fact two examples of Darwinian selection approaches for SB, which are shown in Fig. 1a, b, respectively.
Complexity problems and evolutionary solutions
Genetic networks able to integrate multiple inputs in the generation of cellular responses have been constructed.
Sayut et al. (2009) developed an AND logic gate displaying clear, logical responses that could be described using a mathematical model. However, the analysis of even relatively simple synthetic networks often reveals a surprisingly large diversity of complex behaviors (Guet et al. 2002). A common problem when multiple genes are used in a synthetic circuit is that the expression level of each gene must be controlled independently. In practice, certain promoters are known to suffer from cross-talk (an inducer of one of the promoters affects the expression of the other). This has been reported for the IPTG-inducible Plac and the arabinose-inducible PBAD promoters, IPTG being in fact an inhibitor of PBAD. This problem has been overcome by applying a directed evolution approach to screen mutants of the arabinose-binding regulatory protein AraC in order to construct an arabinose-inducible system compatible with IPTG (Lee et al. 2007).
The sensitivity of complex circuits to a range of parameters, from protein and RNA stabilities to culture temperature, might result in synthetic circuits working imperfectly. Again, directed evolution can be used to complement rationally designed circuits, which can be optimized by screening randomly mutated circuits (Yokobayashi et al. 2002).
Theoretically, any pool of living organisms with genetic variation and vigorous reproduction is suitable for Darwinian selection. However, fully selection-based shaping of sophisticated synthetic systems such as oscillators or computational biological devices (i.e. counters) is difficult to implement because of their complexity. Genetic circuits that yield simple ON/OFF outputs can be tuned by specifically designed selection modules (Yokobayashi and Arnold 2005). However, selection of oscillatory circuits would need oscillatory levels of selection pressure, which might be difficult to implement in liquid cultures. For this particular case, a possible selection approach could be implemented whereby (1) individual components of the circuit are first screened on the basis of their fitness (Fig. 1), and (2) the complete network is subjected to further selection through specifically designed selection modules yielding an oscillatory selection pressure. This could be achieved by combining the circuit to be tested with a master circuit exhibiting a standard behavior. The output of the master circuit would be the selection pressure, aimed at keeping the behavior of the tested circuit within certain limits. Figure 2 shows an example of this proposed approach, in which the fitness of an oscillator is selected by a master circuit through transcriptional activation/inhibition of a death gene on a third, coupling circuit. Using previously defined robust synthetic circuits as masters, fine-tuning of developing circuits might be achieved. With this strategy, fluctuations in the selection pressure might be more easily achieved than with chemical modification of the growth medium.

[Fig. 1 caption, continued: Suitable media allowing selection of the fittest components are indicated. Note that solid, rather than liquid, media should be used, since β-lactamase, as well as some cellulases, are secreted, and bystander cells might survive if liquid media were used.]

Directed evolution at a higher level of biological complexity, i.e. multicellular systems, is a particularly challenging field. Microchemical interface technology has been proposed as a powerful tool to interface with developing biological systems, thus achieving unprecedented levels of spatial and temporal control of chemical environments (Ismagilov and Maharbiz 2007). Theoretically, selection pressure could be administered using this technique and multicellular systems tuned by directed evolution. However, true Darwinian selection of multicellular systems would require a vast pool of genetically diverse multicellular systems (each composed of genetically identical cells) to be exposed to a range of selection factors in order to discard imperfectly working systems. Such a strategy is technically unapproachable. Additionally, it should be noted that group (multicellular system) rather than individual (cell) selection should occur. Selection needs genetic variation to shape, and although a multicellular system can mutate, mutations will certainly originate and operate at the individual (cell) level, and the whole system might not be affected. Darwinian selection of multicellular systems would need beneficial mutations (in terms of the desired behavior of the circuit) to somehow extend from individual cells to the whole system, in order for the new genetic trait to produce a particular behavior (phenotype) of the system that would be sensitive to selection pressure.
Today, there are no strategies for Darwinian selection of multicellular systems: in SB, the individual cell seems to be the threshold of complexity that Darwinian selection can cope with.
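To make the master-circuit coupling proposed above more concrete, the toy simulation below (entirely hypothetical, not taken from any cited work) scores a pool of mutated oscillator variants against a fixed master oscillator: a 'death-gene' penalty accumulates whenever a variant's output leaves a tolerance band around the master's output, and the variants accumulating the least penalty survive. A wet-lab implementation would couple the penalty to growth rate rather than to a hard threshold, but the selection logic is the same.

```python
# Toy sketch (hypothetical) of selecting oscillator variants against a
# well-calibrated master circuit. A "death-gene" penalty accrues while a
# variant's output lies outside a tolerance band around the master's.
import math
import random

def master_output(t):
    # Master oscillator (circuit B): fixed 24 h period, unit amplitude
    return 1.0 + math.sin(2 * math.pi * t / 24.0)

def variant_output(t, period, amplitude):
    # Candidate oscillator (circuit A) with mutated parameters
    return 1.0 + amplitude * math.sin(2 * math.pi * t / period)

def death_penalty(period, amplitude, tol=0.4, hours=96.0, dt=0.1):
    # Integrate the time the death gene is "on" over the selection window
    penalty, t = 0.0, 0.0
    while t < hours:
        if abs(variant_output(t, period, amplitude) - master_output(t)) > tol:
            penalty += dt
        t += dt
    return penalty

# Pool of variants with randomly mutated period (h) and amplitude
random.seed(1)
pool = [(random.uniform(12, 36), random.uniform(0.5, 1.5)) for _ in range(50)]

# Darwinian step: keep the five variants with the lowest death-gene activity
survivors = sorted(pool, key=lambda v: death_penalty(*v))[:5]
for period, amplitude in survivors:
    print(f"period = {period:5.1f} h, amplitude = {amplitude:4.2f}")
```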
Concluding remarks and future prospects
By comparing the complexity and diversity of natural organisms with those of man-made biological constructions, it can be concluded that adaptation is the key factor behind the superiority of the evolutionary process over rational design. Thus, it seems logical that the artificial construction of living circuits should rely, at least partially, on selection processes. Selection-based approaches have been successfully used in SB, mainly as a complement to in silico designs. Identification, characterization and improvement of biological parts can be partially achieved through selection-based approaches, and directed evolution has proven a successful strategy for adapting rationally designed simple circuits to a context-dependent environment. Fully Darwinian selection strategies (applied to natural rather than artificially mutated parts and with "dead or alive" screenings) might also be implemented.
The difficulty of applying true selection to complex circuits might be partially overcome by using specifically designed selection modules (Yokobayashi and Arnold 2005) as well as ad hoc master circuits, like the one proposed in this work. However, highly sophisticated biological networks as well as multicellular systems are still recalcitrant to Darwinian selection mainly because of the difficulties of producing and selecting among a sufficiently vast pool of genetically diverse systems.
As a final remark, it seems that the growing complexity of synthetic circuits is linked to a need for powerful evolutionary approaches in order to adapt the circuits and to improve their performance in a context-dependent environment. Such strategies based on selection, that brilliant 'blind engineer', may prove key to the construction of complex living systems in the near future.

[Fig. 2. Strategy for directing the evolution of an oscillatory circuit through a well-calibrated master circuit. An oscillatory circuit A (white line) is subjected to fluctuating selection by a master oscillator B (thick grey line) through the action of a coupler circuit (shown below). P1 and P2, promoter sequences. The behavior of circuit A must remain within limits set by the output of B through transcriptional activation/inhibition of a death gene on the coupling circuit. When the output of master circuit B decreases, so too should the output of circuit A, in order to keep the death gene under the control of P1 inhibited. When the output of master circuit B rises, high inhibitory levels of A must balance activation of P2 by B.] | 2014-10-01T00:00:00.000Z | 2009-10-10T00:00:00.000 | {
"year": 2009,
"sha1": "543c19c63af6637b152fe90cdcf0b6a568a58d01",
"oa_license": "CCBYNC",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11693-009-9045-4.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "543c19c63af6637b152fe90cdcf0b6a568a58d01",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
5524798 | pes2o/s2orc | v3-fos-license | Tracheobronchitis in a Patient With Crohn's Disease
We report a 63-year-old woman who presented with 1 month of non-productive cough and non-bloody diarrhea. She was on maintenance therapy for a 15-year history of Crohn's disease. Treatment with systemic corticosteroids resulted in rapid improvement of both her diarrhea and respiratory symptoms. Our patient is unique in that she presented with tracheobronchitis during an acute flare of her Crohn's without obvious lung pathology on chest imaging. Tracheobronchitis is a rare manifestation of inflammatory bowel disease that should be considered in Crohn's disease patients presenting with persistent non-infectious cough.
Introduction
Crohn's disease (CD) is an inflammatory bowel disease (IBD) that presents with extraintestinal manifestations more frequently than ulcerative colitis. Extraintestinal manifestations of CD include erythema nodosum, pyoderma gangrenosum, arthritis, uveitis, episcleritis, mouth ulcers, renal stones, thromboembolic disease, and primary sclerosing cholangitis. Though pulmonary manifestations were once considered rare in IBD, studies have shown that subclinical pulmonary abnormalities occur in 50-60% of the IBD population.1 These abnormalities are typically independent of smoking status.2 It has been hypothesized that the pathogenesis of these pulmonary manifestations involves the common embryonic origin of the intestine and the lungs from the primitive foregut, the coexistence of mucosa-associated lymphoid tissue in both organs, and bacterial translocation from the colon to the lungs. The most common lung manifestations of CD include bronchitis, bronchiectasis, bronchiolitis, and subglottic stenosis, but involvement of the trachea is rare.3
Case Report
A 63-year-old, non-smoking woman with a 15-year history of CD presented with 1 month of worsening cough and diarrhea. Her CD had been limited to the colon, and she had been taking mesalamine since diagnosis. Her cough was dry with intermittent post-tussive vomiting. Additionally, she was having up to 10 loose bowel movements daily that were green and non-bloody. The diarrhea was associated with intermittent abdominal pain with no relief on defecation. She denied fevers, chills, rhinorrhea, dysuria, headache, itchy eyes, any sick contacts, new medications, antibiotics, or recent travel. She endorsed mild dyspnea on exertion, but denied any history of pre-existing lung disease or asthma. She denied any changes in appetite and was still tolerating oral intake well.
On admission, she was febrile to 100.4°F and tachycardic to 119 beats per minute. Labs were significant for an elevated white blood cell count of 13.3 × 10³ cells/mcL, hemoglobin of 8.6 g/dL, and an erythrocyte sedimentation rate of 105 mm/hour. Empiric antibiotics were started, but once a full infectious work-up returned negative, they were discontinued after 24 hours without improvement. Chest and abdominal computed tomography (CT) revealed wall thickening of the colon from the proximal sigmoid to the cecum; no abnormalities of the lungs or large airways were noted. Guaifenesin with codeine and ipratropium bromide/albuterol sulfate inhalation with diphenhydramine provided mild cough relief. Colonoscopy with random biopsies confirmed moderately active colitis in her terminal ileum and her entire colon, consistent with CD (Figure 1). Bronchoscopy revealed ulcerations throughout her trachea with associated abnormal mucosa described as nodular, erythematous, and edematous, having an almost "cobblestoned" appearance (Figure 2). Pathology showed a squamous papilloma. She was treated with intravenous methylprednisolone with significant improvement of her symptoms over 2 days, and was discharged on a prolonged oral steroid taper with a plan to start outpatient infliximab.
Discussion
Upper airway disease in IBD is rare and has been described in both ulcerative colitis (UC) and Crohn's disease, although cases occur predominantly in UC patients.4 Although clinically significant airway disease is rare, the most common pulmonary manifestations are bronchiectasis and chronic bronchitis, both of which are accompanied by large amounts of sputum production. Since patients with pulmonary manifestations of CD are usually asymptomatic, diagnoses are often made incidentally through abnormal screening tests.4 Chronic inflammation is common in the bronchi and alveoli of patients with CD, with 61% of asymptomatic CD patients exhibiting bronchoalveolar lavage features of overt lymphocytic alveolitis.5 The differential diagnosis for lung disease in the patient with active IBD still includes predominantly common lung and airway diseases, such as reactive airway disease, infectious causes, and other inflammatory granulomatous diseases such as Wegener's granulomatosis. Specific to this population, mesalamine has also been reported to cause interstitial lung disease in rare instances.6 Airway disease in IBD patients usually does not present during an acute flare, but tends to occur during inactive phases or even after colectomy.4,6,7 Our patient is unique in that she presented with a non-productive cough during an acute CD flare.
Imaging is usually helpful for diagnosing tracheobronchitis in patients with CD; CT reveals airway changes more frequently than lung parenchymal findings. CT findings in tracheobronchitis associated with UC have been described as a "circumferentially thickened tracheobronchial wall involving both the cartilaginous and membranous components."8 Chest radiographs and chest CT may show circumferential or nodular narrowing of the trachea or the bronchi, although bronchoscopy remains the diagnostic procedure of choice.9 Pulmonary function testing shows an obstructive pattern.6 Our patient's squamous papilloma was likely an incidental finding unrelated to her lung pathology, as squamous cell papillomas are common benign tracheal neoplasms.10 Although the biopsy was non-diagnostic, the gross appearance of her trachea on bronchoscopy was consistent with Crohn's disease.
Treatment of upper airway disease in CD patients usually involves oral or inhaled corticosteroid therapy, which is effective in improving both respiratory and gastrointestinal symptoms. Our patient was treated initially with intravenous steroids given her concurrent gastrointestinal flare. Intravenous steroids are reserved for cases of subglottic stenosis, stridor, continued symptoms despite oral treatment, or concurrent illness that requires intravenous steroids. In her case, though she did not have true subglottic stenosis, the severity of her tracheobronchitis had escalated to the point where her cough was becoming stridorous. In patients experiencing only respiratory symptoms, good results have been reported from inhaled corticosteroids alone.11,12

Disclosures

Author contributions: V. Yeung wrote the manuscript. AG Govind, S. Arastu, and CH Henry wrote and edited the manuscript. CH Henry is the article guarantor. | 2017-08-27T06:41:20.263Z | 2016-04-01T00:00:00.000 | {
"year": 2016,
"sha1": "36d86fe7b49fa83c30fe7b962e4391468cd54369",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.14309/crj.2016.43",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "36d86fe7b49fa83c30fe7b962e4391468cd54369",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
336639 | pes2o/s2orc | v3-fos-license | Temporal Trends in the Use and Comparative Effectiveness of Direct Oral Anticoagulant Agents Versus Warfarin for Nonvalvular Atrial Fibrillation: A Canadian Population‐Based Study
Background Direct oral anticoagulants (DOACs) are noninferior to warfarin for stroke prevention in atrial fibrillation (AF). We aimed to determine the population risk of stroke and death in incident AF, stratified by anticoagulation status and type, and the temporal trends of oral anticoagulation practice in the post‐DOAC approval period. Methods and Results We conducted a population‐based cohort study of incident nonvalvular AF cases using administrative health data in Alberta, Canada. We used Cox proportional hazards modeling with anticoagulation status as a time‐varying exposure and adjusted for age (continuous), sex, congestive heart failure, hypertension, diabetes mellitus, prior transient ischemic attack or ischemic stroke, myocardial infarction, peripheral artery disease, and chronic kidney disease. Primary outcome was the composite of stroke and death. Among 34 965 patients with incident AF (56.0% male, median age 73 years), relative to warfarin, DOAC use was associated with decreased risk of all stroke and death (hazard ratio: 0.90; 95% confidence interval, 0.83–0.97) and decreased hemorrhagic stroke (hazard ratio: 0.60; 95% confidence interval, 0.40–0.91) but a similar risk of ischemic stroke (hazard ratio: 1.12; 95% confidence interval, 0.94–1.34). During this time period, DOAC use increased rapidly, surpassing warfarin, but the total oral anticoagulation use in the population remained stable, even in the subgroup with the highest thromboembolic risk. Conclusions In a real‐world population‐based study of patients with incident AF, anticoagulation with DOACs was associated with decreased risk of stroke and death compared with warfarin. Despite a rapid uptake of DOACs in clinical practice, the total proportion of AF patients on anticoagulation has remained stable, even in high‐risk patients.
The increased risk of stroke and death associated with atrial fibrillation (AF) can be effectively mitigated with anticoagulation. 1,2 Oral anticoagulation for nonvalvular AF has been revolutionized by the emergence of direct oral anticoagulants (DOACs) as alternatives to dose-adjusted warfarin. 3-6 Meta-analyses of randomized controlled trials 7,8 as well as observational data 9,10 confirm the efficacy and real-life effectiveness of these agents. DOACs have the added benefit of favorable pharmacology resulting in convenience for patients, with rapid onset of action, fixed dosing, no laboratory monitoring, and fewer food and drug interactions. 11 However, DOACs have higher drug costs and need adjustment based on renal function.
Population-based analyses have reflected the effectiveness and safety of DOACs in routine clinical practice. 12-16 A common feature of published population studies is that the exposure to the type of anticoagulant is determined at entry into the study and is assumed to be constant throughout follow-up. In reality, anticoagulation status and type change with time. Furthermore, because dabigatran was the first DOAC to be approved, apixaban and rivaroxaban have been relatively less well studied. 14-16 Finally, although some studies report declining temporal trends of AF-related stroke and mortality in the population (1958-2007 17 and 1980-2000 18 ), more contemporary studies (2000-2010) show no further decline in AF-related stroke trends. 19 Temporal trends of oral anticoagulation prescription patterns and AF-related ischemic stroke, hemorrhagic stroke, and mortality in the post-DOAC approval period are less well understood.
Using the complete population of Alberta, Canada, from 2009 to 2015, we aimed to determine the risk of stroke and death in incident AF, stratified by anticoagulation status and type, defined as a time-varying exposure variable. An important secondary objective was to study the temporal trends in oral anticoagulation practice and outcomes during this post-DOAC approval time period. We hypothesized that DOACs are associated with decreased risk of stroke and death compared with warfarin and that the temporal trends in the occurrence of these outcomes may decrease in response to an increase in DOAC use.
Methods
Using Alberta linked administrative data, we performed a population-based cohort study of incident nonvalvular AF diagnosed between January 1, 2009 and June 30, 2015, and followed through December 31, 2015, allowing a minimum follow-up of 6 months for each patient. All residents of Alberta (population of 4.2 million people) have access to publicly funded and universal health care. The Alberta Health Care Insurance Plan (AHCIP) provides medical coverage to most Alberta residents (>99%) with the rare exceptions of the members of the military, federal inmates, individuals who opt out of the AHCIP, and the Royal Canadian Mounted Police. Each resident covered by the plan is assigned a personal health number that acts as a unique lifetime identifier. There is no universal drug coverage in Alberta, and residents pay for drugs out of pocket, through private insurance (usually through employment), or through publicly funded drug programs for seniors (people aged ≥65 years) and a few selected groups administered through Alberta Blue Cross. Under the Alberta Blue Cross public program, DOACs are covered under specific circumstances: recurrent thromboembolism on warfarin, labile international normalized ratio, or difficult access to international normalized ratio test centers.
AF Cohort Identification
AF was identified using administrative data with the International Classification of Diseases (ICD) codes 427.3x (ICD-9-CM) or I48.x (ICD-10-CA) in any diagnosis field in any of the hospital inpatient, ambulatory, or emergency department encounters or physician claims databases. 20,21 Two diagnoses of AF were required at separate healthcare encounters >30 days apart within the first year of diagnosis to minimize misclassification of transient single episodes of AF or flutter. AF was defined as incident if no prior diagnosis of AF was made in Alberta from the date that the patient obtained an AHCIP number or April 1, 1994. We excluded valvular heart disease, defined as any of the following codes in any of the databases preceding the incidence date: mitral or aortic disease (ICD-9
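To make the two-encounter rule concrete, the following minimal Python sketch (illustrative only, not from the study; it takes pre-extracted AF-coded encounter dates as input and omits the ICD code-matching and 1994 lookback steps) checks whether a patient meets the incident AF case definition:

from datetime import date

def meets_af_definition(af_encounter_dates):
    # Requires two AF-coded encounters more than 30 days apart,
    # both within the first year after the earliest diagnosis.
    dates = sorted(af_encounter_dates)
    if len(dates) < 2:
        return False
    first = dates[0]
    return any(30 < (d - first).days <= 365 for d in dates[1:])

# A single transient episode of AF or flutter does not qualify:
print(meets_af_definition([date(2010, 3, 1)]))                    # False
print(meets_af_definition([date(2010, 3, 1), date(2010, 5, 10)])) # True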
Anticoagulation Status
Because a patient may change anticoagulation regimens during the follow-up period, anticoagulation was considered a time-varying exposure. If, for example, a patient was followed for 1 year and was on warfarin for 6 months and a DOAC for 6 months, the patient contributed 0.5 person-year to each of the warfarin and DOAC-exposed groups. Interruption in treatment was defined as a gap in prescription refills of ≥30 days between the date of the last refill plus the number of days of drugs dispensed and the date of the next refill. Treatment type was determined by the Pharmaceutical Information Network, which contains all drugs dispensed by 98% of the community pharmacies in Alberta regardless of insurance status. We considered warfarin, apixaban, rivaroxaban, and dabigatran (Anatomical Therapeutic Chemical codes B01AA, B01AF02, B01AF01, and B01AE07).
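As an illustration of this definition, the following minimal Python sketch (not from the study; the record layout is an assumption) collapses a date-sorted list of dispensing records into continuous exposure intervals, opening a new interval whenever the drug changes or a refill gap of ≥30 days occurs:

from datetime import date, timedelta

def exposure_intervals(refills, gap_days=30):
    # refills: list of (fill_date, days_supplied, drug), sorted by fill date.
    # Returns (start, end, drug) intervals of continuous exposure.
    intervals = []
    for fill_date, days_supplied, drug in refills:
        supply_end = fill_date + timedelta(days=days_supplied)
        if (intervals and intervals[-1][2] == drug
                and (fill_date - intervals[-1][1]).days < gap_days):
            start, end, _ = intervals[-1]
            intervals[-1] = (start, max(end, supply_end), drug)  # contiguous refill
        else:
            intervals.append((fill_date, supply_end, drug))      # interruption or switch
    return intervals

# Six months of warfarin followed by a switch to a DOAC yields two intervals,
# so the patient contributes person-time to each exposure group:
print(exposure_intervals([(date(2012, 1, 1), 90, 'warfarin'),
                          (date(2012, 4, 1), 90, 'warfarin'),
                          (date(2012, 7, 15), 90, 'apixaban')]))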
Clinical Perspective

What Is New?

• In a population with access to universal health care, real-world prescription patterns for oral anticoagulants in the post-direct oral anticoagulant approval period show that this treatment remains underused in patients with nonvalvular atrial fibrillation, even in patients at high risk for thromboembolism.

What Are the Clinical Implications?

• We confirm that direct oral anticoagulants are safer than warfarin but show that overall rates of stroke and death are unchanged despite having more choices of oral anticoagulation drugs, which highlights the need for prospective studies to understand the barriers to oral anticoagulation in atrial fibrillation.

Statistical Analyses

The primary outcome was the composite of all stroke (ischemic and hemorrhagic) and all-cause mortality. Secondary outcomes were the individual components of the composite outcome, myocardial infarction, and hemorrhagic complications (gastrointestinal and subdural). We calculated the age-sex adjusted event rates per 1000 person-years with 95% confidence intervals (CIs). We used Cox proportional hazards modeling and the mean of covariates and corrected group prognosis method to calculate risk-adjusted event rates for patients on no anticoagulation, warfarin, and a DOAC. 22 We adjusted for the elements of the CHA2DS2-VASc score: age (continuous), sex, congestive heart failure, hypertension, diabetes mellitus, prior transient ischemic attack or ischemic stroke, prior myocardial infarction, and peripheral artery disease, as well as chronic kidney disease, which could be a relative contraindication to treatment with a DOAC. The proportional hazards assumption could not be formally tested because we examined our exposure (anticoagulation status) as a time-varying covariate; instead, we graphically examined the age-sex standardized rate ratios at different survival times (0-200 days, 201-400 days, etc.) to confirm that the actual hazard did not vary significantly over time or show any converging or diverging trends.
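A Cox model of this form can be fit on a long-format dataset in which each row is an interval of constant exposure. The sketch below uses the open-source lifelines library for Python (toy data and column names are illustrative; the actual analysis was performed in SAS and also adjusted for the covariates listed above):

import pandas as pd
from lifelines import CoxTimeVaryingFitter

# One row per interval of constant exposure; patient 1 switches from
# warfarin to a DOAC at day 183 and so contributes one row per regimen.
intervals = pd.DataFrame({
    'id':    [1, 1, 2, 3, 3, 4],
    'start': [0, 183, 0, 0, 120, 0],     # days since incident AF
    'stop':  [183, 400, 365, 120, 500, 250],
    'doac':  [0, 1, 0, 0, 1, 1],         # 0 = warfarin, 1 = DOAC
    'event': [0, 1, 1, 0, 0, 1],         # composite stroke/death at 'stop'
})

ctv = CoxTimeVaryingFitter()
ctv.fit(intervals, id_col='id', event_col='event',
        start_col='start', stop_col='stop')
ctv.print_summary()  # hazard ratio for 'doac' with 95% CI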
We performed a sensitivity analysis to additionally adjust for coverage by Alberta Blue Cross public drug insurance because it is possible that DOAC users with and without public insurance have different sociodemographic characteristics, such as age, employment, or socioeconomic status. The sensitivity analysis compares only warfarin versus DOAC and excludes the category of "never anticoagulated" because the type of reimbursement for a prescription can be determined only when a prescription is filled.
Outcomes and comorbidities were determined using administrative data codes (Table 1). 21,[23][24][25] Hypertension and diabetes mellitus were defined using 1 hospitalization discharge code in any position or 2 outpatient claims within 2 years. Congestive heart failure, peripheral artery disease, myocardial infarction, stroke, and transient ischemic attack were defined using 1 hospitalization discharge code in any position. 24 Chronic kidney disease was defined using hospitalization discharge codes in any position or dialysis codes (V45.1, V56, 39.95, 54.98, Z99.2, Z49) in 1 hospitalization or 1 outpatient claim. 25 Dialysis-related hospitalizations were not counted as kidney failure if a concurrent acute kidney injury code (584) was present. We graphed the temporal trends of oral anticoagulation prescriptions for all patients and stratified by high-risk (CHADS2 ≥2 or age ≥75 years), moderate-risk (CHADS2 =1 or age 65-74 years), and low-risk (CHADS2 =0 or age <65 years) groups. Risk was determined at entry into the study. We also graphed the temporal trends in the rates of ischemic stroke, hemorrhagic stroke, and death per person-years. The temporal trends for drug prescription and stroke and death outcomes are adjusted for age and sex only. All analyses were conducted using SAS 9.4 (SAS Institute Inc), and graphs were created using Excel 2013 (Microsoft Office). This study received approval from the University of Calgary institutional review board for research, and a waiver of consent was granted (REB16-1859).
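The risk stratification is simple enough to state as code; a minimal sketch (illustrative only, with overlapping criteria resolved in the order of precedence listed above):

def chads2(chf, hypertension, age, diabetes, prior_stroke_tia):
    # 1 point each for CHF, hypertension, age >= 75 years, and diabetes;
    # 2 points for prior stroke or transient ischemic attack.
    return (int(chf) + int(hypertension) + int(age >= 75)
            + int(diabetes) + 2 * int(prior_stroke_tia))

def risk_group(score, age):
    if score >= 2 or age >= 75:
        return 'high'
    if score == 1 or 65 <= age <= 74:
        return 'moderate'
    return 'low'

print(risk_group(chads2(False, True, 68, False, False), 68))  # 'moderate'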
Results
Among 34 965 patients with a new diagnosis of nonvalvular AF, 19 579 (56.0%) were male, the median age was 73.0 years (interquartile range: 62.1-81.9), and 9628 (27.5%) patients were never anticoagulated during follow-up. Table 2 shows the baseline characteristics by anticoagulation status and type. At the study censor date (occurrence of a primary outcome or end of study), 16 077 (46.0%) patients were not anticoagulated, 9292 (26.6%) were on warfarin, 3156 (9.0%) were on dabigatran, 1786 (5.1%) were on apixaban, and 4654 (13.3%) were on rivaroxaban. Among the 25 337 patients who received anticoagulation, 6844 (27.0%) filled prescriptions for both warfarin and a DOAC at different times during follow-up; exposure categories are not mutually exclusive because a patient could be exposed to different anticoagulation statuses and types over the follow-up time. Outcome events by exposure group are shown in Table 3.

[Table 2 footnote: AMI indicates acute myocardial infarction; CHADS2, congestive heart failure, hypertension, age ≥75 years, diabetes mellitus, history of cerebral ischemia; CHA2DS2-VASc, 1 point each for congestive heart failure, hypertension, age 65-74 years, diabetes mellitus, vascular disease (myocardial infarction or peripheral artery disease), and female sex, and 2 points each for history of cerebral ischemia and age ≥75 years; CHF, congestive heart failure; CKD, chronic kidney disease; DOAC, direct oral anticoagulant; IQR, interquartile range; PAD, peripheral artery disease; TIA, transient ischemic attack. *Filled prescription for warfarin and DOAC during follow-up, not on both therapies simultaneously.]
The age-sex adjusted event rates with 95% CIs are presented in Table 4, as well as the multivariable hazard ratios (HRs) and 95% CIs. Considering anticoagulation as a time-varying exposure variable, patients on DOACs were less likely to suffer the composite outcome of all stroke and death compared with warfarin (HR: 0.90; 95% CI, 0.83-0.97). Patients treated with oral anticoagulation were less likely to suffer an ischemic stroke compared with those without anticoagulation, but there was no additional reduction in ischemic stroke risk associated with DOACs compared with warfarin (HR: 1.12; 95% CI, 0.94-1.34). DOACs were associated with less hemorrhagic stroke compared with warfarin (HR: 0.60; 95% CI, 0.40-0.91). Myocardial infarction occurrence was similar in all groups except for a slight decrease in the warfarin group compared with no anticoagulation, but the CI approached the null. For the safety outcomes, warfarin, but not DOACs, was associated with increased subdural hemorrhage (HR: 1.70; 95% CI, 1.27-2.29). For gastrointestinal hemorrhages, we only present age-sex standardized event rates because the event rate ratios changed with time, and the absolute number of events was too small to present HR estimates stratified by time. [Table 4 excerpt, age-sex adjusted rates per 1000 person-years by exposure group: myocardial infarction, 8.6 (7.7-9.7) and 7.9 (6.7-9.2), HR 1.0 (0.85-1.29); GI hemorrhage, 8.5 (7.6-9.6) and 7.2 (6.1-8.7), HR not presented.] The sensitivity analysis including the Alberta Blue Cross public insurance flag in the multivariable model did not change the direction of the effects (Table 5).

Figure 1 shows the temporal trends in occurrence rates of ischemic stroke, hemorrhagic stroke, and death as well as the prescription patterns of oral anticoagulation for the full cohort. Temporal trends for the outcomes of interest were stable. During the study follow-up period, the use of DOACs increased rapidly, whereas the use of warfarin declined, so that the total proportion of patients on oral anticoagulation remained stable. When stratified by risk, the use of DOACs increased most steeply in the high- and moderate-risk groups (Figure 2). Prescriptions for DOACs had not yet surpassed those for warfarin in the high-risk group. Temporal trends in the total rate of oral anticoagulation prescription remained stable regardless of risk group (Figure 3).
Discussion
This study of ≈35 000 nonvalvular AF patients from a complete population shows that treatment with DOACs is associated with reduced risk for a combined end point of all-cause stroke or death compared with warfarin, even after adjustment for baseline differences. Consistent with the pivotal clinical trials comparing warfarin and DOACs, 3-6 meta-analysis of clinical trials data, 8 and other population-based analyses, 12,14,15 the protective effect of DOACs in our study is driven by lower rates of hemorrhagic stroke and death. DOAC treatment is not associated with increased risk of myocardial infarction. Only warfarin, and not DOACs, is associated with increased subdural hemorrhage. We confirm that treatment with an oral anticoagulant is associated with less ischemic stroke compared with no anticoagulation. In real-world clinical practice, our findings suggest DOACs are simply safer than warfarin.

[Figure 1. Temporal trends of oral anticoagulation prescription and occurrence of ischemic stroke (A), hemorrhagic stroke (B), and death (C). Age-sex adjusted rates per 1000 person-years. In 2009, the first year of the study, the occurrence of outcomes was high and likely artificially inflated because only patients with incident atrial fibrillation (AF) were included, as opposed to the following years, in which a combination of incident and prevalent AF patients were followed. Incident AF is often diagnosed in the context of a stroke or other medical condition, leading to higher apparent risk of stroke or death in the immediate period after diagnosis. DOAC indicates direct oral anticoagulant.]
Although we show a reduction in risk of death among DOAC users, our data do not fully explain the reasons for the observed decrease in mortality. The ischemic stroke and myocardial infarction risks are similar between warfarin and DOACs. Although there were fewer hemorrhagic strokes in the DOAC group, the absolute number of events was low (n=106, n=31, and n=42 in the warfarin, DOAC, and no anticoagulation groups, respectively). Neither the reduction in hemorrhagic stroke nor the reduction in ischemic stroke fully accounts for the reduction in mortality. Although residual confounding may be a partial explanation, additional contributors to a reduction in mortality could be a reduction in the severity of stroke. If stroke severity per event is reduced in the DOAC group, then the risk of death will fall. This is an important hypothesis to test in future studies.
The real-world prescription patterns for oral anticoagulants show that since the approval of DOACs in Canada in October 2010, DOACs have been fully adopted into clinical practice. Earlier studies found a moderate uptake of DOACs in the United States 26 and Canada, 27 but we demonstrated that in 2015, DOAC use surpassed that of warfarin. Importantly, these trends highlight a greater challenge: the total proportion of AF patients on anticoagulation has remained relatively stable, even in the high-risk category that includes patients aged ≥75 years or with a CHADS2 score ≥2. Not surprisingly, the incidence rates of ischemic stroke, hemorrhagic stroke, and mortality did not significantly change throughout the study period. Although rates of anticoagulation were shown to be rising 15 to 20 years ago with an associated decrease in ischemic stroke, 28,29 our results are consistent with recent studies showing a plateau in anticoagulation rates and stroke incidence. 19 Two recent US studies found a slight increase in anticoagulation rates since the introduction of DOACs. 30,31 These studies, however, used data from a US national registry of cardiovascular care practices, which may favor enrollment of highly motivated patients under specialist care, and the generalizability of these results to the population may be limited. Our findings suggest that the increasing use of DOACs is not yet closing the gap between scientific evidence and clinical practice in the general population. Stroke prevention remains suboptimal because anticoagulation is routinely underused. 32,33 It is important to explore and address patient preference and physician perception of the risk-benefit balance, particularly because evidence from this and other studies confirms the greater safety of DOACs. 34,35 Intervention trials based on education, measurement and feedback, and electronic alert systems aiming to improve anticoagulation rates are relevant and currently under way. 36,37

Our study has several strengths, including the analysis of a complete population and a long duration of follow-up. In addition, we studied all DOACs currently available in Canada (dabigatran, rivaroxaban, apixaban), and we treated anticoagulation exposure as a time-varying variable to reflect real-world treatment patterns. Nevertheless, our study has limitations. Given the relatively small number of patients in each DOAC category, we could study neither the effects of the individual DOACs nor those of differing doses. Acetylsalicylic acid is available over the counter and could not be reliably assessed. Although we carefully considered baseline characteristics for risk adjustment, including drug insurance status, unmeasured patient, clinician, and health-system factors associated with the selection of an oral anticoagulation regimen may result in residual confounding. The temporal trends for stroke and death outcomes also need to be interpreted with more caution because they are adjusted only for age and sex. Our study is vulnerable to limitations inherent to the use of administrative data. Because the Pharmaceutical Information Network contains data only on dispensed drugs, we could not assess primary nonadherence. We could not link with laboratory information (including international normalized ratio, such that we could not estimate the time in therapeutic range), and we could not adjudicate outcome events.
However, we used validated case definitions to identify comorbid disease and outcomes, and the use of administrative data allowed for the study of a complete population over a long period of time.
Conclusions
The results of this contemporary comparative effectiveness study on DOACs, warfarin, and no anticoagulation are expected to aid physicians in choosing the most effective and safe oral anticoagulant in routine clinical practice. Because medication reimbursement for DOACs is still lacking in Canada, the results of our study may be used to support improved accessibility to DOACs. Overall rates of anticoagulation, stroke, and death are unchanged despite having more choices of oral anticoagulation. Prospective studies evaluating and intervening on the barriers to oral anticoagulation in AF continue to be needed. | 2017-11-30T08:57:19.685Z | 2017-10-28T00:00:00.000 | {
"year": 2017,
"sha1": "5bde9e43f38ff2a9fe3a3fe86a0dcd32740bbe8c",
"oa_license": "CCBYNC",
"oa_url": "https://www.ahajournals.org/doi/pdf/10.1161/JAHA.117.007129",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5bde9e43f38ff2a9fe3a3fe86a0dcd32740bbe8c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
3435000 | pes2o/s2orc | v3-fos-license | Taming the Notch Transcriptional Regulator for Cancer Therapy
Notch signaling is a highly conserved pathway in all metazoans, which is deeply involved in the regulation of cell fate and differentiation, proliferation and migration during development. Research in recent decades has shown that the various components of the Notch signaling cascade are either upregulated or activated in human cancers. Therefore, its downregulation stands as a promising and powerful strategy for cancer therapy. Here, we discuss the recent advances in the development of small molecule inhibitors, blocking antibodies and oligonucleotides that hinder Notch activity, and their outcomes in clinical trials. Although Notch was initially identified as an oncogene, later studies showed that it can also act as a tumor suppressor in certain contexts. Further complexity is added by the existence of numerous Notch family members, which exert different activities and can be differentially targeted by inhibitors, potentially accounting for contradictory data on their therapeutic efficacy. Notably, recent evidence supports the rationale for combinatorial treatments including Notch inhibitors, which appear to be more effective than single agents in fighting cancer.
Diversity of Notch Signaling Mechanisms
This is the centennial of the discovery of the Notch gene, which was first identified by Thomas H. Morgan and colleagues in 1917 in fruit flies, where spontaneous mutations in a specific locus of the X chromosome produced notches at the wing margin [1,2]. Molecular genetic studies followed, but it took until the 1980s to accomplish the actual cloning of the Drosophila Notch gene by Artavanis-Tsakonas, Young, and colleagues [3,4]. Notably, the Notch pathway is highly conserved from sea urchins to humans. In contrast to flies, which have only one Notch gene, there are four Notch receptors in mammals, Notch1 to Notch4. Notch signaling regulates differentiation, proliferation, apoptosis, migration, and angiogenesis, as well as stem cell growth and survival during development and disease [5][6][7][8][9].
Notch proteins are single-pass transmembrane heterodimeric receptors. They are synthesized as single-chain precursors that undergo processing and modification in the endoplasmic reticulum and the Golgi apparatus to produce the mature forms [10]. In particular, the Notch precursor undergoes a first proteolytic cleavage (S1) in the Golgi to form a heterodimer, which represents the mature form of the receptor. This heterodimer sits on the plasma membrane and is composed of a large extracellular domain (N-ECD) and a membrane-tethered intracellular domain (N-ICD). Notch receptors are activated by ligands of the Delta-like (Dll) and Jagged (Jag) families that are exposed on the surface of adjacent cells, thereby binding the N-ECD in trans and acting in a juxtacrine manner. In total, there are five Notch ligands belonging to the Delta-like ligand (DLL) and Jagged families: Dll1, Dll3, Dll4, Jag1, and Jag2. Dll and Jagged ligands contain a Delta-Serrate-Lag2 (DSL) domain and EGF (Epidermal Growth Factor) repeats that interact with EGF-like repeats found in the Notch ectodomain to activate the signaling cascade. Interactions between Notch and its ligands in cis, instead, are reported to be inhibitory [11]. The glycosylation of EGF repeats is an important modulator of Dll versus Jagged signaling; for instance, Fringe glycosyltransferases (which add GlcNAc to O-fucose on Notch EGF repeats) can enhance the binding of Dll ligands, compared to Jagged, to the Notch receptor [12]. Upon ligand binding, Notch is sequentially cleaved at the plasma membrane by two other proteases. Extracellular S2 cleavage is due to the tumor necrosis factor-alpha converting enzyme (TACE), whereas S3 intramembrane cleavage is operated by a gamma secretase, which catalyzes the release of the intracellular domain of Notch (N-ICD) into the cytoplasm. N-ICD then translocates into the nucleus and forms a complex with RBPjk/CSL (CBF1/Su(H)/Lag-1) [13]. RBPjk is a DNA-binding transcription factor, which acts in a large complex with other proteins. It is the prime effector of Notch-associated functions. Normally, RBPjk acts as a repressor of Notch transcriptional target genes due to its association with the SMRT corepressor, CIR, and histone deacetylase-1 (HDAC1). N-ICD binding disrupts this inhibitory complex, releasing the repressor proteins and fostering the formation of a new complex of RBPjk with SKIP, MAML1, PCAF, and GCN5, which is able to induce gene expression. Notch signaling regulates a diverse set of genes that impinge on several cellular functions [6]. Among the known Notch-RBPjk targets are the genes belonging to the HES and HEY families, NF-κB1, NF-κB2, STAT6, p27/Kip1, ErbB2, CyclinD1, cMyc, and p21.
DSL-ligand-and RBPjk-independent Notch-initiated signaling has also been reported, referred to as "non-canonical" Notch pathway [14,15]. While the canonical pathway plays a major and well defined role in stem cell maintenance and cell fate determination during development, and it regulates several cancer hallmarks in tumorigenesis, the functions of the non-canonical pathway are poorly characterized [14]. Neither the targets nor the mediators of the non-canonical pathway are clearly defined; however, N-ICD has been reported to associate with other transcription factors distinct from RBPjk, such as SMAD3, YY1, HIF1α, and NF-κB [16][17][18][19]. In one study, it was shown that IL-6, an important pro-inflammatory cytokine in tumors, was upregulated by Notch1-ICD in breast cancer cells with p53 mutation or deficiency, independent of RBPjk [20]; IKKα and IKKβ were implicated in this mechanism, but whether they can form a complex with N-ICD needs to be elucidated. Several studies illustrated the crosstalk between hypoxia-induced HIF and Notch transcription factors; for instance, HIF1α was found to interact and stabilize the N-ICD, leading to enhancement of the canonical Notch pathway [21][22][23]. On the other hand, it was shown that the negative regulator FIH-1 (Factor Inhibiting HIF1) can bind and functionally inactivate N-ICD by hydroxylation [18]. In addition, RBPjk-independent Notch signaling was reported to activate the PI3K pathway in cervical cancer cell lines, implicating the activity of Deltex1, an E3-ubiquitin ligase [24].
Diversity of Notch Signaling Activities in Cancer
Ellisen and coworkers were the first to associate Notch with cancer; they reported that in T-cell acute lymphocytic leukemia (T-ALL), Notch1 gets constitutively activated due to the chromosomal translocation t(7;9)(q34;q34.3), found in about 1% of T-ALL patients [21]. Later studies showed that more than 50% of T-ALLs actually show activating Notch1 mutations in the heterodimerization (HD) and 'proline, glutamic acid, serine, threonine-rich' (PEST) domains [22]. Remarkably, the majority of solid human cancers bear deregulated expression of Notch pathway genes, rather than genetic changes [23]. Notch1 is the best-studied member of the family, and it is often overexpressed in human breast, colorectal, lung, pancreas, and prostate cancers [6]. Notch1 expression is induced by hypoxia, which is a common event in growing tumors [24,25]. Moreover, Notch signaling is activated downstream of a range of pathways deregulated in cancer, such as cMyc, p53, PI3K, or RAS [26][27][28][29]. Several miRNAs can furthermore control the Notch pathway, e.g., miR34a and miR326 have been shown to target both Notch1 and Notch2 in gliomas and pancreatic cancer [30][31][32], and the Notch3 3′-UTR can be targeted by miR206 [33].
Notch activity regulates several cancer hallmarks, such as cancer cell survival, proliferation, migration, invasion, and metastasis, as it transcriptionally modulates a range of signaling pathways. For instance, the ERK pathway and cyclinD1 seem to mediate Notch1-dependent control of cell survival and proliferation [34][35][36]. Moreover, Notch regulates mTOR/Akt and NF-κB pathways, eventually inhibiting apoptosis in breast cancer cells [37][38][39]. Notch1 and its ligands were found to be overexpressed in prostate cancer compared to normal tissue [40]. Notably, prostate cancers are commonly characterized by inactivation of the tumor suppressor PTEN [41,42], which is negatively regulated by Hes1, a well-known effector of canonical Notch signaling [43]. In addition, Kwon and coworkers showed that in PTEN null mice Notch signaling was required for metastatic dissemination, through the induction of epithelial to mesenchymal transition (EMT) in a FOXC2-dependent manner [40]. Indeed, loss of E-cadherin and EMT are key events in prostate cancer metastasis [44]. EMT is characterized by increased expression of a series of transcription factors (including Snail, Slug, Twist, Zeb1, and Zeb2), and the Notch pathway induces their expression in prostate as well as in breast, pancreatic, and colon cancers [45][46][47]. Notably, in prostate cancer cells, Notch was also shown to regulate an EMT-like phenotype through the upregulation of the semaphorin receptor PlexinD1, impinging on SLUG [48]. Moreover, PlexinD1 expression correlates with Notch1 and Notch3 in different cancer types [48], and its role in metastatic tumor progression has been further demonstrated in colon cancer, melanoma, and ovarian cancer [49,50]. In apparent contradiction with these data, in endothelial cells Notch appears to negatively regulate PlexinD1 expression [51]. Thus, the Notch-PlexinD1 axis seems to exert a range of effects in diverse cells, warranting further studies to fully define its role in cancer.
Notably, in certain contexts, Notch signaling has been shown to be tumor suppressive, such as in head and neck carcinomas, and in pancreatic cancer [52,53]. In squamous carcinomas of skin and lung, about 75% of patients bear loss of function mutations in Notch1 or Notch2 [54]. In fact, Notch upregulates p21/Cip1/WAF expression in keratinocytes, which supports its tissue-specific tumor-suppressive function in the skin [55]. A similar pattern of Notch1/2/3 functional inactivation has been reported in about 40% of bladder cancers [56]. In pancreatic adenocarcinoma, acute myeloid leukemia, and angiosarcomas, the role of Notch1 remains controversial [52,57].
The functional role of the other family members, Notch2, Notch3, and Notch4 is less understood. For instance, Ortica et al. had comparatively tested the function of the four Notch receptors in mouse embryonic cells by over-expressing constitutively active forms, and verified their diversity in controlling cell proliferation and differentiation fate [58]. Notably, in colorectal cancer Notch2 levels are decreased and similar results were obtained in thyroid and ovarian cancers, suggesting a tumor suppressor function for this family member [59,60]. On the other hand, Notch3 is constitutively activated in one third of basal-like breast cancers, and a Notch3 specific antagonistic antibody, anti-N3.A4, hampered the growth of orthotopic HCC1143 breast cancer xenografts in mice [61]. In prostate cancer, Notch3 overexpression was associated with higher Gleason score and a high proliferative gene expression signature [62]. In hepatocellular cancer, both Notch1 and Notch3 levels correlated with tumor grade, invasion, and metastasis [63]. In oral squamous cell carcinomas, Notch3 signaling is activated in stromal fibroblasts, in turn eliciting tumor angiogenesis [64]. Besides NOTCH1, also NOTCH3 gene was found to be activated by mutations in T-ALL [65]; moreover, expression analysis of T-ALL cells revealed a common oncogenomic program triggered by both activated Notch oncogenes via RBPjk [65,66]. In other studies, Notch3 was instead linked to tumor suppressive activity, e.g., its overexpression in breast and melanoma cell lines was found to increase p21 and thereby inhibit cell proliferation and induce cell senescence [67]. Zhang et al. recently reported that Notch3 induces the tumor suppressor WWC1/Kibra, a regulator of the Hippo pathway, thus inhibiting EMT in breast cancer cells [68]. Recent data seem to support a pro-tumorigenic function of Notch4. For example, in triple negative breast cancer (TNBC) cell line MDA-MB-231, Notch4 overexpression induces proliferation and invasion and, conversely, its downregulation inhibits proliferation [69]. In another model of breast cancer-MCF7 cells-increased expression of Notch4 elicited EMT and invasiveness [70]. Notch4 is also upregulated in pancreatic cancer cell lines compared to non-transformed cells and its inhibition impairs viability, migration, and invasion [71]. Notch4 activity was also associated with EMT gene expression signature in melanoma cells and held responsible for increased metastasis [72].
An additional layer of complexity in the role of Notch signaling in cancer is added by the recent findings that the expression of several Notch target genes remains upregulated even after RBPjk depletion due to epigenetic changes (enrichment of H3K4me3 and H4ac marking the active promoters) [73]. Notably, there is controversy about the role of RBPjk in cancer: in fact, consistent with its transcriptional repressor function, RBPjk depletion can promote tumorigenesis [73,74]; however, RBPjk was also found highly expressed in glioblastomas, and its targeting decreased self-renewal of brain tumor-initiating cell and tumor formation [75].
Advances in Targeting of Notch Signaling by Small Molecule Inhibitors
The development of small molecules able to target signaling molecules specifically active in tumor cells has increased exponentially over the last decades. It is relatively easier to develop targeted drugs against catalytic sites, such as for oncogenic protein kinases. Instead, the identification of appropriate targeting strategies for non-enzymatic molecules is more challenging. Based on the structure of Notch receptors and their mechanism of activation, several types of inhibitors have been generated, which are able to inhibit either the binding of the ligands or the gamma secretase-dependent proteolytic cleavage of the receptor [76] (Figure 1). These inhibitors have been tested either alone or in combination, and some of them entered clinical trials with mixed success. Gamma secretase inhibitors (GSIs), which were initially developed to prevent the formation of amyloid deposits in Alzheimer's disease, have been repurposed to block other targets of the same enzymes, including the Notch family receptors that, as mentioned, are often highly expressed in cancer. Gamma secretase is in fact a complex of proteins, including APH1, PEN2, Nicastrin, and presenilin. It has more than 100 type-I membrane protein targets, including Notch ligands, ErbB4, Syndecan, and CD44 [77][78][79]. The gamma secretase complex cleaves within the intramembrane region of its protein substrates and is very promiscuous regarding the target sequence. More than 100 GSIs have been synthesized so far, with different affinity, specificity, and IC50 values, falling into 3 categories: (i) peptide isosteres, (ii) azepines, and (iii) sulfonamides [80]. GSIs are either transition state analogs of the aspartyl proteinase active site (namely, they bind competitively to the catalytic site of presenilins) or non-transition state inhibitors, which bind in a site other than the active site (i.e., at the dimerization interface of the gamma secretase complex). An early generation non-transition state analogue is DAPT (N-[N-(3,5-Difluorophenacetyl-L-alanyl)]-S-phenylglycine t-Butyl Ester) [81]. Later, other compounds were generated, with 100-fold stronger activity, such as LY411575, LY450139, RO4929097, and BMS906024 (Figure 2). RO4929097 has been shown to extend survival in an intracranial mouse glioma model [82], whereas BMS906024 is the only GSI able to efficiently inhibit all four Notch receptors, as well as APP [83]. The non-transition state analogs, like DAPT, are supposed to be more effective than the transition state inhibitors, as substrate docking into the enzyme interior can hinder inhibitor binding in the transition state. The overall efficacy of GSIs in cancer is still under debate, as several studies have been performed in diverse models, which makes it difficult to draw univocal conclusions (Figure 3). MCF7 breast cancer cells stably expressing N1-ICD underwent EMT and grew faster in a xenograft model and, conversely, treatment of SKBR3 xenografts with DAPT modestly reduced tumor growth [84,85]. The blockade of the Notch pathway by GSI-18 in mouse GBM xenografts depleted CD133-positive stem-like cells, reduced tumor growth, and significantly promoted survival [86]. These and other studies paved the way for clinical trials, in which many GSIs have been administered orally (Figure 4). The current clinical trials are focused on RO4929097, MK0752, and PF03084014. In particular, RO4929097 has been tested in phase I and II trials in patients with advanced solid tumors.
In patients with colorectal carcinoma it did not show any positive effects, but instead mild toxicity with nausea and vomiting [87]. An additional, frequent adverse effect of GSIs is diarrhea, associated with excess secretory goblet cells in the intestine, possibly because Notch inhibition causes an imbalance in cell fate differentiation [88]. Due to the lack of clinical benefit in multiple studies, its clinical use was terminated prematurely [89]. MK0752 has also been tested in phase I and II clinical trials, showing partial clinical benefit in a cohort of 103 patients with advanced solid tumors; one patient with anaplastic astrocytoma showed complete response lasting for more than one year [90]. Another shortcoming of GSIs is their lack of effectiveness upon the onset of drug-resistance mechanisms. This has been primarily reported in T-ALL, which showed lack of benefit from GSIs despite the constitutive Notch activity. GSI-resistant T-ALL cells were found to harbor mutational loss of PTEN, and thereby aberrant activation of the Akt pathway, increased glycolysis, and carbon metabolism [43]. T-ALL cell lines were also found to harbor FBW7 mutations leading to residual Notch signaling, which contributed to GSI resistance [91]. Furthermore, GSI-resistant T-ALL cells can maintain upregulated Myc expression independently of Notch, due to Brd4 activity [92].
As mentioned before, a major limitation of molecules targeting multiple Notch receptors stems from the different functions exerted by the different Notch members. They can all act as either oncogenes or tumor suppressor genes, depending on the context. Indeed, in some instances, GSI therapy was found to aggravate skin cancer [93,94]. It has also been reported that chronic treatment with Notch inhibitors leads to the formation of vascular tumors [95], consistent with the fact that Notch regulates angiogenesis, and its inhibition leads to exuberant vessel sprouting and defective maturation. Notably, only a few studies have investigated the long-term consequences of Notch inhibition. In transgenic mice undergoing progressive loss of Notch1 with age, a shorter life-span of about 10 months was observed due to widespread vascular tumors [96], such as liver hemangiomas, caused by excessive proliferation of endothelial cells. In mice, GSI therapy was associated with immunosuppression and inhibition of stem cell renewal in normal tissues [97]. Intermittent regimens of systemic Notch inhibition, i.e., multiple drug cycles separated by a drug holiday, are currently being tested in an attempt to increase efficacy and minimize toxicity of GSIs. Specific targeting of the Notch pathway is also potentially achievable by tackling the N-ICD/RBPjk/MAML ternary complex (NTC) at the nuclear level. For example, by using an innovative combination of molecular docking of small molecules with the NTC and proximity-based AlphaScreen technology, Astudillo et al. designed a small molecule inhibitor of the NTC, called Inhibitor of Mastermind recruitment-1 (IMR-1) [98]. IMR-1 selectively inhibited Notch-dependent transcriptional activation by interfering with Maml1 recruitment to the N-ICD/RBPjk complex, and impaired tumor growth in patient-derived xenograft (PDX) models.
Combinatorial Treatments with Notch Inhibitors
Due to the lack of success of GSIs as single agents, there is now growing interest in their use in association with standard cancer treatments, including hormonal, radiation, chemotherapy, or targeted inhibitors (see Figure 1). Interestingly, cancer treatment with a range of chemotherapeutic agents, such as paclitaxel, docetaxel, cisplatin, oxaliplatin, doxorubicin, and gemcitabine, independently resulted in the upregulation of the Notch pathway [99]. This clearly supports the rationale for combination therapies, since the Notch pathway may represent a survival mechanism activated to endure drug treatment. For instance, in head and neck squamous cell carcinoma (HNSCC), cisplatin-resistant tumors showed high expression of Notch1 and a good response to GSIs [100]. Similarly, in colorectal and ovarian cancers, the combination of cisplatin and GSIs was more effective than either monotherapy. In colon cancer, 5-FU and oxaliplatin induced Notch activity, which was curbed by GSI association [101]. In pancreatic cancer, in which gemcitabine chemotherapy is a standard treatment, drug-resistant tumors upregulated Notch2, Notch3, and Jag1, while the inhibition of Notch3 combined with gemcitabine induced apoptosis [102]. Notch pathway inhibitors have been shown to synergize with DNA damaging agents like doxorubicin in a variety of breast cancer cell lines [103,104]. TNBC is a good example of the use of combination therapy. TNBC and basal-like breast cancers often express particularly high levels of both EGFR and Notch. Dong and coworkers showed that the combined blockade of both pathways by Gefitinib and Compound-E is highly effective in suppressing HCC1806 tumor growth in a xenograft mouse model [105]. Moreover, Pandya et al. showed that a combination therapy with anti-Her2 and GSI prevents drug resistance and tumor recurrence of HER2-overexpressing xenografts [106]. In prostate cancer, in which anti-androgen therapy or chemotherapy is often the standard treatment, the combination of docetaxel and the Notch inhibitor PF03084014 was effective in preclinical models, resulting in reduced cancer cell EMT, prostasphere formation, and tumor microvessel density in vivo [107]. Jin and coworkers showed a synergistic effect of the Notch inhibitor MRK003 with the Akt inhibitor MK2206, mainly impacting invasion rather than cancer cell proliferation [108]. In addition, in KRAS-driven lung adenocarcinomas, the co-inhibition of DDR1 (by dasatinib) and the Notch pathway (by demcizumab) curbed tumor cell proliferation with additive therapeutic benefit [109]. Synergy of GSIs with the inhibition of Bcl-2 and Bcl-xL by ABT737 was also reported in multiple myeloma, in which the combined therapy resulted in increased activity of Bak and Bax and release of cytochrome c, with no effects on peripheral blood mononuclear cells [110]. In T-ALL, in which Notch itself is activated in about 50% of cases, there is a population of cells that is GSI-tolerant and expands even in the presence of Notch1 inhibitors. Interestingly, increased expression of Myc in more than 50% of T-ALLs is reportedly due to Notch activation [26,111]. In fact, the association of GSIs with a BET-bromodomain inhibitor (JQ1) downregulating Myc and Myc target genes showed increased efficacy in inducing apoptosis in T-ALL [107].
Inhibition of Notch Signaling by Blocking Antibodies and Decoys
As discussed above, GSIs inhibit not only Notch receptor activation but also other gamma secretase targets, which often leads to adverse effects and ambiguous conclusions regarding safety and efficacy. The development of therapeutic antibodies capable of specifically inhibiting Notch family members stands as a promising strategy to overcome this limitation. Blocking antibodies to Notch can be divided into two groups: (a) those directed against the 'negative regulatory region' (NRR) that enables ADAM-mediated cleavage, e.g., antibodies raised against Notch1-3 by phage display technology [112]; and (b) antibodies that block receptor-ligand interaction by hindering EGF repeats. Humanized antibodies against Notch1 or Notch2/3 have been generated and entered phase I and II trials. Moreover, a cross-reactive human monoclonal Notch2 and Notch3 antagonist, OMP-59R5 (Tarextumab), is effective in reducing cancer cell proliferation and the growth of breast, lung, ovarian, and pancreatic cancers [113]. OMP-59R5 was also found to be well tolerated in advanced pancreatic cancer patients and showed efficacy in combination with gemcitabine and Nab-paclitaxel [113,114]. Notably, the use of antibodies selectively targeting Notch1 and sparing Notch2 has been shown to prevent major adverse and toxic effects related to pan-Notch inhibition [115]. Notch ligands can also be targeted by blocking molecules. For instance, anti-Dll4 antibodies have been tested in pre-clinical and clinical trials. The humanized anti-Dll4 mAb OMP-21M18 caused inhibition of tumor growth in PDX models and is currently being tested in several clinical trials, either as a single agent or in combination with other drugs [116]. A neutralizing anti-DLL4 humanized phage antibody, YW152F, was shown to both specifically block Notch-DLL4 signaling and hamper the growth of MDA-MB-435 cells implanted in the mouse mammary fat pad [117]. Combination of DLL4 blockade by either Dll4-Fc or anti-DLL4 antibodies (REGN1035, REGN421) with the VEGF-trap Aflibercept showed stronger efficacy in reducing tumor burden compared with monotherapy, by blocking angiogenesis and eliciting cancer cell apoptosis in murine tumors [118]. Soluble decoys blocking Notch signaling have also been tested. For instance, Funahashi et al. generated a soluble form of the Notch1 ectodomain, which competes with Dll1 and Jag1 binding to the receptor. This molecule was particularly effective in inhibiting VEGF-induced angiogenesis in normal skin and neo-angiogenesis in tumor models [119]. Kuramoto and coworkers generated a soluble form of Dll4, called Dll4-Fc, which suppressed liver metastasis of small cell lung cancer (SCLC) cells expressing high levels of Dll4 [120]. Intriguingly, the non-canonical Notch ligand DLK1 can also compete with canonical ligands of the DSL type to block the canonical signaling pathway [121]. The Notch receptor is a membrane-tethered protein, and physiologically no soluble forms have been reported to compete with its activity. The extracellular EGF-like 11-13 domains of the Notch receptor have been artificially fused to the IgG1 Fc domain to form a soluble protein capable of trapping Notch ligands. Using this approach, Klose et al. showed that Notch1-EGF11-13-Fc, which showed preferential binding to Jag1, inhibited in vitro tube formation and tip cell formation in the retinal angiogenesis assay [122]. Recently, Kangsamaksin et al. generated Notch decoys specific for the unique regions of ligand-receptor interaction [123].
They showed that the N1(10-24) decoy effectively inhibited angiogenic sprouting in the mouse retina, as well as tumor growth, by specifically blocking Jag1/Jag2-mediated Notch signaling. In contrast, the N1(1-13) decoy led to vessel hypersprouting in the same models by interfering with Dll1/Dll4-mediated Notch signaling. An additional approach, less explored so far, is to prevent the assembly of the Notch transcription complex, composed of N-ICD, RBPjk, and MAML. For instance, Moellering et al. generated 16-amino-acid-long peptides, based on the MAML sequence motif required for interaction with N-ICD and RBPjk, called 'stapled α-helical peptides derived from MAML1' (SAHM). Indeed, SAHM1 prevented the assembly of the active transcription complex by binding to N-ICD and RBPjk, thereby inhibiting the recruitment of the MAML1 co-activator [124]. SAHM1 peptide treatment suppressed Notch target genes in the T-ALL cell line KOPT-K1 and concomitantly halted proliferation and leukemia progression. Clearly, molecules interfering with the Notch-activated transcriptional complex may be expected to recapitulate pan-Notch signaling inhibition, including some of the reported adverse effects. Moreover, the therapeutic use of peptides in human patients poses pharmacokinetic issues to be addressed.
Recent Trends in Notch Targeting by Oligonucleotide-Based Methods
Antisense oligonucleotides are short stretches of chemically synthesized DNA or RNA able to target genes of interest [125]. Due to their high sequence specificity, they are expected to show fewer off-target effects compared with other therapeutic approaches. As nucleic acids do not easily enter cells, they are generally chemically modified to increase their capacity to penetrate cell membranes. Nakazawa et al. reported that Notch signaling regulates the abnormal proliferation and differentiation of synoviocytes in rheumatoid arthritis. The proliferation of synoviocytes could be blocked by both Notch1 antisense oligonucleotides and the gamma secretase inhibitor MW167 [126]. Moreover, in the SCLC cell line NCI-H82, the use of Notch1 phosphorothioate antisense oligonucleotides reduced the expression of the Notch target Hes1 [127]. To modulate angiogenesis, Zimrin and coworkers generated an antisense Jagged oligomer, which potentiated FGF2-induced collagen invasion by endothelial cells [128]. Very few studies have exploited these tools in vivo in preclinical models, with one limitation being the poor efficiency of intracellular delivery upon systemic administration. Notably, the systemic expression of Notch1 antisense oligonucleotides (NAS) under the mouse mammary tumor virus long terminal repeat promoter was found to significantly reduce Notch activity in tissues, but also caused several Notch-associated defects in learning and memory [129,130]. Thus, an accurate titration of the systemic delivery of NAS is warranted with regard to their application for cancer therapy.
Conclusions and Future Perspectives
The Notch transcriptional regulator lies at the intersection of multiple signaling pathways, which control cell stemness, differentiation, survival, proliferation, migration, and invasion during development and disease. In most instances, the inhibition of Notch signaling has been shown to block cancer cell growth and angiogenesis. In fact, the deregulated expression of Notch pathway components in human cancers supports the therapeutic use of GSIs. Recent studies tend to indicate that these drugs may be more effective in combination with chemotherapy or other targeted therapies, compared with their use as single agents. The modest clinical success reached by GSIs so far could be attributed to the broad range of gamma secretase targets, resulting in toxicity and inadequate drug efficacy, thereby warranting the development of selective inhibitors directed against specific Notch pathway components. Moreover, while multiple Notch receptors are co-expressed in tumors, they do not mediate the same effects, further challenging the therapeutic efficacy of pan-Notch inhibitors. Thus, more studies are needed to understand the specific functions of the different Notch receptors in cancer, as they may form distinct transcription complexes, and to develop specific drugs targeted to individual family members. | 2018-03-25T20:31:33.342Z | 2018-02-01T00:00:00.000 | {
"year": 2018,
"sha1": "3f43ca1ef7e526b1246d2a18d3b67a7b809c2483",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/23/2/431/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3f43ca1ef7e526b1246d2a18d3b67a7b809c2483",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
11348270 | pes2o/s2orc | v3-fos-license | Effect of hypoxia on integrin-mediated adhesion of endothelial progenitor cells
Homing of endothelial progenitor cells (EPCs) is crucial for neoangiogenesis and might be negatively affected by hypoxia. We investigated the influence of hypoxia on fibronectin-binding integrins in migration and cell-matrix adhesion. AMP-activated kinase (AMPK) and integrin-linked kinase (ILK) were examined as possible effectors of hypoxia. Human EPCs were expanded on fibronectin (FN) and integrin expression was profiled by flow cytometry. Cell-matrix adhesion and migration assays on FN were performed to examine the influence of hypoxia and AMPK activation. Regulation of AMPK and ILK was shown by Western blot analysis. We demonstrate the presence of integrins β1, β2 and α5 on EPCs. Adhesion to FN is reduced by blocking β1 and α5 (49% and 2% of control, P < 0.05), whereas α4 blockade has no effect. Corresponding effects were shown for migration. Hypoxia and AMPK activation decrease adhesion on FN. Although total AMPK expression remains unchanged, phospho-AMPK increases eightfold. The EPCs require α5 for adhesion on FN. Hypoxia and AMPK activation decrease adhesion. As α5 is the major adhesive factor for EPCs on FN, this suggests a link between AMPK and α5 integrins. We found novel evidence for a connection between hypoxia, AMPK activity and integrin activity. This might affect the fate of EPCs in ischaemic tissue.
Introduction
The increasing evidence of postnatal neovascularization has led to new therapeutic concepts to restore the function of damaged organs by infusion of ex vivo expanded circulating endothelial progenitor cells (EPCs) [1]. Human EPCs are characterized by expression of endothelium-specific proteins and by their ability to react similarly to endothelial cells [2]. They have been shown to home preferentially to areas of ischaemia and to increase vasculogenesis [3]. The cells exit the blood flow and migrate through the vessel wall by a complex series of adhesion processes to the vascular endothelium and extracellular matrix [4].
Firm adhesion of EPCs to the vascular endothelium and the subsequent invasion of the vessel wall are mediated by β2-integrins, while rolling and light adhesion are mainly mediated by selectins. The role of β1-integrins in this process remains unclear [4,5].
Nevertheless, the ex vivo expansion of EPCs on fibronectin (FN) as described by Asahara et al. requires the presence of β1-integrins, with the subunits α4 and α5 mediating adhesion to fibronectin [2]. Furthermore, expression of the β1 as well as the α4 and α5 integrin subunits has been demonstrated at the RNA level in EPCs [4]. During the process of neovascularization, EPCs specifically target hypoxic tissues. This is also known from leucocytes, which enter hypoxic areas as a result of the inflammatory response and its consecutive high metabolism. The efficacy of EPCs to induce neovascularization is increased after hypoxic preconditioning. This might be due to superior homing capability after accumulation of β2-integrins on the cell surface [6].
The AMP-activated kinase (AMPK) plays a key role in energy metabolism of cells under hypoxic conditions. Its activation results in a metabolic switch from cellular energy storage to energy release under conditions of limited ATP supply. Hypoxia as low as 0.3-5% oxygen does not decrease bioenergetics or induce cell death, but results in mitochondrial release of reactive oxygen species. This activates LKB1 and finally AMPK by phosphorylation at Thr172 of the α-subunit [7].
In this study, the role of FN-specific integrins for adhesion and migration of EPCs as well as the influence of hypoxia was examined. AMPK and ILK were studied as possible effectors of hypoxia. The findings of this study might clarify the behaviour of EPCs in ischaemic tissue.
Additionally, flow cytometry showed expression of CD31, CD34 and CD18 as shown in supplementary figures (online supplement, Figs 1 and 2).
Cell-matrix adhesion
The 96-well plates (Costar Corning, Amsterdam, The Netherlands) were coated overnight at 4°C with 10 µg/ml recombinant fibronectin (Sigma-Aldrich) in phosphate buffered saline (PBS). Wells were washed with PBS once and 50 µl adhesion buffer was added to prevent drying. Ex vivo expanded EPCs were stained with 5 µl CellTracker™ Green (CMFDA; Molecular Probes/Invitrogen, Life Technologies, Darmstadt, Germany) per 3 ml culture medium for 5 min. and detached with trypsin. Trypsin was blocked, cells washed and resuspended in adhesion buffer (150 mM NaCl, 20 mM HEPES, 2 mM MgCl2, 0.05% bovine serum albumin (fraction V), 5% glucose, pH 7.42). Experiments were performed as indicated in triplicate with 10^5 cells per well. For inhibition experiments, EPCs were pre-incubated with antibodies for 15 min. at 4°C. Anti-α5 (CBL497, clone SAM-1), anti-β2 (CBL158, clone MEM-48), anti-β1 (CBL481, clone TDM29), anti-α4 (MAB1954Z, clone P4G9), anti-α5 (MAB1956Z, clone P1D6), stimulating anti-β1 (MAB1951Z, clone P4G11) and blocking anti-β1 (MAB2253Z, clone 6S6) were obtained from Chemicon International. Goat IgG (R&D Systems, Minneapolis, MN, USA) served as control. Alternatively, experiments with cyclic RGD peptide (Sigma-Aldrich) at 50 µM were performed. Pre-incubation was performed similarly to the antibody treatment. After pre-incubation, the plate was centrifuged for 3 min. at 300 r.p.m. (RCF 17 g) to achieve simultaneous contact of the cells with the plate. After 20 min. of incubation at 37°C, non-adherent cells were removed and the plates were washed twice with adhesion buffer and fixed with 4% paraformaldehyde. Adherent cells were quantified by fluorescence microscopy (magnification 20×, Axiovert 100; Carl Zeiss, Oberkochen, Germany). Cells were counted in five randomly selected fields covering different areas of each well. Images were digitized and the software package ScionPro with custom macros was used for semi-automated counting. Data were normalized and presented as percentage of control.
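As a concrete illustration of the quantification just described, the following minimal Python sketch averages field counts per well and expresses each condition as a percentage of the control mean. It is not the authors' actual analysis pipeline (which used ScionPro with custom macros), and all counts in it are invented.

```python
# Minimal sketch of percent-of-control normalization for the adhesion assay:
# five field counts per well, three wells (triplicate) per condition.
from statistics import mean, stdev

def percent_of_control(condition_wells, control_wells):
    """Average the field counts of each well, then express each condition
    well as a percentage of the mean control count."""
    control_mean = mean(mean(fields) for fields in control_wells)
    normalized = [100.0 * mean(fields) / control_mean
                  for fields in condition_wells]
    return mean(normalized), stdev(normalized)

# Hypothetical raw counts (adherent cells per microscope field)
control = [[52, 48, 50, 55, 47], [49, 51, 53, 46, 50], [54, 50, 48, 52, 49]]
anti_a5 = [[1, 2, 0, 1, 2], [2, 1, 1, 0, 1], [1, 1, 2, 1, 0]]

m, s = percent_of_control(anti_a5, control)
print(f"anti-alpha5: {m:.1f} +/- {s:.1f} % of control")
```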
Migration assay
Cells were incubated at 37°C in 5% CO2 for 12 hrs. For inhibition experiments, EPCs were pre-incubated for 15 min. at 4°C with 5 µl of the indicated antibodies per 3.0 × 10^5 cells. The antibodies were free of sodium azide to reduce toxicity during the assay. Cells remaining on the upper surface of the filters were mechanically removed, and migrated cells at the lower surface were fixed with 4% formaldehyde and counted in five fields by using a fluorescence microscope (Axiovert 100; Carl Zeiss).
Hypoxic conditioning
According to the requirements of the experiment, either adherent (Western blots) or resuspended cells (cell-matrix adhesion, migration) were incubated in a hypoxic environment. An electronically regulated incubator (Model C42; Labotect, Göttingen, Germany) was used to control temperature and the concentrations of oxygen and carbon dioxide during the experiments. Controls were kept in regular incubators (37°C, 5% carbon dioxide, oxygen as indicated, nitrogen ad 100%). The incubator was calibrated prior to each experiment.
Statistical analysis
Data were calculated using Microsoft™ Excel for Mac and statistical calculations were performed with GraphPad™ Prism (GraphPad Software Inc, La Jolla, CA, USA). Data are presented as mean ± S.D. Comparisons between groups were calculated using ANOVA with Bonferroni correction for multiple testing. For non-parametric distributions, appropriate non-parametric tests were used.
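To illustrate the reported strategy, here is a hedged Python sketch of a one-way ANOVA followed by Bonferroni-corrected pairwise comparisons. The authors used GraphPad Prism; the SciPy-based code and the group values below are assumptions made only to show the procedure, not the study's data.

```python
# Illustrative one-way ANOVA with Bonferroni-corrected follow-up tests,
# mirroring the analysis strategy described above. Values are hypothetical
# percent-of-control measurements (n = 8 per group).
from scipy import stats

groups = {
    "control": [100, 98, 103, 99, 101, 97, 102, 100],
    "hypoxia": [58, 49, 62, 55, 51, 60, 47, 57],
    "AICAR":   [44, 39, 48, 41, 36, 45, 40, 43],
}

f_stat, p_global = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_global:.4g}")

# Pairwise comparisons versus control; Bonferroni correction multiplies
# each p-value by the number of comparisons performed
comparisons = [name for name in groups if name != "control"]
for name in comparisons:
    t, p = stats.ttest_ind(groups["control"], groups[name])
    p_adj = min(1.0, p * len(comparisons))
    print(f"{name} vs control: Bonferroni-corrected p = {p_adj:.4g}")
```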
Integrin profile of EPCs
As revealed by flow cytometry, EPCs express β1 and α5 but not α4 (Fig. 1). In contrast, MNCs before and after non-cell-specific cultivation show unaltered expression of both α4 and α5. Both cell types were positive for β2 (see online supplement). Preselected cells expressing CD14 showed a similar expression of integrin subunits (Suppl. Fig. 6).
Functionality of expressed integrins
The EPCs were pretreated with integrin-specific antibodies and data were normalized to control. As expected, there is no significant loss of adhesion to fibronectin due to blockade of β2 (81.8 ± 9.3% of control). Inhibition of β1 leads to partial loss of adhesion and inhibition of α5 to a complete ablation of adhesive capacity to fibronectin (49.4 ± 15.8% and 2.1 ± 1.3%, respectively). The use of different antibody clones leads to the same results (clone TDM29 and clone 6S6 for β1, clone SAM-1 and clone P1D6 for α5), although application of a stimulating antibody targeting β1 leads to a slight, albeit statistically not significant, increase of adhesion (111.2 ± 18.7%). α4-blockade does not alter adhesion (106 ± 26%). Consistent with the expression profile seen in flow cytometry, only inhibition of the expressed fibronectin-specific integrins β1 and α5 leads to a significant loss of adhesion in this model. The effect of α5-blockade on EPCs was much stronger than that of β1-blockade. In contrast, blockade of each of the integrin subunits α4, α5 and β1 diminished adhesion to fibronectin in peripheral blood MNCs. Using a cyclic RGD peptide, adhesion to fibronectin decreased by 50% in endothelial progenitor cells (see online supplement, Figs 4 and 5).
To determine the role of the expressed integrins in the dynamic process of migration, EPCs migrated in a fibronectin-coated modified Boyden chamber along an SDF-1α gradient. Migration is markedly reduced when β1 or α5 are blocked (45.4 ± 22.1% and 57.5 ± 9.1% of control, respectively). Blockade of α4 showed no effect on migration capacity (98.4 ± 16.9% of control, Fig. 2).
Adhesion of EPCs under hypoxic conditions
We further investigated the effects of hypoxia on integrin activity. Ex vivo expanded human EPCs were subjected to hypoxia. The static adhesion assay was performed as described above. With increasing duration of hypoxia, EPCs are less able to adhere to fibronectin in this experiment. Statistical significance is reached at 1.5 hrs of 1% oxygen (Fig. 3). After 2 hrs, adhesion capacity is reduced to 55.1 ± 22.3% of control and reaches a minimum of 39.9 ± 22.5% of control after 4 hrs. A further increase of time under hypoxia, up to 12 hrs, had no additional effect. The cells show no signs of necrosis or apoptosis as determined by the Roche® cell death detection kit (see online supplement, Fig. 3).
As the onset of this effect is too early to be caused by altered protein translation, we focused on protein interactions. AMP-activated kinase is a sensor molecule for intracellular energy content, expressed as the ADP/ATP and AMP/ATP ratios. It is stimulated by hypoxia and by the highly specific activator aminoimidazole carboxamide ribonucleotide (AICAR). In vitro expanded EPCs were treated with increasing doses of AICAR for 4 hrs and then tested for their adhesion capacity to fibronectin. Even low doses of AICAR result in diminished integrin activity, from 42.8 ± 23.7% of control at 0.5 mM AICAR to 35.7 ± 12.2% at 5 mM AICAR (Fig. 3).
Activation of AMPK in EPCs under hypoxic conditions
Phosphorylation at Thr172 has been shown in other cell types to activate AMPK during hypoxia. In EPCs, the level of phosphorylation increased eightfold, whereas total expression remained unchanged (Fig. 4). As demonstrated by a GSK3 assay, ILK activity was not altered by hypoxia (Fig. 4).
Discussion
In this study we were able to demonstrate the expression of fibronectin-specific integrins on EPCs and the markedly decreased expression of α4 compared with MNCs. We further showed the functional relevance of this finding for adhesion and migration on FN. Both hypoxic and pharmacological AMPK activation led to diminished activity of these integrins. We finally demonstrated the phosphorylation, and therefore activation, of AMPK during hypoxia. Our results add to present knowledge about the behaviour of EPCs under hypoxic conditions as found in tissues targeted by cell-based therapy.
In this study we were able to verify the expression of α5- and β1-integrins by flow cytometry. Although MNCs also expressed α4-integrin, this could not be seen after endothelium-specific culture of the cells. Despite the previously described heterogeneity of ex vivo expanded early EPCs [9], our finding represents a common denominator of these subpopulations similar to their endothelial surface markers. Chavakis et al. compared the mRNA expression of EPCs to HUVECs by using a microarray. In that investigation, both β2 and CD11b/c were significantly increased in EPCs compared to HUVECs at the mRNA level, whereas β1 and α5 were slightly decreased without statistical significance. α4-integrin was slightly increased without reaching statistical significance.
Figure 3: Human EPCs were either submitted to hypoxia (black) or stimulated with AICAR (white). Integrin activity was tested by adhesion to fibronectin. Increase of AMPK activity by both hypoxia and AICAR stimulation decreases integrin activity (n = 8, data presented as percent of control, mean ± S.D., *P < 0.05 versus control).
We observed high expression of β1 and α5, whereas α4 was markedly decreased during expansion. Both α4β1 and α5β1 integrins have a high affinity for fibronectin [10], which is an important regulator of various cellular processes including survival, differentiation, growth and migration. It is deposited actively in the ECM by cells and circulates freely in the plasma. Recent data suggested an important role in flow-induced vascular remodelling by influencing the invasion of leucocytes and the proliferation of vascular smooth muscle cells [11]. Most importantly, it is used in immobilized form for the in vitro expansion of EPCs [8]. Therefore, we investigated the functional relevance of the altered integrin expression. Our findings emphasize the importance of α5β1 integrins for EPCs to adhere to the provided fibronectin matrix during in vitro expansion. Interestingly, inhibition of α5 shows a markedly more pronounced effect on the cells compared with blockade of β1. Furthermore, the possibility of an insufficient affinity of the antibody was addressed by comparing different clones of blocking antibodies, without any effect on this finding. Hence, one can speculate about the signalling properties of integrins and the effects of such blockade on their signalling pathways. These pathways have been subject to investigation in the case of β1, but data on α5 remain scarce [12,13]. The cell culture protocol used has been shown to produce subpopulations of EPCs, probably because of the short time for differentiation. The phenotypic characterization with both functional assays (diLDL-uptake) and flow cytometry for endothelial surface markers showed a high content of endothelial-type cells (see online supplement), in concordance with previous literature [9]. The integrin profiles were found irrespective of this heterogeneity, but further analysis of subpopulations may help our understanding of cell-based tissue regeneration.
Migration comprises a series of complex actions of the cell. It depends on a coordinated sequence of adhesion and release of molecules on the cell surface as the cell moves along a chemotactic gradient. This process, in which integrins and selectins are important effectors, is only incompletely understood [14]. In our experiments with EPCs we demonstrated the functional relevance of β1-integrins for migration on a fibronectin matrix. The inhibition of both β1 and α5, but not α4, resulted in decreased adhesion and migration capacity of EPCs. This finding matches the expression profile.
Akita and coworkers found an increased efficacy of EPCs regarding vasculogenesis after hypoxic preconditioning. This was mainly due to an accumulation of β2-integrins [6]. Kong et al. investigated the effects of hypoxia on the integrin expression of leucocytes. They reported an up-regulation of β2 but not β1 under hypoxic conditions. Additionally, they found an increased β2-integrin-dependent increase of adhesion to endothelial cells after hypoxia [15]. Taking into account the importance of β2-integrins for the homing of EPCs [4], this mechanism explains the effect of hypoxic preconditioning of EPCs. The effect of hypoxia on β1-integrins remained unclear. In this study, we subjected EPCs to hypoxic conditions and demonstrated a decreased adhesion of EPCs on fibronectin. As the adhesion of these cells to fibronectin was strictly dependent on α5β1 integrins, we deduced an influence of hypoxia on the expression or function of these integrins. Consistent with the aforementioned literature, no increase of β1 or α5 could be detected after hypoxia (Suppl. Fig. 7A-C).
The rapid onset of the effect leads to the hypothesis of protein modifications due to hypoxia. Hypoxia activates AMP-activated kinase (AMPK) in EPCs, which is a sensor molecule for metabolic stress and energy level [7]. Recent data suggest a role of mitochondrial ROS release during hypoxia as the main activator of AMPK in this setting [7]. Hypoxia leads to phosphorylation of AMPKα at Thr172 and enhances its enzymatic activity. This influences an extensive number of pathways [16-19]. Aminoimidazole carboxamide ribonucleotide (AICAR) is a specific activator of AMPK. In our experiments we demonstrated the same dose-dependent reduction of adhesion by AICAR as was seen under hypoxia. As described before, we observed phosphorylation of AMPK at Thr172 during hypoxia. Taking these findings into account, we conclude that upon activation AMPK affects cell-matrix adhesion directly or indirectly by modification of either α5 or β1.
Several phosphorylation sites on the cytosolic domains are believed to control the conformation and alignment of the branches of the α- and β-subunits, causing altered adhesion [10]. Neither the exact impact of phosphorylation at these sites nor the phosphorylating enzymes have been sufficiently examined in this context. Our study contributes to knowledge of inside-out signalling of integrins. We have previously reported the down-regulation of the active conformation of β1 in response to ex vivo deletion of ILK in endothelial cells. The mechanism led to apoptosis of the cells and was independent of Akt [20]. Our experiments show neither decreased ILK expression in response to hypoxia nor decreased ILK activity, in contrast to previous reports [21] (Fig. 4).
Increasing intensity of both hypoxia and AICAR treatment leads to a marked reduction of adhesion. However, this effect did not abolish adhesion capacity entirely. Therefore, one might speculate whether inactivation of β1 or α5 might be the mechanism of the hypoxia effect and whether activation of AMPK is causative for this.
As endothelial nitric oxide synthase is activated by AMPK [22,23], we carried out adhesion experiments using nitric oxide donors and found no impact on adhesion. Therefore, not only eNOS but also most heme-group-dependent enzymes appeared to be unlikely targets.
In summary, we demonstrated the expression of α5β1-integrin on ex vivo expanded EPCs, in contrast to α4β1 and α5β1 on MNCs. We were able to demonstrate the importance of α5 for adhesion to fibronectin as well as the influence of hypoxia on the functional capacity of α5β1-integrin. We then confirmed the activation of AMPK by phosphorylation of AMPKα-Thr172 in response to hypoxia in EPCs. Finally, data could be presented implicating a causative relationship between AMPK and the function of α5β1-integrin. As previously described, α5β1-integrins might not be important for the homing of EPCs, as their adhesion to endothelium is not affected. However, binding of fibronectin to α5β1-integrins might play a role in cell growth and differentiation of EPCs during tissue repair. The influence of hypoxia on α5β1-integrins and its downstream targets LKB1, mTOR, eNOS, KLF and many more remains a subject for further investigation.
Conflict of interest
The authors confirm that there are no conflicts of interest.
Supporting information
Additional Supporting Information may be found in the online version of this article:
Figures S1 and S2 (caption fragment): … and CD34-APC (B&D) (B) and co-staining of CD31-FITC and CD18-APC (B&D) (C). As described previously, EPCs highly express CD31, CD34 and CD18 (β2-integrin subunit) after in vitro expansion.
Figure S3
Cell death and apoptosis. Two experiments were performed to detect cell death during hypoxia at 1% oxygen for 8 hrs. The assay was performed following vendor instructions (Roche, Roche Diagnostics Deutschland GmbH, Mannheim, Germany).
Figure S4
Cell-matrix adhesion to fibronectin. Peripheral blood mononuclear cells (MNCs) and in vitro expanded endothelial progenitor cells (EPCs) from the same individuals were compared. Integrin subunits were selectively blocked by pre-incubation with antibodies. Although adhesion is blocked by antibodies targeting α5 and β1, only MNCs are blocked by the anti-α4 antibody.
Figure S5
Cell-matrix adhesion to fibronectin. EPCs were pre-incubated with 50 µM of cyclic RGD peptide (Sigma-Aldrich), an established inhibitor of αVβ3, αVβ5, α4β1 and α5β1. EPCs show a decreased adhesion capacity to the fibronectin matrix when RGD-recognizing integrins are blocked.
Figure S6
Flow cytometry for integrin subunits. Peripheral blood mononuclear cells were isolated by density gradient. The cells were enriched by CD14 labelling using magnetic beads. After 4 days of endothelium-specific culture, flow cytometry was performed (see methods). CD14-enriched in vitro expanded cells demonstrated the same integrin profile as endothelial progenitor cells obtained with the Asahara protocol (Fig. 1).
Figure S7
Flow cytometry for integrin subunits. Effect of hypoxia on integrin expression of EPCs. Normoxia is shown in blue, 18 hrs of hypoxia at 1% oxygen is shown in brown. No differences in surface expression of integrin subunits could be detected. A: IgG control, B: β1-integrin subunit, C: α5-integrin subunit.
Please note: Wiley-Blackwell are not responsible for the content or functionality of any supporting materials supplied by the authors. Any queries (other than missing material) should be directed to the corresponding author for the article. | 2016-05-04T20:20:58.661Z | 2012-09-26T00:00:00.000 | {
"year": 2012,
"sha1": "4b870b0f9b3f60ade45df522887648d6c095306a",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/j.1582-4934.2012.01553.x",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4b870b0f9b3f60ade45df522887648d6c095306a",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
221673004 | pes2o/s2orc | v3-fos-license | Data protection and biomedical research in Switzerland: setting the record straight
Ensuring the protection of privacy and compliance with data protection rules have become central issues for researchers active in the biomedical field. Data protection law is often perceived as very complex and difficult to interpret, which can hinder the effective planning and implementation of new research projects. Indeed, the sophisticated legal architecture that governs data processing activities in general and biomedical research in particular might feel overwhelming for both legal practitioners and researchers.
Introduction
In the last few years, concerns about the protection of personal data have become an increasingly important subject of discussion in biomedical research. Although it could be argued that data in general, and personal data in particular, have always been a central component of research, it is only recently that discussions about the appropriate data processing standards in this field have intensified. Arguably, this could be due to two intertwined factors, one related to the research world and one to legal developments. On the one hand, due to the progressive digitalisation of healthcare, clinics, laboratories and other medical research institutions have become data-driven environments, where the processing of large amounts of data has grown exponentially. Fuelled by innovative projects in fields like genomics (e.g., the human genome project [1]), neuroscience (e.g., the human brain project [2]) and by the development of precision medicine [3], the urge to accumulate vast amounts of personal data of different types has skyrocketed. This has been further intensified by the open science and open data movements in their different forms [4]. On the other hand, the law is taking an active interest in the regulation of data processing across all industries and particularly for research purposes. In the recent General Data Protection Regulation (GDPR) [5] by the European Union, for example, it is reasserted that research undoubtedly falls under the scope of data protection law, and the law creates a specific "research exemption" for the processing of data for research purposes, especially in the case of secondary processing [6] (see also the section "The relevance of consent for data processing in research"). In preamble 159,¹ it is firmly asserted that: "Where personal data are processed for scientific research purposes, this Regulation should also apply […]. For the purposes of this Regulation, the processing of personal data for scientific research purposes should be interpreted in a broad manner including for example technological development and demonstration, fundamental research, applied research and privately funded research." The increasing importance of considering data protection aspects in biomedical research also holds true for Switzerland, where several projects have been put in place to facilitate the legal and ethical use of data for research. For example, fostering more comprehensive, coordinated and efficient processing of data in healthcare was one of the main objectives of the National Research Program 74 launched in 2015 by the Swiss National Science Foundation [7]. In the same spirit, the Swiss Personalised Health Network (SPHN) was recently started as a nationwide initiative with the specific mandate to leverage the potential of health-related data [8] with respect to the research sector [9]. Similarly, in 2016, the Swiss Biobanking Platform was initiated to facilitate the harmonisation of biobanks and their research work with biological material and personal data [10]. All these initiatives are designed with particular attention paid to the legal ramifications of data protection, especially in reference to Switzerland's specific legislation for processing personal data in the research sector (the Human Research Act - HRA [11], see below).
Issues related to the legal ramifications of data protection for scientific research are also likely to remain a central concern for the scientific community in the future, since a revised version of the Federal Act on Data Protection (FADP [12]) is currently being discussed [13] (for the relationship between the FADP and the HRA, see below).
Within this context of biomedical research and data protection law, the latter is often perceived as a potential hindrance from the perspective of researchers. In international reports on the status quo of the health-data framework of Switzerland and other developed countries, it is often referred to as a set of "legal barriers" (e.g., [14]). Indeed, even in interviews with national stakeholders conducted for our ongoing research on the health-data framework in Switzerland² [15], a common complaint from researchers was that navigating data protection rules is demanding. Prima facie, such an observation appears to have some factual basis. In Switzerland alone there are 26 different data protection regulations (the FADP and 25 cantonal data protection laws - the cantons of Jura and Neuchatel have a common data protection bill [16]), a law on biomedical research, several other sectorial regulations containing norms about personal data processing, and even additional rules related to data processing in the criminal code (see below). It is understandable that researchers in the biomedical field might feel overwhelmed by such a complex regulatory architecture. In fact, even for legal experts, the coordination of data protection rules and research poses many uncertainties [17]. In this respect, addressing difficulties concerning how to combine the potential of data-rich research projects with adequate privacy protections for participants (data subjects) necessitates open dialogue between the research and the legal fields.
The objective of this article is to offer an overview of the current debate in the legal field around three nodes of data protection law that concern biomedical research. It provides a critical review, according to the classification by Grant and Booth [18], since it aims to go beyond mere description of the reviewed literature and case law and includes a certain degree of conceptual innovation. Given space constraints, our focus is on three nodes that are considered of primary importance in the literature. Other relevant issues in the legal debate (e.g., the concepts of purpose limitation or data minimisation) are only indirectly addressed insofar as it is relevant to the other topics of the review. We start by tackling the meaning of personal data, since this is the primary criterion that determines whether any data protection rules apply -including in biomedical research. Then, we discuss specific data-protection rules concerning the processing of personal data for research purposes. Finally, we turn to the topic of consent and clarify its role in data processing in general and data processing for research in particular. This review draws mainly on legal literature and legal sources (both judicial decisions and legal texts), but our intent is to address the medical and research community. Moreover, although the focus is on Switzerland and its legal framework, this review is also of interest for a non-Swiss readership, since the three nodes under discussion are central to the relationship between data protection and research across borders. To help link the content of this review with the legal texts, we provide a conversion table (table 1) of the legal terminology discussed, to facilitate reference to original legislative acts not written in English.
Three "nodes" at the crossroad between biomedical research and data protection law
The meaning of personal data
Data protection law is not relevant to the processing of data in general; rather, it applies specifically to the processing of personal data. This is a common trait of virtually every piece of legislation on data protection. In Switzerland, for example, this is clearly established by the FADP (art. 3.a [12]), the HRA (art. 2 para. 1.e [11]) and most of the cantonal data protection regulations.³ For the field of biomedical research, this implies that data protection rules apply only if researchers are using personal data. Biomedical research with non-personal (or anonymised, see below) data falls outside the scope of data protection rules (for more details, see [19] p. 109) and thus does not require, amongst other things, approval from ethics committees. The staggering difference in regulatory regime between personal and non-personal data clearly raises the question of how to distinguish between these two categories.
In the legal literature, the exact meaning of what constitutes personal data is extensively debated and the exact borders of this category are highly contested [17], especially after recent developments in the field of data science [20]. In regulations, personal data are usually defined as information relating to an identified or identifiable person. For example, the FADP states that personal data are "all information relating to an identified or identifiable person" (art. 3.a [12]). Virtually every other cantonal data protection law contains a similar definition, and even at the EU level the GDPR (art. 4 (1) [5]) uses very similar words, although it refers only to natural persons. Along the same line, the HRA defines (health-related) personal data as "information concerning the health or disease of a specific or identifiable person" (art. 3.f [11]). All these definitions are relatively open-ended and leave room for interpretation by legal doctrine and by courts. In practice, to understand within a specific biomedical research project whether the data being processed are personal, two elements⁴ are of primary importance. First, it must be determined whether data are relating to a person. Secondly, it must be determined whether this person is identified or identifiable. If both conditions are satisfied, data must be considered personal and data protection rules will apply.
Table 1: Cross-language comparison for Switzerland of the legal terminology discussed as part of the first node. (Column: term in English discussed in this review.)
In the context of biomedical research, it will often be clear that data relate to a person, since most of the data used are about people. However, in order to be personal, data must not only be relating to a person, but the person must also be identified or identifiable. With respect to this requirement, it is difficult from a legal point of view to give clear-cut answers. Cases where the data relate to an identified person, i.e., when the identity of the person is evident from the data ( [22] p. 34), are easy to recognise. If, for example, the database of a research project contains the names or the addresses of the people whose data are processed, such data will obviously relate to an identified person and thus be personal data ( [19], p. 516). Cases where data relate to an identifiable person, i.e., when the identity of the person does not emerge directly from the data(set) itself but can be derived from the context or the combination with other data, are more difficult ( [22] p. 34). Whether such data can still be considered personal data depends on several factors, since the legal concept of identifiability -at least in Switzerland -is relative and not absolute ( [22] p. 34). Traditionally, legal doctrine has argued that both an objective (the existence of means to re-identify) and a subjective factor (a sufficient interest by the data-processor to re-identify) need to be present in order for data to be considered identifiable [23]. The relativity of the concept of identifiability and its dependency on context and intentions of the data-processors are also confirmed by case law. In a recent decision of the Swiss Federal Supreme Court [24], for example, the judges ruled that images of people on Google Street View are identifiable (and thus personal data) since the identity of the person can often be derived by the context (e.g. dress, location, etc.), notably this applies even if faces are blurred. In another decision by the same court [25], it was established that IP-addresses are data relating to an identifiable person, if the data-processor in the specific case has the concrete possibility to access additional information that can lead to (re-)identification of the person using the IP-address. Therefore, the relative nature of the concept of identifiability entails that even the same data that might be considered non-personal in a certain context may be deemed personal if the circumstances change.
In the biomedical research context, anonymisation represents the procedure through which data cease to be identifiable and thus personal. Indeed, rather than speaking of non-personal data, the term anonymised data is often heard in this context. From a legal perspective, anonymisation is defined as the procedure through which personal data are processed so that re-identifying the person becomes either impossible or disproportionately difficult ([19] p. 512). Article 25 of the Human Research Ordinance (HRO [26]) explains that "for the anonymisation of […] health-related personal data, all items which, when combined, would enable the data subject to be identified without disproportionate effort, must be irreversibly masked or deleted. In particular, the name, address, date of birth and unique identification numbers must be masked or deleted." Since the law provides a non-exhaustive list of elements that must be deleted in order to anonymise personal data, this leaves some room for interpreting what actual processes can be considered relevant to match the legal definition of anonymisation. Due to current advances in big data analytics, there are concerns that the legal concept of anonymisation is bound to become ever more elusive ([22] p. 34), but in current practice anonymisation can be treated as the flipside of identifiability (see previous paragraph). In order to determine whether data are truly anonymised (and thus non-personal), both the material chances of re-identification and the interest in re-identifying must be evaluated on a case-by-case basis ([19] p. 513). This means, in turn, that the problems of relativity described above with respect to identifiability also apply to anonymisation. Therefore, the classification of a certain dataset as anonymised might not be definitive: if the circumstances change and the links to identities of individuals are re-established ([19] pp. 515ss), this would turn anonymised data back into personal data. This generates more legal uncertainty when compared with the legal situation in the US, where health data are considered definitively de-identified (i.e., anonymised) once a precise and exhaustive list of 18 personal identifiers is removed [27]. The porous differentiation between personal data and anonymised data in Switzerland also implies that even for research projects processing data that they deem anonymised, it could still be convenient to adhere to the rules of personal data processing (e.g., in terms of data security).
Pseudonymisation or coding are also often described as procedures through which data can somehow be made "less" personal. From a legal point of view, coding (and equally pseudonymising) is regarded as the process through which the elements that link data to the identity of a person are reversibly removed ([19] p. 512). For example, if a research project aims at studying mortality rates after one type of surgery based on retrospective analysis of routinely collected data from two different hospitals, researchers might merge data from the two hospitals in a unified database, remove the original case-IDs of each single patient and substitute them with newly developed code-names. If it is possible to reverse such a process and go back from the code-names to the original case-IDs of the two hospitals, these data might be considered pseudonymised from a legal point of view. In contrast to anonymising, which irreversibly prevents data from being connected to an identifiable person and thus renders the data non-personal, coding/pseudonymising simply represents a way to better protect personal data and to benefit, under certain circumstances, from better conditions concerning the reuse of data for research purposes (see also the last section). To refer back to the terminology of the previous sections, if data are simply coded or pseudonymised, they will still relate to an identifiable person (and thus still be personal data), although only indirectly, by means of a key.⁵ If, on the contrary, data are anonymised, the key to link them back to an identifiable person either does not exist or has been eliminated. The exact boundary between these two categories can often be blurry, especially when the key exists but is not directly and easily accessible to the researchers and it is not their intent to re-identify patients.⁶ Moreover, it has been argued that, aside from the legal requirements, the actual practices for producing anonymisation are far from uniform in Switzerland [29].
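To make the contrast concrete, here is a minimal Python sketch of coding versus anonymisation as described above. The record fields, the random code-names and the helper names are illustrative assumptions, not a prescribed procedure; in particular, whether a given masking step satisfies art. 25 HRO remains a legal judgement.

```python
# Minimal sketch: coding (reversible; a key/conversion table is retained,
# so the data stay personal) versus anonymisation (identifiers irreversibly
# deleted; no key exists). Field names and records are invented.
import secrets

IDENTIFIERS = ("name", "address", "date_of_birth", "case_id")

def code_records(records):
    """Replace the original case-ID with a random code-name and keep the
    conversion table separately; reversing the step stays possible."""
    key, coded = {}, []
    for rec in records:
        code = secrets.token_hex(4)
        key[code] = rec["case_id"]  # conversion table: code-name -> case-ID
        coded.append({**{k: v for k, v in rec.items() if k not in IDENTIFIERS},
                      "code": code})
    return coded, key

def anonymise_records(records):
    """Delete all identifying items and retain no key."""
    return [{k: v for k, v in rec.items() if k not in IDENTIFIERS}
            for rec in records]

patients = [{"case_id": "H1-0042", "name": "...", "address": "...",
             "date_of_birth": "...", "post_op_mortality": 0}]
coded, key = code_records(patients)      # still personal (indirectly identifiable)
anonymous = anonymise_records(patients)  # non-personal, if truly irreversible
```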
To help researchers navigate these different aspects, we summarised this section in a decision-tree (fig. 1) that can be used to assess whether the data used in a research project are, indicatively, personal or non-personal.
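For readers who prefer code to diagrams, the decision-tree logic can also be expressed as a short Python function. Like fig. 1 itself, this sketch is indicative rather than prescriptive: the two boolean inputs on identifiability compress a context-dependent legal test (objective means plus subjective interest) into a simplification.

```python
# Indicative sketch of the fig. 1 decision tree as discussed above;
# real cases require a contextual legal assessment.
def is_personal(relates_to_person: bool,
                person_identified: bool,
                means_to_reidentify: bool,
                interest_in_reidentifying: bool) -> bool:
    if not relates_to_person:
        return False              # not about a person -> not personal data
    if person_identified:
        return True               # identity evident from the data itself
    # Identifiability is relative: it requires both an objective element
    # (means of re-identification exist) and a subjective one (a sufficient
    # interest of the data processor in re-identifying)
    return means_to_reidentify and interest_in_reidentifying

# A dataset containing names is personal; a key-destroyed dataset that
# nobody has an interest in re-identifying would indicatively not be.
print(is_personal(True, True, False, False))   # True
print(is_personal(True, False, True, False))   # False
```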
Sector-specific data protection rules for research
If they ascertain that the data in their project are personal and thus data protection rules apply, researchers still need to determine which specific regulatory framework they need to follow. Traditionally in Europe, data protection rules are contained in legislative acts that regulate the processing of personal data across sectors. In Switzerland, for example, the FADP contains general rules on the processing of personal data by federal bodies (e.g. federal universities) and private persons (e.g. pharmaceutical companies), while cantonal data protection regulations set the norms for the processing of data relating to or deriving from cantonal bodies (e.g. cantonal hospitals and cantonal universities). On top of these general regulations, a number of additional data protection rules are scattered across several sectorial legislative acts (fig. 2). The principal ones in the field of interest for this article are the HRA [11], the law on electronic patient record (LEPR [30]), the Law on Health Insurance (LHI [31]), the Epidemic Law (EL [32]), the Law on Cancer Registration (LCR [33]), the Federal Statistic Act (FSA [34]) and the Federal Act on Human Genetic Testing (HGTA [35]). The HRA covers the collection and analysis of data in the field of human research. The LEPR concerns the "processing of data in the electronic patient record" (art. 1 [30]), which hospitals and nursing homes have the duty to offer [36]. The LHI contains some data protection rules concerning duties of healthcare providers and healthcare payees to transfer data to federal offices for monitoring (art. 23 and art. 59a [31]) or quality control purposes (art. 58b and 58c [31]). The EL has some sectorial rules applicable to "process personal data, including data concerning health, for the purpose of identifying people who are ill, potentially ill, infected, potentially infected or that expel pathogen elements with respect to public health provisions, in particular to single out and surveil contagious illness and fight against them" (art. 58 [32]). The LCR regulates the "collection, recording and analysis of data concerning cancer illnesses" (art. 1 [33]) for monitoring, prevention, quality development and research purposes (art. 2 [33]). The FSA delineates some data protection rules for the processing of data by the Federal Office of Statistics. The HGTA focuses on the regulation of genetic testing in the medical, employment, insurance and liability contexts and contains some rules on the protection of genetic data. Lastly, the processing of data by healthcare professionals and researchers is also covered by the rules on confidentiality in the Criminal Code (art. 321 and art. 321bis Criminal Code [37]).
For researchers, this framework of data protection rules involving several legislative acts might look quite difficult to navigate. Indeed, even from a legal point of view, determining exactly which rules concerning data protection have to be followed in a single research project can be a challenge. There are, however, some general indications that can be given. One general principle of law is that lex specialis derogat legi generali, i.e., when two pieces of law cover the same subject matter, the specific legislation derogates from the more general one. In the case of data protection rules, the more general legislations are the FADP and the other cantonal data protection acts, since they regulate the processing of data across sectors. This means that their framework can be derogated from if a specific legislation covering the processing of personal data in a particular sector exists. This is the case for the field of biomedical research, where the passing of the HRA in 2011 created sector-specific data protection rules that apply to the processing of personal data for biomedical research. As noted, the HRA created a proper "data protection regime" for the field of biomedical research ([19] p. 808). Data protection rules contained in the FADP and other general cantonal data protection regulations thus have a subsidiary function, i.e., they can be considered to supplement the rules of the HRA. In other words, the general data protection regulations remain applicable in cases where the provisions of the HRA are not exhaustive enough (see also [19] pp. 809ss).
Figure 1: Ascertaining whether the data in a research project are personal or not. A supportive decision-tree. This decision-tree has merely indicative and instructive (rather than prescriptive) purposes, since every schematisation involves some degree of simplification and approximation. Specificities of each single case (such as linking possibilities) might lead to different outcomes.
The presence of a sector-specific regulation containing data protection rules for the field of biomedical research has both advantages and disadvantages. A considerable advantage is that the processing of data for biomedical research purposes has its own peculiar needs and features -for example, compared to data processing for marketing purposes, or for other types of research. In this respect, having data protection rules tailored to the field of biomedical research (rather than the more general rules contained in the FADP [12]) was perceived as particularly important by the regulator [38]. Another advantage is that the presence of a specific regulation for the field of research does, to some extent, allow for the harmonisation of rules throughout a country [39]. Other European countries, such as Germany, do not have a general regulation that comprehensively covers biomedical research [40], and data protection rules for this sector are scattered amongst several other laws [41]. Having a sector-specific regulation, however, also entails disadvantages. These include factors such as the coordination and the interplay with other existing regulations containing rules on data processing. We will turn to these two issues consecutively.
To determine whether a research project can benefit from the sector-specific data protection rules of the HRA, it must be determined whether the project falls within the scope of this act. Art. 2 para. 1 [11] defines the scope of the HRA and states that the act "applies to research concerning human diseases and concerning the structure and function of the human body". As has been clarified ([19] p. 103), the scope of application of the HRA is based on the aim, and not directly on the type/design, of the research, in contrast to an earlier draft of the HRA [42]. The scope of the HRA is thus quite broad: as long as a methodology recognised by the scientific community is used to produce knowledge with the (distant?) objective of improving medical standards or better understanding the human body (and its subparts), the HRA will apply [19,38]. For the HRA to apply, another important fact to consider is whether the project is using health-related personal data. Although a very general definition of this concept is provided in art. 3.f [11] of the HRA, the exact meaning of health-related personal data is bound to be influenced by technological advances and by the context in which data are processed [43]. No relevant rulings by Swiss federal courts exist to help guide practice. Legal doctrine in Switzerland traditionally interprets the concept of health data very broadly, including all personal data that have a direct or indirect connection with the physical or psychological health of a person ([22] p. 39; [44] p. 56). Moreover, as recently highlighted [43], it is increasingly difficult to distinguish between "traditional" health data and new forms of data (especially those collected digitally) that can be used to infer knowledge about the health status of a person. Since health data are universally considered particularly sensitive,⁷ the blurred edges of their definition are particularly problematic. In fact, determining whether or not the personal data being processed are health-related not only determines the applicability of the HRA, but also triggers specific (and usually more stringent) requirements for data processing. This is because health data - together with other types of data, such as those about religion or political orientation - are considered particularly sensitive and thus deserving of special protection (on the notion of particularly sensitive data, see e.g., [22] pp. 37ss). For example, the FADP (art. 4 para. 5) stipulates that when personal data are processed based on consent, that consent must be explicit when particularly sensitive data such as health data are processed.
Figure 2: An overview of parts of the legislative framework concerning data processing in Switzerland. The image does not aim to be exhaustive, but is merely indicative of the relationship between different legislative acts concerning data protection and data processing in the healthcare sector.
However, even if the scope of a sector-specific regulation like the HRA is clarified, some additional questions might emerge for researchers. What happens with "borderline" research projects, which may rely on innovative methodologies (e.g. mining of electronic health records), or make use of large datasets generated during clinical routine and are not aimed at singling out individual cases (e.g. retrospective registry-based studies)? What if a research project processes data from multiple sources, originally collected according to different data-protection regimes (e.g. combining data from insurance providers, cantonal hospitals and the Federal Office of Statistics)? How do the data processing rules of these different regimes interact? Unfortunately, such questions do not have one-size-fits-all answers from a legal perspective. Innovative healthcare service research that relies on routinely collected data is relatively underdeveloped in Switzerland ([45] p. 28) and has only recently been encouraged by the scientific community (e.g. through the aforementioned NRP 74 [7]). Combining existing rules on data protection and data processing with this type of research will require the effort of both the research and the legal fields to develop efficient and accepted practices. The latter should help, for example, to simplify the combination of different sectorial legal regimes and of federal and cantonal data protection law (see e.g. [46]; on the distinction between processing of data for research and for quality-improvement purposes, see [47,48]). Moreover, a balance should be found between easing the requirements for the processing of data for research (through the creation of a "research exemption" [6]) and the retention of ethical requirements, especially with respect to health data [49]. Lastly, particular attention should be given to the topic of consent, which we address in the next section.
The relevance of consent for data processing in research
Consent, especially in the field of biomedical research, has considerable importance, since it has traditionally been one of the key requirements to legitimately enrol patients in clinical studies and it is one of the cornerstones of research ethics. This is due to the fact that consent has become a fundamental precondition to justify any intrusion upon the physical integrity of both patients and research participants [50]. When data processing techniques evolved so that more research could be undertaken without any physical contact with participants, but rather through the processing of their data, consent remained a pivotal requirement, especially because of its ethical significance. Participant-protection rules like the requirement of consent were upheld in the conviction that data processing for research entails an intangible (rather than physical) invasion of personal integrity [51]. Consent thus remained one of the central paradigms of data processing for research purposes, to such an extent that, even when processing happens without traditional informed consent, presumed-consent solutions are often spoken of (e.g., for the collection of data in registries and the performance of epidemiological research with them in Denmark [52,53]).
From the legal perspective, however, the role of consent for data processing is quite different. While consent remains a fundamental instrument to protect informational self-determination,⁸ especially when it comes to health data (e.g., [56]), the concept may come into play at different levels. Where the law, such as the GDPR at the EU level, requires a lawful basis for any data processing, the long list of grounds that permit data processing includes not only consent, but also several alternatives, such as the necessity to perform a contract, the pursuance of a legal obligation or the protection of a vital interest of a natural person (art. 6 GDPR [5]).
In Switzerland, one has to distinguish whether personal data are being processed by a federal or cantonal body or by private persons. If personal data are processed by private entities, the FADP [12] does not necessarily require consent to be obtained. If processing does not comply with general data protection principles, which could lead to a potential violation of the data subject's personality rights, it may be possible to justify such processing by several means: by obtaining consent ([21] p. 350; [22] p. 165), by a specific legislative act authorising such data processing, or by the presence of an overriding private (e.g., the execution of a contract) or public interest (e.g., the compilation of statistics) (art. 13 FADP [12]; [22] p. 172). If personal data are processed by federal public bodies, a formal legislative act authorising the processing is necessary to use personal data, the consent of the person being of minor relevance ([22] pp. 220ss). In both contexts (processing by private persons or by federal public bodies), data processing for research, planning and statistics is privileged (art. 13 sec. 2 lit. e and art. 22 FADP) by the presence of less rigid conditions ([22] pp. 184ss and 290ss; [44] pp. 124ss), which partly resemble the "research exemption" present at the EU level [6]. These considerations show that, from a legal perspective, the right question researchers should formulate when they design the data protection framework of a project is not "Do we have consent?", but rather "Do we need consent?".
To better understand what this means in practice, it is helpful to consider a case study offered by the rules of the HRA. As specified in the previous section, the HRA represents a sector-specific set of data processing rules for biomedical research. In articles 32-35 [11], this act sets some specific conditions for the "further processing" (or secondary processing) of personal data. Further processing refers to those cases where data are collected for a specific aim (e.g. during the provision of care) but can then potentially be re-used for research purposes. A classic example is the further processing for research purposes of routinely collected data from hospitals, which has received much attention and prompted both application (e.g., [57]) and implementation (e.g., [58]) projects. In such cases, the HRA offers multiple requirements and possibilities for further data processing (for more details see e.g., [59]). For genetic data and for non-genetic health data in an identified form (i.e. non-coded/non-pseudonymised), the requirement for further data processing is having the consent of the data subject, in some cases even of a "general" nature (art. 32.1, 32.2 and 33.1 HRA [11]; [19] p. 484). For the further processing of non-genetic health data in a coded form or for the anonymisation of genetic data, the requirement is the provision of information and the acknowledgment of the right to dissent (explicit consent is not necessary; [19] p. 499). However, when provision of consent (first case) or provision of information (second case) is not possible, an alternative strategy for undertaking further data processing is to receive an exceptional exemption from the competent Research Ethics Committee (art. 34 HRA [11]; see also [19] pp. 501ss). The latter needs to ascertain that: (1) providing consent (first case) or information (second case) is impossible or disproportionately difficult; (2) no documented refusal by the subject whose data are used is available; and (3) the interests of the research project outweigh the interests of the person concerned (art. 34 HRA [11]). Since this contingency is defined in theory as exceptional, it is disputed whether this alternative route for (further) data processing should be regularly used [19,60]. In any case, this example shows how, from a purely legal perspective, consent often remains a very relevant aspect of lawful data processing, but it is not necessarily the only one. Which alternative requirements for data processing exist is a matter for the law to settle; how (and how often) they are used within the legal limitations is a matter for practice to develop. In this context, it should also be kept in mind that, as mentioned above, data protection rules contained in more general regulations (such as the FADP for Switzerland) may continue to apply in a subsidiary function.
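The requirements just described can be summarised in a schematic lookup, sketched below in Python. The mapping follows the summary in this section (consent for genetic data and identified non-genetic data; information plus the right to dissent for coded non-genetic data) and the three cumulative conditions of art. 34 HRA; it is an orientation aid under our reading of the act, not legal advice.

```python
# Schematic lookup of the further-processing rules summarised above
# (arts. 32-35 HRA); borderline cases belong to the competent ethics
# committee and legal counsel.
def further_processing_requirement(genetic: bool, coded: bool) -> str:
    if genetic or not coded:
        # Genetic data, and non-genetic health data in identified form:
        # consent of the data subject, in some cases of a "general" nature
        return "consent required (cf. arts. 32.1, 32.2 and 33.1 HRA)"
    # Non-genetic health data in coded form: prior information and the
    # acknowledged right to dissent suffice (cf. art. 33 HRA)
    return "information and right to dissent (cf. art. 33 HRA)"

def art_34_exemption(consent_or_info_impossible: bool,
                     documented_refusal: bool,
                     research_interest_outweighs: bool) -> bool:
    """Exceptional route via the Research Ethics Committee (art. 34 HRA):
    all three cumulative conditions must hold."""
    return (consent_or_info_impossible
            and not documented_refusal
            and research_interest_outweighs)

print(further_processing_requirement(genetic=False, coded=True))
print(art_34_exemption(True, False, True))  # True -> exemption conceivable
```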
Conclusion
In this article, we explored three intersections of data protection law and biomedical research. We first focused on the concept of personal data, which represents the most important criterion to determine whether data protection rules apply at all. We then analysed the sector-specific data protection rules for research and their interaction with more general data protection norms. Finally, we reflected on the topic of consent for data processing from a legal perspective. Our aim was to help bridge the gap between the legal and biomedical sectors by providing an overview of the legal debate regarding several important elements of data processing relevant for the biomedical sector. Given the complexity of such elements, we explained why there cannot be an expectation to find the exact and exhaustive rules for correct data processing in one single document, be it a legislative act, a guideline or a policy statement. This is also due to the fact that data protection and privacy are important values, but they are not absolute and, especially with respect to research, have to be balanced with other important legal and ethical principles. Therefore, the establishment of such balance will require collaboration between biomedical research professionals and legal experts. This cooperative effort will be crucial for addressing the pivotal question of how to ensure adequate data protection while promoting important research in the future.
Currently, there is a discussion [61] ongoing in the legal doctrine about a renewed definition of anonymisation of data for research purposes, one that is sufficiently nuanced and comprehensive and that takes into consideration the specific features of the research context. The proponents for developing a new definition argue that once personal identifiers are eliminated from a dataset, researchers often 9 have no subjective motivations to re-identify subjects, even when re-identification remains technically (i.e., objectively) possible (by e.g., combining data from different datasets). In a similar fashion, also Voekinger et al. [27] propose a further category of data, namely pseudo-anonymised, to define all those data where every effort has been made to anonymise them, but re-identification cannot be excluded. These legal proposals should also be considered by the research community so that solutions for finding a definition of anonymisation that is both legally solid and research friendly. A good starting point for this collaboration between the legal and the research worlds is the creation of courses on data protection for the research community (see e.g., the initiative of the SPHN [62]). Another possibility for productive exchange between the research and legal community is a partnership between researchers and cantonal data protection officers, who could offer assistance for the interpretation and application of the law if they are evenly organised and properly funded [63]. 1 In European Law, preambles are claims attached to any approved law to indicate the motivations of the legislator in enacting such law and to indicate how it ought to be interpreted. They are not, however, legally binding. 2 Manuscripts in preparation. 3 Some cantons, like Zürich (Gesetz über die Information und den Datenschutz) and Basel-Stadt (Informations-und Datenschutzgesetz), have regulations that deal with the principle of transparency for public bodies and "information" more generally, but most of the rules are in reference to personal data. Additionally, there are some other federal regulations that contain rules concerning non-personal data (e.g., the LIH). 4 A third element sometimes considered is that of defining what information means (see e.g,, [21], pp. 25ss), which is normally interpreted extremely quite broadly as to include information in any form and on any support (e.g., digital, analogue). 5 The key can be, for example, a conversion table where every code-name is associated with the original ID, or another technical device that can recover the original ID starting from the codename. 6 See, for example, Baeriswyl and Parli ( [22] pp. [35][36] where it is argued that in such cases data can be considered anonymized (non-personal) from the perspective of the researchers. The same stance is argued in [28]. 7 This holds true also in Switzerland, where the FADP and each cantonal regulation on data protection considers data concerning health as worth of additional protection. 8 The right to informational self-determination (informationelle Selbstbestimmung) is not directly present in the law in Switzerland, but it has been introduced into case law and has been recognised by the doctrine (e.g., [54]), although sometimes in a critical fashion [55]. 9 But not always: for example, when they could return clinically relevant incidental findings. grant number 407440_167356). 
The funder had no role in the drafting of this manuscript and the views expressed therein are those of the authors and not necessarily those of the funder. | 2020-09-15T13:05:41.166Z | 2020-08-24T00:00:00.000 | {
"year": 2020,
"sha1": "de9fc6a9d48230f3df1bb3b87d962388606432b2",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4414/smw.2020.20332",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "ed3ea55031d6f5df2d3e44a259573ff9369ccfde",
"s2fieldsofstudy": [
"Law",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
21806259 | pes2o/s2orc | v3-fos-license | CPT-11-Induced Delayed Diarrhea Develops via Reduced Aquaporin-3 Expression in the Colon
While irinotecan (CPT-11) has a potent anti-cancer effect, it also causes serious diarrhea as an adverse reaction. In this study, we analyzed the pathogenic mechanism of CPT-11-induced delayed diarrhea by focusing on the water channel aquaporin-3 (AQP3) in the colon. When rats received CPT-11, the expression level of AQP3 was reduced during severe diarrhea. It was found that the expression of inflammatory cytokines increased and crypt cells were lost in the colon when CPT-11 was administered. When celecoxib, an anti-inflammatory drug, was concomitantly administered, both the diarrhea and the reduced expression of AQP3 induced by CPT-11 were suppressed. Inflammation in the rat colon during diarrhea was driven by macrophages directly activated by CPT-11. These results showed that when CPT-11 is administered, the expression level of AQP3 in the colon is reduced, resulting in delayed diarrhea by preventing water transport from the intestinal tract. It was also suggested that the reduced expression of AQP3 might be due to the inflammation that occurs following the loss of colonic crypt cells and to the damage caused by the direct activation of macrophages by CPT-11. Therefore, it was considered that anti-inflammatory drugs that suppress the reduction of AQP3 expression could prevent CPT-11-induced delayed diarrhea.
Introduction
Irinotecan (CPT-11) is an anticancer agent that is widely used in the treatment of colon cancer, gastric cancer, pulmonary cancer, and cervical cancer. CPT-11 exerts its anticancer effects through the inhibition of DNA synthesis by inhibiting DNA topoisomerase I [1], but it also causes serious adverse drug reactions. The adverse reactions with high incidence include myelosuppression, nausea, vomiting, and diarrhea [2]. Among these reactions, diarrhea is a serious adverse drug reaction that occurs in approximately 80% of patients and is one of the dose-limiting factors of CPT-11 [3]. CPT-11-induced diarrhea is classified into early-onset diarrhea, occurring within a few hours of drug administration, and late-onset diarrhea, occurring more than 24 h after administration. Early-onset diarrhea is believed to be caused by the inhibition of acetylcholinesterase by CPT-11, leading to acetylcholine accumulation, as well as by the direct binding of CPT-11 to acetylcholine receptors, increasing peristaltic movement [4,5]. This early-onset diarrhea can be treated with anticholinergic agents such as atropine [6]. In contrast, late-onset diarrhea is very serious and is believed to be caused by the following mechanism. CPT-11 is metabolized mainly by carboxylesterases in the liver to form an active metabolite, SN-38, which is subsequently metabolized by uridine diphosphate-glucuronosyltransferase 1A1 (UGT1A1) into SN-38-glucuronide (SN-38-glu) [7,8]. After being excreted into the bile, SN-38-glu is deconjugated by enteric bacteria-derived β-glucuronidase in the intestinal tract and is re-converted into the active SN-38 [9,10]. This SN-38 is believed to induce severe, persistent diarrhea by damaging the intestinal mucosa and by accumulating in the body via the enterohepatic circulation [11]. Various methods to treat CPT-11-induced delayed diarrhea are being discussed based on this mechanism, including the following: (1) suppression of the production of SN-38 by killing enteric bacteria with antibiotics [12][13][14][15][16]; (2) inhibition of β-glucuronidase activity with Hangeshashinto, a traditional Kampo medicine [17,18]; and (3) adsorption of SN-38 in the intestinal tract using activated charcoal [19,20]. Other methods include the high-dose administration of loperamide, an antidiarrheal, as well as magnesium oxide, an antacid, to promote the excretion of SN-38 and CPT-11 by alkalizing the intestine [21,22]. Although these methods can alleviate diarrhea, treatment often becomes difficult in patients with severe diarrhea, and no reliable preventive methods have been established [23]. Severe, persistent diarrhea results in circulatory failure due to dehydration and electrolyte disturbance, which can then lead to death. Accordingly, it is crucial to suppress CPT-11-induced delayed diarrhea, not only because it improves the quality of life (QOL) of patients but also because it allows cancer treatment to be conducted smoothly.
In recent years, it has become clear that the water channel aquaporin (AQP) is expressed at significant levels in the colon, where the final fecal water content is controlled. In a previous study, we discovered that AQP3 is expressed at significant levels in colonic mucosal epithelial cells and that AQP3 plays a major role in the development of diarrhea and constipation [24][25][26][27]. These results led us to consider the possibility that AQP3 in the colon is involved in CPT-11-induced delayed diarrhea and that diarrhea may therefore be prevented from becoming severe by controlling AQP3 expression, thereby preventing dehydration. In this study, we investigated the mechanism of the development of CPT-11-induced delayed diarrhea by focusing on AQP3 to discover new preventive methods and/or treatments for CPT-11-induced delayed diarrhea.
Effect of CPT-11 Dose and Administration Schedule on the Diarrhea Score and Mortality
To create a model for CPT-11-induced delayed diarrhea, the optimal dose and administration schedule of CPT-11 were investigated.
When CPT-11 was administered via the tail vein at various doses and schedules, early-onset diarrhea was observed within 3 h immediately after administration. Although no diarrhea was subsequently observed, it was found that soft stool and diarrhea started on the second day following the final administration of CPT-11, and most rats developed severe diarrhea by the third day. The diarrhea grade was calculated on the third day following the final administration. The results indicated that diarrhea did not develop in rats receiving CPT-11 at 60 or 80 mg/kg/day for four days, although soft stool was observed. In contrast, when CPT-11 was administered at 100 or 120 mg/kg/day for three or four days, most rats developed diarrhea. Although rats receiving CPT-11 at 120 mg/kg/day developed diarrhea at the same level as rats receiving 100 mg/kg/day, the associated weight loss was severe. In addition, rats receiving CPT-11 at 150 mg/kg/day exhibited high mortality (Table 1).
Based on the above results, it was found that the administration of CPT-11 at a dose of 100 mg/kg/day for four days causes a high incidence of diarrhea, while both weight loss and mortality remain low. CPT-11 at a dose of 100 mg/kg was administered to rats via the tail vein for four days, and feces were collected on the third day following the last administration to investigate the level of diarrhea by analyzing the total weight of fecal pellets, total number of fecal pellets, and fecal water content. In addition, colitis was assessed by analyzing the condition of the colon tissue and inflammatory cytokines.
Both the total weight and fecal water content were significantly higher in rats in the CPT-11 administration group than in the rats in the control group, indicating the development of severe diarrhea ( Figure 1). In addition, it was found that the mRNA expression levels of cyclooxygenase-2 (COX-2), inducible nitric oxide synthase (iNOS), and inflammatory cytokines (e.g., tumor necrosis factor-α (TNF-α), interleukin (IL)-6, and IL-1β) increased significantly in the rat colon in the CPT-11 administration group (Figure 2A). The condition of the colon tissue at that time was investigated by hematoxylin and eosin (HE) staining, and both crypt cell damage and infiltration of inflammatory cells into the lamina propria were observed in the CPT-11 administration group. Fibroplasia and muscle layer thickness in the colon also increased in the CPT-11 administration group ( Figure 2B). These changes were consistent with the characteristics of animal models of CPT-11-induced delayed diarrhea that have been previously reported [28,29].
Based on the above results, it was found that severe diarrhea develops in rats when CPT-11 is administered via the tail vein at a dose of 100 mg/kg/day for four days, at which point colitis is present.
Figure 1. Assessment of diarrhea parameters after CPT-11 administration. CPT-11 (100 mg/kg/day) was administered in rats via the tail vein for 4 days. The total weight of fecal pellets (A), total number of fecal pellets (B), and fecal water content (C) were measured on the third day following the last administration. The fecal water content is shown with the mean value of the control group set at 100% (mean ± SD, n = 8, Student's t-test: * p < 0.05, *** p < 0.001 vs. Cont.).
Figure 2. Changes in the inflammatory response and tissue morphology in the rat colon after CPT-11 administration. CPT-11 (100 mg/kg/day) was administered in rats via the tail vein for four days. The mRNA expressions of TNF-α, IL-6, IL-1β, COX-2, and iNOS in the colon were measured three days later using real-time polymerase chain reaction (PCR). After normalization with β-actin, data are presented with the mean value of the control group set at 100% (A) (mean ± SD, n = 8, Student's t-test: *** p < 0.001 vs. Cont.). The colonic tissue morphology was observed by HE staining (B).
Changes in AQP in the Colon in the CPT-11-Induced Delayed Diarrhea Rat Model
It has been reported that AQP1, AQP2, AQP3, AQP4, and AQP8 are found in the colon [30][31][32]. Therefore, we investigated the expression levels of these AQPs using a rat model of CPT-11-induced delayed diarrhea.
The mRNA expression levels of AQP1, AQP3, AQP4, and AQP8 in the colon decreased significantly in rats in the CPT-11 administration group compared to the levels in the control group. No AQP2 was detected ( Figure 3A).
In a previous study using immunohistochemistry, we found that AQP1 was expressed around blood vessels, AQP4 was expressed in the muscular layer, and AQP8 expression was low in the rat colon [24]. In this study, although we were not able to detect AQP8, we found AQP1, AQP3, and AQP4 by immunohistochemistry (Figure 4). Among these, AQP3 was expressed especially in colonic mucosal epithelial cells. This distribution of AQP3 in the rat colon is similar to that in the human colon [32,33]. In addition, AQP3 plays a significant role in the development of diarrhea and constipation [24][25][26][27]. Therefore, we analyzed the protein expression level of AQP3 in colon membrane fractions by Western blotting. The results showed that AQP3 decreased significantly in rats in the CPT-11 administration group, to approximately 40% of that in the control group (Figure 3B).
Based on the above results, it was found that the administration of CPT-11 markedly reduced AQPs in the colon. Specifically, AQP3 expression in mucosal epithelial cells was found to have decreased markedly, even at the protein level.
Effect of Celecoxib on the CPT-11-Induced Delayed Diarrhea Rat Model
Kase et al. reported that prostaglandin E2 (PGE2) production in the colon plays a major role in the development of CPT-11-induced delayed diarrhea and that Hangeshashinto, a traditional Kampo medicine, suppresses CPT-11-induced delayed diarrhea by reducing the production of PGE2 [34][35][36][37]. In addition, Trifan et al. reported that celecoxib, a selective inhibitor of COX-2, the enzyme involved in the production of PGE2, suppresses CPT-11-induced delayed diarrhea [38]. Therefore, we investigated whether CPT-11-induced delayed diarrhea and the reduced expression of AQP3 in the colon improved when celecoxib, an anti-inflammatory drug, and CPT-11 were administered in combination.
The fecal water content increased markedly in the group receiving CPT-11 alone compared with that in the control group, and all rats showed severe diarrhea, with a diarrhea score of three (Table 3). However, it was found that although the fecal water content was increased in the group receiving a combination of CPT-11 and celecoxib compared with that in the control group, it was lower than that in the group receiving CPT-11 alone ( Figure 5C). In addition, when celecoxib was administered concomitantly, the diarrhea score decreased, and in particular, the incidence of severe diarrhea (grade 3) decreased (Table 3). This effect of celecoxib in improving diarrhea was found to be dose-dependent. When investigating the condition of the colon tissue, it was found that crypt cell damage, infiltration of inflammatory cells into the lamina propria, and edema caused by CPT-11 were reduced in a celecoxib dose-dependent manner ( Figure 5D and Table 4).
The mRNA and protein expression levels of AQP3 in the colon were both significantly reduced in the group receiving CPT-11 alone compared with those in the control group. In contrast, the reduction in AQP3 expression was milder in the group receiving a combination of CPT-11 and celecoxib than in the group receiving CPT-11 alone, and the change was dependent on the dose of celecoxib (Figure 6A,B). In addition, the reduction in AQP1 and AQP4 mRNA expression by CPT-11 recovered after celecoxib administration. AQP8 mRNA expression remained decreased (Figure 6C).
Based on the above results, it was found that the concomitant administration of celecoxib resulted in a decrease in the level of CPT-11-induced delayed diarrhea and a reduction in AQP3 protein expression in the colon.
Figure 5. Effect of celecoxib on CPT-11-induced delayed diarrhea and colon tissue. CPT-11 (100 mg/kg/day) was administered to rats either alone or in combination with celecoxib. The total weight of the fecal pellets (A), total number of fecal pellets (B), and fecal water content (C) were measured on the third day following the last administration. The fecal water content is shown with the mean value of the control group set at 100% (mean ± SD, n = 8, Tukey's test: ** p < 0.01 vs. Cont., † p < 0.05 vs. CPT-11). The colonic tissue was assessed by HE staining (D).
Table 3. CPT-11 (100 mg/kg/day) was administered in rats either alone or in combination with celecoxib. The level of diarrhea was evaluated on the third day following the final administration of CPT-11 using the diarrhea scale (Table 2), and the rate of weight loss was also investigated at that time (n = 8).
Figure 6. Effect of celecoxib on AQP expression in the colons of rats with CPT-11-induced delayed diarrhea. CPT-11 (100 mg/kg/day) was administered in rats either alone or in combination with celecoxib. The mRNA expression of AQP3 in the colon was measured on the third day following the last administration by real-time PCR and was normalized with β-actin (A). The protein expression of AQP3 in the colon was analyzed by Western blotting and was normalized with β-actin (B). The mRNA expression of AQP1, AQP2, AQP4, and AQP8 in the colon was analyzed by real-time PCR and was normalized with β-actin (C). Data are shown with the mean value of the control group set at 100% (mean ± SD, n = 8, Tukey's test: * p < 0.05, ** p < 0.01, *** p < 0.001 vs. Cont., † p < 0.05, †† p < 0.01 vs. CPT-11).
Involvement of Colonic Macrophages in CPT-11-Induced Delayed Diarrhea
Based on the above results, it was found that the reduced expression of AQP3 in the colon plays a role in the development of CPT-11-induced diarrhea. In a previous study, we found that TNF-α and PGE2, which are secreted when colonic macrophages are activated, reduce the expression of AQP3 in mucosal epithelial cells in the colon [27]. Therefore, we investigated the mechanism of the reduction of AQP3 expression in the colon when CPT-11 is administered by focusing on the activation of macrophages.
RAW264 cells are monocyte-derived macrophages and are frequently used to evaluate the functions of macrophages [39]. When RAW264 cells were incubated for 48 h after the addition of CPT-11 (0-500 µM) or SN-38 (0-500 nM), cytotoxicity began to be observed at 20 µM in the culture with CPT-11 and at 20 nM in the culture with SN-38, which is 1/1000 of the level in the CPT-11 culture (Figure 7). In addition, when the changes in the mRNA expression of TNF-α and COX-2 were investigated at concentrations at which no toxicity was observed, significant increases in TNF-α and COX-2 expression were observed only in the culture with CPT-11 ( Figure 7A).
Based on the above observations, it was found that the cytotoxicity of SN-38 against macrophages was 1000 times greater than that of CPT-11. However, at concentrations at which no cytotoxicity was observed, CPT-11 exerted a more potent effect in activating macrophages. In addition, when cells were exposed to CPT-11 or SN-38 for 12 h, no substantial degree of cytotoxicity or macrophage activation was observed.
Figure 7 caption (partially recovered): … SN-38 (B), or lipopolysaccharide (LPS) as a positive control was added to RAW264 cells. After a 48-h incubation, cell viability was analyzed by the water-soluble tetrazolium salt (WST-1) assay. Data are shown with the mean value of the control group set at 100% (mean ± SD, n = 8, Dunnett's test: *** p < 0.001 vs. Cont.). Cells were collected after a 48-h incubation, and the mRNA expression of TNF-α and COX-2 was measured by real-time PCR. After normalization with 18S rRNA, data are shown with the mean value for the control group set at 100% (mean ± SD, n = 4, Dunnett's test: *** p < 0.01 vs. Cont.).
Discussion
In this study, the mechanism of the development of CPT-11-induced delayed diarrhea was analyzed by focusing on AQPs, which are water channels expressed in the colon, to discover new preventive methods and treatments for the severe delayed diarrhea that occurs at the time of CPT-11 administration.
When CPT-11 was administered at 100 mg/kg/day for four days, severe diarrhea developed by the third day following the final dose, and the model showed the characteristics of CPT-11-induced delayed diarrhea (Figure 1) [28,29]. Next, AQP3 expression in the colonic mucosal epithelial cells in the model was analyzed. AQP3 in the CPT-11 administration group was markedly reduced at both the mRNA and protein levels ( Figure 3). Based on this observation, it was hypothesized that when CPT-11 is administered, AQP3 is reduced, which then prevents water transport from the intestinal tract, resulting in retention of water in the colon and leading to the development of delayed diarrhea.
We investigated why AQP3 is reduced when CPT-11 is administered. It was previously believed that the mechanism of the onset of CPT-11-induced delayed diarrhea was damage to mucosal epithelial cells by the active metabolite SN-38 [11]. However, the colonic epithelial cells were still present in our model rats (Figure 2B), and CPT-11 and SN-38 had little effect on AQP3 expression in an in vitro study (Figure S1). We therefore considered it unlikely that mucosal damage is involved in the mechanism underlying the reduced AQP3 expression.
It has been reported that the levels of various inflammatory mediators, such as PGE2, are increased in the colon when CPT-11-induced delayed diarrhea develops [29,40,41]. In our model, the degree of CPT-11-induced delayed diarrhea was improved by the administration of celecoxib, an anti-inflammatory, and the reduced expression of AQP3 recovered to a level similar to that of the control (Figures 5 and 6). These findings suggested that AQP3 plays a certain role in CPT-11-induced delayed diarrhea and that the increased production of PGE2 mediated by the increased COX-2 expression is involved in the reduced AQP3 expression. In addition, based on the results of the in vitro study, CPT-11 directly activates macrophages (Figure 7), and this action does not occur within a short period of time. Therefore, it was considered that the inflammation caused by the activation of macrophages and the accompanying reduced expression of AQP3 in the colon are characteristic of CPT-11-induced delayed diarrhea. Although the details of the mechanism of macrophage activation by CPT-11 are not clear, Li et al. reported that CPT-11 caused inflammation by activating NOD-like receptor protein-3 (NLRP3) and nuclear factor-kappa B (NF-κB) via c-Jun N-terminal kinase (JNK) in macrophages [42]. In addition, it is widely known that when cells are damaged, the production of TNF-α increases, causing inflammation. Therefore, it was considered that when CPT-11 was administered, the reduction of AQP3 expression was caused by the activation of macrophages by CPT-11 itself, as well as by inflammation that occurs following the loss of crypt cells.
The expression levels of AQP1, AQP4, and AQP8 in the colon were also reduced significantly in the CPT-11 administration group, as was that of AQP3 (Figure 3A). In the colon, AQP1 is expressed in vascular endothelial cells [43,44]. When AQP1 is reduced, it is believed that water in the colonic tissues cannot be transferred to blood vessels efficiently, and this may have caused the swelling of the colon observed in the CPT-11 administration group. Although AQP4 is found in the muscle layers of the colon, when an AQP4 knockout mouse and a transgenic mouse with overexpression of muscle AQP4 were analyzed, neither showed a change in myofunction [45,46]. It has also been reported that AQP4 was reduced in regenerated muscle cells [47]. AQP4 may be reduced during muscle tissue regeneration caused by the proliferation of active myofibroblasts and by the increase in muscle layers and fibers triggered by chronic inflammation in the colon, which occur at the time of CPT-11 administration. AQP8 is expressed in the mucosal epithelial cells of the colon, and Fischer et al. have suggested that AQP8 could be a marker protein for a normal large intestine [48]. Laforenza et al. also reported that AQP8 may play a role in water transport in the proximal colon [49]. However, the role of AQP8 in diarrhea or constipation has not been clarified completely. AQP3 is intensively expressed in the mucosal epithelial cells of the colon, and it was found that the regeneration of epithelial cells was advanced when inflammation was suppressed by celecoxib (Table 4). It has been reported that AQP3 not only functions as a water channel but is also involved in cell proliferation and cell migration [50,51]. Thiagarajah et al. reported that colitis became severe and that the regeneration of epithelial cells was delayed in AQP3 knockout mice [52]. Therefore, it was expected that an increase in AQP3 expression may not only attenuate CPT-11-induced diarrhea but may also play a major role in the regeneration of colonic tissues. In addition, recovery of the expression of other AQPs may also be useful in normalizing the swelling and muscle layers and in attenuating diarrhea.
In this study, the administration of celecoxib improved the reduction of AQP3 expression levels induced by CPT-11; however, it did not completely resolve the diarrhea (Table 3 and Figure 5C). The AQP3 localization in the CPT-11-treated group was similar to that in the celecoxib-treated group (Figure S2). Therefore, we considered that the diarrhea was not resolved for the following reasons: (1) AQP8 might play a critical role in CPT-11-induced delayed diarrhea (Figure 6C) [49]; (2) CPT-11 activates the cystic fibrosis transmembrane conductance regulator (CFTR), a chloride ion channel, which disrupts the balance of electrolyte transport in the colon [53]; (3) CPT-11 disturbs the composition of mucin, the mucus component that protects intestinal epithelial cells [54]; and (4) CPT-11 changes the intestinal microbiota [55]. In addition, it is known that at the time of CPT-11 administration, damage is observed not only in the colon but also in the lower section of the small intestine [28]. Therefore, to improve CPT-11-induced diarrhea, we considered that it is necessary not only to control the expression of AQP3 but also to improve these factors in an integrated manner.
In summary, AQP3 expression in the colonic mucosal epithelial cells is markedly reduced in CPT-11-induced delayed diarrhea. Celecoxib reduces CPT-11-induced delayed diarrhea. As celecoxib has been reported to enhance the anticancer effects of CPT-11 [38], celecoxib is considered useful in combination with CPT-11 as a symptomatic treatment. The results of this study also uncovered the potential of AQP3 in the colon as a new functional molecule in the mechanism of the development of CPT-11-induced delayed diarrhea. We consider the search for a comprehensive treatment to be crucial, including the control of AQP3 for the management of CPT-11-induced delayed diarrhea.
Animals
Male Wistar rats (eight weeks old) were purchased from Japan SLC, Inc. (Shizuoka, Japan). The rats were housed at 24 ± 1 °C and 55 ± 1% humidity with 12 h of light (08:00-20:00). The study was conducted upon approval (approval No. 29 June 2017) in accordance with the Hoshi University Guiding Principles for the Care and Use of Laboratory Animals.
Treatment
Rats were given lactic acid buffer (45 mg/mL D-sorbitol, 0.9 mg/mL lactic acid; pH 3.4) or CPT-11 (60, 80, 100, 120, or 150 mg/kg in lactic acid buffer) via the tail vein, and the level of diarrhea was evaluated on the third day following the final administration.
One day prior to the administration of CPT-11, rats were started on oral celecoxib (30 or 100 mg/kg/day in 0.5% methylcellulose) twice daily for eight days. CPT-11 (100 mg/kg in lactic acid buffer) was given via the tail vein for four days, and the colon was isolated under diethyl ether anesthesia on the third day following the final administration.
Assessment of Diarrhea
On the third day following the final administration of CPT-11, rat feces were collected to measure the total number of fecal pellets and total fecal weight. The fecal water content was calculated by freeze-drying the collected feces in a lyophilizer for 24 h, and the water content per gram of feces was calculated based on the moist weight and dry weight. The degree of diarrhea was assessed based on past reports (Table 2) [38].
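The water-content arithmetic just described reduces to a one-line calculation; the sketch below uses a hypothetical function name and example weights, not values from the paper.

```python
def fecal_water_content_percent(moist_weight_g: float, dry_weight_g: float) -> float:
    """Water content per gram of feces, as a percentage of the moist weight."""
    if moist_weight_g <= 0 or dry_weight_g > moist_weight_g:
        raise ValueError("weights must satisfy 0 < dry <= moist")
    return (moist_weight_g - dry_weight_g) / moist_weight_g * 100.0

# Hypothetical example: 2.5 g of collected feces weighing 1.4 g after 24 h of freeze-drying.
print(f"{fecal_water_content_percent(2.5, 1.4):.1f}% water")  # 44.0% water
```

In the paper these per-sample values were then expressed relative to the control-group mean set at 100%.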
HE Staining
The colons isolated from the rats were immersed in 10% neutral buffered formalin to fix the tissues. The tissues were embedded in paraffin and sectioned into 3 µm slices on glass slides. The slides were deparaffinized and stained with hematoxylin followed by eosin. The slides were dehydrated in alcohol, cleared in xylene, and covered for microscopic examination. The slides were read blindly by a pathologist and the colonic damage was assessed.
RAW264 cells were plated in a 96 well-plate at a cell density of 1 × 10 4 cells/well and incubated for 24 h. After CPT-11 (0-500 µM) or SN-38 (0-500 nM) was added, cells were incubated for 48 h, and cell viability was measured using the WST-1 assay. RAW264 cells were also plated in a 12 well-plate at a cell density of 2 × 10 5 cells/well and incubated for 24 h. The expression level of each gene was analyzed using cells that had been incubated for 48 h after the addition of LPS (10 ng/mL), CPT-11 (0-10 µM) or SN-38 (0-10 nM).
WST-1 Assay
RAW264 cells were plated in a 96 well-plate and incubated for 48 h after the addition of drugs. After washing each well, WST-1 was added at a ratio of 10/100 µL medium, and cells were incubated for 2 h at 37 °C in a CO2 incubator. The absorbance at 450 nm was measured using a microplate reader.
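The percent-viability arithmetic behind the WST-1 readout can be made explicit; the blank correction and the function name below are our assumptions, not details given in the paper.

```python
def wst1_viability_percent(a450_sample: float, a450_control: float,
                           a450_blank: float = 0.0) -> float:
    """Cell viability as % of the untreated control, from WST-1 absorbance at 450 nm."""
    return (a450_sample - a450_blank) / (a450_control - a450_blank) * 100.0

# Hypothetical readings: treated well 0.62, control well 1.10, medium-only blank 0.08.
print(f"{wst1_viability_percent(0.62, 1.10, 0.08):.0f}% viable")  # 53% viable
```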
Total RNA Preparations and Real-Time RT-PCR
TRI reagent was added to colon tissue or RAW264 cells, and total RNA was extracted. A high-capacity cDNA reverse transcription kit was used to synthesize cDNA from 1 µg of RNA.
Real-time PCR was performed using the primers listed in Table 5 and the following mixture: 2 µL of cDNA solution (2.5 ng/µL), 0.6 µL of forward primer (5 pmol/µL) and reverse primer (5 pmol/µL), 5 µL of SsoAdvanced SYBR Green Supermix, and 2.8 µL of RNase-free water. The reaction conditions included denaturation at 95 °C for 15 s, annealing at 56 °C for 30 s, and elongation at 72 °C for 30 s. The fluorescence intensity of the amplification process was monitored using the CFX Connect™ Real-Time PCR Detection System (Bio-Rad Laboratories). β-Actin and 18S rRNA expression levels in the reagent-treated and control groups did not differ.
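The paper does not state the exact quantification algorithm; a common choice consistent with "normalization with β-actin and expression relative to the control mean" is the 2^(−ΔΔCt) method, sketched below with hypothetical Ct values.

```python
import statistics

def relative_expression(ct_target: float, ct_reference: float,
                        control_delta_ct_mean: float) -> float:
    """2^(-ΔΔCt): target Ct normalized to a reference gene (e.g. β-actin),
    then expressed relative to the control-group mean ΔCt."""
    delta_ct = ct_target - ct_reference
    return 2.0 ** -(delta_ct - control_delta_ct_mean)

# Hypothetical (Ct_AQP3, Ct_beta_actin) pairs for control and CPT-11 samples.
control = [(24.1, 17.0), (24.3, 17.2), (23.9, 16.9)]
treated = [(26.0, 17.1), (26.4, 17.3)]
control_mean = statistics.mean(t - r for t, r in control)
for t, r in treated:
    pct = relative_expression(t, r, control_mean) * 100  # % of control
    print(f"AQP3 expression: {pct:.0f}% of control")
```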
Extraction of the Plasma Membrane Fraction from the Rat Colons
The mucosa was scraped from each rat colon sample, suspended in dissecting buffer (0.3 M sucrose, 25 mM imidazole, 1 mM ethylenediaminetetraacetic acid, 8.5 µM leupeptin, and 1 µM phenylmethylsulfonyl fluoride; pH 7.2) and homogenized on ice. The homogenate was centrifuged (800× g at 4 °C for 15 min), and the resulting supernatant was further centrifuged (17,000× g at 4 °C for 30 min). The supernatant was removed, and dissecting buffer was added to the precipitate, which was then dispersed using an ultrasonic homogenizer. This solution included the plasma membrane fraction with abundant cell membranes [26,56].
Western Blotting
Protein concentrations were measured by the bicinchoninic acid method using bovine serum albumin as the standard. Each sample was diluted with loading buffer (84 mM Tris, 20% glycerol, 0.004% bromophenol blue, 4.6% sodium dodecyl sulfate, and 10% 2-mercaptoethanol; pH 6.8), and samples were loaded in each lane. After polyacrylamide gel electrophoresis, the proteins were transferred to a polyvinylidene difluoride membrane. After blocking with skim milk, the resulting membrane was incubated with rabbit anti-rat AQP3 or anti-β-actin antibody for 1 h, followed by washing and incubation with donkey anti-rabbit IgG-HRP antibody for 1 h. The membrane was washed and then reacted with the ECL prime Western blotting detection reagents, and the bands detected by the LAS-3000 mini-imaging system (FUJIFILM, Tokyo, Japan) were analyzed.
Immunohistochemistry
The colon was post-fixed in 4% paraformaldehyde. The tissues were embedded, and the frozen blocks were sectioned into 10 µm slices on glass slides. The sections were reacted with a rabbit anti-human AQP1 antibody, rabbit anti-rat AQP3 antibody, rabbit anti-rat AQP4 antibody, and rabbit anti-rat AQP8 antibody. The sections were treated with an Alexa Fluor 488 donkey anti-rabbit IgG antibody. The slides were covered and observed under a fluorescent microscope.
Statistical Analysis
Numerical data are expressed as the mean ± standard deviation (SD). The significance of the differences was examined using Tukey's test and Student's t-test. | 2018-04-03T01:00:10.489Z | 2018-01-01T00:00:00.000 | {
"year": 2018,
"sha1": "95c8f8e242f5561607ef9f772b8cf5b629bd859f",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/ijms19010170",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "95c8f8e242f5561607ef9f772b8cf5b629bd859f",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
216179411 | pes2o/s2orc | v3-fos-license | Development Potential as an Economic Background for Regulation of Regional Economic and Social Processes
Nowadays more and more attention is paid to the issue of ensuring a stable regional socio-economic situation by developing approaches that treat regional potential as a background and a key focus in the generation and application of tools for regulating economic and social processes. Treating regional development potential as a result of the successful accomplishment of state and municipal programs, with an updated set of quantitative and qualitative features, allows us to solve a number of topical problems of regional development. Among them are: resource support of the socio-economic processes of regions; saving and redistribution of development costs; and optimal use of resources allocated through state programs. In terms of chronic underfunding, developing principles for introducing amendments to state programs at a next stage of their execution will allow us to keep the most significant activities from the program list intact. A broadened concept of the development potential as a background of a regional strategic planning system contributes to an objective assessment of the effective socio-economic development of the region.
Introduction
During a crisis period, the importance of a territorial development strategy increases together with the role of a program-target approach as an effective balancing tool of financial support for remote regions [12,17,19,20]. Despite a fairly wide and authoritative research pool and well-established scientific theories that explain many challenges of the socio-economic sphere, the issue of regulating regional socio-economic development needs further consideration [11,14]. The need to improve institutional processes in the regions requires an updated content and further development of a program-target approach towards regulation of regional socio-economic systems (SES). Researchers note, first of all, the lack of universal methods to calculate the fiscal effect of the programs, and the lack of methods to determine the budgetary resources allocated for regional development.
Relevance
This issue is becoming relevant due to serious imbalances in the socio-economic development of certain regions, where the lack of resources becomes chronic [9,15,21]. Due to procedural difficulties, programs are often not officially closed, but rather cease to be funded, in whole or in part. In this respect, a mechanism for the prompt transfer of funds between programs and their activities is of scientific and practical interest.
There are objective reasons that result in clarifications of, amendments to, and additions to the approved programs. For example, a number of researchers explain the need for adjusting state programs by the annual budget approval and amendments thereto; changes of templates; unpredictable events (policy changes); the necessity to refine the contents or eliminate errors; the inefficiency of programs, etc. [2,3,6,7,10]. Given the above, it is necessary to develop principles for introducing amendments to programs at a next stage of their execution.
Statement of the problem
The focus of economic, institutional and other regional resources is the development potential of regional socio-economic systems [18]. Its analysis reflects objective changes in the financial, economic and organizational plans of the regions [1,4,5]. Therefore, a methodological principle for developing the program-target approach is an algorithm that structures the SES development potential into separate elements (economic, innovation, institutional) and components (fiscal-budget and investment-entrepreneurial). A novelty here is that this separation principle can be treated as a key principle for structuring the state programs of any constituent entity of the Federation [4].
The object-subject approach to the systemic representation of state programs allows us to treat differentiation criteria as a group of indicators that characterize the property-budgetary and investment-entrepreneurial activities of the authorities within the activities list of a certain program. The interests of all parties to property-budgetary and investment-business relations in a region are united by a common drive to meet the needs in the course of production of collective goods as a condition for effective regional (municipal) development.
The creation of methodological tools for the quantitative assessment of the development potential allows us to determine how program funding depends on the potential capacity of the region; to estimate the scope of amendments (adjustment of funding); and to trace the interrelation between the indicators of regional socio-economic development and the capacity of the development potential.
Theoretical part
In order to formalize the program-target approach as a tool for regulating socio-economic processes, we used an economic-mathematical method of experiment planning [8]. This allowed us to study the area of the target function Y, the SES regional development potential, corresponding to the starting development potential (factor x1) in terms of a quantitative assessment of property, budget, business, and investment risks (factor x2):

Y = f(x1, x2), (1)
where Y is the target function (response) (program indicators and components of the development potential), which serves as an object of optimization and (or) a factor of limitation over the study area (as target indicators and the funding sum for program activities).
While collecting, normalizing, and statistically processing the data pool for the assessment of the development potential (Ip) and risk (Ir), we formulated the input factors of the model (development potential in terms of property, budget, investment and business risks). We considered changes in the input factors over periods of time, so that each year corresponds to a certain level of the development potential (funding sum for the program stage).
In order to prepare experimental data for the target functions, based on a performance analysis of existing state programs and the content of the investment-entrepreneurial and fiscal-budget components of the development potential, we selected a number of funding periods. The regression equation is a quadratic polynomial, since it gives a more accurate approximation of the target function models.
Yn(Ip, Ir) = b0 + b1·Ip + b2·Ir + b3·Ip·Ir + b4·Ip² + b5·Ir² (2)

To get a mathematical model as a regression equation, we performed a regression analysis of the response function. The coefficients b0, b1, etc. were calculated as follows: the sum of the products of the prepared data for the target functions (Y) and the elements of the corresponding column in the planning matrix was divided by the sum of squares of the elements in the same column. Thus, having conducted a series of "experiments" with different ratios of factors, we obtained a response function in three-dimensional space and the possibility of experimental optimization of the object of study.
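For readers who want to reproduce the fit, equation (2) can also be estimated by ordinary least squares; the sketch below (numpy) uses least squares in place of the planning-matrix column sums described above, and the observations are hypothetical, not data from the paper.

```python
import numpy as np

def fit_quadratic_surface(ip, ir, y):
    """Least-squares fit of Y = b0 + b1*Ip + b2*Ir + b3*Ip*Ir + b4*Ip^2 + b5*Ir^2."""
    ip, ir, y = (np.asarray(v, dtype=float) for v in (ip, ir, y))
    X = np.column_stack([np.ones_like(ip), ip, ir, ip * ir, ip**2, ir**2])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b  # coefficient vector (b0, ..., b5)

# Hypothetical normalized observations: potential Ip, risk Ir, program funding Y.
ip = [0.2, 0.4, 0.6, 0.8, 0.3, 0.7, 0.5]
ir = [0.1, 0.3, 0.2, 0.5, 0.4, 0.1, 0.3]
y  = [0.30, 0.52, 0.61, 0.70, 0.41, 0.66, 0.55]
b = fit_quadratic_surface(ip, ir, y)
```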
As a result, we found the elements of the investment-entrepreneurial (IEecon, IEinst, IEinn) and fiscal-budget (FBecon, FBinst, FBinn) components of the SES regional development potential, and we constructed the response surfaces in the factor space for the following dependencies:

IEecon = f(Ip, Ir), IEinst = f(Ip, Ir), IEinn = f(Ip, Ir), (3)
FBecon = f(Ip, Ir), FBinst = f(Ip, Ir), FBinn = f(Ip, Ir)

Using the method of straightforward enumeration in the TurboBasic program, we found the point on the response surface that corresponds to the minimal required values of the components of the SES regional development potential, as well as the funding of programs in monetary terms.
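The "straightforward enumeration" step, which the authors implemented in TurboBasic, amounts to a grid search over the fitted surface; a minimal re-implementation sketch, assuming the coefficient vector b from the previous snippet:

```python
import numpy as np

def surface(b, ip, ir):
    """Evaluate the fitted quadratic response surface of equation (2)."""
    return b[0] + b[1]*ip + b[2]*ir + b[3]*ip*ir + b[4]*ip**2 + b[5]*ir**2

def enumerate_minimum(b, steps=201):
    """Straightforward enumeration (grid search) over the unit factor space
    for the point with the minimal value of the response."""
    grid = np.linspace(0.0, 1.0, steps)
    ip, ir = np.meshgrid(grid, grid)
    z = surface(b, ip, ir)
    k = np.unravel_index(np.argmin(z), z.shape)
    return ip[k], ir[k], z[k]
```

With the hypothetical fit from the previous snippet, enumerate_minimum(b) returns the (Ip, Ir) pair and the response value at the grid minimum.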
In terms of a program-target approach, state and municipal programs serve not only as a background for strategic plans for regional development, but also as a mechanism of expanded reproduction of the development potential as a sum of its material and non-material components. The final result of the programs, in our opinion, can be considered as an assessment of the entire system of regional management as a whole [5].
Analysis of the relations between the components of the development potential and the indicators of the regional socio-economic level allowed us to obtain statistically relevant correlation and regression dependencies as two-factor linear models (three-dimensional regression). The effect of the components of the development potential was determined through specific elasticity coefficients; β and Δ are coefficients demonstrating reserves for the growth of socio-economic development indicators [16].
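The paper does not reproduce the formulas behind these coefficients. In the econometric convention the text appears to follow, the specific elasticity, β, and Δ coefficients are usually defined as below; this is our assumption, not a formula given in the source.

```latex
E_i = b_i \frac{\bar{x}_i}{\bar{y}}, \qquad
\beta_i = b_i \frac{\sigma_{x_i}}{\sigma_y}, \qquad
\Delta_i = \frac{r_{x_i y}\,\beta_i}{R^2}
```

Here b_i are the regression coefficients, x̄_i and ȳ the sample means, σ the standard deviations, r_{x_i y} the pairwise correlations, and R² the coefficient of determination.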
As a result of the research, we established that the applied elements of the SES regional development potential have a dominant effect on the indicators of its medium-term development. This explains the dynamic nature of the development potential, expressed herein as a generalizing indicator of SES regional socio-economic development (P), where N_i^r is the calculated value of the i-th estimated indicator, received from the equations of correlation-regression dependence; N_i^b is the value of the i-th indicator selected as basic; and K is the number of indicators selected as components of the comprehensive indicator of the socio-economic development level.
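The formula for P itself did not survive extraction. A reconstruction consistent with the definitions above (an average of calculated-to-basic indicator ratios over the K selected indicators) would be the following; the exact form is our assumption.

```latex
P = \frac{1}{K} \sum_{i=1}^{K} \frac{N_i^{r}}{N_i^{b}} \qquad (4)
```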
The methodological background for assessing how the predicted values of the elements of the development potential structure act on the generalizing indicator of SES regional socio-economic development (4) relies on the identification of the priority components of the development potential that have the greatest effect on the growth of indicators of regional socio-economic development.
Practical relevance
In order to test the proposed methodological approach, we analyzed a set of state programs in Khabarovsk Region in the period since 2012. The results showed that the regression equations were significant by Fisher's F-criterion only for six out of eleven models that were defined according to the assessment criteria for the effective performance of executive authorities [13]. Among others, they include the following criteria: turnover of small enterprises; ratio of the number of employed people to the working-age population; tax and non-tax revenues of the regional budget in the total revenues of the consolidated budget; investment in fixed assets; share of high-tech and knowledge-intensive industries in the GRP (in the total output), etc.
The specific effectiveness of a model to regulate the development potential was calculated from the quantitative effect of each component of the potential (factor) on the growth of a certain indicator of socio-economic development (specific elasticity coefficient). Using the βi and Δi coefficients, we differentiated the effect of factors on the regional development criteria and identified reserves for improving the indicator through the degree of variation of the factors included in the correlation-regression model.
Analyzing the effect of the components of the development potential on the general indicator of social and economic development (4), we determined the dominant elements of the development potential (the institutional and economic elements of the investment-entrepreneurial component, and the innovation and economic elements of the fiscal-budget component).
Results
Thus, taking into account the preset parameters of budget efficiency, the balance of interests of all parties, and the prescribed level of budget funds use, we derived an optimal ratio between the funding of state programs at the regional level and the priorities for effective (efficient) use of the development potential. This will allow us to configure the innovative, economic and institutional components of targeted programs, taking into account their degree of effect on the indicators of regional social and economic development, as well as to monitor the effect of SES regional development on the achievement of the planned indicators of socio-economic dynamics in the region. | 2020-04-02T09:33:24.452Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "fcc612cee0840c949b42b8beb8ead71a32e83797",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.2991/aebmr.k.200312.144",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "cfdf4fd3401f78f209202c281e34ef84550c1e82",
"s2fieldsofstudy": [
"Economics",
"Political Science"
],
"extfieldsofstudy": [
"Economics"
]
} |
214246247 | pes2o/s2orc | v3-fos-license | Changing the notation that represents a force changes how students say it
To facilitate both learning about forces and coordinating forces with the system schema, force symbols in University Modeling Instruction carefully represent forces as detailed descriptions of interactions. For example, F^g_{E→B} represents the gravitational force by Earth on a ball, where "g" represents gravitational (i.e. the type of interaction), "E" represents Earth, → represents "by" and "on", and "B" represents ball. Although students are taught to say F^g_{E→B} as "gravitational force", audio data from student-led whole-class discussions shows that more than 40% of the time F^g_{E→B} was referred to as "force gravity" instead. Symbols for contact force, such as F^c_{H→B}, were also similarly referred to as "force contact" rather than "contact force" more than 40% of the time. Because language plays such a crucial role in learning physics, several years ago, as an experiment, the notation was changed from F^g_{E→B} to ^{g}F_{E→B} to make it more closely match how it is to be read. After this experimental notation switch, student use of "force gravity" dropped to less than 2%, while use of "force contact" completely disappeared. While we make no claims that helping students read symbols more effectively also facilitates their learning about forces, it is clear that the simple change in notation was extremely effective at solving the reading problem.
I. INTRODUCTION
A core principle of University Modeling Instruction (UMI) is the introduction, use, and coordination of multiple representations [1][2][3][4][5]. Quality representations that are consistent with each other are vital for helping students build quality scientific models, a central goal of UMI [6]. In addition, students who are comfortable using a range of representations in problem solving better approximate physics experts, who routinely create many different representations (e.g. mathematical, graphical, diagrammatic) in the analysis of a single problem [7,8].
We will focus on just one representation in this paper: the symbol for force. Because learning about force is extremely difficult [9][10][11], and to help students coordinate forces with the system schema, force symbols in UMI carefully represent forces as detailed descriptions of interactions [2,3]. In listening to how students referred to forces in classroom discussions, however, we noticed that they frequently did not read the symbol as it was intended.
Because proper use of language plays such a crucial role in learning physics [12], we wondered if we could help students improve their reading of force symbols. Thus our research question was: does the way the notation for a force is written affect how students say the force? To answer this question, we changed the notation in a University Modeling Instruction (UMI) classroom, and looked to see if student speech patterns changed. We found a very clear effect.
In the rest of this paper we briefly explain how UMI teaches force as a description of an interaction, describe the classroom context for this study, present our initial results comparing data from two different years of the same course from before and after the change in notation, and end with a discussion of those results. Note that we do not address the further question of whether or not helping students read symbols better also facilitates their learning about forces.
A. A Coordinated Approach
University Modeling Instruction defines force as: one way to describe the interaction between two objects. To help visually represent that complex idea, UMI developed the System Schema, which shows all objects and interactions of interest for a given physical situation [2,3]. It is a first level of abstraction after a pictorial representation, and serves as a conceptual bridge from that concrete representation to more abstract representations like force diagrams and Newton's Laws. When working a problem, students are encouraged to start with the schema and build their force diagrams from it.
Figure 1, taken from reference [3], shows a typical problem from a university introductory physics course represented three different ways. As described below, all three representations are strongly coordinated with each other, especially the force symbols with the system schema. This is to help students build a consistent and coherent model while providing them with multiple ways to check their answer. Consistency is a narrative that runs through all representations of a given model in the UMI classroom.

Figure 1 caption (partially recovered): … [13] identified. The dashed ellipses represent system 1 (S1) and system 2 (S2) respectively. "c" labels a contact interaction, and "g" labels a gravitational interaction. (c) Force diagrams for the two systems identified in (b). In a force label, "c" means contact and "g" means gravitational. Also, for this particular scenario, "B" means book, "R" means brick, "E" means earth, and "F" means floor. For example, the symbol F^c_{B→R} is read as the contact force by the book on the brick. The mass of the brick has arbitrarily been chosen to be three times the mass of the book, so F^g_{E→R} is three times the length of F^g_{E→B}.
The two crucial ways a system schema and a force diagram coordinate follow directly from the UMI definition of force. The first is that for each interaction that crosses a system's boundary (the dashed ellipses in Figure 1b) there is one force exerted on that system. For example, three interactions cross the Book's system boundary, so the Book's force diagram should have three forces in it. If those numbers do not match, the student has direct feedback that they made an error somewhere.
The second way is that each force symbol in a force diagram describes only one particular two-headed interaction arrow in the schema and the two objects it connects. The symbol does this by identifying (i) the type of interaction it is describing (contact or gravitational here), (ii) the two objects at either end of the interaction arrow, (iii) which object is exerting a force on the system, and (iv) the system itself.
As a result, the super-script and sub-script in each force symbol make excellent bookkeeping devices when constructing a force diagram from the system schema. The super-script should match the type of interaction ("c" or "g") the force is describing. The second letter in the subscript (R for the Brick, and B for the Book) should represent the system of interest. It then follows that this second letter should be the same for all the forces in the force diagram for that system. For example, in Fig. 1c, the second letter is B for all forces acting on the Book. If the second letter differs within a given force diagram, then the student has made an error and gets direct feedback to that effect from seeing the inconsistency in their symbols. Finally, the first letter in the subscript should match the object outside the system at the other end of the interaction arrow the force is describing.
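These bookkeeping checks are mechanical enough to state as a small procedure. The sketch below is not from the paper; the tuple encoding, the names, and the check_diagram helper are illustrative assumptions, applied to the Figure 1 scenario:

```python
# A minimal sketch (not from the paper) of the bookkeeping checks above.
# An interaction (kind, obj_a, obj_b) is a two-headed arrow in the schema;
# a force symbol (kind, agent, system) stands for F^kind_{agent -> system}.
SCHEMA = [("c", "F", "B"), ("c", "B", "R"), ("g", "E", "B"), ("g", "E", "R")]

def check_diagram(system, forces, schema=SCHEMA):
    crossing = [i for i in schema if system in (i[1], i[2])]
    # Check 1: one force per interaction crossing the system boundary.
    if len(forces) != len(crossing):
        return f"expected {len(crossing)} forces on {system}, found {len(forces)}"
    for kind, agent, sys in forces:
        # Check 2: the second subscript letter must name the system itself.
        if sys != system:
            return f"F^{kind}_({agent}->{sys}) does not act on system {system}"
        # Check 3: the agent and type must match a boundary-crossing interaction.
        if not any(k == kind and {agent, system} == {a, b} for k, a, b in crossing):
            return f"no {kind} interaction between {agent} and {system} in the schema"
    return "diagram is consistent with the schema"

# Three interactions cross the Book's boundary, so three forces are expected:
print(check_diagram("B", [("c", "F", "B"), ("c", "R", "B"), ("g", "E", "B")]))
```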
Because language plays a crucial role in learning physics, students are also instructed on how to "read" a force symbol. For example, $\vec{F}^{g}_{E\to B}$ from Figure 1c is read as "the gravitational force by Earth on the Book". This locution emphasizes the idea of force as a description of an interaction between two objects: the type of interaction (gravitational) is explicitly read, and the two objects involved in the interaction are also explicitly mentioned (Earth and Book) in a cause and effect manner (by Earth on the Book). This reading emphasizes that force is not a disembodied thing and that objects do not possess force or give force (in contradistinction to an impetus model of force that many novices bring into a college physics course [9,14]), but rather that objects merely exert forces on other objects.
B. The Problem and the Intervention
As we attended to student discourse in the classroom, however, we noticed that some students occasionally referred to a $\vec{F}^{c}_{B\to R}$ as "force contact", or a $\vec{F}^{g}_{E\to B}$ as "force gravity" (they never said "force gravitational"). That is, they read the symbols literally, from left to right, which is an extremely reasonable thing for them to do. Not to get into grammar too deeply, but those phrases are deceptively close to "force of contact" or "force of gravity", and such language could potentially reinforce students' novice preconceptions and interfere with their ability to construct a more expert-like understanding of the force concept.
For example, could such language possibly lead students to think that "contact" and "gravity" exert forces rather than actual objects? Or could it possibly lead them to think that "contact" and "gravity" "possess" force and give it or transfer it to various objects, thus reinforcing the problematic impetus model of force? Further, "gravitational" should be preferred over "gravity" because it is an adjective and is thus descriptive, whereas "gravity" is a noun whose use could lead to the possible problems described previously. These are rather subtle, but potentially important points. And if there is a bi-directional interaction between language and thought [15-17], then it is important that we attend to the problem. Perhaps some students would somehow benefit if we could remove a possible impediment (incorrect reading of a symbol) to the challenging task of learning the force concept.
We wondered if changing the symbol for force would change the way students said it. If they were literally reading the symbol from left to right, why not put the super-script interaction label on the left, so that it would be the first thing they might process when reading the symbol in a conventional English fashion? Thus, the next year we taught the course we did just that. The force label was introduced as $^{c}\vec{F}_{B\to R}$ or $^{g}\vec{F}_{E\to R}$ instead and used like that the entire year. Then we compared data to see the effect of the intervention.
A. Classroom Context
The context for this study is the calculus-based introductory physics course taught using UMI and taken by all science majors at Drury University. The heart of UMI is Modeling Discourse Management (MDM) [18], a learning-community approach that explicitly focuses on the epistemology of science. It is designed to help students understand that the conclusions of science are tentative and evolving and that knowledge and understanding of meaning are constructed and shared through dialogue with others. In MDM, students work in small groups to create a solution to the same problem on a 2' x 3' whiteboard. They then sit in a large circle with their whiteboards held facing in and conduct a student-led whole-class discussion ("board meeting") to reach consensus [19,20].
B. Data
For our initial data, we looked at the year before the change in notation and the year after. We identified seventeen problems that were used in class both years. We (DS) listened to audio recordings of these thirty-four board meetings and tabulated counts of the different ways students referred to the symbols for contact and gravitational forces. We also attempted to count how many different students actually said a particular utterance (N in Tables I and II). We were able to determine N for 2015-16, but have not yet determined it for 2016-17. In 2015-16 there were 703 minutes of audio in total for all seventeen problems and the average problem lasted 41 minutes, while in 2016-17 there were 606 minutes in total and the average problem lasted 35 minutes.
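For concreteness, the tabulation step can be pictured as a simple tally over hand-coded transcript records. The sketch below is hypothetical (the record format and all entries are invented for illustration); it counts utterances per problem and distinct speakers per phrasing, the N of Tables I and II:

```python
# Hypothetical sketch of the tabulation described above: tally how each
# reference to a force was phrased, per problem, and how many distinct
# students used each phrasing. All records below are invented examples.
from collections import Counter, defaultdict

coded_utterances = [          # (problem, speaker_id, phrasing)
    ("N2 lab", "s03", "force contact"),
    ("N2 lab", "s07", "contact force"),
    ("Atwoods", "s03", "force gravity"),
    ("Atwoods", "s11", "force gravity"),
]

per_problem = defaultdict(Counter)   # phrase counts per problem
speakers = defaultdict(set)          # distinct speakers per phrasing
for problem, sid, phrasing in coded_utterances:
    per_problem[problem][phrasing] += 1
    speakers[phrasing].add(sid)

print({p: dict(c) for p, c in per_problem.items()})
print({phrase: len(ids) for phrase, ids in speakers.items()})  # N per phrasing
```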
We only had audio of the board meetings, not video. But despite an occasional difficulty in hearing different students in the different recordings, we are confident in both kinds of counts (number of utterances as well as N).

TABLE I. The number of times during a given problem that a particular utterance was said when referring to a contact force. Quotes indicate verbatim what was actually said. A blank means that that particular utterance was not said during that problem. Note that "force contact" is said 79 times in total when the old notation is being used but is not said at all when the new notation is being used. Class size in 2015-16 was twenty-seven and in 2016-17 was twenty-eight.

TABLE II. The number of times during a given problem that a particular utterance was said when referring to a gravitational force. Quotes indicate verbatim what was actually said. A blank means that that particular utterance was not said during that problem. Note that "force gravity" is said 45 times in total when the old notation is being used but is said only once when the new notation is being used. Class size in 2015-16 was twenty-seven and in 2016-17 was twenty-eight.
To get an estimate of the uncertainties for both, we plan to have a different researcher listen to the audio and compare their counts with this set of data, but we have not yet done this. However, given the stark difference in counts for "force contact" and "force gravity" from before the notation change to after, we do not think any uncertainties in those counts will be very relevant to any conclusions we might make in this paper.
IV. DISCUSSION
Tables I and II show the striking results. Before the notation change, "force contact" utterances accounted for about 46% of the references to contact forces (if we include "contact" in the same category as "contact force") or 55% (if we ignore the "contact" counts), while "force gravity" utterances accounted for about 41% of the references to gravitational forces (if we include "gravitational" in the same category as "gravitational force") or 53% (if we ignore the "gravitational" counts).
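To make the denominators behind those percentages concrete: Table I notes 79 "force contact" utterances in total under the old notation, so the two quoted shares imply

\[
\frac{79}{0.55} \approx 144 \ \text{references excluding bare "contact"}, \qquad
\frac{79}{0.46} \approx 172 \ \text{references including it},
\]

i.e., on the order of 172 - 144 = 28 bare "contact" utterances. This is a back-of-envelope reconstruction from the quoted percentages, not a figure reported in the tables themselves.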
There is some ambiguity in the "contact" and "gravitational" categories. Based on context clues, when students say either of those words, they are at least sometimes actually referring to interactions (in the schema) rather than to forces themselves. But because we only have audio, we are not always able to distinguish between the two uses.
There are many different things to notice in the data shown in Tables I and II. For example, there are a handful of different students who use "force contact" and "force gravity". That is, N varies from 1 up to 9 for "force contact" and from 1 up to 5 for "force gravity". Early on only one student is saying "force gravity", but more start using that locution in later problems.
It is very interesting that during the first three problems of the semester related to force (collision lab, man-scale, ball thrown up) there are no mentions of "force contact" or "force gravity" at all. It is only after the class does a lab related to Newton's Second Law (N2 lab) that students start to use those problematic locutions. It's possible this is because, in addition to asking students to make well-labeled force diagrams, the first three problems also explicitly ask students to write out how they should read the symbols. That explicit requirement about reading the symbols gets dropped for later problems. We could test this hypothesis by explicitly requiring students to write out how they should read the symbols throughout the sequence of problems related to force.
It is also possible that later problems, which require students to worry about drawing force diagrams to scale and making sure they are consistent with the second law, lead to more confusion and thus possibly less careful language. The Atwoods data seem to support this idea: it is probably the hardest force problem of the semester, and there are no mentions of "contact force" or "gravitational force" at all; all but three references are to "force contact" or "force gravity".
Although we need to go back and carefully listen to the audio data, our impression is that for energy the reading problem does not seem to occur, even though in UMI energy symbols follow the same form as the original force notation. That is, in a UMI classroom, energy is denoted by $E_x$, where x = k for kinetic energy, x = i for internal energy, x = c for chemical energy, etc. But we don't seem to find students saying "energy kinetic", "energy internal", or "energy chemical". We think there are two possible explanations for that. One is that students are introduced to the different energies as early as third grade and so from early on are saying "kinetic energy", "potential energy", etc. It's possible that that language is such an ingrained habit that by the time they reach college the ideas and notation are not sufficiently different to trip them up.
In contrast, they usually don't get introduced to force at a technical level until at least high school, and even then they most likely learn about weight and tension and friction, rather than thinking about forces as descriptions of interactions. So all the emphasis in a UMI classroom on contact forces and gravitational forces is quite new to them, and they struggle with the language a bit, especially when it does not match the symbol (as the old notation did not).
The other reason might be the different ontologies of energy and force. We talk about forms of energy [21] or represent it as a substance [22,23], but we categorize the concept of force into different ontological categories (matter, or process) depending on context [24]. The more complex ontological nature of force possibly leads to more linguistic challenges for learners.
While we make no claims that helping students read symbols more effectively also facilitates their learning about forces, it is clear that the change in notation was extremely effective at solving the reading problem. Incorrect references essentially disappeared: across seventeen problems involving hundreds of minutes of student dialogue, there are no mentions of "force contact" and only one mention of "force gravity". However, field notes show that some students in 2015-16 felt frustrated by their constant stumbling over reading force symbols. With the change in notation, perhaps at a minimum they would simply feel more comfortable talking about force, and that would facilitate their overall learning of the concept.
FIG. 1. (a) Pictorial representation of a physical situation. All objects are at rest. (b) System schema of this physical situation, with two of many possible systems [13] identified. The dashed ellipses represent system 1 (S1) and system 2 (S2) respectively. "c" labels a contact interaction, and "g" labels a gravitational interaction. (c) Force diagrams for the two systems identified in (b). In a force label "c" means contact, "g" means gravitational. Also, for this particular scenario "B" means book, "R" means brick, "E" means earth, and "F" means floor. For example, the symbol $\vec{F}^{c}_{B\to R}$ is read as the contact force by the book on the brick. The mass of the brick has arbitrarily been chosen to be three times the mass of the book, so $\vec{F}^{g}_{E\to R}$ is three times the length of $\vec{F}^{g}_{E\to B}$. | 2020-01-16T09:03:43.556Z | 2020-01-07T00:00:00.000 | {
"year": 2020,
"sha1": "22c2241d4dc4bf7e7f3ed20d3c163bdae515cb5e",
"oa_license": "CCBY",
"oa_url": "https://www.compadre.org/per/items/5193.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "7c4644c3696f6bc3e48effeefd0317ececdb8d56",
"s2fieldsofstudy": [
"Education",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
269700803 | pes2o/s2orc | v3-fos-license | Contribution of advanced windows and façades to buildings decarbonization: A comprehensive review
INTRODUCTION
Windows and façades are important components of a building: they provide natural lighting and ventilation and establish the contact with the external environment that is needed for a healthy indoor environment, but they also admit undesirable solar heat in hot seasons and let heat escape from the indoor environment in cold seasons. These deficiencies increase the building energy demand and augment its contribution to harmful emissions. This scenario has led to a tremendous increase in research and development on windows and façades to overcome their deficiencies and increase their ability to handle ever-changing ambient conditions while keeping the indoor environment comfortable and healthy.
Windows and façades are glazed areas whose U-values are high in comparison with other components of the building. Figure 1 shows the U-values of some commercially available glazing systems (Akram et al., 2023). Despite this deficiency, large glazed areas are widely used in modern and contemporary architecture. Several studies were dedicated to improving the thermal performance of glazed windows and façades by increasing the thermal resistance of the glazing system, using phase change materials (PCM), inert gases, vacuum glazing, etc., and by using smart windows to better control indoor heat and luminance. Smart technologies include active dynamic glazing, which modulates the amount of visible light entering, reduces energy demands, and enhances thermal and visual comfort (Casin, 2018).
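To give a feel for what these U-values mean in practice, the steady-state heat flow through a glazing is Q = U·A·ΔT. The short sketch below compares glazing classes using typical U-values assumed purely for illustration (they are not read off Figure 1):

```python
# Back-of-envelope heat flow Q = U * A * dT through 1.5 m^2 of glazing at a
# 20 K indoor-outdoor temperature difference. The U-values are typical,
# assumed figures for each glazing class, not values from this review.
AREA_M2, DELTA_T_K = 1.5, 20.0
U_W_PER_M2K = {
    "single glazing": 5.8,
    "air-filled double glazing": 2.8,
    "low-e double glazing with argon": 1.3,
    "vacuum glazing": 0.7,
}
for glazing, u in U_W_PER_M2K.items():
    print(f"{glazing:>32}: {u * AREA_M2 * DELTA_T_K:6.0f} W")
```

On these assumptions, moving from single glazing to vacuum glazing cuts the transmission loss from about 174 W to about 21 W for the same pane.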
Figure 2 shows the classification of established glazing technologies (Michael et al., 2023).
In regions with moderate winters and summers, single-glass windows are still in use, mainly because of their low cost; applied solar control coatings and films give them the ability to effectively reduce entering solar radiation, luminance, and glare, besides promoting cooling energy savings. Figure 3 shows the energy flow for a single-pane window glass (Chow et al., 2010).
Low-e coatings are almost fully developed and widely applied. Investigations of electrothermal coatings, which convert electricity to heat by the Joule effect, are ongoing but limited by the need for a power supply, while photothermal coatings were investigated to improve glazing performance by absorbing ultraviolet (UV) and infrared (IR) radiation. Figure 4 shows the mechanisms of heat transfer and daylight through a double-glazing unit (Mohammad & Ghosh, 2023).
Further improvements were incorporated in double-glass windows, including forced and induced airflow in the spacing separating the glass sheets, as shown in Figure 5 (Chow et al., 2010).
Another improvement was incorporated using liquid flow, especially water, to reduce heat flow to the interior (Chow et al., 2010). Figure 6 shows a water-cooled double-glass window (Chow et al., 2010). The reported results indicated a large reduction of heat gain and an enhancement of indoor comfort.
Aerogel is a silica-based, porous material composed of about 4.00% silica and 96.00% air; it suppresses convective heat transfer, has a thermal conductivity of about 0.013 W/mK, and still allows light transmission.
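For scale, a simple conduction estimate (with a layer thickness assumed purely for illustration) shows why aerogel layers insulate so well:

\[
R = \frac{t}{k} = \frac{0.020\ \mathrm{m}}{0.013\ \mathrm{W/mK}} \approx 1.5\ \mathrm{m^2 K/W},
\]

roughly 400 times the conductive resistance of a 4 mm glass pane (0.004 m / 1.0 W/mK = 0.004 m²K/W). The 20 mm aerogel thickness here is a hypothetical value chosen for the example.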
Studies showed that the inclusion of PCM in glazed units can decrease building energy demands, enhance indoor comfort, and delay and attenuate indoor temperature swings, but it has the drawbacks of possible leakage and overheating.
Vacuum glazing is widely used in very cold countries because of its low heat loss and high visible transmittance. Heat radiation is the only mode of heat transfer through these windows. Figure 7 shows a general layout of a vacuum window.
The above highlights show that glazed windows and façades were, and still are, the subject of intense research activity and dedicated development aimed at making them efficient, safe, affordable, and aesthetically attractive.
The main objective of this work is to present an updated review of the state of research, development, and applications of windows and façades in the building sector. The review covers natural illumination and visual comfort, natural ventilation, thermal comfort, and the influence of window area on natural illumination and visual and thermal comfort. The review also covers performance enhancements of windows and façades, including reflective solar films, vacuum glazing, double-glass windows with filling materials (water flow, PCM, stagnant inert gas), ventilated windows (VWs) and façades, windows with aerogel, and smart glazing.
Since commercial programs and computational codes are handy tools for simulation and thermal performance calculations of components and buildings, a section is dedicated to these simulation codes and to relevant publications that used them. The conclusion section contains the most relevant conclusions of the present review as well as future trends in research and development on windows and façades.
The novelty of this paper is that, besides the energy reduction and emission avoidance usually treated in most papers and reviews, it explicitly relates natural illumination, natural ventilation, and window and façade geometry to thermal and visual comfort, stress reduction, and the overall health and well-being of the building occupants.
The main contributions of this paper include explicitly addressing natural illumination and natural ventilation and their positive impact on attention restoration, stress reduction, and overall health and well-being. As passive strategies, they can significantly increase the liveliness, performance, and visual comfort of residents, besides reducing the overall electrical energy consumption of a building. The paper also highlights the importance of appropriate window configuration to improve visual and thermal comfort by reducing glare, distributing light, and controlling solar energy gain. Finally, the paper provides an updated review of the state of research, development, and applications of windows and façades in the building sector. This review is organized in five sections, as below: 1. Introduction: The current scenario in window and façade technologies is presented and commented on.
The section ends by presenting the objectives of the review, the contributions of the review to the area, and the basic contents of the review.
2. Illumination, visual comfort, thermal comfort, & natural ventilation:
In this section natural illumination, visual comfort, natural ventilation, and thermal comfort are treated, and studies reporting scientific research and results are presented. The impact of the window to wall ratio on visual and thermal comfort is also presented.

Figure 6. Schematic illustration of solar absorbing glazing (Chow et al., 2010)

Figure 7. Schematic of double evacuated glazing unit (Michael et al., 2023)
3. Performance enhancements of windows & façades:
In this section windows with water flow, windows with PCM, windows with stagnant inert gas filling, VWs and façades, windows with aerogel, and smart glazing are presented.
4. Commercial programs & codes:
In this section the most widely used commercial codes for simulating the thermal performance of buildings and components are presented and commented on.
5. Conclusions, future trends in research, & developments: In this section the main findings of the review are reported and the gaps for future research and development are highlighted.
ILLUMINATION, VISUAL COMFORT, THERMAL COMFORT, & NATURAL VENTILATION
Windows provide the contact between the internal space of a building and the external surroundings. Internal natural illumination, natural ventilation, and reduction of heat transfer to and from the building can only be partially achieved by traditional glazed windows under severe and sudden climatic variations. Hence it is necessary to improve windows and façades so that they respond quickly and adequately to momentary changes and continuously ensure a comfortable and healthy environment for the occupants.
Natural Illumination & Visual Comfort
Daylight is a free energy and cost-effective alternative to artificial lighting and has favorable effects on comfort, health, well-being, and productivity.A fair amount of research work was done to evaluate natural daylight effects on occupants, and work efficiency.Yao (2014) conducted a study to investigate the effects of movable solar shades on energy consumption, indoor thermal and visual comfort.Results showed that movable solar shades provided 30.87% energy savings, improved thermal comfort by 21.00% and visual comfort by 19.90%.Carlucci et al. (2015) provided a review of indices for assessing visual comfort and presented recommendations and suggestions for improvement.Nasrollahi and Shokri (2016) reviewed the basic concepts of daylighting, and the architectural parameters in an urban context.The results indicate that these factors are of high significance.Hoffmann et al. (2016) investigated coplanar shades with different geometry and material.Energy use increased substantially when an additional interior shade was used for glare control.Acosta et al. (2016) conducted a study to investigate the impact of window design on energy savings for lighting and visual comfort in residential spaces.Silva et al. (2016) provided a literature review on the use of PCMs in glazing and shading solutions focused on PCM technologies developed for translucent and transparent building envelopes to improve energy efficiency.Costanzo et al. (2016) investigated the application of thermo-chromic windows to an existing office building for energy savings, daylighting, and thermal comfort.The results showed an overall energy savings from around 5.00% for cold climates to around 20.00% in warm climates.Vlachokostas and Madamopoulos (2017) conducted a study to investigate the thermal performance of a liquid filled prismatic louver façade and concluded that the liquid filled louver system enhanced the indoor natural light by 9.00%, reduced the glare by 20.00% and enhanced absorption of solar radiation by 52.40%.Shen and Tzempelikos (2017) presented details of a simplified model-based shading control.The method was generalized and can be applied to any shading/glazing properties, location, orientation, and room configuration.Makaremi et al. (2018) examined the influence of reflectance of surfaces and lighting strategies on energy consumption and visual comfort.The results show the possibility of electrical energy savings up to 45.00% by increasing surface reflectance properties.Eisazadeh et al. (2019) investigated the influence of glazing characteristics and shading device configuration and concluded that they severely impact energy cost, daylighting, and visual comfort.Gutiérrez et al. (2019) examined the design and daylight performance of a new louver screen for office buildings and concluded that it can provide adequate daylight levels and visual comfort.Dogan and Barch (2019) provided a review on daylighting metrics for residential architecture and introduced a concept for a new climate-based, annual evaluation framework.Zhang et al. (2019) presented the results of a human experiment of the effects of glazing types (color and transmittance) on participants' alertness, mood and working performance.For a high Circadian Stimulus CS level (≥0.3), glazing color and transmittance did not significantly affect human alertness.A low CS level (<0.3) caused significant negative mood to occupants.Day et al. (2019) reported similar results from a large-scale study in the US.Ke et al. 
(2019) provided a review on recent progress in smart windows including functional materials, device design and performance.Kaasalainen et al. (2020) investigated the direct and combined energy impacts of window related architectural design and concluded that cooling can be a significant factor and that energy efficiency should always be evaluated.Onubogu et al. (2021) presented a review on the existing technologies of daylighting systems covering both passive and active daylighting systems, besides providing recommendation for future research.Khaled and Berardi (2021) presented a review on coating technologies for glazing applications and commented that both static and dynamic technologies contribute to enhance optical and thermal performances.Foroughi et al. (2021) developed an optimization model to identify the optimum window design parameters.The results show that selecting optimum window dimensions and locations can reduce the total building energy consumption.Feng et al. (2021) reviewed and analyzed window design studies to achieve high performance, presented simulation-based optimization methods, and identified potential challenges and future research trends.Eleanor et al. (2022) identified critical needs in research, tools, and technologies to enable more effective use of daylight.Advanced window technologies and integrated design can enable achieving health, comfort, and net zero energy goals.Lee et al. (2022) proposed a technique for improving the efficiency of the light shelf using a solar module and evaluated experimentally its performance.The results showed that the energy consumption of the proposed system is reduced by 3.50%-32.70%.Li et al. (2022) provided an overview of research advances in optical transmittance, thermal resistance, and thermal inertia along with photothermal transmittance in glazing envelopes, with a special focus on the integration of PCMs.Li and Tang (2024) provided a review on PCM window for thermal and light dynamic regulation and identified the potential of these concepts to improve light and thermal regulation.Table 1 presents a summary of some studies on natural illumination and visual comfort in buildings.
Comments
Visual connection to nature has been demonstrated to have a positive impact on attention restoration, stress reduction, and overall health and well-being. Inside buildings, windows are the primary means of providing a connection to the outdoors, and nature views may have similar effects on the occupants.
Glazed envelopes suffer from high solar transmittance, poor thermal insulation, and low thermal inertia, while traditional windows, as the major source of daylight, share a common problem: uneven distribution of daylight in the room, besides possible glare. Daylighting is a passive strategy that is significant in increasing the liveliness, performance, and visual comfort of residents, and it helps reduce the overall electrical energy consumption of a building. Since daylighting is essential for the general comfort of the occupants, and considering that artificial lighting accounts for a considerable part of the electrical energy consumption in buildings, there is a need to design appropriate lighting scenarios that reduce energy use while meeting visual comfort requirements.
Natural Ventilation
Natural ventilation can provide occupants with thermal comfort and a healthy indoor environment, besides being an effective strategy for reducing energy consumption, especially for commercial and office buildings. Aflaki et al. (2014) presented a review on ventilation techniques. Results showed that building orientation and aperture size are effective parameters for increasing indoor airflow. Zhang et al. (2014) presented a review on diffuse ceiling ventilation (DCV) and examined thermal comfort, air quality, pressure drop, as well as radiant cooling potential. Yu et al. (2015) provided a review on cooling and ventilation in office buildings, including natural ventilation, building thermal mass activation, and DCV, and proposed a system combining the three technologies. Kasima et al. (2016) conducted a computational fluid dynamics (CFD) study to investigate the effect of different opening positions on wind-induced ventilation performance. Salcido et al. (2016) presented a review to analyze the use of mixed mode ventilation systems in office buildings. The authors showed the progress made and future challenges for use in office buildings. Nomura and Hiyama (2017) provided a review on natural ventilation performance of office buildings in Japan and indicated that natural ventilation performance depends considerably on the building design. Omrani et al. (2017) conducted a study to analyze the effects of natural ventilation mode on thermal comfort. Results highlighted better performance of cross ventilation over single-sided ventilation. Elshafei et al. (2017) investigated the effects of window parameters on indoor natural ventilation in a residential building. Modifications in window size, window placement, and shades improved the air temperature and the air velocity. Palme et al. (2017) presented natural ventilation as a mitigation strategy to reduce overheating in buildings. The overheating risk of a small house is evaluated with and without considering the urban heat island effect. Results show that an important portion of the indoor heat can be removed. A 2019 study evaluated the possibility of using natural ventilation in school buildings and concluded that the strategy was viable, and that single-side ventilation and cross-ventilation can improve cooling and air quality in school buildings. Chen et al. (2019) examined various types of control, including spontaneous, manual, and fully automatic window/heating, ventilation, and air-conditioning control systems, and concluded that the fully automatic system was more adequate and showed energy savings of 17.00-80.00% with zero discomfort. Guo et al. (2019) developed an approach integrating sensitivity and parametric simulation analysis and showed that the climatic conditions and night ventilation have a strong effect. Wu et al. (2019) presented a review on DCV, highlighted the research findings, and proposed simplified modeling methods for a DCV system design tool. Yang et al. (2019) reviewed advanced air distribution methods, limitations, and solutions, and analyzed measuring and evaluating methods for ventilation and air distribution. Solgi et al. (2019) studied PCM behavior when used with night ventilation and showed that insulated envelopes increase night ventilation efficiency and stabilize the PCM transition temperature. Rahnama et al. (2020) evaluated the cooling capacity of a DCV system and indicated that the highest cooling capacity is achievable with evenly distributed heat load in the room and active diffuse panels in the ceiling. Hu et al.
(2020) experimentally investigated the performance of a phase change material enhanced ventilated window (PCMEVW) and found that the room inlet air temperature was 1.4 °C lower compared to the normal VW and the average energy saving was 1.6 MJ/day.Hu et al. (2020) proposed a PCMEVW system for ventilation preheating/precooling purposes and showed that the proposed system greatly decreases the energy demands for summer and winter applications.Piselli et al. (2020) investigated possible improvements from using natural ventilation on PCM performance in cooling application and showed that PCM inclusion in the building envelope resulted in significant cooling savings.Guo et al. (2020) evaluated the resulting thermal performance from coupling a cool roof with night ventilation and concluded this can result in 27.00% savings in the annual cooling energy.
Hati (2021) presented a review on ventilation systems, variable speed drive and discussed various energy efficiency strategies and artificial intelligence-based models.Zhang et al. (2021) presented a review of combined natural ventilation and commented that the coupling between different natural ventilation systems still requires more future research.Zhong et al. (2022) reviewed research on single-sided natural ventilation and commented that in future investigations, different methodologies should be coupled.Maghrabie et al. (2022) presented a review of natural ventilation of buildings based on solar chimney (SC) and concluded that combined SCbased cooling/heating energy systems can be an effective strategy for energy efficient buildings.Sadeghian et al. (2022) investigated the role of design parameters on the performance of DCV systems and concluded that dispersed configuration had the highest draft rate of 14.00%.Zaniboni et al. (2022) presented a review on natural and mechanical ventilation concepts and concluded that thermo-hygrometric comfort is an important parameter.Mateus et al. (2023) reviewed the methodologies applied to study the natural ventilation of large air masses and techniques used for the validation of CFD models and commented that greater agreements were found in the models´ formulations.Table 2 presents a summary of some studies on natural ventilation in buildings.
Comments
Buildings account for more than 40.00% of global energy use, and ventilation is one of the largest sources of energy consumption. The high cost of energy has intensified research interest in passive energy-saving strategies for buildings. Night ventilation has been shown to reduce the energy demand for cooling buildings as well as significantly improve thermal comfort. DCV also has great energy-saving potential and can handle high cooling loads without inducing thermal discomfort. PCMs have a big potential as a passive strategy for improving energy efficiency and occupants' thermal comfort in buildings; however, their performance still needs to be enhanced for them to be used effectively.
Thermal Comfort
ASHRAE (2017) defines thermal comfort as "that condition of mind that expresses satisfaction with the thermal environment".The six factors considered in ASHRAE are temperature, thermal radiation, humidity, airspeed, activity level (metabolic rate), and occupant clothing (degree of insulation).Saadatjoo et al. (2019) investigated the effect of porosity distribution pattern on natural ventilation and concluded that porosity could be changed to fulfill most of the building environmental requirements.Hawila et al. (2019) conducted a study to quantify the interactions and optimize building glass façades.The results indicated that the optimized design enhanced thermal comfort and energy-savings.Krstic-Furundzic et al. (2019) presented the estimation of energy performance of different hypothetical models of façade design.Results showed the effects of the various alternatives of shadings on the reduction of environmental pollution and energy demands.Fahmy et al. (2020) simulated an educational building and concluded that shading the roofs and southern façade of building envelope were the most effective.
Abd El-Rahman et al. (2020) commented that a significant part of the building energy is consumed for achieving thermal and optical comfort.The important parameters include building shape, orientation and the window to wall ratio and need to be adequately combined to achieve thermal comfort and energy efficiency.Ko et al. (2020) investigated the influence of a window on the thermal and emotional aspects of the occupants and concluded this strategy is important for their comfort and concentration.Yang et al. (2021) performed sensitivity analysis on the correlations between indoor thermal comfort, energy consumption and design parameters of BIPV/T-double skin façades (DSFs) and concluded that solar heat gain coefficient (SHGC) of the BIPV/T-DSF's external window significantly affects indoor thermal comfort and energy consumption.Bahri et al. (2022) presented a review on tools and techniques used to improve thermal comfort in a double skin façade for residential buildings.Results suggest that simulation is the most accurate in comparison with other methodologies.Shahrzad and Umberto (2022) investigated the optimization of a novel opaque dynamic façade and concluded that the thermal resistance in the façade could be varied as a function of airflow in the façade.Jiang et al. (2022) conducted a study to explore the influence of natural views and daylight on health, thermal perception and energy savings and concluded that visual window improved the occupants' tolerance to the thermal environment.Table 3 presents a summary of some studies on thermal comfort in buildings.
Comments
Natural illumination and natural ventilation can reduce energy demands and improve thermal comfort. One can also observe the recent concern about visual comfort, especially in working areas, since it impacts human comfort and efficiency. The important parameters of buildings, such as building shape, orientation, and the window to wall ratio, need to be adequately combined to achieve thermal comfort and energy efficiency.
Effect of Window to Wall Ratio
Window-wall ratio (WWR) is an important building design parameter that significantly affects the external appearance of the building as well as the internal thermal, visual, and acoustic comfort. ASHRAE (2017) has established that a WWR of 0.24 is considered ideal for indoor daylight and natural ventilation. Table 4 shows a summary of WWR values and their effects on building performance. The ratio of glass area to floor area is also relevant in building design; a value in the range of 20.00-30.00% of the floor area is considered adequate.
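As a worked example (with hypothetical room dimensions chosen only for illustration), a 4 m by 3 m façade at the ASHRAE guideline value gives

\[
A_{window} = \mathrm{WWR} \times A_{wall} = 0.24 \times (4 \times 3)\ \mathrm{m^2} \approx 2.9\ \mathrm{m^2},
\]

and for a 20 m² floor, the 20.00-30.00% glass-to-floor guideline corresponds to 4.0-6.0 m² of glazing.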
Goia (2016) conducted a study to seek the optimal WWR in European climates and concluded that these values occur in the range 0.30 < WWR < 0.45, and that the total energy use may increase by 5.00-25.00% for the worst WWR configuration. Carlos and Corvacho (2017) assessed the influence of thermal performance and ventilation on human comfort and found that it was possible to minimize the air changes without compromising air quality and to enhance thermal performance and comfort. Fang and Cho (2019) proposed a building performance optimization process that can evaluate daylight and energy performance of building design. Troup et al. (2019) conducted a study to evaluate the effects of WWR in actual office buildings, and the results showed increased median total energy use intensity with increasing fenestration due to increasing cooling loads. Alwetaishi (2019) assessed the influence of WWR in various microclimate regions and suggested a WWR of 10% for hot-dry and hot-humid climates. Ashrafian and Moazzen (2019) focused their study on the impact of different transparency ratios and window combinations on occupants' comfort and the energy demands of a classroom. The results indicated that a WWR of 50.00% can decrease artificial lighting by 15.00% and ensure indoor comfort. Phuong et al. (2019) proposed an integrated approach to determine the window to floor ratio that provides the target Daylight Factor and energy efficiency for the tropical climate. The results showed that the recommended window to floor ratio is 15.20% to 18.50% for a Daylight Factor of 1.35%. Ozel and Ozel (2020) investigated the effect of WWR on the thermal performance of different wall materials and concluded that for the case of a bare wall the wall material affected the glazed area, whereas in the case of the insulated wall the effect was marginal. Shao et al. (2020) used the DesignBuilder software to determine the optimum value of WWR for rural house design, considering other parameters such as building orientation and thermal performance. Their recommendation is that the effect of various factors should be considered in the design to determine the WWR of rural houses. Sayadi et al. (2021) investigated different cases in various climatic conditions to obtain the optimal WWR based on the minimum energy use of a building during a complete year. Boutet and Hernández (2021) focused their study on determining an optimized design proposal that achieves thermal and daylighting habitability conditions. The results showed significant thermal and daylighting behavior enhancements with important reductions in the cooling loads. Saber (2021) investigated the effects of the WWR and recycled panels on energy consumption. All panel cases showed that the least energy consumption, with lighting included, occurred for WWR of 45.00-55.00%. They recommended the use of recycled materials-based panels for the envelope with WWR less than 50%.
ASHRAE has established that a window to wall ratio of 0.24 is considered ideal to allow optimum indoor daylight and natural ventilation, which reduces energy costs. The larger the window, the more heat or light will penetrate the room, which can cause overheating and glare. An appropriate configuration of windows can effectively improve visual and thermal comfort by reducing glare, distributing light, and controlling solar energy gain.
PERFORMANCE ENHANCEMENTS OF WINDOWS & FAÇADES
Since glazed windows and façades are the weakest elements with respect to heat transfer between the building and the exterior, many studies and developments were devoted to enhancing their thermal and optical characteristics to make them not only more efficient, but also dynamic enough to cope with ever-changing climatic conditions.
Reflective Solar Films
Sustainability and energy conservation are essential for building management, together with a satisfactory indoor environment and comfort. Windows and translucent elements are responsible for maintaining a comfortable indoor thermal and visual environment (Figure 8). Many window geometries and attachments were developed to improve the performance of windows. Among these possible solutions, solar control films (SCFs) stand out as a cheap and effective solution that controls light and heat penetration, filters out UV and IR, reduces glare, and minimizes the use of electric lighting. Figure 9 shows possible arrangements for windows with reflective film for summer and winter operation. Yu and Su (2015) presented a review of methods for indoor daylight assessment and methods used for predicting energy savings from daylight, to make information available for sustainable design and energy management. Hee et al. (2015) reviewed the impacts of window glazing on energy and daylight performance and the optimization techniques used in choosing the emerging glazing technologies. Moretti and Belloni (2015), in their study to evaluate the effects of SCFs on thermal and daylight performance in a moderate climate, reported a reduction of 60.00% in the incoming radiation and of about 2-3 °C in the indoor temperature, an increase in artificial lighting use, and a decrease of the cooling demand by 29.00%, while the heating demand increased by 15.00%. Li et al. (2015) investigated the effect of solar films on building energy consumption and concluded that solar films have good potential for energy saving. Rezaei et al. (2017) presented a review on various types of glass coatings and glazing systems and evaluated the potential of using different window technologies for hot, cool, and temperate climates. Xamán et al. (2017) presented the results of a thermal evaluation of a room fitted with a double-glass window and SCF. The results showed a reduction of 62.00% of the energy gains for the hot climate, and an insignificant effect on the indoor temperature in the cold climate. Teixeira et al. (2020) conducted a study to investigate the thermal and visual comfort performance of different types of SCFs and concluded that the highly reflective SCF has the highest thermal and optical performance, while the spectrally selective film showed an annual reduction of 38% of energy consumption. Abundiz-Cisneros et al. (2020) investigated alternative materials to substitute silver (Ag) in low-e coated filters and concluded that an aluminum-based filter has a good cost-benefit performance. A 2019 study evaluated the influence of SCF on natural luminance of a public hospital building in a Mediterranean climate and reported a reduction of 12.20% in the electric consumption of the artificial lighting system (Figure 10). The room with solar film showed a reduction of 3.30% in cooling energy demand. In contrast, heating energy demand increased by 6.50% in comparison with the room with no solar film.
Daylight luminance levels and their spatial distribution are important design parameters to achieve indoor visual comfort and sustainability in buildings.While a proper day lighting scheme increases the efficiency of the building, the excessive use of glazed surfaces can contribute to thermal and visual discomfort.Pereira et al. (2020) analyzed the impact of single glazing with different SCFs on the indoor luminance and its distribution on horizontal plane and concluded that all SCFs reduced the indoor luminance.In another work, Pereira et al. (2022) presented a review of the performance of SCFs applied to glazing systems and identified and discussed interactions of glass-film systems, climatic conditions on energy savings and comfort.
Vacuum Glazing
Vacuum insulated glazing is an effective technology suitable for severe thermal performance requirements, offers enhanced thermal efficiency and sound insulation, and can achieve U-values as low as 0.7 W/m²K. Table 5 shows a comparison of commercially available vacuum-glazing windows (Aguilar-Santana et al., 2020). Fang et al. (2006) developed a technique to determine the heat transfer coefficient of the evacuated gap, and the comparison between the measured and predicted temperature profiles showed good agreement. Jelle et al. (2012) conducted a market review of the best performing fenestration products, including electrochromic vacuum glazing and evacuated aerogel glazing as potential candidates for enhancing thermal and daylight performance of windows. Ghosh et al. (2017) analyzed the variation of vacuum glazing transmittance with clearness index and showed that a clearness index below 0.50 offers a single value of transmittance. Alam et al. (2017) evaluated savings in space heating due to the installation of fumed silica and glass fiber vacuum insulation panels (VIPs). The results show that VIP insulation reduced the annual space heating energy. A 2019 study reported on recent advances in the utilization of vacuum glass in contemporary window construction and highlighted that reduction of the weight of components and reduction of conduction and convection heat transfer are among the benefits of vacuum glazing. Aguilar-Santana et al. (2020) reviewed the performance of available window technologies with a special focus on the U-value. The authors concluded that further research is needed to develop window technologies with multiple functional attributes and characteristics such as high insulating properties, generation of energy, etc. Fang et al. (2020) analyzed the solar thermal performance of two configurations of air-vacuum layered triple-glazed windows and concluded that allocation of the vacuum gap near the indoor region reduces cooling load and enhances energy efficiency. Aguilar-Santana et al. (2020) experimentally determined the U-value of active insulated windows, compared them with other windows, and concluded that vacuum glazing achieved a U-value 78.00% lower than traditional single-glazing window units. Uddin et al. (2023) presented a review on PV combined with vacuum glazing, actual progress, and prospects for the design of energy-efficient buildings.
Double Glass Windows with Filling Materials
To reduce the heat gain or loss through double-glazed windows and façades and improve their thermal performance, adequate filling materials are inserted in the gap. A variety of gases and liquids were tried, tested, and evaluated, such as stagnant inert gases, flowing air, flowing liquids, aerogels, and PCM. Table 6 provides a comparison of window gap fillers.
Windows with water flow
The use of fluid flow in a double-glass window can enhance its thermal performance, while the heat gained by the flowing liquid can be used in other applications. Triple- and multiple-pane windows may combine the liquid flow with an additional functional component, such as an air layer or a vacuum gap, to enhance the system's thermal performance. Figure 11 shows the details of a water-flow double window.
An experimental investigation of a glazed façade was conducted by Qahtan et al. (2011), who utilized a water film and concluded that the flowing water film on the glazed façade lowered the glazing surface temperature by 7.2 to 14.0 °C and decreased the indoor temperature by 2.2 to 4.1 °C. Chow et al. (2011a, 2011b) connected the cavity of a double-pane window to a water-flow circuit to absorb solar heat, reduce room heat gain, and enhance thermal performance, and concluded that the proposed system is adequate for warm regions. Adu (2015) characterized the performance of water as a gap filler for double-glazing units and found that the thermal transmittance and SHGC were better than those of the traditional units, maintained lower indoor temperature swings, and reduced the incident solar radiation. Romero and Hernández (2017) used a net energy balance radiation model to solve the spectral problem and determine the wavelength-averaged absorptances of the different layers of a multilayer water-flow glazing system, to be able to precisely simulate the performance of this type of window. Lyu et al. (2019) proposed a triple-glazing vacuum-water-flow window and showed a heat reduction of about 44.00% due to the combined effects of the vacuum gap and water flow. In another work, Lyu et al. (2019) suggested using hot water flow to make the gap work as a heat radiator. The simulation results showed that the proposed system was viable. Li et al. (2019) investigated the water-flow window for possible use in hospital patient wards with a large demand for hot water. The numerical simulations showed reduction of penetrated solar energy, better indoor thermal conditions, and hot water for general use. The water inside fluid-glazed façades creates a vertical hydrostatic pressure, which must be supported by the glazing. The simulations showed that pillars can solve this pressure problem (Escoto & Hernández, 2019). Chow and Liu (2020) reported that the results of applying the dynamic simulation model of water-filled double glazing indicated a thermal efficiency in the range 26.00-51.00%. The double-circulation water-flow window is composed of four layers of glass panes and two layers of flowing water, which utilizes solar energy for domestic hot water and regulates heat gain through the window (Li et al., 2020). Results showed that the annual solar collection efficiencies were 16.20% and 4.30% for the external and internal water circulation, respectively.
A literature review showed that fire safety and the reduction of energy needs are challenges in buildings with glass façades. A water wall system used as a building façade is a possible solution that can achieve both objectives (Rathnayake et al., 2020). The transparency of the water wall allows daylight to enter the building and maintains good visual performance, while the water layer acts as a fire safety mechanism when needed.
Piffer et al. (2021) reviewed windows filled with liquids having spectrally selective and high-thermal-capacity properties to enhance optical and thermal performance and commented that these windows can transmit more light than when empty, promote solar heat gain, or reduce cooling demand. Huang et al. (2021) provided a review on the application of fluids and other materials as fillers for multi-glazing windows to enhance thermal performance, including airflow, flowing liquids, aerogels, and PCM, and included suggestions for future research and developments. Ghosh (2023) mentioned that diffuse transmission has several advantages, like offering uniform daylight and reducing glare. Diffuse transmission windows prepared with aerogel, PCMs, and polymer dispersed liquid crystal were reviewed and their potential for future building applications was discussed.
Window with phase change material
The use of PCMs as gap fillers enhances the thermal inertia of windows, increases thermal resistance, and filters out unwanted radiation such as UV and IR, besides reducing window glare. A general arrangement for including PCM in a double-glass window is shown in Figure 12, reproduced from Khetib et al. (2021).
A triple-glazed window permits filling one of the window cavities with PCM to regulate indoor daylight, while the second provides the required thermal insulation. Two possible PCM-glazing arrangements are presented in Figure 13. For summer conditions the PCM should be placed near the exterior side (part a in Figure 13), while for winter the PCM plays the role of dynamic insulation and should be located as in part b of Figure 13. Ismail and Henriquez (2002) presented the results of a numerical and experimental study on thermally efficient windows. The results of transmittance and reflectance tests indicated large reductions in the infrared and UV while maintaining good visibility. Goia et al. (2013) proposed a prototype of a simple PCM glazing system, compared the results with those of a reference conventional double-glass window, and showed improvement of thermal comfort. Zhong et al. (2014) investigated the effects of the inclusion of PCM in a double-glass window on the building thermal performance and reported good agreement between the experimental and numerical results. Li et al. (2016) investigated a configuration of triple-pane window where PCM was placed in the cavity near the external side. The results showed a reduction of 5.5 °C of the internal surface temperature and a decrease of 28.00% of heat gains. Wieprzkowicz and Heim (2018) investigated the thermal performance of a PCM glazed unit under real climatic conditions. A 2018 review covered the experimental and numerical methods used to predict the thermal and optical behavior of windows, including methods that permit prediction of the combined effects on the thermal, daylight, and energy behavior of buildings. Ehms et al. (2019) reviewed the numerical approaches used to formulate the solidification and melting processes and determine the heat exchange and transition velocity between the phases. Li et al. (2019) evaluated the thermal performance of a glass window with silica aerogels and PCM, as in Figure 14, showed that the thermal conductivity and thickness of the silica aerogel are important parameters, and concluded that the concept is viable for cold climates. Yang et al. (2020) investigated two models to determine the optical properties of PCM-based nanofluids. The results showed that a large concentration of the nanoparticles produced big extinction and scattering coefficients of the paraffin nanofluid. Li et al. (2020) reviewed the optical and thermal performance of glazed units with PCM, presented and discussed the research methods, and indicated future works on PCM glazed units. Liu et al. (2019) developed a simplified method to analyze the thermal performance of multilayer glazing façades and showed good agreement between measurements and numerical predictions.
DSFs are considered as sustainable design elements for reducing energy consumption in buildings.However overheating problems in warm seasons have been reported in various studies (Li et al., 2019).The authors evaluated experimentally and numerically the thermal performance of an integrated PCM blind system for DSF buildings and reported that the system stabilized DSF indoor temperature.Uribe and Vera (2021) analyzed the impact of PCM glazing on buildings' energy performance and thermal and visual comfort and reported reduction of energy consumption and enhancement of the thermal and visual comfort.Khetib et al. (2021) simulated the three modes of heat transfer in an air-filled double-glazed window (DGW) and showed that the highest heat loss occurred for the vertical window.Li et al. (2022) provided a review on optical transmittance, thermal resistance, and thermal inertia in glazed envelopes with PCMs and reported lack of information on acoustic data of glazed system with PCM.Li et al. (2022) investigated a roof based on silica aerogel and PCM glazed systems.The results showed improvement of the thermal performance.Kaushik et al. (2022) examined the heat transfer characteristics of a double-glazing window system having a nano-disbanded phase changing material and indicated a decrease of the indoor glass panel temperature by 8.5 o C and enhanced energy conservation by 4.61%.Liu et al. (2022)
Window with stagnant inert gas filling
Inert gases are used as fillers for double glass windows because of their high thermal resistance. It is known that double glass windows filled with Argon can reduce the conductivity of the window by 67.00% in comparison with air-filled windows. Windows filled with Krypton can reduce the overall U-value by 17.20% compared to Argon-filled windows, while a Xenon-filled gap showed the lowest U-value of 0.28 W/m²K but is more expensive to manufacture. Table 7 presents comparative values of heat transfer, solar coefficient, and transmittance for triple glazing with different gas fillers (Aguilar-Santana et al., 2020). Jelle et al. (2012) provided a review of the fenestration technology of the time and possible future research and development to improve fenestration products and make them not only efficient, providing thermal and visual comfort, but also affordable. The review covered U-values of commercially available glazed products such as aerogel, vacuum, low-emissivity coated, and electrochromic vacuum windows. Arici and Kan (2015) evaluated the thermal performance of double, triple, and quadruple glass windows and presented correlations for predicting the glazing U-value for the cases of air and Argon as fillers. Aguilar et al. (2015) evaluated the thermal performance of a double pane window with three types of commercial glass and recommended the use of reflective film with double pane windows in warm and cold climates. Lolli and Andresen (2016) conducted a study to evaluate the emissions reduction due to the substitution of triple-glazing units with double glass windows with Argon gas and monolithic or granular aerogel and concluded that the two options were viable. Windows are essential components of buildings, which provide vision, air ventilation, passive solar gain, and daylighting, but they also contribute much to the thermal loads of buildings because of their high U-values (Figure 15; Cuce, 2014). Hence, it is essential to reduce the U-value of windows and façades to improve the thermal performance of these components. Baek and Kim (2019) developed a hybrid triple glazing that combines vacuum and carbon dioxide (CO2) in the gaps and concluded that the performance was comparable to that of the case with Argon gas. In another study, Baek and Kim (2021) analyzed the insulating effect and performance of double glazing with CO2 as filler; from a comparison with other gas fillers, they confirmed that glazing with CO2 performed similarly to Argon-filled glazing. A review of thermal insulation materials and the factors that influence their choice for building applications was provided by Imhade et al. (2022).
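The effect of the fill gas on the center-of-glass U-value can be made concrete with a simple one-dimensional series-resistance model, sketched below in Python. The film, pane, and gap resistances are illustrative assumptions for an uncoated unit, not values taken from the studies cited above.

```python
# Minimal sketch, not from the cited studies: center-of-glass U-value of a
# double-glazed window as a 1D series resistance network, U = 1 / sum(R_i).

def u_value(resistances):
    """Overall heat transfer coefficient in W/(m^2 K) from series resistances."""
    return 1.0 / sum(resistances)

R_SI, R_SO = 0.13, 0.04   # assumed indoor/outdoor surface film resistances, m^2 K/W
R_GLASS = 0.004 / 1.0     # 4 mm pane with k_glass of about 1 W/(m K)

# Assumed combined (conduction + convection + radiation) gap resistances:
GAP_R = {"air": 0.17, "argon": 0.19, "krypton": 0.25}

for gas, r_gap in GAP_R.items():
    # double glazing: outdoor film + pane + gas gap + pane + indoor film
    u = u_value([R_SO, R_GLASS, r_gap, R_GLASS, R_SI])
    print(f"double glazing, {gas:>7}: U = {u:.2f} W/(m^2 K)")
```

Heavier fill gases and low-emissivity coatings act by raising the gap resistance, which is why they dominate the U-value reductions reported above.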
Ventilated windows and façades
In VWs and ventilated façades, part or all of the ventilation air is drawn through the gap separating the pair of glass sheets, where it is heated by solar radiation before being supplied to the building; alternatively, indoor air can be induced through the gap by the action of solar radiation and removed to the exterior. A dual airflow arrangement can allow preheating the fresh supply air or cooling it according to the season, and hence such a window arrangement is suitable for different climates. Another possible solution is using reversible VWs, as shown in Figure 16 (Lago et al., 2020).
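The preheating benefit can be quantified with a simple heat balance on the cavity air stream, Q = m_dot * cp * dT. The short sketch below uses an assumed flow rate and temperature rise for illustration only; the numbers are not taken from the studies reviewed here.

```python
# Minimal sketch of the ventilated-window preheating benefit, Q = m_dot*cp*dT.
# Flow rate and cavity temperature rise are illustrative assumptions,
# not measurements from the studies cited in this section.

RHO_AIR = 1.2    # kg/m^3
CP_AIR = 1005.0  # J/(kg K)

def preheat_power(flow_m3_per_h, t_cavity_outlet, t_ambient):
    """Heat recovered by supply air preheated in the glazing cavity, in W."""
    m_dot = RHO_AIR * flow_m3_per_h / 3600.0  # mass flow, kg/s
    return m_dot * CP_AIR * (t_cavity_outlet - t_ambient)

# Example: 30 m^3/h of fresh air warmed from 0 degC to 8 degC in the cavity
print(f"recovered heat: {preheat_power(30.0, 8.0, 0.0):.0f} W")  # about 80 W
```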
Figure 17 shows a sketch of a VW integrated with PCM (Hu et al., 2020). The ventilated double glass arrangement acts as a passive heating system, where part of the heat loss is returned by the airflow, while part of the incident solar radiation heat is carried by the flowing air to the indoor ambience. Carlos et al. (2012) analyzed the two functions and concluded that the concept is applicable on any façade. Carlos and Corvacho (2013) studied the performance of a ventilated double window for the preheating of the ventilation air and concluded that the proposed system was viable. De Gracia et al. (2013) provided a review on DSF modeling, including analytical and lumped models, network models, control volume, and CFD, and commented on their limitations and advantages. Barbosa and Ip (2014) reviewed applications of DSF technologies to provide guidelines for optimizing designs in naturally ventilated buildings for improving indoor thermal comfort. Skaff and Gosselin (2014) investigated ventilated fenestration, its energy performance, and its benefits; they proposed a model to determine the total heat reduction provided by ventilated glazing. Carlos and Corvacho (2015) studied numerically and experimentally the variation of the SHGC of a ventilated double window with working conditions and types of glass and concluded that the results can be helpful for building energy analysis. Gloriant et al. (2015) proposed simplified models to be used in building simulation codes; predictions from these simplified models were compared with those from CFD modeling and showed good agreement. Carlos (2017) commented that the double VW can preheat the incoming air, reduce thermal losses through windows, and decrease the heating load of the building. Souza et al. (2018) investigated the efficiency of a naturally ventilated DSF and showed that the DSF contributes to the decrease of the indoor temperature. Tukel et al. (2019) investigated the effect of air layer thickness, glass coating emissivity, and number of panes on the thermal characteristics of a glazed roof and showed that the U-value was reduced to 0.77 W/(m²K) and an energy saving potential of about 71.00% was achieved. Lago et al. (2019) developed and validated a model for the ventilated double glass window with reflective solar film; the results showed that the solar reflective film can reduce the penetrating solar energy by about 64.70%. Zhang et al. (2019) investigated a triple glazed exhaust-air window (TGEW) and concluded that the TGEW can reduce 25.30% and 50.10% of the annual cooling and heating loads, respectively. Choi et al. (2019) analyzed the cooling energy performance of a slim double-skin window and showed that the proposed system can reduce the cooling load and decrease the solar heat gains. Hu et al. (2020) proposed a PCM-enhanced ventilated window (PCMEVW) system for ventilation preheating/precooling applications and showed a building energy saving increase of 62.30% and 9.40% for summer and winter, respectively. Huang et al. (2021) provided a review on fluid fillers for multi-glass windows covering application technologies, performance analysis methods, and the evaluation of different building applications. Khosravi and Mahdavi (2021) investigated the ability of VWs to preheat the incoming ventilation air and showed that taller cavities and a smaller cavity depth can enhance the incoming air temperature. Sadko and Piotrowski (2022) reviewed investigations of the thermal properties of window systems and glazed building partitions. Kumar et al.
(2022) reviewed different building parameters to provide a conceptual framework for the building envelope. The proposed framework includes life cycle assessment, occupants' satisfaction, and social benefits. Preet et al. (2022) reviewed studies on DSF systems and discussed the influence of geometric design parameters on the heat transfer and fluid dynamics occurring in DSF systems. Figure 18 shows the schematic diagram of the DSF system used by the authors. Tao et al. (2023) proposed a new theoretical model for a naturally ventilated double-skin façade to calculate the thermal and ventilation performance under varying environmental conditions. Substantial performance improvements can be achieved by using hybrid nanoparticle-enhanced phase change material (NePCM) in DGWs, but it can cause local overheating and negatively impact thermal comfort and natural lighting (Yang et al., 2023). Figure 19 shows the model of a DGW filled with NePCM used by the authors.
Windows with Aerogel
Glass façades and windows are responsible for thermal comfort, daylighting, and natural ventilation, but also for a large part of the energy demands of buildings due to thermal losses. Aerogel, owing to its optical transparency and low thermal conductivity, can significantly improve the performance of windows and façades and reduce discomfort and glare. Figure 20 (Yang et al., 2023) shows a window with aerogel insulation, while Table 8 presents data on aerogel glazing available in the literature (Ding, 2020). Buratti et al. (2012) investigated innovative glazing systems with silica aerogel for energy saving; the monolithic aerogel glazing showed better performance than granular systems because of its light transmittance and thermal insulation. Experimental results from Gao et al. (2014) indicated that the optical and thermal properties of aerogel glazed units depend on the particle size of the granules, and the results (for large granules) showed a 58.00% reduction in heat losses and a 38.00% reduction in light transmittance compared to a traditional DGW. Cotana et al. (2014) assessed the effects of inserting aerogel in an innovative glazed system and concluded that aerogel reduced the energy for heating by 50.00% in winter, increased the acoustic insulation index by 3 dB, and reduced luminance by 10.00%. Ihara et al. (2015) evaluated the energy performance of aerogel granulate glazing systems for an office façade and indicated that the proposed façade can achieve a lower energy demand than a double glazed one. In another work, Ihara et al. (2015) confirmed experimentally that convection in the granular cavity does not affect the thermal performance of aerogel granulate glazing systems.
Simulations and comparisons of an aerogel window with a traditional Argon-filled coated double glazing showed that the aerogel window provided a low U-value of 0.30 W/m²K, although the daylight transmission of the aerogel window was lower than that of the DGW (Garnier et al., 2015).
The particular properties, energy performance, and durability of aerogel glazed units can pose architectural challenges and aesthetic problems (Gao et al., 2016). The authors highlighted the need for guidelines to regulate the use of aerogel glazing and presented suggestions and recommendations. In another work, Gao et al. (2016) analyzed the application perspective of aerogel glazing in energy efficient buildings by evaluating its energy efficiency, process economics, and environmental impact, and concluded that it can reduce energy consumption by 21.00% with a possible payback time of about 4.40 years. Buratti et al. (2017) investigated glazing systems with different granular aerogels and different glass sheets and reported a reduction of 63.00% in the U-value and of about 30.00% in light transmittance. Moretti et al. (2018) compared the thermal performance of air- and aerogel-filled PC systems and found that the impact of the aerogel was significant, reducing the U-value by 46.00-68.00%, with light transmittance values of 0.61 and 0.42 for 16 mm and 40 mm aerogel thickness, respectively. The authors also reported a significant effect on reflectance, while the solar factor was similar to that of low-e glazing. Berardi (2019) provided a review on aerogel-enhanced opaque systems, including cement-based products, aerogel-enhanced plasters, and aerogel-enhanced blankets, and commented on future research and development challenges. Mujeebu (2019) presented a review of aerogel covering glazing technologies, production, properties, manufacturing, aerogel windows, and applications in buildings, besides challenges in research and development. Leung et al. (2020) examined the application of aerogel glazing technology and concluded that it can significantly reduce heat gains and cooling energy. Almeida et al. (2020) reviewed alumina-silica-based aerogels, including their fabrication processes and physical and thermal properties, and commented that the insertion of the alumina phase makes the aerogels stable at high temperatures while maintaining low thermal conductivity. Buratti et al. (2021) presented a review of aerogel glazing systems focusing on the main properties of interest in building applications, including the material itself, the assembled glazing systems, thermal and optical properties, and the reliability and durability of aerogel glazed products. Zhang et al. (2021) numerically investigated the energy performance of different glazing configurations, including glass windows filled with silica aerogel or PCM, compared with traditional air-filled glass windows. Figure 21 shows the schematic of the double and triple glazing windows used in their study. Lamy-Mendes et al. (2021) provided a review on the production process of silica aerogels and the thermal and physical properties of panels, blankets, cement, mortars, concrete, and glazing systems, among others. Meti et al. (2023) presented a review on the progress in the development of aerogels and their classification into three categories: inorganic, organic, and organic-inorganic hybrid materials (Figure 22). Recent achievements in organic, inorganic, and hybrid materials and their outstanding physical properties were discussed, focusing on adjusting the properties. A 2023 study improved building insulation using polycarbonate windows with nanogel; the incorporated nanogel layer reduced emissions and decreased the annual energy consumption by 29.00%.
Jadhav and Sarawade (2023) provided a review that demonstrated the significant improvements in the mechanical and thermal properties of the nanocomposites.
Table 9 presents a summary of the results of some references on the thermal enhancement of windows and façades.
Smart Glazing
Functional thin films open new application fields for smart glazed systems by adding further functionalities such as power generation. Power generation through window coatings is relatively new and can be achieved by using semitransparent solar cells as windows.
Figure 23 shows a comparison of the electric lighting energy and cooling energy of different glazing technologies (Granqvist et al., 2009), while Table 10 presents a summary of electrochromic, photochromic, thermochromic, and gasochromic windows (Aguilar-Santana et al., 2020).
The selection of window glazing is complicated when energy saving and daylighting are required simultaneously, but optimization techniques can provide a balanced solution for this problem (Hee et al., 2015). In their review, Anderson et al. (2016) discussed developments in low-emissivity coatings to replace indium and provided a perspective on future trends. Rezaei et al. (2017) reviewed glass coatings and glazing systems and showed possible applications in different climatic conditions. Table 11 shows the benefits and drawbacks of static and passive windows (Rezaei et al., 2017). Attia et al. (2018) reviewed current trends in adaptive façades, evaluated their thermal and optical performance, and discussed future challenges.
Low heat loss through glazing systems can be achieved by suppressing convection through the use of multiple glass panes with aerogels, inert gas, or vacuum between the panes (Ghosh & Norton, 2018). Low-emissivity coatings are also required to reduce radiative heat transfer. Oh and Park (2019) analyzed building energy and daylight performance in an office building and concluded that both parameters were improved. Aoul et al. (2019) presented a review on electrochromic glazing and concluded that it can reduce electricity demand and provide energy savings for commercial and residential buildings. Ke et al. (2019) reviewed recent progress in smart windows, focusing on multi-functionality and the enhancement of design and performance.
Tällberg et al. (2019) conducted a review on thermochromic, photochromic, and electrochromic smart windows, commented that the electrochromic window showed the best performance in all cases, and highlighted the necessity of an adequate operational control strategy. Aburas et al. (2019) presented a review of thermochromic films, coatings, and glazing and commented that thermochromic windows reduce heating and cooling loads. Attia et al. (2020) proposed a conceptual framework and technological classification for adaptive façades, into which the multi-functionality and performance requirements of façade technologies can be placed.
Chromogenic materials can be used in building façades to reduce the global energy consumption of the building, improve indoor visual comfort, and reduce the risks of glare and excessive artificial lighting (Cannavale et al., 2020). In another work, Cannavale et al. (2020) provided a review of the available smart electrochromic window devices for enhancing building energy efficiency and visual comfort and concluded that electrochromic windows can enhance energy efficiency in the building sector. Tong et al. (2021) reviewed TRSG technologies, compared their key optical switch responses, challenges, and potential solutions, and commented that TRSGs are key elements for climate-adaptive envelopes. Yehia et al. (2021) reviewed different types of glazing, including conventional and advanced technologies, with the main objective of identifying their potential to enhance thermal and lighting performance. Onubogu et al. (2021) presented a review of existing daylighting systems, including both passive and active daylighting systems equipped with sun tracking; the authors recommended further research and development to make daylighting systems less expensive and relatively easy to install in buildings. Wang and Narayan (2021) provided a review focused on recent advances in thermochromic materials for smart windows, including performance and commercialization, and indicated possible challenges for future development. Fathi and Kavoosi (2021) evaluated the influence of electrochromic windows, types of glazing, and BIPVs on the energy consumption of office buildings.
COMMERCIAL PROGRAMS & CODES
Many simulation codes and computer programs have been developed for the calculation of thermal loads, the evaluation of thermal comfort conditions, and the energy performance of buildings. Currently, there are several computational tools to analyze the thermal performance and energy consumption of buildings. According to DOE (2022), the US Department of Energy's Directory of Computer Simulation Tools lists more than 408 simulation programs developed in several countries, such as BLAST (1992), COMIS (1990), EnergyPlus (2012, 2016), DOE-2 (1985, 1993), SUNREL (1975), TRNSYS (2012, 2019, 2022), and eQUEST (1994, 2010). TRNBuild (2018) is an interface for the geometric, thermal, and optical definition of a specific building, while IDA ICE (Arasteh et al., 1994; Kalamees, 2004) is a flexible whole-building performance simulation tool whose existing modeling functionality is relatively easy to extend. DesignBuilder (2019) provides an easy-to-use interface to develop building designs from concept through to completion.
IES Virtual Environment (VE) (2011) is building performance analysis software that designers can use to test different window options, identify passive solutions, compare low carbon and renewable technologies, and draw conclusions about energy use, CO2, and occupant comfort.
The LT Method (Baker & Steemers, 1996) is an energy design tool that responds to the parameters available at the beginning of project development. This method provides an annual primary energy output for lighting, heating, cooling, and window ventilation. The ASHRAE Toolkit for Building Load Calculations (Pedersen et al., 2003) is written entirely in FORTRAN 90. The load toolkit components provide a valuable resource for making the heat balance load calculation procedure more readily available to ASHRAE members. This toolkit helps application developers incorporate the load calculation method presented in the 2001 ASHRAE Handbook-Fundamentals (2001) as the preferred method. Over the years, Autodesk has developed software and devices that, using the same calculation mechanisms as EnergyPlus, provide the workflow and the possibility of energy efficiency simulations. The most recent software for this purpose is Autodesk Insight and its predecessor versions, Green Building Studio (2008) and Project Solon, all available in the cloud.
Finally, CFD tools (ANSYS, 2022; COMSOL, 2022) are also used to model the heat transfer in windows, but it is worth mentioning that these models are not integrated with the commercial programs. ANSYS is a general-purpose, finite-element modeling package for numerically solving a wide variety of mechanical problems, including heat transfer and fluid problems. COMSOL Multiphysics allows simulating acoustics, fluid flow, heat transfer, and chemical phenomena in one environment.
CONCLUSIONS, FUTURE TRENDS IN RESEARCH, & DEVELOPMENTS

Conclusions
A fair number of studies have been dedicated to reducing the energy use and emissions of buildings by increasing the thermal inertia of building structures and components and enhancing their thermal performance.
Natural illumination and natural ventilation are important since they can reduce energy demands and improve thermal and visual comfort, which significantly impacts human comfort and efficiency in working areas like offices and classrooms.
The review shows tremendous progress in smart window technology and in windows with internal insertions and reflective film, which, when implemented, can significantly improve the thermal performance of buildings and residences. Additional efforts are needed to reduce the costs of these new products and facilitate their inclusion in old and new buildings.
The review did not find any publications on financial incentives, tax bonuses, and adequate public policies and awareness programs to promote the incorporation of new window and façade technologies in old and new buildings as well as in popular residences.
The review shows that a fair amount of research and development has been done to improve the thermal and operational performance of windows and façades by using filling materials such as absorbing gases, PCM, water flow, vacuum, and aerogels. Smart windows and façades have received a lot of attention, but their cost is still high, and additional efforts are required to provide low-cost solutions for application in old and new buildings.
According to the bibliography consulted, the nature of the analyses is predominantly numerical, while experimental studies are less frequently addressed (Figure 24). The costs of the technologies, as well as the maturity of the concepts, are factors that explain this imbalance.
In addition, the literature does not provide much information on optical and thermal properties (Figure 25), which, in general, limits the potential applications of the technologies under development. In fact, the state-of-the-art analysis showed a limited number of studies considering the combination of different technologies to exploit the benefits of each one in the same configuration.
Based on the U-values and G-values collected, it is possible to define the application range for each technology studied. In the case of vacuum glazing, a reduction in the U-value is observed as the number of layers increases, from a range of 1.0-1.4 for double glazing to 0.7-0.9 for triple glazing. This parameter shows the lowest values for aerogel-based technologies, from 0.4 to 0.72. Of all the concepts, photochromic technology has the highest U-value range, from 0.53 to 1.58. Regarding the G-value, used to measure the transmittance of solar gain through glazing, gas-filled technologies had the widest range, from 0.34 to 0.61.
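To put these ranges in perspective, the sketch below converts the collected U-values into an indicative annual transmission heat loss per square meter of glazing, Q = U * HDD * 24 h. The heating-degree-day figure is an illustrative assumption for a moderate climate, not a value from the reviewed studies.

```python
# Minimal sketch: indicative annual transmission heat loss per m^2 of glazing,
# Q = U * HDD * 24 h, for the U-value ranges collected in this review.
# The heating-degree-day value is an illustrative assumption.

U_RANGES = {  # W/(m^2 K), ranges quoted above
    "vacuum double glazing": (1.0, 1.4),
    "vacuum triple glazing": (0.7, 0.9),
    "aerogel glazing":       (0.4, 0.72),
    "photochromic glazing":  (0.53, 1.58),
}
HDD = 3000.0  # K*day per year, assumed moderate heating climate

for tech, (lo, hi) in U_RANGES.items():
    u_mid = 0.5 * (lo + hi)              # midpoint of the quoted range
    q_kwh = u_mid * HDD * 24.0 / 1000.0  # kWh per m^2 per year
    print(f"{tech:22s}: U = {u_mid:.2f} -> {q_kwh:.0f} kWh/m^2/yr")
```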
A fair number of commercial simulation and calculation codes are available for the local and global simulation of the thermal performance of buildings. To improve energy and thermal performance and increase the use of these tools, there is a need to invest in the development of new codes and to validate them in real building applications.
Considering the importance of reducing the energy consumption of the building sector and its emission contribution, it is hoped that this review can shed some light on the research and development opportunities and be of help to developers and young researchers, practicing engineers, and general readers interested in the fascinating topic of windows and façades.

1. Windows, façades, roofs, and walls are the major contributors to buildings' heat losses and gains. To make buildings more sustainable and less energy consuming, it is necessary to invest in thermally efficient materials and apply new technologies for windows, walls, and façades, besides adopting energy efficiency strategies for heating and cooling.
2. Buildings should be designed to be as self-sufficient as possible, benefiting from natural energy resources such as solar and wind and minimizing the need for artificial lighting and ventilation, which eventually leads to healthy indoor environments with adequate passive thermal and visual comfort.
3. Natural illumination is an important issue in commercial and office buildings; it is essential to promote well-being and visual comfort and to reduce stress, besides reducing the cooling load and hence energy consumption. The use of natural illumination concepts in buildings and residences should be promoted whenever possible.
4. Strategies for natural ventilation must be encouraged in buildings to reduce the thermal load and, consequently, the energy consumption of air conditioning systems.
5. Further research and development must be encouraged to reduce the cost of smart windows to help popularize their use in commercial buildings and residences.
6. Financial incentives, tax bonuses, and adequate public policies and awareness programs are required to promote the incorporation of new window and façade technologies in old and new buildings as well as in popular houses, to reduce energy consumption and emissions.
7. More research and development work is required to provide cheap and effective performance enhancement equipment and strategies, validated by extensive laboratory and in-field tests.
8. Additional efforts should be directed towards characterizing the optical and thermal properties of the technologies under development, to broaden their range of applications.

9. It is also important to focus on analyzing new configurations that combine the advantages of the concepts, taking advantage of the properties of the different technologies.
10. Computers and simulation codes are handy tools for the development of new products and local and global simulation of thermal performance.There is a need to invest in the development of new codes and validate them in real applications.
Figure 1. U- & g-values of some commercial glazing systems (Akram et al., 2023). Tong et al. (2021) provided a review of transparent-reflective switchable glass (TRSG) technologies for application in building façades, while El-Eshmawy et al. (2021) reviewed the different types of glazing, categorized them into conventional and advanced, and showed their thermal and lighting performance. Pereira et al. (2021) investigated different SCFs applied on the existing windows of a building; the SCF considered the best retrofitting solution showed an annual carbon footprint of 4,447 MJ/m²/40 y and 380 kgCO2eq/m²/40 y, respectively.
Figure 11. Details of a water-flow double glass window (Chow et al., 2011).
Figure 23. Comparison of the electric lighting energy & cooling energy of different technologies (Granqvist et al., 2009).
Figure 24. Consulted literature classification by work nature (Source: Authors' own elaboration).
Table 1. Summary of some reference results on natural illumination &
Table 2. Summary of some reference results on natural ventilation.
Table 3. Summary of some reference results on thermal comfort.
To save energy, it is possible to reduce air changes in a room to a minimum, enhancing both local thermal performance & comfort; shading the roof & southern façade of the building envelope are the most efficient scenarios for a passively modified version of the building.
Ko et al. (2020), numerical, window: providing a window with a view in a workplace is important for the comfort, emotion, working memory, & concentration of occupants.
Yang et al. (2021), numerical, building-integrated photovoltaic/thermal double-skin façade: the solar heat gain coefficient of the BIPV/T-DSF's external window had the highest importance affecting indoor thermal comfort & energy consumption.
Table 6. … pane medium (Huang et al., 2021). Michael et al. (2023) presented concepts related to PCMs and details of the different types of PCMs according to climate. Xu et al. (2022) prepared a phase-change gel with relatively high melting enthalpy, better leak-proof characteristics, and better thermal insulation, besides increasing the time lag and lowering the peak temperature. Arasteh et al. (2023) reviewed the energy and thermal performance of PCM-incorporated glazing units combined with passive and active techniques, including current passive smart glazing technologies, while Michael et al. (2023) provided a review on established and emerging glazing technologies and the inclusion of multiple functionalities to improve overall building performance.
Table 9. Summary of some reference results on thermal enhancement of windows & façades.
Table 12. Summary of some reference results on smart glazing.
"year": 2024,
"sha1": "3cc81f48ff74ac55abf2192eaca9a4969e28f182",
"oa_license": "CCBY",
"oa_url": "https://www.ejosdr.com/download/contribution-of-advanced-windows-and-facades-to-buildings-decarbonization-a-comprehensive-review-14580.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "9a7199c94911eca5c1a997acd5169cbf3121e21e",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": []
} |
Effects of the vegetable polyphenols epigallocatechin-3-gallate, luteolin, apigenin, myricetin, quercetin, and cyanidin in primary cultures of human retinal pigment epithelial cells.
PURPOSE
Vegetable polyphenols (bioflavonoids) have been suggested to represent promising drugs for treating cancer and retinal diseases. We compared the effects of various bioflavonoids (epigallocatechin-3-gallate [EGCG], luteolin, apigenin, myricetin, quercetin, and cyanidin) on the physiological properties and viability of cultured human retinal pigment epithelial (RPE) cells.
METHODS
Human RPE cells were obtained from several donors within 48 h of death. Secretion of vascular endothelial growth factor (VEGF) was determined with enzyme-linked immunosorbent assay. Messenger ribonucleic acid levels were determined with real-time reverse transcription polymerase chain reaction. Cellular proliferation was investigated with a bromodeoxyuridine immunoassay, and chemotaxis was examined with a Boyden chamber assay. The number of viable cells was determined by Trypan Blue exclusion. Apoptosis and necrosis rates were determined with a DNA fragmentation enzyme-linked immunosorbent assay. The phosphorylation level of signaling proteins was revealed by western blotting.
RESULTS
With the exception of EGCG, all flavonoids tested dose-dependently decreased RPE cell proliferation, migration, and secretion of VEGF. EGCG inhibited the secretion of VEGF evoked by CoCl2-induced hypoxia. The gene expression of VEGF was reduced by myricetin at low concentrations and elevated at higher concentrations. Luteolin, apigenin, myricetin, and quercetin induced significant decreases in cell viability at higher concentrations by triggering cellular necrosis. Cyanidin reduced the rate of RPE cell necrosis. Myricetin caused caspase-3-independent RPE cell necrosis mediated by free radical generation and activation of calpain and phospholipase A2. The myricetin- and quercetin-induced RPE cell necrosis was partially inhibited by necrostatin-1, a blocker of programmed necrosis. Most flavonoids tested diminished the phosphorylation levels of extracellular signal-regulated kinases 1/2 and Akt proteins.
CONCLUSIONS
The intake of luteolin, apigenin, myricetin, and quercetin as supplemental cancer therapy or in treating retinal diseases should be accompanied by careful monitoring of the retinal function. The possible beneficial effects of EGCG and cyanidin, which had little effect on RPE cell viability, in treating retinal diseases should be examined in further investigations.
The mechanisms of the protective activities of flavonoids are not fully understood [5]. Many bioflavonoids including green tea catechins were shown to have antioxidant activity at low concentrations and prooxidant activity at high concentrations [1,5,17]. Antioxidant and prooxidant effects were suggested to be implicated in the anti-inflammatory and anticancer activities of dietary flavonoids [5]. The prooxidant effect appears to be responsible for inducing apoptosis in tumor cells and may also cause indirect antioxidant effects via induction of endogenous antioxidant systems in normal tissues that offer protection against oxidative stress [5]. In addition, excessive intake of vegetable polyphenols, as dietary supplements or natural food, may have adverse effects, for example, by inhibiting prosurvival pathways. The cytotoxicity of dietary flavonoids is helpful in treating cancer, but may also concern non-transformed cells [18]. We showed recently that curcumin (the yellow pigment of turmeric) at doses described to be effective in treating tumor cells has cytotoxic effects on human retinal pigment epithelial (RPE) cells and induces apoptosis and necrosis of the cells [19]. In another study, the flavonoids resveratrol (from red wine) and curcumin were shown to cause RPE cell death by inducing apoptosis and necrosis [20].
RPE cells play crucial roles in protecting the outer retina from photooxidative stress, in digesting shed photoreceptor outer segments which contain oxidized lipids, and in inhibiting retinal edema and neovascularization [21]. Dysfunction and degeneration of RPE cells are crucially involved in the pathogenesis of age-related macular degeneration [22,23]. The dry form of this blinding disease is characterized by the presence of lipofuscin within the RPE and drusen beneath the RPE, which contain photoreceptor-derived oxidized lipids, as well as by RPE cell death (geographic atrophy), while the hallmarks of the wet form are choroidal neovascularization and subretinal edema induced by outer retinal hypoxia [22,23]. Vascular endothelial growth factor (VEGF) is the main hypoxia-induced angiogenic factor that promotes retinal neovascularization and edema [24]. RPE cells are one source of VEGF in the retina [25].
Intake of bioflavonoids, as dietary supplements or natural food, is suggested to be helpful as a supplemental therapy of cancer and chronic inflammation, and in preventing retinal disorders. However, it has also been suggested that excessive intake of vegetable polyphenols may have adverse effects that concern not only tumor cells but also non-transformed cells [18]. Therefore, further assessment of the potential hazards of bioflavonoids should be considered before the compounds are used in the clinical setting [18]. In the present study, we compared the effects of various flavonoids on the physiologic properties of cultured human RPE cells involved in cellular responses to pathogenic conditions (secretion of VEGF, cellular proliferation and migration) and on the viability of the cells. The following compounds were tested: EGCG from green tea, luteolin from parsley, apigenin from celery and parsley, myricetin from black tea, grapes, walnuts, etc., quercetin from bulbs, and cyanidin from various plants such as red cabbage, blueberries, and strawberries. We found that, although most of the flavonoids investigated inhibited the release of VEGF and the proliferation and migration of RPE cells at higher concentrations, the flavonoids also had deleterious effects on cell viability and induced cellular necrosis. Two compounds, EGCG and cyanidin, had little effect on cell viability and, thus, are suggested as candidates for further examination as possible therapeutic agents in retinal diseases.
Cell culture: The use of human material was approved by the Ethics Committee of the University of Leipzig and was performed according to the Declaration of Helsinki. Human RPE cells were obtained from several donors within 48 h of death, and were prepared and cultured as follows. After the vitreous and the retina were removed, the RPE cells were mechanically harvested, separated by digestion with 0.05% trypsin and 0.02% EDTA, and washed two times with PBS pH 7.2 (1.54 mM KH2PO4; 155.17 mM NaCl; 2.71 mM Na2HPO4·7H2O; Invitrogen, Paisley, UK). The cells were suspended in complete Ham F-10 medium containing 10% fetal bovine serum, GlutaMAX II, and penicillin/streptomycin, and were cultured in tissue culture flasks (Greiner, Nürtingen, Germany) in 95% air/5% CO2 at 37 °C. Cells of passages 3 to 5 were used. The epithelial nature of the RPE cells was routinely confirmed with immunocytochemistry using the monoclonal antibodies AE1 (recognizing most of the acidic type I keratins) and AE3 (recognizing most of the basic type II keratins), both from Chemicon. To test the substances, cultures that reached approximately 90% confluency were growth arrested in medium without serum for 5 h. Subsequently, media containing 0.5% serum with and without test substances were added.
Cell proliferation: The proliferation rate of RPE cells was determined by measuring the incorporation of bromodeoxyuridine (BrdU) into the genomic DNA. The cells were seeded at 3×10³ cells per well in 96-well microtiter plates (Greiner), and were allowed to attach for 48 h. Thereafter, the cells were growth arrested in medium without serum for 5 h, and subsequently, medium containing 0.5% serum with and without test substances was added for another 24 h. BrdU incorporation was determined by using the Cell Proliferation Enzyme-Linked Immunosorbent Assay (ELISA) BrdU Kit (Roche, Mannheim, Germany). BrdU (10 μM) was added to the culture medium 5 h before fixation.
Chemotaxis: Chemotaxis was determined with a modified Boyden chamber assay. Suspensions of RPE cells (100 µl; 5×10⁵ cells/ml serum-free medium) were seeded onto polyethylene terephthalate filters (diameter 6.4 mm, pore size 8 µm; Becton Dickinson, Heidelberg, Germany) coated with fibronectin (50 µg/ml) and gelatin (0.5 mg/ml). Within 16 h after seeding, the cells attached to the filter and formed a semiconfluent monolayer. The cells were pretreated with blocking substances for 30 min; thereafter, the medium was changed to medium without additives in the upper well and medium containing test substances in the lower well. After incubation for 6 h, the inserts were washed with buffered saline, fixed with Karnofsky's reagent, and stained with hematoxylin. Non-migrated cells were removed from the filters by gentle scrubbing with a cotton swab. The migrated cells were counted, and the results were expressed relative to the cell migration rate in the absence of the test substances.
Cell viability: Cell viability was determined by Trypan Blue exclusion. The cells were seeded at 5×10⁴ cells per well in six-well plates. After the cells reached 90% confluency, they were cultured in serum-free medium for 16 h, and then stimulated with test substances for 6 and 24 h, respectively. After trypsinization, the cells were stained with Trypan Blue (0.4%), and the numbers of viable (non-stained) and dead (stained) cells were determined using a hemocytometer.
Deoxyribonucleic acid fragmentation: The Cellular DNA Fragmentation ELISA (Roche) was used to determine whether the cells undergo apoptosis or necrosis in the absence and presence of test substances. The cells were seeded at 3×10³ cells per well in 96-well plates, and were cultured until confluency was reached. After the culture media were changed, the cells were prelabeled with BrdU for 16 h and then incubated in the absence or presence of the test substances in F-10/0.5% fetal calf serum for 6 and 24 h, respectively. Necrosis was determined by analyzing the BrdU-labeled DNA fragments in the cell-free culture supernatants, and apoptosis was determined by using the cytoplasmic lysates of the cells.
Total ribonucleic acid isolation: Total RNA was extracted from cultured cells by using the RNeasy Mini Kit (Qiagen, Hilden, Germany). The quality of the RNA was analyzed with agarose gel electrophoresis. The A260/A280 ratio of optical density was measured using the NanoDrop 1000 device (Peqlab, Erlangen, Germany), and was between 1.9 and 2.1 for all RNA samples, indicating sufficient quality.
Real-time reverse transcription polymerase chain reaction:
After treatment with DNase I (Roche), cDNA was synthesized from 1 µg of total RNA using the RevertAid H Minus First Strand cDNA Synthesis Kit (Fermentas, St. Leon-Roth, Germany). For subsequent PCR amplification, the cDNA was diluted by the addition of 20 µl RNase-free water. Semiquantitative real-time reverse transcription (RT)-PCR was performed with the Single-Color Real-Time PCR Detection System (BioRad, Munich, Germany) using the following primer pairs: VEGFA (NM_001025370; 407, 347, and 275 bp), sense 5ʹ-CCT GGT GGA CAT CTT CCA GGA GTA-3ʹ, anti-sense 5ʹ-CTC ACC GCC TCG GCT TGT CAC A-3ʹ; ACTB (NM_001101; 237 bp), sense 5ʹ-ATG GCC ACG GCT GCT TCC AGC-3ʹ, anti-sense 5ʹ-CAT GGT GGT GCC GCC AGA CAG-3ʹ. The PCR solution contained 1 µl cDNA, a specific primer set (0.25 µM each), and 10 µl of iQ SYBR Green Supermix (BioRad) in a final volume of 20 µl. The following conditions were used: initial denaturation and enzyme activation (one cycle at 95 °C for 3 min); denaturation, amplification, and quantification, 45 cycles at 95 °C for 30 s, 58 °C for 20 s, and 72 °C for 45 s. This was followed by a melt curve analysis (81 cycles) to determine the product specificity, where the temperature was gradually increased from 55 °C to 95 °C (0.5 °C/cycle). The amplified samples were analyzed with standard agarose gel electrophoresis. The mRNA expression was normalized to the level of ACTB expression. The changes in mRNA expression were calculated according to the 2^(−ΔΔCT) method (CT, cycle threshold), with ΔCT = CT(target gene) − CT(ACTB) and ΔΔCT = ΔCT(treatment) − ΔCT(control).
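The relative quantification described above is a two-line calculation; the following sketch implements the 2^(−ΔΔCT) fold-change formula exactly as defined in the preceding paragraph. The CT values in the example are illustrative placeholders, not data from this study.

```python
# Minimal sketch of the 2^(-ddCT) fold-change calculation described above.
# CT values below are illustrative placeholders, not measured data.

def fold_change(ct_target_trt, ct_ref_trt, ct_target_ctl, ct_ref_ctl):
    """Relative mRNA expression by the 2^(-ddCT) method."""
    d_ct_trt = ct_target_trt - ct_ref_trt  # dCT = CT(target) - CT(ACTB), treated
    d_ct_ctl = ct_target_ctl - ct_ref_ctl  # dCT, control
    dd_ct = d_ct_trt - d_ct_ctl            # ddCT = dCT(treatment) - dCT(control)
    return 2.0 ** (-dd_ct)

# Example: VEGFA vs ACTB; a ddCT of -1 corresponds to a 2-fold up-regulation
print(fold_change(24.0, 16.0, 25.0, 16.0))  # prints 2.0
```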
Enzyme-linked immunosorbent assay: The cells were cultured at 3×10³ cells per well in 96-well plates (100 µl culture medium per well). At approximately 90% confluency, the cells were cultured in serum-free medium for 16 h. Subsequently, the culture medium was changed, and the cells were stimulated with test substances. The supernatants were collected after 6 h, and the level of VEGF-A165 in the cultured media (200 µl) was determined with ELISA (R&D Systems) according to the manufacturer's recommendations.
Western blotting: The cells were seeded at 1×10⁵ cells per well in six-well plates in 1.5 ml complete medium, and were allowed to grow up to approximately 90% confluency. After growth arrest for 16 h, the cells were treated with test substances for 15 min. Then, the medium was removed, the cells were washed twice with prechilled PBS (pH 7.2; Invitrogen), and the monolayer was scraped into 150 µl lysis buffer (Mammalian Cell Lysis-1 Kit; Sigma). The total cell lysates were centrifuged at 10,000 × g for 10 min, and the supernatants were analyzed with immunoblots. Equal amounts of protein (30 µg) were separated by 10% sodium dodecyl sulfate-polyacrylamide gel electrophoresis. Immunoblots were probed with primary and secondary antibodies, and immunoreactive bands were visualized using 5-bromo-4-chloro-3-indolyl phosphate/nitro blue tetrazolium.
Statistics: For each test, at least three independent experiments were performed in triplicate using cells from different donors. Data are expressed as means±standard error of the mean (SEM). Statistical analysis was performed with the Prism program (GraphPad Software, San Diego, CA). Significance was determined with one-way ANOVA followed by Bonferroni's multiple comparison test, and was accepted at p<0.05.
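For reference, the analysis described above can be reproduced with standard scientific Python tools; the sketch below runs a one-way ANOVA followed by Bonferroni-corrected pairwise t-tests on illustrative placeholder data (scipy is assumed to be available; GraphPad Prism was the software used in the study itself).

```python
# Minimal sketch of the statistical comparison described above: one-way ANOVA
# followed by Bonferroni-corrected pairwise t-tests. Group values are
# illustrative placeholders, not study data.

from itertools import combinations
from scipy import stats

groups = {  # e.g., VEGF secretion relative to control (%), 3 donors x triplicate
    "control": [100, 98, 103, 101, 97, 102, 99, 100, 101],
    "10 uM":   [88, 85, 90, 87, 84, 89, 86, 88, 87],
    "50 uM":   [60, 58, 63, 61, 57, 62, 59, 60, 61],
}

f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.1f}, p = {p_anova:.2g}")

pairs = list(combinations(groups, 2))
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    p_adj = min(1.0, p * len(pairs))  # Bonferroni correction
    print(f"{a} vs {b}: adjusted p = {p_adj:.2g}, significant = {p_adj < 0.05}")
```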
RESULTS
Production of vascular endothelial growth factor: VEGF is a key player in choroidal neovascularization [24], and RPE cells are one source of VEGF in the retina [25]. To determine whether bioflavonoids modulate the secretion of VEGF protein from RPE cells, we determined the level of VEGF-A165 in the cultured media with ELISA. The release of VEGF was examined under control conditions and after the cells were stimulated with PDGF and CoCl2-induced chemical hypoxia, respectively. PDGF and hypoxia are known inducers of VEGF secretion from RPE cells [26]. As shown in Figure 1A, the major catechin of green tea, EGCG, did not alter the secretion of VEGF from RPE cells under control and PDGF-stimulated conditions when tested at concentrations between 1 and 50 µM. However, EGCG induced a dose-dependent decrease in the secretion of VEGF induced by chemical hypoxia. All other bioflavonoids tested, that is, luteolin (Figure 1B), apigenin (Figure 1C), myricetin (Figure 1D), quercetin (Figure 1E), and cyanidin (Figure 1F), decreased dose-dependently the secretion of VEGF. Myricetin induced a strong decrease in the secretion of VEGF protein from RPE cells (Figure 1D). To determine whether the compound also alters the expression of the VEGFA gene, we performed real-time RT-PCR. As shown in Figure 2A, myricetin induced a significant (p<0.05) decrease in VEGFA expression at 10 µM, but a strong increase in VEGFA expression at 100 µM. Because the myricetin-induced increase in the gene expression of VEGF was reduced in the presence of an inhibitor of programmed necrosis, necrostatin-1 (Figure 2B), high concentrations of myricetin could induce cell stress, which is characterized by (among others) increased gene expression of VEGF.
Proliferation and chemotaxis of retinal pigment epithelial cells: RPE cell proliferation and migration are characteristic features of proliferative retinal diseases. To determine whether bioflavonoids alter physiologic characteristics of RPE cells, we measured the proliferation and chemotaxis of cultured cells in the absence and presence of PDGF, a known mitogen and motogen of the cells [26]. As shown in Figure 3A, EGCG, as the major green tea catechin, did not significantly alter the proliferation rate of RPE cells under control and PDGF-stimulated conditions. Similarly, EGCG had no effect on the chemotaxis of RPE cells under control and PDGF-stimulated conditions (Figure 4A). All other bioflavonoids tested, that is, luteolin, apigenin, myricetin, quercetin, and cyanidin, induced dose-dependent decreases in the proliferation rate under control and PDGF-stimulated conditions (Figure 3B-F). … mediated by toxic effects of the compounds. However, cyanidin decreased the secretion of VEGF without a significant reduction in the cell viability (Figure 6).
Deoxyribonucleic acid fragmentation: By measuring the internucleosomal DNA fragmentation rate in the cultured media and cell lysates, we determined whether the decrease in RPE cell viability induced by various flavonoids was mediated by induction of apoptosis and/or necrosis. An increased level of BrdU-labeled DNA fragments in the cell-free culture supernatants reflects cellular necrosis, while an increased level of BrdU-labeled DNA fragments in the cell lysates reflects apoptosis of the cells. Triton X-100 (1%) was used as the positive control. As previously described [27], Triton induced an increase in the DNA fragmentation rate in the RPE cell lysates after 6 and 24 h of stimulation, while the DNA fragmentation rate in the cultured media remained unchanged (Figure 7). The data suggest that Triton evoked apoptosis but not necrosis of RPE cells. The green tea catechin EGCG did not induce apoptosis or necrosis of RPE cells at either time period investigated when tested at concentrations between 10 and 50 µM (Figure 7A). Luteolin and apigenin (Figure 7B,C), as well as myricetin (Figure 7D), dose-dependently induced necrosis of RPE cells. Similarly, quercetin dose-dependently induced necrosis of RPE cells, whereas it did not induce apoptosis (Figure 7E). Cyanidin did not induce apoptosis but decreased the rate of cellular necrosis after 6 h of exposure (Figure 7F).

Mechanisms of myricetin-induced retinal pigment epithelial cell death: Myricetin induced severe necrosis of RPE cells (Figure 7D). To determine the molecular mechanisms of myricetin-induced cytotoxicity, we evaluated the rate of DNA fragmentation in the cultured media in the presence of different caspase inhibitors. As shown in Figure 8A and B, the selective inhibitor of the effector caspase-3, DEVD, did not alter the rate of RPE cell necrosis induced by myricetin. The inhibitor of caspase-8, IETD, partially inhibited the myricetin-induced RPE cell necrosis after 6 h (Figure 8A), but not after 24 h of stimulation (Figure 8B).
The lack of an effect of the caspase-3 inhibitor suggests that the myricetin-induced cytotoxicity is mediated by caspase-3-independent necrotic death pathways. An important activator of caspase-independent cell death is the mitochondrial flavoprotein apoptosis-inducing factor, which mediates chromatin condensation and DNA fragmentation when translocated to the nucleus [28]. The release of apoptosis-inducing factor from the mitochondria can be triggered by different mechanisms, including activation of poly(ADP-ribose) polymerase-1 (PARP-1). To determine whether activation of this nuclear enzyme participates in myricetin-induced RPE cell death, we used the selective PARP-1 inhibitor DPQ. DPQ did not prevent RPE cell necrosis induced by myricetin (Figure 8A,B). Concomitant inhibition of PARP-1 and caspase-3 using DPQ plus DEVD also did not block myricetin-induced RPE cell death (not shown). Similarly, the inhibitor of the NADPH oxidase pathway and the uncoupling protein-2/mitochondrial pathway, perindopril, and the inhibitor of mitochondrial permeability transition, cyclosporin A, displayed no effects (Figure 8A,B). In addition, the mitochondrial KATP channel opener pinacidil had no effect (Figure 8A,B). The data suggest that activation of the mitochondrial apoptotic pathway is not involved in mediating the cytotoxic effect of myricetin.
Another mechanism that could contribute to the myricetin-induced cytotoxicity is activation of the cysteine protease calpain. Calpain activation has recently been implicated in the toxic effect of curcumin in RPE cells [19]. Pretreatment of RPE cells with the calpain inhibitor PD150606 (which prevents the binding of calcium to calpain and does not significantly inhibit cathepsins and caspases [29]) fully abrogated the RPE cell necrosis induced by myricetin (Figure 8A,B). The data suggest that calpain activation is involved in mediating the toxic effect of myricetin. Activation of apoptotic pathways [30] and the excitotoxic death of oligodendrocytes [31] were shown to depend on the generation of reactive oxygen species. Preincubation of the cells with the cell-permeable dithiol-reducing agent dithiothreitol significantly (p<0.05) reduced the rate of RPE cell necrosis induced by myricetin (Figure 8A,B). The data suggest that generation of free oxygen radicals contributes to the myricetin-induced cytotoxicity. We also found that the inhibitor of phospholipase A2, 4-bromophenacyl bromide, reduced the cytotoxic effect of myricetin (Figure 8A,B). However, the cyclooxygenase inhibitor indomethacin had no effect (Figure 8A,B). Furthermore, inhibition of ERK1/2 activation with the specific MAPK kinase (MEK) antagonist PD98059 did not suppress the myricetin-induced DNA fragmentation (Figure 8A,B). Inhibition of c-Jun NH2-terminal kinase (JNK) activation by SP600125 slightly reduced the DNA fragmentation rate after 6 h (Figure 8A), but not after 24 h of myricetin stimulation (Figure 8B).
To determine whether the myricetin-induced cytotoxicity is mediated by programmed necrosis, we tested the inhibitor of necroptosis, necrostatin-1 [32]. As shown in Figure 9A, necrostatin-1 significantly (p<0.05) decreased the myricetin-induced increase in the DNA fragmentation rate of the cultured media. However, the inactive derivative of necrostatin-1 [32] had no effect (Figure 9A). We also found that the quercetin-induced increase in the DNA fragmentation rate of the cultured media was largely inhibited by necrostatin-1 (Figure 9B). The data suggest that the myricetin-induced RPE cell necrosis is mainly mediated by activation of death pathways that involve the generation of free oxygen radicals and the activation of calpain and phospholipase A2, as well as the activation of caspase-8 and JNK in the early phase of cell death. The myricetin-induced cytotoxicity is in part mediated by the induction of programmed necrosis.
Activation of intracellular signal transduction proteins:
We found that various bioflavonoids decreased the proliferation (Figure 3B-F), migration (Figure 4B-F), and viability of RPE cells (Figure 5B-E). Therefore, we determined with western blotting analysis whether the flavonoids alter the phosphorylation levels of three major ligand-induced signal transduction pathways. Activation of the Ras-Raf-ERK MAPK pathway is an important step in intracellular signaling that stimulates the proliferation of RPE cells [26,33]. Activation of p38 MAPK is implicated in the stimulation of cellular migration [26], while activation of the phosphatidylinositol-3 kinase-Akt signaling pathway stimulates protein synthesis at the translational level, required for cell growth and survival [34]. As shown in Figure 10A, the green tea catechin EGCG did not induce significant alterations in the phosphorylation levels of ERK1/2, p38 MAPK, and Akt in RPE cells when the cells were stimulated for 15 min and the compound was tested at concentrations between 1 and 50 µM. However, PDGF induced increases in the phosphorylation levels of these proteins (Figure 10A), as previously described [26]. Luteolin (Figure 10B), apigenin (Figure 10C), myricetin (Figure 10D), quercetin (Figure 10E), and cyanidin (Figure 10F) induced decreases in the phosphorylation level of ERK1/2 at higher concentrations.
With the exception of cyanidin, the bioflavonoids tested did not alter the phosphorylation level of p38 MAPK in RPE cells (Figure 10A-E). Cyanidin decreased the phosphorylation level of p38 MAPK at higher doses (Figure 10F). With the exception of EGCG (Figure 10A), all flavonoids tested dose-dependently decreased the phosphorylation level of the Akt protein (Figure 10B-F).
DISCUSSION
Vegetable polyphenols are suggested to represent promising drugs for the supplemental therapy of cancer, in particular because of their capability to induce apoptosis in tumor cells [4,5]. Bioflavonoids may also have benefits in treating retinal diseases [12-16]. However, an excessive intake of bioflavonoids may have adverse effects that may concern non-transformed cells [18]. In RPE cells, for example, curcumin and other flavonoids were shown to have cytotoxic effects and may induce apoptosis and necrosis [19,20]. Therefore, excess intake of bioflavonoids, either as supplemental cancer therapy or in the treatment of retinal diseases, may have adverse effects on the retina. In the present study, we compared the effects of various vegetable polyphenols in cultured human RPE cells. We found that the various flavonoids differentially affect the physiologic properties of RPE cells, including the secretion of VEGF and cellular proliferation and migration, and that some of the compounds tested (luteolin, apigenin, myricetin, quercetin) dose-dependently decreased the viability of the cells (Figure 5) and induced apoptosis and/or necrosis (Figure 7). Overall, the effects of the compounds on the physiologic parameters were observed at concentrations lower than the doses that induced a decrease in cell viability. However, the concentration windows between the beneficial and detrimental effects depended on the compound investigated and were larger in the case of apigenin and smaller in the case of myricetin, for example. We assume that the polyphenols tested have beneficial effects at low concentrations, as suggested by the VEGF-decreasing effect and the inhibition of cellular migration and proliferation, and toxic effects on the cells at higher concentrations (with the exception of EGCG and cyanidin). However, further experiments are required to support this assumption.
EGCG and cyanidin did not significantly affect the viability of RPE cells (Figures 5A,F and 6B) and did not induce apoptosis or necrosis (Figure 7A,F) at the concentrations tested. EGCG decreased the secretion of VEGF under conditions of chemical hypoxia, but not under control and PDGF-stimulated conditions (Figure 1A). EGCG did not prevent the proliferation and migration induced by PDGF (Figure 3A and Figure 4A). PDGF-induced cellular signaling is a major causative factor of proliferative retinopathies [35,36]. However, cyanidin inhibited the release of VEGF (Figure 1F and Figure 6A), the proliferation and migration of RPE cells (Figure 3F and Figure 4F), and cellular necrosis (Figure 7F) without a significant reduction in cell viability (Figure 5F and Figure 6B). The data suggest that EGCG and, in particular, cyanidin may have certain benefits in preventing RPE cell responses characteristic of neovascular and proliferative retinopathies.
Among the compounds tested, myricetin and quercetin induced strong increases in the rate of DNA fragmentation in the culture supernatants at relatively low concentrations (Figure 7D,E). Data obtained with pharmacological inhibitors (Figure 8A,B) suggest that the toxic effect of myricetin in RPE cells (Figure 5D) was mainly mediated by non-apoptotic death modes such as classical and/or programmed necrosis [37]. Various indications suggest that the activation of apoptotic pathways did not significantly contribute to the toxic effect of myricetin in RPE cells: (1) We did not find any myricetin-induced increase in the DNA fragmentation rate in the cell lysates (Figure 7D). (2) The caspase-3 inhibitor DEVD did not prevent the myricetin-induced increase in the DNA fragmentation rate in the cultured media (Figure 8A,B).
(3) The inhibitor of the mitochondrial permeability transition, cyclosporin A, had no effect on the myricetin-induced increase in the DNA fragmentation rate (Figure 8A,B). The independence of the myricetin-induced cell death from the mitochondrial apoptotic pathway is further suggested by the facts that the PARP-1 inhibitor DPQ, the inhibitor of the NADPH oxidase pathway and the uncoupling protein-2/mitochondrial pathway, perindopril, and the mitochondrial K_ATP channel opener pinacidil had no effects (Figure 8A,B). However, the contribution of distinct apoptotic pathways, at least of signaling steps upstream from the effector caspase-3, cannot be fully ruled out, because the caspase-8 inhibitor IETD and the JNK inhibitor SP600125 decreased the DNA fragmentation rate in the culture supernatants after 6 h of stimulation with myricetin (Figure 8A). Prolonged activation of the stress-activated JNK was shown to be an important factor in apoptotic and necrotic cell death [37].
We found that the toxic effect of myricetin was mediated by the induction of oxidative stress and the activation of calpains and phospholipase A2 (Figure 8A,B). Reactive oxygen species are known to induce apoptosis and necrosis [37]. They may contribute to JNK activation [37], which stimulates the mitochondrial production of superoxide [38], the main source of cellular oxidative stress involved in inducing necrosis [37]. Increased levels of reactive oxygen species are known to activate calpains, likely by increasing the intracellular free calcium level [39-41]. Sustained activation of calpain is known to trigger various intracellular signaling processes that lead to progressive plasma membrane damage, a hallmark of necrosis [42,43]. Activated calpains may also induce lysosome destabilization [43], at least in part by inducing the mitochondrial permeability transition [44]. However, because the inhibitor of the mitochondrial permeability transition, cyclosporin A, did not prevent myricetin-induced RPE cell necrosis (Figure 8A,B), it seems unlikely that myricetin induces a rupture of lysosomes in RPE cells via this pathway. There are various necrotic pathways, including programmed necrosis, in which caspase-8 and calpains play a role [45]. Caspase-8 is a subunit of the ripoptosome involved in inducing programmed necrosis [46,47]. We found that the inhibitor of programmed necrosis, necrostatin-1, decreased the myricetin- (Figure 9A) and quercetin-induced (Figure 9B) increase in the DNA fragmentation rate of the cultured media. Because necrostatin-1 does not block "classic" oxidative stress-induced necrosis [32], the data may suggest that the cytotoxicity induced by both compounds is in part mediated by the induction of programmed necrosis. However, the contribution of various apoptotic and/or necrotic pathways to myricetin-induced RPE cell death remains to be clarified in future experiments.
Further mechanisms may contribute to the induction of RPE cell necrosis by myricetin. By degrading the anchorage to the membrane cytoskeleton, activated calpains may impair the activity of the Na,K-ATPase [48]. The activity of phospholipase A2 is increased in response to oxidative stress, resulting in lipid peroxidation and the release of arachidonic acid [49]. Arachidonic acid is a potent inhibitor of the Na,K-ATPase; this inhibition leads to intracellular sodium overload, influx of water with consequent cellular swelling, and possibly membrane rupture [50,51]. The inhibition of ERK1/2 and Akt activation (Figure 10B-F) may contribute to these effects. Because the compounds did not induce significant apoptosis or necrosis (Figure 7), we assume that normal and moderate intake of the compounds as natural food will have no deleterious consequences in the RPE, and may even have beneficial effects, for example in the cases of EGCG and cyanidin. Further experiments are required to compare the in vivo bioavailability and retinal effects of different vegetable polyphenols. However, it cannot be ruled out that increased intake of distinct flavonoids as a dietary supplement in the (self-)therapy of cancer, for example, may have adverse effects on the retina, resulting in dysregulation and degeneration of the RPE, in particular in subjects with decreased levels of antioxidant enzymes. To avoid accelerated development of age-related macular degeneration, which is characterized by dysfunction and degeneration of RPE cells, the intake of flavonoids at higher doses as supplemental cancer therapy or in the treatment of retinal diseases should be accompanied by careful monitoring of retinal function. Possible beneficial effects of EGCG and cyanidin, which had little effect on RPE cell viability in the concentration range investigated (Figure 5), in treating retinal diseases should be further examined. | 2017-06-16T19:24:22.531Z | 2014-03-03T00:00:00.000 | {
"year": 2014,
"sha1": "e312bd171dbac1864d7634d0b2aa08952accd580",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "e312bd171dbac1864d7634d0b2aa08952accd580",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
263763636 | pes2o/s2orc | v3-fos-license | Introduction to Nordic Cultures
Introduction to Nordic Cultures is an innovative, interdisciplinary introduction to Nordic history, cultures and societies from medieval times to today. The textbook spans the whole Nordic region, covering historical periods from the Viking Age to modern society, and engages with a range of subjects: from runic inscriptions on iron rings and stone monuments, via eighteenth-century scientists, Ibsen’s dramas and turn-of-the-century travel, to twentieth-century health films and the welfare state, nature ideology, Greenlandic literature, Nordic Noir, migration, ‘new’ Scandinavians, and stereotypes of the Nordic. The chapters provide fundamental knowledge and insights into the history and structures of Nordic societies, while constructing critical analyses around specific case studies that help build an informed picture of how societies grow and of the interplay between history, politics, culture, geography and people. Introduction to Nordic Cultures is a tool for understanding issues related to the Nordic region as a whole, offering the reader engaging and stimulating ways of discovering a variety of cultural expressions, historical developments and local preoccupations. The textbook is a valuable resource for undergraduate students of Scandinavian and Nordic studies, as well as students of European history, culture, literature and linguistics.
Mart Kuldkepp
Contemporary perspectives on the Nordic countries often frame the region as a destination of immigration, focusing on the various challenges that increased cross-border mobility might pose to the Nordic political landscape, ethnic and cultural composition, or the resilience of the Nordic welfare states. However, it is worth keeping in mind that throughout centuries, the Nordic region has much more commonly been a place that people have wanted to leave - not least during times of economic depression or political turmoil.
Examples of significant emigration from the three Scandinavian countries - Denmark, Norway and Sweden - reach back at least to the times of Germanic tribes invading the West-Roman Empire in the Migration Period of the fourth to the sixth century ce (see chapter 1 in this volume); and, famously, Scandinavian seafarers becoming a Europe-wide nuisance in the Viking Age from the eighth to the eleventh century, when Scandinavian settlements were established in both Western and Eastern Europe. Even more recently, during the era of industrialisation and political change in the nineteenth century, there was major Scandinavian emigration, especially to the USA. This direction of movement was substantially reversed only after the Second World War, when the Social Democratic states in Scandinavia experienced a period of prolonged economic growth and thus received and even encouraged labour migration from other, less advantaged countries. Their humanitarian refugee policies, first formulated in Sweden in the 1960s and 1970s under the influence of new radical human-rights thinking, and subsequently also applied in Norway and Denmark, soon led to the arrival of increased numbers of asylum seekers (see Brochmann 2017, 230).
The first purpose of this chapter is to consider nineteenth-century emigration from Scandinavia - particularly from Sweden - as a complex and multifaceted phenomenon, and engage with questions such as why the Scandinavian emigrants wanted to leave, the forms their emigration took, and how well they became integrated into the societies of their destination countries. Secondly, we will consider official and semi-official Swedish responses to the emigration question in the late nineteenth and early twentieth century, both in terms of trying to understand the causes of emigration and how it might be prevented, as well as attempting to preserve and strengthen the Swedish identity of those already living abroad.
As we will see, attitudes towards emigration in the modern era were closely connected to reactions to broader, often deeply disruptive contemporary processes of nation building, modernisation and societal change. Engagement with 'the emigration question' could therefore become a point of departure for a reformist, optimistic vision of modernity, but it could also serve as an outlet for a national-romantic yearning for the supposedly simpler times of the past (see chapters 4 and 11 in this volume). Furthermore, if emigration were to be embraced rather than shunned, it could be accepted either from a liberal point of view as a much-needed safety valve against overpopulation, or from a radical nationalist perspective as a praiseworthy example of daring colonialism.
In the interests of cohesiveness and brevity, the story presented here will primarily focus on Sweden, the most populous of the Scandinavian countries. This is not to say that the Swedish case is particularly unique. Similar studies could be done on other Nordic nations, and other European states, even though they would naturally differ in their specifics. But the intention here is to be representative, rather than comprehensive; and Sweden, in its attempts to take a markedly managerial approach to its 'emigration problem', is a good example of tendencies that became expressed with perhaps less clarity elsewhere.
Emigration in the Nineteenth Century
Between the years 1800 and 1914, the populations of Denmark, Norway and Sweden tripled in number. The reasons behind this population growth were famously summed up by the Swedish poet Esaias Tegnér in 1833 as 'peace, vaccines and potatoes' ('freden, vaccinet och potäterna') (Jansson 2016, 685). Indeed, with the exception of the two Schleswig-Holstein wars in 1848-51 and 1864, the century after the end of the Napoleonic Wars in 1814 was remarkably peaceful for the Scandinavian kingdoms. Perhaps because young men did not die in wars, more children were being born. The early nineteenth century also saw the widespread adoption of a vaccine against smallpox, which Denmark made legally mandatory in 1810 - one of the first countries in Europe to do so (Orfield 1953, 57). Also important for longer lifespans were better medical education, hospitals, hygiene and an emerging understanding of the causes of certain diseases.
Moreover, while crop failure was still a possibility - as shown by the great Swedish famine of 1867-9 - the increasingly widespread cultivation of potatoes and decreased dependence on grain greatly reduced the threat of hunger. Better agricultural practices were also facilitated by private ownership of land replacing the obsolete open-field system, and, towards the end of the period, the introduction of the first examples of horse-drawn mechanical agricultural machinery. Thanks to improved methods of transport and food preservation, the menu of even lower-class people was becoming more varied and nutritious. Consequently, while nativity rates remained high, the mortality rates - especially of infant mortality - dropped significantly (Gjerde 1995, 86; Gustafsson 1997, 160-1; Ljungmark 2008, 9). The population growth had significant social and economic consequences, the aggregate effect of which was not unequivocally positive. First, both economic prosperity and social cohesion were threatened by rural destitution as agricultural expansion was limited by the availability of arable land. By mid-century, increasing numbers of people were forced into poverty in the countryside, unable to feed their families from their own fields. Second, an increase in the numbers of surviving children drove up the number of potential heirs, which often led to ancestral farmlands being divided up and individual farms ending up smaller and less efficient in food production. For this reason, Sweden had already in 1827 officially forbidden the further sub-division of farmsteads unless the resulting farms could feed their occupants and pay the requisite taxes. This ban was abolished in 1881, meaning a renewed increase in the numbers of rural poor (Barton 1975, 10; Söderberg 1981, 37).
At the same time, nineteenth-century Scandinavians had more opportunities to migrate and settle elsewhere than ever before since the end of the Viking Age. Movement from the countryside to the towns (urbanisation) was one major facet of it, but significant population movements also took place between rural areas, motivated by overpopulation, unemployment, and a desire for better living conditions. A degree of inter-Nordic migration also occurred: for example, from Finland to Northern Norway, and from Sweden across Öresund to Copenhagen. Emigration overseas was fundamentally yet another aspect of the same mobility, which remained a major feature of life in Scandinavia up until the beginning of the next century when industrialisation as a new source of economic growth began to alleviate unemployment (Gustafsson 1997, 187, 96).
Between about 1825 and 1930 (when American policies started severely restricting further immigration), c. 3 million Scandinavians moved abroad, including 300,000 Danes, 850,000 Norwegians and 1.2 million Swedes (Gjerde 1995, 85). Norway had the highest proportion of its population emigrate, while Sweden, where emigration without royal permission was officially decriminalised in 1840, led in absolute numbers. Again, this did not make Scandinavia exceptional. Between the Napoleonic Wars and the Great Depression of the 1930s, more than 50 million Europeans emigrated and settled abroad (Baines 1994, 525), most of them in the USA, but also in Canada, South America, South Africa and Australia. This list of destinations was shared across Europe. In the case of the three Scandinavian kingdoms, well over 95 per cent of emigrants moved to the USA and, to a lesser extent, Canada (Gjerde 1995, 85).
The immediate causes of emigration, sometimes called the 'push factors', were often connected to the economic and social problems that had ensued from overpopulation. Many Swedish emigrants came from poor rural areas, such as Småland in the south. However, there could also be other reasons. Indeed, the first émigré groups from Scandinavia had a religious rationale: in 1832, 52 Norwegian Quakers travelled from Stavanger to the state of New York and from there to Illinois where they settled (Nordstrom 2000, 232). In 1846, the Swedish sectarian preacher Erik Jansson and his pietist followers established the Bishop Hill colony in the same state (see Ljungmark 2008, 18-21). Members of these and other religious minority groups (especially the American-inspired ones: Methodists, Baptists, Mormons etc.) left because of the oppressive Lutheran state churches at home, with Swedish laws against religious dissenters remaining in force up until 1860 (Barton 1975, 12-13). Causes of emigration could also be political or semi-political, such as the rigid social hierarchies and the undemocratic nature of political life in Scandinavia.
The attractive features of the destination country, the so-called 'pull factors', included better chances at economic and social mobility - such as lower land prices or better-paid jobs - freedom of religion, more democratic and open societal structures, and, for many émigrés, a badly-needed chance to start afresh lives that for one or another reason had taken a wrong turn. Furthermore, the choice of destination was heavily influenced by the pre-existence of immigrant communities from the same country: at least initially, emigrants tended to travel together with other countrymen, and in many cases also to settle with them after arrival. Among the most important immediate pull factors were probably letters from relatives who had already settled abroad, often tending to downplay the endured hardships and encouraging family members to follow. The 1860s and 70s also saw a dramatic increase in print propaganda in the form of booklets and newspaper articles, often disseminated by agents of steamship companies looking to sell tickets, or even visiting Swedish Americans who were granted rebates if they took countrymen back with them (Barton 1975, 16-17, 109-10).
In the first half of the nineteenth century, whole families had tended to emigrate together and settle in rural areas. By the 1880s, this had changed, and most emigrants were young, unmarried men and women travelling alone. They tended to settle in towns and cities, rather than in the countryside, and seek work in factories (the men) or as maids (the women). Having established themselves, they would often be able to pay the ship fare for their family members and bring them over as well. There was also an increasing amount of return migration, and many emigrants travelled several times back and forth over the Atlantic as their circumstances changed (Barton 1975, 111-12; Nordstrom 2000, 233). Some emigrants were assimilated rapidly in their new country of residence, but for others, it was a process that could take several generations - especially if they retained a strong connection to the emigrant community. Community leaders, especially pastors, would often play a significant role in preserving the national heritage. Emigrant communities quickly established their own ethnic churches (Scandinavian Lutheran, but also Baptist, Methodist etc.) with services in the heritage language. They would also educate children in the language of their parents, which in some areas, such as in Porter, Indiana, lasted well into the interwar period. In more urban settings, such as in the major Swedish community in Chicago, a significant role was played by secular organisations: sport, music, theatre and temperance societies, ethnic trade unions, clubs and local history societies. As a rule, however, pre-emigration ethnicity was better preserved in the countryside. Some communities in rural USA were homogenous to the extent of being inhabited by families who had all come together from the same parish or village (Nordstrom 2000, 233-4).
Also important for preserving ethnic identity was access to journalism and literature in the heritage language. Starting with just one newspaper published in New York in 1851-3 - entitled Skandinavien and meant for all Nordic immigrants - hundreds of newspapers in the USA eventually came to be published in Scandinavian languages for immigrants from different parts of Scandinavia, but also with different political and religious views. The majority of these did not switch to English before the 1920s, but many eventually came to find readers also in communities with speakers of other Scandinavian languages, highlighting the extent to which many Swedes, Danes and Norwegians came to adopt a more 'Scandinavian' identity in their new country of residence (Barton 1975, 19; Ljungmark 2008, 108-15). There were many reprints of Scandinavian authors and an émigré literature made an appearance, often dealing with the emigration experience (Nordstrom 2000, 235). Examples of the latter include novels by the Swede Ernst Skarstedt (1857-1929), the Norwegian Ole Edvart Rølvaag and the Dane Sophus Keith Winter (1898-1983).
Nevertheless, over generations, Scandinavian identity tended to fade away. Partially, this tendency was reversed in the post-Second World War decades when many people again discovered and embraced their ethnic roots in the form of hybrid identities such as Danish-American or Swedish-American. Even if they no longer speak the heritage language, the heritage ethnicity has been important to many descendants of emigrants, particularly in the USA. Examples of heritage institutions in the USA include the American Swedish Institute in Minneapolis (founded in 1929), the Nordic Museum in Seattle (founded in 1980) and many others. There are also whole communities that are strongly dedicated to their Scandinavian heritage, such as the originally Danish town of Solvang in Santa Barbara County, California, and the old Janssonist settlement Bishop Hill in Henry County, Illinois.
Emigration and Opposition to Emigration
It is probably fair to say that emigration was a largely positive phenomenon for nineteenth-century Scandinavia. If overpopulation had not had a safety valve of this kind, it might have led to even stronger social and economic tensions in Scandinavia itself. According to one estimate, emigration reduced the growth of the Norwegian population by half, likely making it more sustainable at a time when the country's economy had not yet industrialised enough to provide for a larger population (Nordstrom 2000, 236). Many emigrants also ended up sending home substantial sums of money and some eventually returned with new ideas and money to invest in their native land. Finally, another arguable benefit of emigration routes was that they could be used to exile troublemakers, such as the Danish socialist Louis Pio (1841-94), founder of the organised workers' movement in Denmark, and in the realm of fiction, the mischievous Emil i Lönneberga, who had the whole village collect money to send him off to America, as depicted by the Swedish children's author Astrid Lindgren (Hult 2008, 576).
Contemporary attitudes towards emigration were mixed. There was some early recognition of its value for dealing with overpopulation - in 1840, Swedish liberals even founded a short-lived and controversial Émigré Society (Emigrantföreningen) to publicise opportunities in the New World. However, it was also true that most emigrants were young and potentially productive members of society, not necessarily the poorest and the most desperate. Indeed, it seems that it was often fear of destitution that drove emigration, rather than destitution itself (Barton 1975, 11, 16). Starting in the 1840s, at a time when the Scandinavian economies finally experienced a period of growth, the danger of workers moving abroad prompted the first movement of public opposition to emigration. Critical voices condemned emigrants as criminals and layabouts, content to abandon their native land and look for a better life abroad. Warnings were also issued about the various dangers awaiting on the other side of the ocean, especially during the American Civil War of 1861-5, when many in Europe predicted a total collapse of the young republic (Barton 1990, 13-14).
The recession that began in Scandinavia in the mid-1860s provided a further impetus for emigration, especially since it now coincided with economic growth in post-Civil War USA. The famine caused by three successive crop failures in Sweden in 1867-9, and later the import of cheap American grain towards the end of the 1870s also made the numbers go up as local conditions worsened (Barton 1975, 107-8; Ljungmark 2008, 31).
The importance of religious push factors decreased, but more emigrants now had a political motivation, especially after the 1866 Swedish parliamentary reform was found insufficient by many liberally-minded people. Liberal Swedish journalists also visited their compatriots living in the USA, encouraging more people to emigrate with their positive depictions of life in the New World. Emigration was further facilitated by regular Trans-Atlantic steamship services becoming available in the 1870s, which made travel both cheaper and faster. A journey from Stockholm to Chicago which in 1846 had taken 93 days, including 74 days at sea, could thanks to railways and steamships now be completed in merely 20 days (Barton 1975, 108; 1990, 14; Ljungmark 2008, 69-72).
In Sweden, not least in academic circles, these developments led to widespread resignation to emigration as a necessary evil. The historian Wilhelm Erik Svedelius argued in 1875 that since some of Sweden's population could not provide for themselves in their native land, emigration was unavoidable and best thought of as a form of development assistance to the USA. In another approach, the political economist Knut Wicksell suggested in 1882 that the core of the problem was overpopulation and recommended the use of contraceptives, rather than measures against emigration per se (Barton 1990, 14-15).
The émigrés themselves would naturally also defend their choice, sometimes by appealing to a nationalist sentiment. Johan A. Enander, editor of the Chicago newspaper Hemlandet and a leading proponent of 'Swedish-Americanism' (Ljungmark 2008, viii), described the modern Swedish emigrants as the spiritual descendants of both the Viking discoverers of Vinland and the small group of seventeenth-century Swedish colonists in Delaware. According to Enander, the latter had embodied American moral values better than the 'egoistic Anglo-Saxons'. Now, when ships full of Swedes were again landing on American shores, they were to be regarded as once more peacefully conquering 'Vinland the Good' which was rightfully theirs. Similar idealisation of emigrants also made inroads into the Swedish debate, for example in the writings of the priest and publicist Carl Sundbeck, who had been awarded grants by the state to study Swedish emigrants in the USA and Canada. Sundbeck went even further than Enander, arguing that the émigrés were the most exemplary Swedes of all, daringly spreading the Swedish language and culture to the wild American prairies in a kind of 'national imperialism'. It was now the duty of the emigrants to inspire a similar resoluteness for national action back home, so that other Swedes would set to work with the same enthusiasm and sense of purpose, even without having to travel abroad (Barton 1990, 15-17).
In the 1890s, the USA turned less welcoming again. The frontier of settlement evaporated, as most available land resources had been exhausted. At the same time, competition in industry grew fiercer and working conditions worsened. Scandinavia was now also going through its belated industrialisation: from 1870 to 1914, the rural population in Sweden sank from 72.4 per cent to 48.4 per cent of the total. Travelogues from the USA became more critical and opposition to emigration grew in conservative nationalist circles. As a result, the numbers of emigrants dropped substantially between 1894 and 1900 (Barton 1975, 203-6; Ljungmark 2008, 5, 29).
Around the turn of the century, however, the numbers picked up again, culminating in 1903 with more than 35,000 people leaving Sweden. In response, emigration became yet again an object of intense public debate, not least because Sweden's worries were also piling up in other ways. The Russification policies enacted in Finland, and Norwegian separatism (the latter ending in secession in 1905), were putting Swedish foreign policy on the defensive, while sharpening class tensions and the emergence of the labour movement undermined the domestic power of traditional political elites. In 1901, the period of compulsory military service was extended from 90 days to either eight or twelve months. This was strongly resented by the poor who had to bear the financial burden of interrupted employment and lost income. From the perspective of the state, emigration therefore became perceived as a security issue, since the outflow of young men reduced the pool of available conscripts (Barton 1975, 204; 1990, 16; Ljungmark 2008, 39-43).
Emigration and Swedishness Abroad
The staunchest early opponents of emigration had been large landowners, worried about the lack of farm workers and the rising wages emigration would cause. Instead, they proposed the creation of 'a new America in Sweden' through adoption of modern, large-scale agriculture. After 1900, they were also joined in their criticism by the burgeoning Social Democratic movement, adamant to condemn the lack of labour rights in the USA, which they considered 'the workers' hell'. Yet other critics had a radical nationalist point of view which regarded emigration as basically unwelcome and unpatriotic - a part of the state of 'slumber' of the Swedish nation that had begun with the defeat of Sweden in the Great Northern War of 1721, which had ended its Great Power Era (Stormaktstiden). A new national awakening was now needed, and ending emigration was to be an important part of it (Barton 1990, 14-16).
Meanwhile, academic and parliamentary circles presented more rationally oriented proposals to investigate the causes and effects of emigration in detail. In 1907, a state-funded research group headed by Gustav Sundbärg began its work on a full review of the emigration issue feeding into policy proposals, the so-called Emigrationsutredningen. Their work resulted in a thick volume with twenty appendices, published over 1908-13, which attempted to cover all the issues related to emigration - from the social to the psychological - and amounted to nothing less than a broad socioeconomic survey of Sweden as a whole. The authors' policy suggestions recognised industrial growth and social reforms as the means to reduce emigration, but Sundbärg also allowed himself an appendix on the 'Swedish national character' (titled Det svenska folklynnet), where he accused his fellow Swedes of a fondness for fanciful ideas, a weak sense of national identity, envy, a weakness for everything foreign, and lacking psychological sense, all of which he thought were the fundamental causes of emigration beyond the statistics (Barton 1990, 16-18, 21; Scott 1965, 314-16).
In political groups with less faith in officially sanctioned solutions, voluntary organisations sprang up with the purpose of either limiting emigration or preserving the 'Swedishness' of those already settled abroad. The year 1907 saw the establishment of Nationalföreningen mot emigrationen (the national union against emigration), dominated by Conservative landowners, businessmen and academics, and headed by the young radical conservative Adrian Molin. Nationalföreningen tried to convince people contemplating emigration to abandon the idea either by providing them with cheap loans to build a family home in a less-populated part of Sweden - what came to be known as Egnahemsrörelsen (the home ownership movement) - or by mediating work placements. They also attempted to convince people who had already emigrated to return to Sweden (Lindkvist 2007, 35-57). H. Arnold Barton argues that this was an opposite approach to the one taken by Emigrationsutredningen: instead of modernisation and industrialisation, the solution was seen to lie in the reinvigoration of the Swedish countryside according to a national romantic vision of rural life. Thanks to its good financial resources, Nationalföreningen was able to organise several large-scale propaganda campaigns which probably had some effect (see Figure 12.1). However, it quickly lost its purpose as emigration streams dried up. In 1917, the organisation had 16,000 members, but in 1925 only 2,500, indicating how substantially the issue had dropped in importance (Barton 1990, 18-19, 21).
In 1908, the year after the establishment of Nationalföreningen, Riksföreningen för svenskhetens bevarande i utlandet (the State Union for the Preservation of Swedishness Abroad) was founded. It was an organisation of mainly civil servants and academics headed by Vilhelm Lundström, professor of classics at the University of Gothenburg. The task of Riksföreningen was to 'morally and economically support the preservation of the Swedish language and culture by Swedes abroad, to further the feelings of unity between Swedes abroad and at home, and to generally promote knowledge of the Swedish language and culture abroad' (Kummel 1994, 77). Modern mass emigration, which Lundström agreed was a tragedy, had in his opinion nevertheless resulted in something of value: the existence of Swedish communities abroad. Riksföreningen was keen to make the point that out of the nine million Swedes in the world, a whole third was living outside Sweden itself. Since the state was not doing anything to ensure that these diaspora Swedes would not lose their Swedishness, private organisations had to step in (Barton 1990, 19-20).
Although the intention of Riksföreningen had mainly been to work with recent émigrés, most of its activities ended up being concerned with the Swedish minorities in Finland and Estonia (then parts of the Russian Empire), which dated back to the Middle Ages. There, Riksföreningen attempted to consolidate Finnish-Swedish and Estonian-Swedish identities as 'Eastern Swedes' (östsvenskar). Its efforts in Finland were largely unsuccessful, although it did make some substantial contributions, for example by facilitating the reopening of the Swedish-speaking university Åbo Akademi in Turku in 1918. It had more success in Estonia, where the Swedish community was much smaller and poorer, and thus had reason to be more grateful for help and recognition from Sweden (Kummel 1994, 247-51). Riksföreningen's efforts to promote what Lundström called the pan-Swedish idea (den allsvenska idén) can be considered an example of the so-called pan-movements or a macronationalism aiming to unite members of a particular ethnicity over state borders to 'another, larger fatherland' (Barton 1990, 20). In this sense, its role was not limited to the emigration question alone, and, unlike Nationalföreningen, Riksföreningen still exists today, having in 1979 changed its name to Riksföreningen Sverigekontakt.
The Aftermath and Reverberations
By the 1920s, emigration from Scandinavia to the USA had reduced to a trickle, and during the Great Depression of 1929-34 it ended almost entirely. On top of widespread unemployment in the USA that discouraged further emigration, in 1930 the American authorities introduced restrictive immigration laws which, in spite of favouritism shown to immigrants from Northern Europe, made the country a less attractive destination. At the same time, the 1930 census showed 1,562,703 persons born in Sweden or of Swedish-born parents living in the USA, out of a population of almost 123 million (Barton 1975, 3; Ljungmark 2008, 8) - the largest diaspora of ethnic Swedes (or half-Swedes) that has ever existed.
In Sweden itself, memories of emigration persisted, as did many personal and family contacts with relatives who had settled abroad. Fictionalised and romanticised versions of the emigrant experience would now find a fertile ground there, as the importance of emigration as a burning societal problem had ceased. More than anyone else, it was the novelist Vilhelm Moberg (1898-1973) who helped to shape a Swedish popular imagination of emigration with his monumental tetralogy: The Emigrants (1949, Utvandrarna), Unto a Good Land (1952, Invandrarna), The Settlers (1956, Nybyggarna) and The Last Letter Home (1959, Sista brevet till Sverige). The series of novels depicts the fate of a family from Småland: Karl-Oskar Nilsson and Kristina Johannesdotter, their three surviving children and a number of relatives and neighbours who decide to emigrate to Minnesota in 1850. Settled in the New World, they gradually adapt to the local conditions and live through a number of formative events in American history, including the California Gold Rush (1849-55) and the American Civil War.
Considered some of the greatest works of modern Swedish literature, Moberg's novels also became the basis for two major feature film adaptations by director Jan Troell: The Emigrants (1971, Utvandrarna) and The New Land (1972, Nybyggarna), starring Max von Sydow, Liv Ullmann and Eddie Axberg. Both films were nominated for several Academy Awards and The New Land won the Golden Globe Award in 1973 for the best foreign language film. In 1995, Benny Andersson and Björn Ulvaeus (of ABBA) further premiered a musical based on the novels, which received instant acclaim and widespread popularity. Dealing with 'the great questions of our time', according to the critics, it ran for nearly four years in Sweden, generated guest performances and concert versions abroad, and both the recorded album and a single held considerable positions in the national charts throughout the late 1990s.
Conclusion: Emigration and the Nation
Emigration from Sweden - and from the rest of Scandinavia - in the nineteenth and early twentieth century was a complex phenomenon with a number of causes and a multifaceted impact on the countries of both its origin and destination. Peaking in the late 1860s, early 1880s and around the turn of the century, emigration changed in volume over time. However, beyond the concrete material and ideational circumstances that encouraged people to emigrate - hopes for employment and social advancement, fear of material destitution or religious and political persecution - the 'emigration question' also had a wide societal resonance as a focal point in public debates around which various fears and hopes would congregate, often reaching far beyond the question of emigration per se.
It is somewhat ironic that Emigrationsutredningen, the impressive study of the causes and effects of emigration from Sweden, became obsolete almost as soon as it was concluded - the beginning of the First World War and subsequent closure of international borders made it nearly impossible to emigrate. Nevertheless, it remains an important milestone in the history of Swedish statistics and exemplifies more generally its culture of state-commissioned research projects (statliga utredningar). The widely felt need to 'do something' about emigration in the decade around the turn of the century was an important impetus in the development of the rational and scientific ways of solving perceived societal problems that subsequently became a hallmark of the Swedish Social Democratic welfare state and the 'Nordic model' more generally (Hall 2000, 241).
But whether presenting an emotive and nostalgic vision of rural Swedishness, or seeking social science 'solutions' to emigration as a societal 'problem', arguments in opposition to emigration (and in some cases in support of it) ultimately tended to be about Sweden's future as a state and a nation. This became visible in how modernity - the great divisive issue in nation building in the age of industrialisation - was alternatively embraced or rejected in the emigration debate as various commentators attempted to secure the unreachable ideal of a unitary and well-governed, materially prosperous nation state. Perhaps more than anything else, it was this strongly idealistic, but at the same time also pragmatic and hands-on, approach to public policy that came to define Sweden and the rest of Scandinavia in the century that followed. | 2020-04-23T09:12:30.444Z | 2019-04-17T00:00:00.000 | {
"year": 2020,
"sha1": "4db2d64b3940ed6c7e95532b32b201d9616a8cbb",
"oa_license": "CCBY",
"oa_url": "http://library.oapen.org/bitstream/20.500.12657/37586/1/Introduction-to-Nordic-Cultures.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "f7b135d1547bbeb83740a971e7052e4948135361",
"s2fieldsofstudy": [],
"extfieldsofstudy": []
} |
16622881 | pes2o/s2orc | v3-fos-license | A Model Optimization Approach to the Automatic Segmentation of Medical Images
SUMMARY The aim of this work is to develop an efficient medical image segmentation technique by fitting a nonlinear shape model to pre-segmented images. In this technique, kernel principal component analysis (KPCA) is used to capture the shape variations and to build the nonlinear shape model. The pre-segmentation is carried out by classifying the image pixels according to high-level texture features extracted using the over-complete wavelet packet decomposition. Additionally, the model fitting is completed using the particle swarm optimization (PSO) technique to adapt the model parameters. The proposed technique is fully automated, can handle complex shape variations, can efficiently optimize the model to fit new cases, and is robust to noise and occlusion. In this paper, we demonstrate the proposed technique by applying it to liver segmentation from computed tomography (CT) scans, and the obtained results are very promising.
Introduction
Image segmentation is the first and an essential process in many medical applications, including the analysis of anatomical structure [1], lesion detection [2], and volume measurement and surgical planning [3]. This process is traditionally performed by radiologists or medical specialists who use their knowledge and experience to manually trace the objects on each image or slice. In almost all of these applications, the medical specialists have to review a large number of images, which is a tedious and time-consuming process. Although automatic segmentation would be helpful in these applications, it is a demanding task that requires the inclusion of a considerable amount of prior knowledge.
Many researchers have made an effort to develop semi-automatic and automatic medical image segmentation techniques, and several articles have been presented in the literature. Some of these techniques interpret the image as an undirected, weighted graph and compute the minimal-cost path between user-defined seed points [4]. Although this class of segmentation techniques gives the user full control over the segmentation process, it still requires user interaction time, and the quality of the segmentation result greatly depends on the skills of the operator. Another class of these techniques depends on gray-level analysis and simple or iterative thresholding to create binary images that are usually further processed by morphological operators to separate the attached organs [5]. The techniques of this class are likely to fail when the gray levels of different organs are similar or when patients with completely different gray-level characteristics are processed. To overcome these limitations, a group of researchers used learning techniques such as neural networks to learn the gray-level characteristics corresponding to different tissues [6].
Unfortunately, these techniques retain some of the original limitations, and their authors recommend the incorporation of basic anatomical knowledge.
Due to the limitations of the previously mentioned techniques, more advanced techniques incorporate prior information captured from a set of training cases into the segmentation process [7]. These techniques capture and describe prior information regarding the shape, size and position of each organ. To achieve this goal, researchers employ deformable models, statistical shape models, and probabilistic atlases [8]. Although these techniques overcome the main limitations of the previously mentioned ones, they introduce their own limitations, such as the difficulty of building a proper training set, the challenge of representing all the shape variations, and the difficulty of obtaining the optimal model parameters that fit new cases. Moreover, other approaches to medical image segmentation employ the level set method [9] with some novel speed functions. These methods propagate the implicitly defined surface toward the object boundary according to local image characteristics and the past front history [10]. Though these techniques are able to produce a reasonable segmentation, the reliance on image information alone often leads to inaccurate segmentation results, and the incorporation of prior knowledge and the adjustment of parameters can greatly affect their accuracy.
The incorporation of shape priors into segmentation techniques has been shown to be an effective way of including knowledge, and it leads to more robust segmentation [11]. Many researchers integrate linear shape priors with level set methods to control the contour evolution process [12]. Although these approaches are able to capture small variations in the shape of an object, they lead to unrealistic shape priors when the object undergoes complex or nonlinear deformations. In [13], Y. Rathi et al. proved that the nonlinear shape prior obtained from the kernel PCA (KPCA) space is more realistic and outperforms the linear one. Additionally, in [14], S. Dambreville et al. incorporated the nonlinear shape model into a level set framework. Although these techniques incorporate nonlinear shape priors, they depend on the variational level set method, can be trapped in local minima, and require manual initialization. Furthermore, they rely on image moments only, which is unsatisfactory in the case of complex textures.
Therefore, in this work, we propose an efficient automatic segmentation technique that combines texture-feature-based classification and nonlinear shape model optimization. In this technique, the high-level texture features extracted using the over-complete wavelet packet decomposition are used to accurately characterize the different tissues and to perform a preliminary segmentation. Additionally, kernel PCA (KPCA) is utilized to capture the shape variations and to build a nonlinear shape model from a set of manually segmented images. The particle swarm optimization (PSO) algorithm is then applied to efficiently obtain the parameters of the shape model and to accurately fit it to the pre-segmented images.
After this introduction, the basic PSO algorithm is explained in Section 2. In Section 3, we briefly describe KPCA and how it is used to obtain the nonlinear shape priors. The texture prior extraction is discussed in Section 4. In Section 5, the proposed PSO segmentation framework is presented. The experimental results are shown in Section 6, and the paper is concluded in Section 7.
Particle Swarm Optimization (PSO)
PSO is a population-based stochastic optimization technique introduced by Kennedy and Eberhart in 1995 [15]. In this algorithm, they mimic the social behavior of bird flocks searching for food to produce computational intelligence. There are many similarities between PSO and other evolutionary computation techniques, but the PSO algorithm is built on cooperation among individuals rather than the competition used in the other techniques; this provides a better search methodology and reduces the dependency on parameter initialization. Additionally, it can achieve better results in a faster and cheaper way compared with other evolutionary computational techniques, as proved in [16] and as we will clarify in the experimental results.
In PSO, a population or swarm of individuals - particles - is distributed over the search space of some problem. Each particle represents a complete solution to this problem, and it evaluates the objective function at its location. Additionally, each particle moves through the search space under the influence of its own behavior and the behavior of the whole swarm. Each particle in the swarm is defined by three d-dimensional vectors: the current location x_i, the velocity v_i, and the best position it has reached p_i, where d is the dimensionality of the search space. The original version of the PSO algorithm is described below.
1. Randomly initialize the positions and velocities within the specified ranges.
2. Loop:
   a. For each particle, evaluate the desired optimization fitness function.
   b. Compare the particle's fitness evaluation with its pbest_i, where pbest_i is the fitness evaluation at the particle's best location. If the current value is better than pbest_i, then set pbest_i equal to the current value, and p_i equal to the current location x_i.
   c. Identify the particle in the neighborhood with the best success so far, and assign its index to the variable g.
   d. Change the velocity and position of each particle according to the following equations:
      $v_i^{t+1} = v_i^{t} + c_1 r_1 (p_i - x_i^{t}) + c_2 r_2 (p_g - x_i^{t})$,
      $x_i^{t+1} = x_i^{t} + v_i^{t+1}$,
      where
      • t refers to the iteration index,
      • c_1 and c_2 are the acceleration constants,
      • r_1 and r_2 are random numbers uniformly distributed in [0, 1], and
      • p_g is the best position found in the neighborhood.
   e. If a criterion is met (sufficiently good fitness or maximum number of iterations), exit the loop.
3. Save the global best position as the problem solution.
This original PSO algorithm has received many enhancements from its appearance until now [17]. The PSO with inertia weight [17], [18] is one of these enhancements; it provides better control over the search space, so we adopt it in this work. The velocity and position update equations of the PSO with inertia weight are:
$v_i^{t+1} = \omega v_i^{t} + c_1 r_1 (p_i - x_i^{t}) + c_2 r_2 (p_g - x_i^{t})$,
$x_i^{t+1} = x_i^{t} + v_i^{t+1}$,
where $\omega$ is the inertia weight. Researchers have found that a large value of ω allows the particles to perform extensive exploration, while a small value of ω increases the chance of getting stuck in local optima. They have therefore found that the best performance can be achieved by using a large value of ω (e.g., 0.9) at the beginning and gradually decreasing it to a smaller value. In addition, the velocity of each particle is kept within a specified range $[-v_{max}, v_{max}]$.
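To make the update rules concrete, the following is a minimal numpy sketch of the inertia-weight PSO. It is not the authors' implementation; the function name, default parameter values, and the linearly decreasing inertia schedule are illustrative choices, while the velocity clamp follows the description above.

```python
import numpy as np

def pso(fitness, dim, n_particles=30, n_iter=200,
        w_start=0.9, w_end=0.4, c1=2.0, c2=2.0,
        lo=-1.0, hi=1.0, v_max=0.5):
    """Maximize `fitness` with inertia-weight PSO (update equations above)."""
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, (n_particles, dim))          # positions
    v = rng.uniform(-v_max, v_max, (n_particles, dim))   # velocities
    pbest = x.copy()
    pbest_f = np.array([fitness(p) for p in x])
    g = np.argmax(pbest_f)                               # global best index
    for t in range(n_iter):
        w = w_start - (w_start - w_end) * t / n_iter     # decreasing inertia
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (pbest[g] - x)
        v = np.clip(v, -v_max, v_max)                    # velocity clamp
        x = x + v
        f = np.array([fitness(p) for p in x])
        better = f > pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = np.argmax(pbest_f)
    return pbest[g], pbest_f[g]
```

The stopping rule here is a fixed iteration budget; the stagnation-based criterion of step e could be substituted without changing the rest of the loop.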
Nonlinear Shape Priors
Nonlinear shape priors have been proven to be an efficient way of representing complex object deformations [13]. Additionally, KPCA has been shown to be a powerful tool for extracting nonlinear structure from a dataset [14]. In the following subsections, we briefly review KPCA and how it is utilized to form the shape priors.
Kernel principal component analysis (KPCA)
To extract the nonlinear structure from a complex dataset, we have to map the dataset from an input space I to a feature space F through a nonlinear function ϕ. Usually, the dimension of this mapping is very high and may be infinite, which increases the computational cost. Therefore, KPCA benefits from the kernel trick to perform PCA in the feature space without explicitly mapping the dataset [19]. The kernel is a function k(·,·) such that, for all data points x_i, the kernel matrix K(i, j) = k(x_i, x_j) is symmetric positive definite. In addition, the kernel function gives the inner product between two points in the feature space, i.e., $k(x_i, x_j) = \langle \phi(x_i), \phi(x_j) \rangle$.
Let τ = {x_1, x_2, …, x_N} be a set of training data. The kernel trick can be used to obtain the eigenvectors in the feature space from the following eigendecomposition:
$$HKH = A \Sigma A^{\top},$$
where $H = I_N - \frac{1}{N}\mathbf{1}\mathbf{1}^{\top}$ is the centering matrix, $A = [a_1, a_2, \ldots, a_N]$ with $a_i = [a_{i1}, a_{i2}, \ldots, a_{iN}]^{\top}$ is the matrix containing the eigenvectors, and $\Sigma = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_N)$ contains the corresponding eigenvalues. Denote the mean of the ϕ-mapped data by $\bar{\varphi} = \frac{1}{N}\sum_{i=1}^{N} \phi(x_i)$; as described in [13], [14], [19], the centered map $\tilde{\varphi}$ can be defined as
$$\tilde{\varphi}(x) = \phi(x) - \bar{\varphi}.$$
The k-th orthonormal eigenvector of the covariance matrix in the feature space can then be computed as
$$V^{k} = \sum_{i=1}^{N} \frac{a_{ik}}{\sqrt{\lambda_k}} \, \tilde{\varphi}(x_i).$$
In addition, the projection of the ϕ-image of a test point x onto the subspace spanned by the first n eigenvectors is given by
$$P^{n}\phi(x) = \sum_{k=1}^{n} \beta_k V^{k} + \bar{\varphi},$$
where β_k is the projection of ϕ(x) onto the k-th component, computed as
$$\beta_k = \sum_{i=1}^{N} \frac{a_{ik}}{\sqrt{\lambda_k}} \, \tilde{k}(x, x_i).$$
Here $\tilde{k}(\cdot,\cdot)$ is the centered kernel function, given by
$$\tilde{k}(x, y) = k(x, y) - \frac{1}{N}\sum_{i=1}^{N} k(x, x_i) - \frac{1}{N}\sum_{i=1}^{N} k(y, x_i) + \frac{1}{N^{2}}\sum_{i,j=1}^{N} k(x_i, x_j).$$
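The eigendecomposition and projection above can be sketched in a few lines of numpy. This is a generic, hedged KPCA helper operating on a precomputed kernel matrix, not the authors' code; function names are ours, and in practice only eigenvectors with positive eigenvalues should be kept.

```python
import numpy as np

def kpca(K, n_components):
    """Eigendecomposition of the centered kernel matrix HKH (see above)."""
    N = K.shape[0]
    H = np.eye(N) - np.ones((N, N)) / N        # centering matrix
    Kc = H @ K @ H                             # centered kernel matrix
    lam, A = np.linalg.eigh(Kc)                # ascending eigenvalues
    lam = lam[::-1][:n_components]             # keep the n largest
    A = A[:, ::-1][:, :n_components]
    return lam, A

def center_test_row(k_x, K):
    """Centered kernel values k_tilde(x, x_i) for a test point x,
    where k_x[i] = k(x, x_i) and K is the training kernel matrix."""
    return k_x - k_x.mean() - K.mean(axis=0) + K.mean()

def project(k_x_centered, lam, A):
    """beta_k = sum_i (a_ik / sqrt(lam_k)) * k_tilde(x, x_i)."""
    return k_x_centered @ (A / np.sqrt(lam))
```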
Shape Priors Using KPCA
As a first step in the shape modeling, a set of CT slices must be segmented manually and the corresponding level sets have to be formulated. In order to simplify this process, we built an interactive system that allows the medical doctor to segment the objects by selecting some points around them; cubic spline interpolation [20] is then employed to estimate the segmenting curve from these points, as shown in Figure 1(a). Additionally, the level sets that describe the segmented objects are formulated according to the following procedure.
1. Construct a binary mask from the segmenting curve, with the value 1 inside the curve and the value 0 outside it.
2. Use the binary mask generated in the previous step to generate a mask with the value -1 inside the object and the value 1 outside it. This mask is regarded as a sign function and denoted s, as shown in Figure 1.
Furthermore, the process of shape modeling is completed according to the following algorithm:
1. Load the N level set functions Ψ_i(x, y), i = 1, 2, …, N, which were constructed from the training images.
2. Construct column vectors ψ_i, i = 1, 2, …, N, consisting of the M samples of each Ψ_i, where M = m_1 × m_2 is the image size, by stacking the m_2 columns of Ψ_i.
3. Define the shape matrix S as S = [ψ_1, ψ_2, …, ψ_N].
4. Use the Gaussian kernel $k(\psi_i, \psi_j) = \exp\left(-\frac{d^{2}(\psi_i, \psi_j)}{2\sigma^{2}}\right)$, with $\sigma^{2} = \frac{1}{N}\sum_{i=1}^{N} \min_{j \neq i} d^{2}(\psi_i, \psi_j)$, to build the kernel matrix K (see the sketch after this list).
5. Apply KPCA as described in the previous section and select the eigenvectors that have eigenvalues greater than one as the shape representation.
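A possible numpy realization of step 4 is shown below; it assumes that d is the Euclidean distance between the stacked level-set vectors, which the text does not state explicitly.

```python
import numpy as np

def shape_kernel(S):
    """Gaussian kernel matrix over the shape matrix S (M x N), one level
    set per column, with sigma^2 set by the nearest-neighbour rule above."""
    G = S.T @ S                                  # N x N Gram matrix
    sq = np.diag(G)
    D2 = sq[:, None] + sq[None, :] - 2.0 * G     # pairwise squared distances
    np.fill_diagonal(D2, np.inf)                 # exclude j == i
    sigma2 = D2.min(axis=1).mean()               # sigma^2 rule from step 4
    np.fill_diagonal(D2, 0.0)                    # restore d^2(psi_i, psi_i) = 0
    return np.exp(-D2 / (2.0 * sigma2))
```

The resulting matrix K can be passed directly to the `kpca` helper above, keeping only the modes whose eigenvalues exceed one, as in step 5.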
Texture Priors
We utilize the over-complete wavelet packet transform to extract the high-level feature vectors for each foreground pixel in the training images. As illustrated in Figure 2, the over-complete wavelet packet transform does not perform down-sampling as the standard wavelet packet transform does, so it ensures the translation invariance property that is indispensable for texture analysis. In addition, it provides robust texture features at the expense of redundancy [21]. Feature extraction using the over-complete wavelet packet transform can extract all of the bandpass information about the texture.
In this work, we extract the wavelet packet feature set by decomposing each image with the over-complete wavelet packet transform and computing, for each resulting feature sub-image, the local energy
$$e(x, y) = \sum_{i=-m}^{m} \sum_{j=-m}^{m} F(x+i, y+j)^{2},$$
where F(x+i, y+j) is the wavelet coefficient of a feature sub-image in a (2m+1) × (2m+1) window centered at pixel (x, y). Finally, we construct the feature vectors of each pixel in the image from the energies of the corresponding feature sub-images.
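The paper's features come from a full over-complete wavelet packet tree (Figure 2). As a rough stand-in, the sketch below uses PyWavelets' stationary (undecimated) wavelet transform, which shares the translation-invariance property but keeps only the wavelet-tree sub-bands rather than the full packet tree; the wavelet name, decomposition level, and window half-width m are illustrative assumptions, and the image dimensions are assumed to be divisible by 2**level.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def texture_features(img, wavelet="db4", level=2, m=7):
    """Per-pixel local-energy features from an undecimated wavelet
    decomposition (no down-sampling, so features stay pixel-aligned)."""
    img = img.astype(np.float64)
    coeffs = pywt.swt2(img, wavelet, level=level)
    feats = []
    for cA, (cH, cV, cD) in coeffs:
        for band in (cA, cH, cV, cD):
            # mean energy in a (2m+1) x (2m+1) window around each pixel,
            # proportional to the windowed sum e(x, y) above
            feats.append(uniform_filter(band ** 2, size=2 * m + 1))
    return np.stack(feats, axis=-1)              # H x W x n_features
```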
After constructing the high-level feature vectors, we assign a label to each pixel to indicate whether it is a desired object pixel, and finally we use the linear Fisher discriminant algorithm [22] to build the textural prior model. The energy ($l_2$-norm) of each feature sub-image is a favorable texture feature because it indicates the dominant spatial-frequency channels of the original image, and it leads to better classification results than spatial-domain methods, as shown in Figure 3. The linear Fisher discriminant algorithm is a classification method that projects high-dimensional data onto a line and performs classification in this one-dimensional space. The projection maximizes the distance between the means of the two classes while minimizing the variance within each class.
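A minimal sketch of training and applying the pixel classifier with scikit-learn's linear discriminant analysis, which implements the Fisher criterion described above; the authors' own implementation may differ, and the helper names are ours.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def train_texture_classifier(feats, mask):
    """feats: H x W x F energy features; mask: H x W boolean object labels."""
    X = feats.reshape(-1, feats.shape[-1])
    y = mask.ravel().astype(int)
    lda = LinearDiscriminantAnalysis()   # projects onto the Fisher direction
    return lda.fit(X, y)

def label_pixels(clf, feats):
    """Classify every pixel of a new slice as object (True) or not."""
    X = feats.reshape(-1, feats.shape[-1])
    return clf.predict(X).reshape(feats.shape[:2]).astype(bool)
```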
PSO-Based Segmentation
In order to segment a new image, its wavelet-packet-based feature set is extracted, and each foreground pixel in the image is classified as a desired object pixel (true) or an undesired object pixel (false) according to the prior textural model, as shown in Figure 4. This classification is carried out using the linear Fisher discriminant algorithm. Finally, this stage is completed by applying the PSO algorithm [17] to obtain the level set function that correctly segments the image, as shown in Figure 5 and described in the following sections.
The Model Description
Each particle in the PSO population consists of the set of parameters that control the shape and pose of the segmenting curve. In this framework, the level set function φ(x, y) that implicitly represents the segmenting curve is defined as the pre-image of the feature vector $\upsilon = \sum_{i=1}^{l} w_i \alpha_i$, where l is the number of KPCA principal components, and α_i and w_i, i = 1, 2, …, l, are the normalized KPCA principal components and their weights, respectively. The pre-image of this feature vector is computed according to the direct method proposed in [23]. Furthermore, we consider the pose parameters a and b for translation, h for scaling, and θ for rotation, which are incorporated into this framework using an affine transform. According to the above considerations, each individual (particle) P in the PSO population is defined as P = [(w_i, i = 1, 2, …, l), a, b, h, θ], and it represents a segmenting curve.
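To make the particle encoding concrete, the following hedged sketch decodes a particle into a level set. For simplicity it synthesizes the shape as a linear combination of input-space modes rather than computing the true KPCA pre-image of [23], and the affine conventions (row/column order, rotation about the image center) are our own assumptions.

```python
import numpy as np
from scipy.ndimage import affine_transform

def decode_particle(p, mean_shape, modes):
    """p = [w_1..w_l, a, b, h, theta] -> level set Phi (H x W).
    `modes` (l x H x W) stand in for the KPCA modes; a faithful
    implementation would compute the KPCA pre-image (direct method of [23])."""
    l = modes.shape[0]
    w = p[:l]
    a, b, h, theta = p[l:]
    phi = mean_shape + np.tensordot(w, modes, axes=1)   # shape synthesis
    c, s = np.cos(theta), np.sin(theta)
    A = np.array([[c, -s], [s, c]]) / h                 # inverse rotation/scale
    center = np.array(phi.shape) / 2.0
    # rotate/scale about the image center, then translate by (a, b)
    offset = center - A @ center - np.array([b, a])
    return affine_transform(phi, A, offset=offset, order=1, mode="nearest")
```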
The fitness of each particle in this work represents how well the corresponding curve segments the image. Accordingly, in the proposed framework, we maximize the fitness function used in [24]:

f = A − B

where A is the fraction of pixels inside the segmenting curve that are labeled "true" and B is the fraction of pixels outside the segmenting curve that are labeled "true". Maximizing this fitness function means that more desired pixels are gathered inside the segmenting curve.
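A numpy sketch of this fitness, assuming the reconstructed form f = A − B above and the convention that the level set is negative inside the curve:

```python
import numpy as np

def fitness(phi, labels):
    """Fitness of a segmenting curve encoded by the level set phi
    (negative inside the curve), given the binary texture labels from
    the Fisher discriminant stage (True = desired object pixel)."""
    inside = phi < 0
    A = labels[inside].mean() if inside.any() else 0.0      # fraction "true" inside
    B = labels[~inside].mean() if (~inside).any() else 0.0  # fraction "true" outside
    return A - B
```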
The PSO algorithm configuration
In this work, we employ the PSO algorithm with inertia weight described in Section 1. The PSO algorithm includes an inertia term and acceleration constants, which give us more control over the segmenting curve. The PSO algorithm configuration is shown in Table 1 and the curve parameter configuration is provided in Table 2. The parameters in Table 1 control how fast the PSO converges to the correct segmentation and balance the global and local search; the PSO is nevertheless robust to initialization. The parameters in Table 2 control the shape formulation and its transformation. Among these parameters, the w_i have fixed ranges, while the other parameters are selected empirically according to the characteristics of the cases; their selection has to guarantee that all possible variations and transformations are considered.
The PSO Algorithm Implementation
After configuring the PSO algorithm and adjusting the curve parameters according to the desired object, we carry out the segmentation process in the following sequence (a sketch of the loop appears after this list):
1. Initialize the curve parameters randomly from the ranges specified in Table 1.
2. Create the level set functions from the curve parameters.
3. Segment the image with all segmenting curves derived from the level sets.
4. Measure the fitness of each curve by computing the fitness function described in Section 5.1.
5. Determine the best segmenting curve and the best segmentation result so far.
6. If the best curve has not changed for more than 30 iterations, produce the segmentation results; else go to Step 7.
7. Update the curve parameters according to the PSO update equations and go to Step 2.
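The sequence above can be sketched as generic inertia-weight PSO. The mapping from a particle's parameters to a level set (and hence to a fitness value) is abstracted into fitness_fn, and all names and default constants here are illustrative, not the paper's Table 1 values.

```python
import numpy as np

def pso_segment(fitness_fn, lo, hi, n_particles=30, w=0.7, c1=1.5, c2=1.5,
                patience=30, max_iter=500, seed=0):
    """Inertia-weight PSO over curve parameters P = [w_1..w_l, a, b, h, theta].
    lo/hi are per-dimension bounds; stops when the global best is
    unchanged for `patience` iterations (Step 6)."""
    rng = np.random.default_rng(seed)
    dim = lo.size
    pos = rng.uniform(lo, hi, size=(n_particles, dim))      # Step 1
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_fit = np.array([fitness_fn(p) for p in pos])      # Steps 2-4
    g = pbest_fit.argmax()
    gbest, gbest_fit, stale = pbest[g].copy(), pbest_fit[g], 0
    for _ in range(max_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)                    # Step 7
        fit = np.array([fitness_fn(p) for p in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        g = pbest_fit.argmax()                              # Step 5
        if pbest_fit[g] > gbest_fit:
            gbest, gbest_fit, stale = pbest[g].copy(), pbest_fit[g], 0
        else:
            stale += 1
            if stale >= patience:                           # Step 6
                break
    return gbest, gbest_fit
```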
Experimental Results
In this work, we use portal-phase computed tomography (CT) images of resolution 512 × 512 pixels with a 1 mm slice interval to perform two liver segmentation experiments. The datasets contain normal cases as well as cases with liver abnormalities, tumors, and cysts. In the first experiment, a set of five CT images of different patients was used. Each CT image consists of about 150 slices stacked together, and the liver fully appears in about 100 slices. In this experiment, two datasets were extracted and used: dataset1 consists of 34 slices of one patient with low shape variations, and dataset2 consists of 33 slices of the same patient with high shape variations. These slices were manually segmented to build the nonlinear shape prior and textural prior models as described in Section 2 and Section 3. We select 8 and 10 principal modes to represent the shape variation in dataset1 and dataset2, respectively. Figure 6 shows that every principal mode expresses a variation in some object parts. After building the shape and textural priors, we employed the proposed PSO segmentation technique on a set of slices of the patient used in the training stage and on a set of novel slices from other patients. The resulting images shown in Figure 7 and Figure 8 illustrate the effectiveness of the proposed technique in liver segmentation from CT images.
In the second experiment, a set of ten CT images of different patients was used for cross-validation: nine patients were used for training and one patient for testing. Each CT image consists of about 170 slices stacked together, and the liver fully appears in about 140 slices. In this experiment, key frames were extracted from different patients at an interval of 5 slices, and all extracted frames were manually segmented. The level sets constructed from the corresponding frames were used to build multi-shape and texture models. In this work, we use 27 slices to build each model and select 7 principal modes to represent the shape variations. Sample results of this experiment are shown in Figure 9 and Figure 10.
To validate the superiority of the proposed segmentation technique, five competitive techniques were applied to segment the liver in the same set of slices, and all results were compared. The first implemented technique is the active contour without edges [25] with manual initialization inside the liver; the second technique performs the segmentation using the wavelet packet decomposition feature set and the linear Fisher discriminant algorithm; the third technique utilizes the genetic algorithm (GA) to fit the pre-constructed shape model as proposed in [24]; the fourth technique incorporates the linear shape priors of [12] into the segmentation framework; and the fifth technique is the one proposed in [14], which incorporates a nonlinear shape model and an intensity-based model and requires manual initialization. Figure 11 shows the effectiveness of the proposed technique in the case of high shape variations, Figure 12 shows sample results of the GA-based technique, and Figure 13 demonstrates sample results of the fifth competitive technique. As shown in Figure 13, the balance between the shape model and the intensity-based model greatly influences the final results, and keeping this balance manually is very difficult in the case of abdominal CT images. The goodness of fitness, G, of all techniques was computed for all datasets and compared in Table 3, Table 4, and Table 5.
To calculate the goodness of fitness, we generate two binary masks to represent the manual and the computerized segmentation results. These masks have a value of 1 inside the object and a value of 0 outside. The goodness of fitness is then calculated as

G = 2 · A_{m∩a} / (A_m + A_a)

where A_m represents the area of the manually segmented object, A_a represents the area of the automatically segmented object, and A_{m∩a} is the area of their overlap. A score of 1 represents a perfect match with the manual segmentation.
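A sketch of this score for two binary masks, under the Dice-overlap reconstruction of the equation above (the printed formula did not survive extraction, so this form is an assumption consistent with "a score of 1 represents a perfect match"):

```python
import numpy as np

def goodness_of_fitness(manual_mask, auto_mask):
    """Overlap score G between manual and automatic binary masks
    (1 inside the object, 0 outside); G = 1 for a perfect match."""
    manual = manual_mask.astype(bool)
    auto = auto_mask.astype(bool)
    overlap = np.logical_and(manual, auto).sum()   # A_{m ∩ a}
    return 2.0 * overlap / (manual.sum() + auto.sum())
```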
Conclusion and Future Work
In this work, the high-level features extracted using the over-complete wavelet decomposition allow the technique to accurately discriminate the desired tissue. Also, the incorporation of nonlinear shape priors increases the ability to capture the desired object accurately. In addition, the utilization of the particle swarm optimization algorithm to adapt a region

Fig. 13 Sample results of the framework proposed in [14]: the upper row is the manual segmentation, the middle row shows the best results obtained using a mixture of 40% intensity-based model and 60% shape model, and the bottom row shows the results obtained using a mixture of 50% intensity-based model and 60% shape model; the black curve is the manual initialization and the white one is the final evolution result.
Fig. 1 Manual segmentation and level set formulation (a) the estimated curve, (b) the sign function s, (c) the distance function D, and (d) the signed distance function Ψ.
Fig. 2 Wavelet packet decomposition of an image into four sub-bands: (a) the standard decomposition, (b) the over-complete decomposition. H and L denote high-pass and low-pass filters, respectively, and ↓2 means down-sampling by 2.
Fig. 3 Sample classification results, (a) using the wavelet texture feature set and (b) using Laws texture features.
Fig. 4 Preliminary segmentation of sample images: the foreground consists of the pixels classified as desired object pixels.
Fig. 6 The first 10 principal variation modes of dataset2, √λ_i α_i, i = 1, 2, ..., 10, from left to right and top to bottom; the black contour represents the shape boundary.
Fig. 7 Samples of the proposed technique results, first experiment, dataset1: (a) images of the patient used in training, (b) images of the other patients; the manual segmentation is on the upper row and the automatic segmentation on the bottom row.
Fig. 8 Samples of the proposed technique results, first experiment, dataset2: (a) images of the patient used in training, (b) images of the other patients; the manual segmentation is on the upper row and the automatic segmentation on the bottom row.
Fig. 9 Samples of the proposed technique results, second experiment, on test slices extracted from the patients used in the training stage; the manual segmentation is on the upper row and the results on the bottom row.
Fig. 10 Samples of the proposed technique results, second experiment, on novel test slices extracted from the test patients; the manual segmentation is on the upper row and the results on the bottom row.
Fig. 11 Comparison of the results of incorporating linear and nonlinear shape priors in the segmentation framework, (a) manual segmentation, (b) nonlinear shape priors and (c) linear shape priors.
Fig. 12 Samples of genetic algorithm-based segmentation technique results, (a) the first experiment, (b) the second experiment, the manual segmentation on the upper row and the results on the bottom row.
Table 2 Curve parameter configuration
Table 3 Goodness of fitness of the final segmentation results (first experiment, dataset1).
"year": 2010,
"sha1": "f655d153ad9ecf6cf69234e796bf3d65af47ad80",
"oa_license": null,
"oa_url": "https://doi.org/10.1587/transinf.e93.d.882",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "f655d153ad9ecf6cf69234e796bf3d65af47ad80",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Sera neutralizing activities against SARS-CoV-2 and multiple variants six months after hospitalization for COVID-19
Abstract Background Humoral response to SARS-CoV-2 occurs within the first weeks after COVID-19. These antibodies exert a neutralizing activity against SARS-CoV-2, whose evolution over time after COVID-19, as well as efficiency against novel variants, is however poorly characterized. Methods In this prospective study, sera of 107 patients hospitalized with COVID-19 were collected at 3 and 6 months post-infection. We performed quantitative neutralization experiments on top of high-throughput serological assays evaluating anti-Spike (S) and anti-Nucleocapsid (NP) IgG. Findings Levels of sero-neutralization and IgG rates against the ancestral strain decreased significantly over time. After 6 months, 2.8% of the patients had a negative serological status for both anti-S and anti-NP IgG. However, all sera had a persistent and effective neutralizing effect against SARS-CoV-2. IgG levels correlated with sero-neutralization, and this correlation was stronger for anti-S than for anti-NP antibodies. The level of sero-neutralization quantified at 6 months correlated with markers of initial severity, notably admission to intensive care units and the need for invasive mechanical ventilation. In addition, sera collected at 6 months were tested against multiple SARS-CoV-2 variants and showed efficient neutralizing effects against the D614G, B.1.1.7 and P.1 variants but a significantly weaker activity against the B.1.351 variant. Interpretation The decrease of IgG rates and serological assays becoming negative did not imply loss of neutralizing capacity. Our results indicate a sustained humoral response against the ancestral strain and the D614G, B.1.1.7 and P.1 variants for at least 6 months in patients previously hospitalized for COVID-19. A weaker protection was however observed for the B.1.351 variant.
Introduction
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has been shown to induce a humoral immune response, with seroconversion occurring in most patients between 7 and 21 days after diagnosis [1,2]. This early humoral response is mostly composed of IgA, IgM and IgG directed against the viral surface glycoprotein Spike (S), the nucleocapsid protein (NP) or the spike Receptor Binding Domain (RBD) [2]. The detection of such antibodies may reflect a neutralizing activity believed to be a key factor in viral clearance [3,4], as well as conferring relative protection against the disease in the convalescent phase. As with other coronaviruses, anti-SARS-CoV-2 antibodies decline over time [5], which raised questions about the extent of the protection conferred and the potential risk of reinfection. Furthermore, the recent emergence of multiple SARS-CoV-2 variants raised additional questions about the cross-reactivity of the antibodies acquired after COVID-19 [6].
Some publications have reported an association between the level of antibodies and clinical severity, a higher level being observed in patients presenting the most critical forms of the disease [4,[7][8][9][10]. However, while there is consistent evidence that outpatients usually develop a weaker immune response, there are fewer data on patients in intensive care units, a valuable population whose antibody response could set the upper limit of humoral immunity against SARS-CoV-2.
While vaccination is ongoing worldwide, the insufficient supply of doses makes prioritization strategies still necessary. Being able to define the levels of immunity required to protect against severe reinfection would considerably assist public health strategies in this regard, in addition to being critical information for estimating whether vaccines stand the test of time and of emerging variants.
Our study explores the longitudinal evolution of antibody levels and of sera neutralizing activities in a French monocentric cohort of patients hospitalized for COVID-19 during the first wave of the SARS-CoV-2 pandemic and followed up for 6 months after hospital discharge. In addition to the ancestral viral strain, sera neutralizing activities against the emerging SARS-CoV-2 variants (B.1.1.7, B.1.351 and P.1) were evaluated.
Cohort description
We conducted a single-center prospective observational study of adult patients with a laboratory-positive SARS-CoV-2 real-time reverse-transcriptase polymerase chain reaction (RT-PCR) test admitted to Hôpital Européen Georges Pompidou (APHP, Paris, France) for at least 48 h. All patients were initially enrolled from March 17th to April 29th, 2020, and were then proposed for a clinical and serological follow-up at month 3 (M3) and M6 post-infection. The study is part of the French Covid cohort (NCT04262921) sponsored by Inserm and was authorized by the French Ethics Committee CPP Ile-de-France VI (ID RCB: 2020-A00256-33). This study was conducted with the understanding and the consent of each participant or their surrogate.
Data collection
Demographic, clinical presentation, and comorbidity data during the index COVID-19 hospitalization were extracted from the electronic medical records collected in a standardized data collection form in the Clinical Data Warehouse (CDW) of our hospital. The dedicated medical records were stored on an i2b2 platform in a CDW together with all other hospital health records.
Index value threshold for positivity was 1.4 as recommended. Beckman Coulter Access SARS-CoV-2 IgG assays (Brea, CA, USA), targeting the RBD of the SARS-CoV-2 spike surface protein, were performed on a UniCel DxI 800 Access Immunoassay System (Beckman Coulter), according to the manufacturer's instructions. Index value threshold for positivity was 1 as recommended.
Qualitative results as well as index values were used for analysis for both assays.
Virus strains
The ancestral non-D614G SARS-CoV-2 strain (BetaCoV/France/IDF0372/2020) was isolated as described in [11]. Viruses were sequenced directly from nasal swabs and after one or two passages on Vero cells.
S-Fuse neutralization assay
Neutralization was performed using the S-Fuse reporter system, as previously described [11].
Briefly, U2OS-ACE2 GFP1-10 or GFP11 cells, which become GFP+ upon infection with SARS-CoV-2, were mixed (1:1 ratio) and plated at 8 × 10³ cells per well in a μClear 96-well plate (Greiner Bio-One). SARS-CoV-2 strains were incubated with sera at the indicated dilutions for 15 min at room temperature and added to S-Fuse cells. All sera were heat-inactivated for 30 min at 56 °C before use. After 18 h of incubation at 37 °C and 5% CO2, cells were fixed with 2% paraformaldehyde, washed and stained with Hoechst (1:1,000 dilution; Invitrogen). Images were acquired on an Opera Phenix high-content confocal microscope (PerkinElmer). The GFP area and the numbers of syncytia and nuclei were quantified using the Harmony software (PerkinElmer). The percentage of neutralization was calculated from the number of syncytia with the following formula: 100 × (1 − (value with serum − value in 'non-infected') / (value in 'no serum' − value in 'non-infected')). The neutralizing activity of each serum was expressed as the ED50, calculated using the percentage of neutralization at each dilution. Cells tested negative for mycoplasma. Neutralization determined with the S-Fuse reporter system correlates with a pseudovirus neutralization assay [12].
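For illustration, the percent-neutralization formula and an ED50 estimate can be sketched as below. The study does not detail its curve-fitting procedure, so the simple Hill model used here is an assumption, and all function names are ours.

```python
import numpy as np
from scipy.optimize import curve_fit

def pct_neutralization(value_serum, value_noninf, value_noserum):
    """Percentage of neutralization from syncytia counts, following
    the formula given in the Methods."""
    return 100.0 * (1.0 - (value_serum - value_noninf)
                    / (value_noserum - value_noninf))

def ed50(dilutions, pct):
    """Dilution giving 50% neutralization, from a Hill-type fit of
    neutralization versus serum dilution factor (e.g. 30, 90, 270...)."""
    def hill(x, ed, slope):
        return 100.0 / (1.0 + (x / ed) ** slope)
    popt, _ = curve_fit(hill, np.asarray(dilutions, float),
                        np.asarray(pct, float),
                        p0=[np.median(dilutions), 1.0], maxfev=10000)
    return popt[0]
```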
Statistical analyses
Statistics were performed using NCSS 2012 software (G Hintze, Kaysville, UT, USA). All numerical data were checked for normality, and non-normal distributions were transformed (using the ExpNorScore function in NCSS, which returns the expected value of the normal order statistic corresponding to X).
Continuous variables are reported as means (SDs). Discrete variables are described as counts and percentages. Groups were compared using the two-sample t-test or the Wilcoxon rank test when necessary for continuous variables, and Fisher's exact test or χ² for discrete variables.

We also performed a multiple regression analysis to assess variables correlated with sero-neutralization at M6. P values <0.05 were considered significant.
Role of the funding source:
The funders had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication. The corresponding author (J.S.H) had full access to all the data in the study and had the final responsibility for the decision to submit for publication.
Results
Between March 17th and April 29th, 2020, 354 patients were hospitalized at Hôpital Européen Georges Pompidou (Paris, France) with confirmed SARS-CoV-2-positive pneumonia. By November 16th, 2020, 85 deaths (24.0%) had occurred during or after hospitalization. From June 17th to November 16th, 2020, we were able to complete follow-up for 107 of these patients, with two time-point visits at 3 and 6 months after hospital discharge (see Supplementary Figure 1). The 3-month and 6-month visits were performed at a median interval of 98 days (IQR: 91-101) and 203 days (IQR: 191-216), respectively. Among the patients with a complete follow-up, 32.7% (35/107) had required care in an intensive care unit (ICU) during the acute COVID-19 phase. Fifteen of them (15/35, 42.9%) required invasive mechanical ventilation (MV). The maximal oxygen flow was 2.0 ± 1.6 L/min in non-ICU patients, 10.7 ± 5.2 L/min in ICU patients not requiring MV, and 13.0 ± 3.6 L/min in ICU patients before MV (p<0.0001). A majority of patients were male (73/107, 68.2%), with a mean age of 58.7 ± 14.0 years. A history of cardiovascular risk factors (chronic cardiac disease, diabetes, obesity, hypertension, chronic kidney disease) was found in 51.4% (55/107) of them, and 10.2% (11/107) had immunosuppressive diseases (cirrhosis, asplenia, sickle cell anemia, solid organ or stem cell transplantation, HIV infection, primary immune deficiency, chronic hematological disease, malignant neoplasm, autoimmune disorder). Moreover, 5.6% (6/107) of them were previously treated with an immunosuppressive therapy. All patients' characteristics are listed in Table 1.
Neutralizing activities against ancestral SARS-CoV-2 at 3 and 6 months post-infection
We then aimed to assess whether a persistent serum neutralizing activity could be detected up to 6 months following COVID-19 infection, independent of anti-S or anti-NP levels. To do so, we used S-Fuse cells (specifically designed to become GFP+ when productively infected by SARS-CoV-2) to evaluate the propensity of our patients' sera to prevent such infection. At the minimum dilution (1/30), sero-neutralization was observed in all samples at M3 and M6, even when anti-S and anti-NP IgG were considered negative with regard to the commercial kit threshold.
We next quantified sero-neutralization by performing serial dilutions in order to define the ID50 neutralization (maximum dilution to maintain a 50% neutralization capacity). ID50 neutralization significantly decreased between M3 and M6 (Figure 2), with residual values nevertheless indicating a high neutralizing activity at M6.
Impact of the initial clinical severity on residual humoral immunity
As all measures were highly variable between patients, we then sought factors associated with higher levels of neutralizing activity. In a multiple regression analysis, we found that initial management in the ICU and the need for invasive mechanical ventilation were the only two factors significantly associated with a higher ID50 neutralization at M6 (Table 2). When considering ID50 neutralization according to ICU hospitalization and the need for mechanical ventilation, we found that patients in the ICU had significantly higher neutralizing activities as compared to non-ICU patients, with the highest levels observed in ICU patients who had invasive mechanical ventilation (Figure 3A).
A similar trend was observed at M3, although it did not reach significance (Figure 3A).
We then analyzed anti-S and anti-NP IgG levels at M3 and M6 according to the initial management in the ICU and the need for invasive mechanical ventilation. We found higher levels of anti-NP IgG in mechanically ventilated ICU patients vs. non-mechanically ventilated ICU patients vs. patients in non-ICU medical departments at M3 (7.6±1.8 vs. 7.2±2.2 vs. 6.2±2.3, p=0.03 by Kruskal-Wallis test; Figure 3B) and at M6 (Figure 3B), with a significantly higher rate in ICU patients vs. non-ICU patients (4.9±2.0 vs. 3.3±2.4, p = 0.001). In contrast, this pattern was not observed with the anti-S antibodies, as there were no significant differences between the ICU and non-ICU groups at M3 and at M6 (Figure 3C).
Neutralizing activities against SARS-CoV-2 emerging variants
We then used sera collected at 6 months following COVID-19 infection to assess their neutralizing activities against multiple variants, including D614G, B.1.1.7, B.1.351 and P.1.
Discussion
In the present study, we described the longitudinal evolution of IgG levels and sero-neutralization at 3 and 6 months post-infection in a relatively large prospective cohort of 107 hospitalized patients, a third of whom were severe cases, which tend to be scarce in the literature; the study thus provides important data about the evolution of humoral immunity after hospitalization for COVID-19. We found that at least one serology assay was still positive at 6 months in 97.2% of the studied patients. Although antibody levels decreased significantly over time, with rates dropping under the positivity threshold in a few cases, all patients' sera conserved an effective neutralizing activity against the ancestral strain at 6 months post-infection. Sero-neutralization remained higher at 3 and 6 months in patients who had required intensive care. We also used our sera collection to estimate the levels of humoral protection against the emerging SARS-CoV-2 variants. In these additional in vitro experiments, we found that sera neutralizing activities were also effective against the B.1.1.7 and P.1 variants (also known as the UK and Brazilian variants), but potentially weaker against the B.1.351 strain (also known as the South African variant).
Higher ID50 values against the ancestral strain were observed in patients with more severe presentations, even at a distance from infection. This correlation between sero-neutralization and clinical severity has been described previously [7,10,13,14], and our results now indicate that this trend might persist over time. Interestingly, we found that anti-NP IgG titers were higher according to the stage of severity, which was not observed for anti-S IgG. Early after symptom onset, the anti-NP response had already been reported as a possible marker of severity, associated with delayed viral clearance and disease severity [8]. Whether this exacerbated humoral response in severe patients is a protective adaptation to a more intense viral load, or whether it plays a putative role in pathogenicity, remains subject to debate [15,16].
At 6 months post-infection, we found that anti-S IgG titers correlated with sera ID50 neutralization, but anti-NP IgG titers did not. This is generally in line with other works that had underlined, at different times post-infection, a relatively strong correlation between neutralizing antibodies and anti-S or anti-RBD antibodies, and a usually poorer correlation with anti-NP antibodies [9,14,17,18].
The evolution of sera neutralization over time and in response to emerging variants is one important element to consider when questioning the extent of the effective protection conferred by prior infection, and thus helps evaluate the strength of shield immunity during this pandemic. In line with first encouraging results [11,13,17,[19][20][21], we confirmed in this study the persistence of neutralization up to 6 months post-infection against the ancestral strain, but also the existence of broader and similarly effective neutralizing activities against novel variants, including the B.1.1.7 and P.1 variants. These results are in favor of antibody cross-reactivity and potential protection against reinfection with these variants. As compared to other strains, we observed a weaker protection against the B.1.351 variant, with however a substantial neutralizing activity observed in most of the patients. These data suggest that the antibodies acquired during a prior COVID-19 infection might not confer complete protection against this emerging variant first described in South African patients [22]. We took advantage of the available sera collection and the development of a novel assay to estimate these activities, but cannot extrapolate to a higher risk of reinfection in these convalescent patients. Several publications based on pseudovirus or virus neutralizing assays outlined that variants could partially evade humoral immunity in exposed patients as well as vaccinees [23][24][25].
Our cohort of patients with the most critical forms of COVID-19 represents a valuable population to explore the maximal antibody response and set the upper limit of humoral immunity against SARS-CoV-2. Our results suggest that our patients should be protected for at least 6 months against future reinfection. So far, there are few cases of reinfection published in the literature [26].
Reinfection rates have been estimated as low in large recent observational studies despite waning neutralizing antibodies [27,28]. However, it is impossible to extrapolate to future infections with novel variants. Interestingly, a recent model predicted a relationship between neutralizing levels and immune protection against the ancestral strain and novel variants, as well as protection against severe disease [29]. Further studies are nonetheless required, especially regarding the B.1.351 variant, in order to determine whether partial humoral escape can clinically lead to severe events.
Cellular immunity also appears to be a major shield against SARS-CoV-2, with the development of durable memory T cells [20], whose reactivity could be only slightly impacted by variants [30].
Overall, patients who survived the most critical forms of COVID-19 consequently developed an intense and prolonged humoral immunity. These levels tend to correlate with the severity of the initial presentation, with ICU patients who had invasive mechanical ventilation having the highest neutralizing activities as compared to other ICU and non-ICU patients.
"year": 2021,
"sha1": "c3b8713a67d2ad2b568cc9f48aabb1161c7e0f14",
"oa_license": null,
"oa_url": "https://academic.oup.com/cid/article-pdf/73/6/e1337/40392509/ciab308.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "677870c6ef76a8bd6cb917e9cf1dd59fd7f1d2dd",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
TCE at Qur'an QA 2023 Shared Task: Low Resource Enhanced Transformer-based Ensemble Approach for Qur'anic QA
In this paper, we present our approach to tackle Qur’an QA 2023 shared tasks A and B. To address the challenge of low-resourced training data, we rely on transfer learning together with a voting ensemble to improve prediction stability across multiple runs. Additionally, we employ different architectures and learning mechanisms for a range of Arabic pre-trained transformer-based models for both tasks. To identify unanswerable questions, we propose using a thresholding mechanism. Our top-performing systems greatly surpass the baseline performance on the hidden split, achieving a MAP score of 25.05% for task A and a partial Average Precision (pAP) of 57.11% for task B.
Introduction
Ad hoc search is a fundamental task in Information Retrieval (IR) and serves as the foundation for numerous Question Answering (QA) systems and search engines. Machine Reading Comprehension (MRC) is a long-standing endeavor in Natural Language Processing (NLP) and plays a significant role in the framework of text-based QA systems. The emergence of Bidirectional Encoder Representations from Transformers (BERT) and its family of transformer-based pre-trained language models (LMs) has revolutionized the landscape of transfer learning systems for NLP and IR as a whole (Yates et al., 2021; Bashir et al., 2021).
Arabic is widely spoken in the Middle East and North Africa, and among Muslims worldwide. Arabic is known for its extensive inflectional and derivational features. It has three main variants: Classical Arabic (CA), Modern Standard Arabic (MSA), and Dialectal Arabic (DA).
Qur'an QA 2023 shared task A is a passage retrieval task organized to engage the community in conducting ad hoc search over the Holy Qur'an (MALHAS, 2023; Malhas and Elsayed, 2020), while Qur'an QA 2023 shared task B is a ranking-based MRC over the Holy Qur'an, the second version of the Qur'an QA 2022 shared task (Malhas et al., 2022; MALHAS, 2023).
This paper presents our approaches to solving tasks A and B. For task A, we explore both dual-encoders and cross-encoders for ad hoc search (Yates et al., 2021). For task B, we investigate LMs for extractive QA using two learning methods (Devlin et al., 2019). For both tasks, we utilize various pre-trained Arabic LM variants. Moreover, we adopt external Arabic resources in our fine-tuning setups (MALHAS, 2023). Finally, we employ an ensemble-based approach to account for inconsistencies among multiple runs. We contribute to the NLP community by releasing our experiment code and trained LMs on GitHub1.
In this work, we address the following research questions2:
RQ1: What is the impact of using external resources to perform pipelined fine-tuning?
RQ2: How does ensemble learning improve the performance obtained?
RQ3: What is the effect of thresholding on zero-answer questions?
RQ4_A: What is the impact of hard negatives on the dual-encoders approach?
RQ5_B: What is the impact of the multi answer loss method on multi-answer cases?
RQ6_B: How essential is post-processing for ranking-based extractive question answering?
The structure of our paper is as follows: Sections 2 and 3 provide an overview of the datasets used in our study. In Section 4, we present the system design and implementation details for both tasks. The main results for both tasks are presented in Section 5. Section 6 focuses on the analysis and discussion of our research questions (RQs). Finally, Section 7 concludes our work.

Task A Dataset Details

Qur'an QA 2023 shared task A serves as a test collection for the ad hoc retrieval task. The divine text is divided into segments known as the Thematic Qur'an Passage Collection (QPC), where logical segments are formed based on common themes found among consecutive Qur'anic verses (Malhas et al., 2023; Swar, 2007). In this task, systems are required to provide responses to user questions in MSA by retrieving relevant passages from the QPC when possible. This suggests a language gap between the questions and the passages, as the passages are in CA. Table 1 presents the distribution of the dataset across the training and development splits. The majority of questions in the dataset are multi-answer questions, meaning that systems can only receive full credit if they identify all relevant passages for these queries. Additionally, Table 1 provides information on zero-answer questions, which are unanswerable from the entire Qur'an. (More information about the dataset distribution of topics is in Appendix A.1.) Task A is evaluated as a ranking task using the standard mean Average Precision (MAP) metric. (Additional information about the evaluation process, including zero-answer cases, can be found in Appendix A.2.)
Task B Dataset Details
Qur'an QA 2023 shared task B is a ranking-based SQuADv2.0-like MRC over the Holy Qur'an, which extends the Qur'an QA 2022 task (Malhas et al., 2022; Rajpurkar et al., 2016). The dataset is also referred to as the Qur'an Reading Comprehension Dataset v1.2 (QRCDv1.2). The same questions from task A are organized as an answer span extraction task from relevant passages (Malhas and Elsayed, 2020; Malhas et al., 2022). (See the dataset distribution of topics in Appendix A.1.) Table 2 presents the distribution of question-passage-answer triplets across the training and development splits. In addition, the table presents the distribution of answer types for the dataset pairs.
Although zero-answer questions account for 15% of the questions in the task A test collection, they contribute only 5% of the question-passage pairs in task B. Furthermore, task B has a limited number of unique questions in comparison to their corresponding question-passage pairs, as seen in Tables 1 and 2, respectively. As a consequence, task B can have repeated questions and passages among different samples, which can even leak between the training and development splits (Keleg and Magdy, 2022). Keleg and Magdy (2022) analyzed this phenomenon and identified sources of leakage in the Qur'an Reading Comprehension Dataset v1.1 (QRCDv1.1). In QRCDv1.1, leakage is defined as the presence of passages, questions, or answers that are shared among multiple samples (Keleg and Magdy, 2022). This can lead to LMs memorizing or overfitting leaked samples (Keleg and Magdy, 2022). Keleg and Magdy (2022) categorized QRCDv1.1 into four distinct and mutually exclusive categories based on the type of leakage: pairs of passage-question, passage-answer, or just questions. (For more information about leakage in task B, please refer to Appendix A.4.) We extend the analysis made by Keleg and Magdy (2022) to QRCDv1.2. Our main observation is that 90% of the samples with no answer belong to the trivial leakage group called D(1). This group refers to samples with duplicate passage-answer or question-answer pairs. This indicates that zero-answer questions are not just less prevalent in task B but also present a greater challenge in terms of generalization. Given the four groups defined by Keleg and Magdy (2022), they proposed a data re-splitting mechanism for QRCDv1.1 called faithful splits. In this work, we extend their re-splitting approach and create faithful splits for QRCDv1.2. (Please refer to Appendix A.4 for more details about faithful splitting.) Task B is evaluated as a ranking task as well, using a recently proposed measure called pAP (Malhas and Elsayed, 2020; MALHAS, 2023). (More details about this measure and zero-answer sample evaluation can be found in Appendix A.3.)
System Design
In this work, we fine-tune a variety of pre-trained Arabic LMs, namely AraBERTv0.2-base (Antoun et al., 2020), CAMeLBERT-CA (Inoue et al., 2021), and AraELECTRA (Antoun et al., 2021). We utilize transfer learning and ensemble learning for both tasks. To determine zero-answer cases, we apply a thresholding mechanism. (Additional information on transfer learning and ensemble learning can be found in Appendices B and C, respectively.)
Task A Architecture
We examine two distinct approaches for neural ranking in ad hoc search: the dual-encoders and cross-encoders approaches (Yates et al., 2021).
In dual-encoders, documents and queries are encoded separately into dense vectors, which are then compared using a metric learning function, such as cosine distance. We utilize the Stable Training Algorithm for dense Retrieval (STAR) with a batch size of 16 queries to train our dense retrievers (Zhan et al., 2021; Yates et al., 2021).
In contrast, cross-encoders encode positive and negative pairs of documents and questions and assign a relevance score. This method packs a document and a question into a single input for a sentence similarity LM (Yates et al., 2021). Both methods require negative relevance signals during training. (Please refer to Figures 4a and 4b in the Appendix for both approaches. Additionally, see Appendix D for more details about negative selection criteria and zero-answer prediction.) Cross-encoders have a higher computational overhead than dual-encoders when used for ranking: the former have quadratic complexity while the latter have linear complexity. Nevertheless, both methods remain feasible for low-resource datasets (Yates et al., 2021). In both approaches, we utilize the cumulative predicted scores of the top K documents to calculate the likelihood of each question having an answer. We then apply a threshold ζ to identify zero-answer questions.
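A minimal dual-encoder scoring sketch follows. The Hugging Face model id and the mean pooling are illustrative choices, not the exact fine-tuned checkpoints or pooling used here.

```python
import torch
from transformers import AutoModel, AutoTokenizer

MODEL = "aubmindlab/bert-base-arabertv02"   # illustrative Arabic LM
tok = AutoTokenizer.from_pretrained(MODEL)
enc = AutoModel.from_pretrained(MODEL)

def embed(texts):
    """Mean-pooled sentence embeddings over non-padding tokens."""
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = enc(**batch).last_hidden_state
    mask = batch["attention_mask"].unsqueeze(-1)
    return (out * mask).sum(1) / mask.sum(1)

def relevance(question, passages):
    """Cosine similarity between the question and each passage."""
    q = embed([question])
    p = embed(passages)
    return torch.nn.functional.cosine_similarity(q, p)  # one score per passage
```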
Task B Architecture
We fine-tune pre-trained LMs for span prediction as in SQuADv2.0 (Rajpurkar et al., 2018; Devlin et al., 2019). We use two different fine-tuning methods: first answer loss (FAL) and multi answer loss (MAL). The FAL method optimizes for the first answer among the ground truth answers, which is the default approach in standard span prediction implementations for SQuAD (Devlin et al., 2019; Wolf et al., 2019). In contrast, MAL optimizes for multiple answers simultaneously for the multi-answer samples in QRCDv1.2. This prevents the trained systems from being overly confident in a single span and distributes the predicted probability among different spans. (Refer to Appendix E for more information about these learning methods.) It is worth noting that raw predictions from span prediction LMs are suboptimal for ranking MRC, as many of them have overlapping content. To address this, we follow a post-processing mechanism proposed by Elkomy and Sarhan (2022). (See Appendix E.1 for implementation details.) Similar to task A, we apply a threshold ζ to determine zero-answer samples using the LM's null answer [CLS] token probability (Rajpurkar et al., 2018; Devlin et al., 2019). (See Appendix E.2 for more details on zero-answer cases.)
Results
The results tables for both tasks use the following notational format: we use short forms to refer to combinations of LMs and their fine-tuning approaches via superscripts and subscripts.
The subscripts ∼ and ≈ denote direct fine-tuning and pipelined fine-tuning, respectively. Additionally, the arrows in model name subscripts indicate the stages of pipelined fine-tuning, with the learning resource names listed. Superscripts denote the architectures employed for task A and the learning methods for task B.
Tables 3 and 4 present our detailed results on the development split for both tasks, for single and self-ensemble models. Table 3 shows the results of cross-encoders and dual-encoders for task A. Our best single model, (ARB⊗≈), achieved a MAP of 34.83% and an MRR of 47.09%. The (ARB⊗≈) self-ensemble achieved the best MAP of 36.70%. Table 3 also presents the R@10 and R@100 metrics, which represent the upper bound on the reranking-stage performance that we can obtain (Yates et al., 2021). Considering the question types, the (ARB⊗∼) and (ARB⊗≈) experiments obtain the best MAP performance for zero-answer and multi-answer questions for task A.
With regard to the hidden split, Tables 5 and 6 provide a summary of our official submissions.
In task A, as shown in Table 5, we made three cross-encoder submissions, including MIX⊗≈, an ensemble combining runs from the CAM⊗≈ and ARB⊗≈ cross-encoders. MIX⊗≈ achieved a MAP of 25.05%. In comparison, the TF-IDF baseline only achieved a MAP of 9.03%.
On the other hand, in task B, we experimented with our two best performing models from Table 4. As shown in Table 6, (ARB M≈) outperformed (ELC M≈) with a pAP of 57.11%. This result is consistent with the findings on the faithful validation split (Keleg and Magdy, 2022) in Table 4 for (ARB M≈) and (ELC M≈). Specifically, the MAL method outperformed FAL for all of our models on the faithful validation split (underlined in Table 4).
Analysis and Discussion
Regarding RQ1, external resources always bring significant improvements to the same LM for both tasks. For task A, we have three stages of fine-tuning, as indicated by the arrows in Table 3. For example, when (ELC⊗∼) is fine-tuned with external resources into (ELC⊗≈), the MAP performance improves from 8.96% to 26.60% for single models, as shown in Table 3. Similarly for task B, (ELC M≈) outperforms (ELC M∼) by almost 13% on the standard split in Table 4.
To answer RQ2, ensemble learning consistently outperforms single models for both tasks. For instance, the (CAM⊗≈) ensemble surpasses its single model by 3.5% on the MAP metric for task A. Similarly, the (ELC M≈) ensemble outperforms its corresponding single model by almost 2% pAP for task B.
With regard to RQ3, the hyperparameter ζ affects the zero-answer evaluation scores for both tasks. We make the best use of the available data by employing a quantile method to determine the threshold ζ for both tasks. However, the (ARB⊗≈) model's MAP performance improves by 3% when the optimal ζ⋆ is employed for task A. This suggests that there is room for improvement in the ζ parameter. (Please refer to Appendix F for more details about ζ selection and RQ3.)
In Table 3, we experimented with dual-encoders using both random and hard negatives (Zhan et al., 2021) to address RQ4. (ARB⊚≈) outperforms (ARB⊚∼) by almost 4.5% when we perform hard negative mining using a fine-tuned checkpoint (ARB⊚∼). In Table 4, the MAL learning method consistently brings significant improvements to the final performance of all models on the faithful split. Moreover, it consistently outperforms the FAL learning method on the multi-answer samples. For instance, (ELC M≈) performs better than (ELC F≈), achieving a pAP score of 51.55% compared to 43.69% for (ELC F≈) on the subset of multi-answer samples. However, because multi-answer samples make up only 18% of the development samples in the standard split (Table 2), MAL does not always outperform FAL in the standard split's overall performance. This finding addresses RQ5.
With regard to RQ6, the post-processing approach proposed by Elkomy and Sarhan (2022) always surpasses the raw prediction score for both single and ensemble models. This is represented by the Post subscript in Table 4. For example, post-processing improves the pAP performance of both the (ARB M≈) single model and its self-ensemble by almost 3%.
Conclusion
In this paper, we have presented our solutions for both task A and task B of the Qur'an QA 2023 shared tasks. We explored various Arabic LMs using different training approaches and architectures. Our best performing systems are ensemble-based, enhanced with transfer learning using external learning resources. Lastly, we addressed a set of RQs that highlight the main strengths of our work.
Limitations
In this paper, we have adapted conventional learning-based architectures for Arabic QA tasks, specifically for MRC and ad hoc search. However, we faced several challenges throughout our study. One significant challenge was the scarcity of training resources, along with the imbalanced distribution of topics and question types. This was particularly evident in the zero-answer cases. As a consequence, our zero-answer thresholding mechanism demonstrated high sensitivity to each individual model.
Additionally, we noticed significant performance variations due to the small size of the datasets. In order to tackle the problem of variations and noisy predictions, we investigated an ensemble approach. However, we still suggest that the results we obtained during the development phase may not accurately reflect the actual performance of learning systems. Despite the effectiveness of faithful splits for task B, we still suggest exploring n-fold cross-validation for both tasks. However, our computation resources were significantly limited during the competition phase.
For task B, our models trained for MRC were found to be suboptimal for ranking tasks. Although our post-processing technique improved the raw predictions, this indicates the necessity of other ranking-based MRC approaches. Furthermore, we would like to explore the performance of large LMs on this particular task.
Appendix A Additional Dataset Details
AyaTEC is a dataset designed to evaluate the performance of retrieval-based Arabic QA systems over the Holy Qur'an. It contains 207 questions and 1,762 corresponding answers, which are categorized into 11 topics covering different aspects of the Qur'an. The dataset caters to the information needs of two types of users: skeptical and curious (Malhas and Elsayed, 2020). The dataset includes single-answer and multi-answer questions, as well as questions that have no answer. Both Qur'an QA 2023 shared tasks are primarily based on an adapted version of AyaTEC (MALHAS, 2023; Malhas et al., 2022). Figure 1 illustrates an example from task A. The question asks whether there is a reference in the Qur'an to the body part used for reasoning. Four relevant Qur'anic segments are annotated as answering this question. Figure 2 depicts a question-passage-answer triplet from task B. The question in this case is about creatures capable of praising God, within the context of the given passage.
A.1 Topic Distribution for tasks
AyaTEC covers 11 diverse topics referenced in the Holy Qur'an. Figure 3 illustrates the imbalanced nature of these topics. Furthermore, the representation of unique questions is significantly limited in comparison to question-passage-answer triplets. Additionally, it is evident that the ratio of triplets to unique questions varies for each topic. In task B, these factors give rise to common questions across various passages. Consequently, they result in data leakage between the training and development splits (Keleg and Magdy, 2022). (Further information regarding this can be found in Appendix A.4.)
A.2 Task A Evaluation Measures
For this ranking task, systems are expected to return up to 10 Qur'anic passages for each question when possible. If the system determines that the question is unanswerable from the entire Qur'an, only a null document is returned, indicated by -1. The primary measure for the task is MAP, which gives full credit only if all relevant documents are retrieved at the top of the ranked answer list. For zero-answer questions, full credit is given only when a system finds no relevant Qur'anic passage to answer the question and returns the null document. In addition to MAP, the mean Reciprocal Rank (MRR) is also reported, which gives credit only for the first relevant document in the ranked list (Yates et al., 2021).
In formal notation, we begin by defining the function α(q, p), a binary relevance function that indicates whether a passage p is annotated as relevant to a question q in the test collection. Equ. (1) calculates the total number of Qur'anic passages in the QPC relevant to q:

ψ(q) = Σ_{p ∈ QPC} α(q, p)    (1)

Zero-answer questions have a zero value for the function ψ, and their MAP score is calculated in a different way. Equ. (2) shows the MAP evaluation measure for answerable questions. For a ranked list R, we calculate the precision at each cutoff @i at which a relevant document is present (Yates et al., 2021).
MAP(R, q) = (1 / ψ(q)) Σ_{(i,p) ∈ R} Prec@i(R, q) · α(q, p)    (2)

Equ. (3) illustrates the combined MAP evaluation measure for task A. In this measure, zero-answer questions are given full credit only when R is the null document, represented by −1 in the official evaluation script3 (MALHAS, 2023):

MAP′(R, q) = 1_{R = −1} if ψ(q) = 0, and MAP(R, q) otherwise    (3)

1_C is an indicator function, which returns 1 if the binary condition C holds and 0 otherwise.
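Equ. (2)-(3) translate directly into code; a small sketch with hypothetical variable names follows:

```python
def map_score(ranked, relevant):
    """MAP for a single question, following Equ. (2)-(3): `ranked` is
    the returned list of passage ids (or [-1] for a predicted
    zero-answer), `relevant` the set of gold passage ids (empty for a
    zero-answer question)."""
    if not relevant:                       # psi(q) == 0
        return 1.0 if ranked == [-1] else 0.0
    hits, ap = 0, 0.0
    for i, p in enumerate(ranked, start=1):
        if p in relevant:
            hits += 1
            ap += hits / i                 # precision at each relevant rank
    return ap / len(relevant)              # normalise by psi(q)
```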
A.3 Task B Evaluation Measures
Standard MRC tasks, like SQuADv2.0, are evaluated based only on the first prediction. In contrast, task B is evaluated as a ranking task against a ranked list, rather than relying solely on the top prediction. As in task A, systems are expected to return up to 10 answer spans from a given Qur'anic passage to answer a question when possible. The primary evaluation metric for this task is pAP (Malhas and Elsayed, 2020; MALHAS, 2023). This metric incorporates partial matching into the traditional rank-based Average Precision measure, i.e., MAP. In the case of unanswerable samples, the system receives a full score only if it returns an empty ranked list.
Formally, partial matching is performed over the token indexes of two substrings extracted from a given supporting passage. Following Malhas and Elsayed (2020), F1 is used to calculate the similarity between the two substrings R_k and g, where R_k represents the k-th answer from a ranked list R, and g refers to any ground truth answer from the set of ground truth answers G.
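The token-level partial match can be sketched as follows, representing each answer by its (start, end) token-index range within the passage; the names are ours:

```python
def token_f1(pred_span, gold_span):
    """Token-level F1 between a predicted answer R_k and a ground
    truth answer g, the partial-matching ingredient of pAP."""
    pred = set(range(pred_span[0], pred_span[1] + 1))
    gold = set(range(gold_span[0], gold_span[1] + 1))
    overlap = len(pred & gold)
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall)
```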
In their study, MALHAS (2023) introduced a method for handling multi-answer samples. They proposed a string splitting mechanism that ensures only one correct answer is matched by each entry of R. Equ. (6) presents the pAP evaluation metric for multi-answer ranking MRC in terms of pPrec (Malhas and Elsayed, 2022), which stands as a token-level partial-matching version of Equ. (2).
β(R, i) is a binary function that returns one if R_i is a partially relevant answer. In similar fashion, Equ. (8) presents the complete pAP evaluation measure for task B. In this measure, zero-answer samples are given full credit only when R is an empty list (MALHAS, 2023).

A.4 Leakage in QRCDv1.2

Keleg and Magdy (2022) analyzed QRCDv1.1 and identified instances where passages and questions were repeated. They classified QRCDv1.1 into four logical, mutually exclusive categories according to their complexity. Table 7 provides a summary of the criteria used and the expected behavior of trained LMs for each category. Additionally, symbols are employed to indicate the levels of complexity within each category, as determined by the performance scores obtained by Keleg and Magdy (2022). Based on their analysis, Keleg and Magdy (2022) solely utilized D(3) ood+hard for their final development split for QRCDv1.1.
In this work, we extend their approach to QRCDv1.2. We slightly modify it by considering both D(2) and D(3) for the development split. In addition, we employ the disjoint-set algorithm to find all leakage groups in D(1). We use those groups to balance the ratio of zero-answer questions in the development split, because 90% of zero-answer questions belong to the trivial leakage group D(1).
In their work, Keleg and Magdy (2022) also proposed a re-splitting approach for QRCDv1.1. They reorganized the training and development splits using the four logical groups to create what they called faithful splits for QRCDv1.1. Faithful splits aim to create more representative evaluations for the QRCDv1.1 dataset. Table 8 summarizes the modifications we made for performing evaluation using faithful splits. Table 9 presents the distribution of our faithful split for QRCDv1.2 based on our modified splitting strategy outlined in Table 8. It also includes the distribution of zero-answer samples within each group. As shown in Table 9, we preserve the original ratio of training to development splits. Additionally, the percentage of zero-answer samples within each split is preserved compared to the original distribution in Table 2.
A.5 External Learning Resources
We leverage external resources to perform pipelined fine-tuning for both tasks A and B. For task A, we utilized interpretation resources (tafseer) from both Muyassar and Jalalayn, obtained from Tanzil (2007-2023). We created pairs of QPC Qur'anic passages and their corresponding interpretations, resulting in approximately 2.5K relevant pairs. Additionally, we used the Arabic TyDI-QA GoldP dataset (Clark et al., 2020) to generate pairs of relevant questions and their supporting evidence passages, resulting in 15K relevant pairs. For task B, we relied solely on the Arabic subset of the TyDI-QA GoldP MRC dataset (Clark et al., 2020). This dataset consists of approximately 15K question-passage-answer triplets.
Criteria and expected LM behavior (Table 7):
- D(3) ood+hard: samples with unique passages but rarely repeated questions (appearing 3 times or less); some reasoning is required to find the right answer for rare questions.
- D(4) ood+easy: samples with unique passages but commonly repeated questions (more than 3 times); lexical matching guides trained LMs to find similar answers.

Our splitting strategy per category (Table 8):
- D(1): used entirely for training, since D(1) is trivial for development. To balance the zero-answer question ratio, entire zero-answer leakage groups are moved into the development set, found with the disjoint-set algorithm.
- D(2) in+no leakage: split randomly with a ratio of 86.7% for training and 13.3% for development, corresponding to the original ratio of the data.
- D(3) ood+hard: split randomly with the same 86.7%/13.3% ratio.
- D(4) ood+easy: used entirely for training, since D(4) is trivial for development.

Table 8: Description of our modified faithful splitting for the QRCDv1.2 dataset over the four categories introduced by Keleg and Magdy (2022), alongside their proposed splitting approach (Keleg and Magdy, 2022). See Table 7 for more details and the reasons behind these splitting strategies.
B Transfer Learning
In order to overcome the limited training resources for both tasks, we incorporate external QA and interpretation resources (tafseer) (Tanzil, 2007-2023).

External resources enhance our learning systems by leveraging transfer learning across multiple fine-tuning stages (Garg et al., 2020; MALHAS, 2023). We use arrows in the subscripts in Tables 3, 4, 5, and 6 to refer to the stages of fine-tuning. (More details about external learning resources and their construction are in Appendix A.5.)
C Ensemble Learning
We utilize a voting self-ensemble technique for a group of fine-tuned models trained with different seeds (Sagi and Rokach, 2018). We use the raw predictions without applying a zero-answer threshold.
In task A, for an ensemble E, we aggregate the relevance scores assigned by each model φ to a Qur'anic passage p and a question q. The ensemble relevance score S between p and q is the sum over the ensemble's members:

S(q, p) = Σ_{φ ∈ E} φ(q, p)

In similar fashion for task B, we leverage a span voting ensemble (Elkomy and Sarhan, 2022). For each sample, we aggregate the span scores assigned to each span s by each predictor φ:

S(s) = Σ_{φ ∈ E} φ(s)
After that, we apply zero-answer thresholding to the aggregated result.
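The score aggregation can be sketched as below, assuming the summation form reconstructed above; the runs and candidate keys are hypothetical:

```python
from collections import defaultdict

def ensemble_scores(runs):
    """Voting self-ensemble: sum raw relevance scores (task A) or span
    scores (task B) across runs trained with different seeds. `runs`
    is a list of dicts mapping a candidate (passage id or answer span)
    to its predicted score; zero-answer thresholding is applied only
    after this aggregation."""
    total = defaultdict(float)
    for run in runs:
        for cand, score in run.items():
            total[cand] += score
    # Return candidates ranked by their aggregated score.
    return sorted(total.items(), key=lambda kv: kv[1], reverse=True)
```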
D Additional System Details for task A
We summarize both architectures for task A in Figures 4a and 4b for dual-encoders and cross-encoders, respectively.
D.1 Implementation Details
In our STAR training process, we incorporate both random in-batch negatives and hard negatives. Random negatives involve randomly selecting irrelevant documents for each query, providing positive and negative signals to the learning systems (Yates et al., 2021). On the other hand, hard negatives are the most offending irrelevant examples according to an encoder similarity score (Zhan et al., 2021). In a batch of size 16, we encode 16 different queries with their corresponding positive documents; in addition, in-batch negatives are used from all other queries. These negatives can be chosen randomly or through STAR hard negative mining. We use a learning rate of 5 × 10⁻⁵ for all of our dual-encoder experiments. In the case of cross-encoders, we generate question-document pairs at a ratio of one positive pair to three randomly selected negative pairs. For all of our cross-encoders, we use a learning rate of 1 × 10⁻⁶ with a batch size of 16.
D.2 Zero-answer Prediction
We assign a likelihood for each question q to be answerable using the total relevance scores of the top returned passages R, where φ refers to a general relevance predictor between q and a passage p:
γ(q) = −Σ_{p∈R} φ(q, p)
The negative sign reflects the inverse proportional relationship between high relevance scores and the likelihood of unanswerability. We then normalize these scores over all questions into γ̄(q) and apply a no-answer threshold ζ. We define a binary threshold function, σ, which applies the threshold to identify unanswerable questions.
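A sketch of this scoring; min-max normalization over questions is assumed here, since the exact normalization is not stated:

import numpy as np

def zero_answer_likelihood(relevance_scores: np.ndarray) -> float:
    # gamma(q): negative sum of the relevance scores of the top passages R
    return -float(relevance_scores.sum())

def flag_unanswerable(gammas: np.ndarray, zeta: float) -> np.ndarray:
    # Normalize gamma over all questions, then apply the no-answer threshold:
    # sigma(q) = 1 if the normalized gamma exceeds zeta.
    g = (gammas - gammas.min()) / (gammas.max() - gammas.min() + 1e-12)
    return g > zeta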
E Additional System Details for task B
In this work, we fine-tune LMs for extractive MRC as span predictors (Devlin et al., 2019). The fine-tuning process involves packing each question-passage pair x together and feeding it to a LM to predict the start and end token indices from the passage, as shown in Figure 5. To achieve this, a trainable, randomly initialized start vector S and end vector E are stacked on top of the LM, with T_i denoting the hidden representation of the i-th token. The final model with the newly stacked layers has learnable parameters θ.
The dot product between S and T_i determines the score that the i-th token is the start of the answer span. The scores for all passage tokens are followed by a softmax layer that produces the probabilities of individual tokens being the start of the answer span (Seo et al., 2016; Devlin et al., 2019). Equation (13) depicts the probability that the i-th token is the start of the answer span:
P_i = exp(S · T_i) / Σ_j exp(S · T_j)    (13)
Under full supervision, the training objective is to optimize the log-likelihoods of both the ground-truth start and end positions. For a model with learnable parameters θ, an input x, and a single ground-truth answer span y, the log-likelihood for the start token position is
L_s(θ; x, y) = log p_θ(y_s | x)
where the subscript s in y_s refers to the start position of the answer span y; the end position y_e is treated analogously.
If there are multiple answers for a sample x, we instead have a set of plausible answer spans Y. Elkomy and Sarhan (2022), Sleem et al. (2022), and Mostafa and Mohamed (2022) tackled this in Qur'an QA 2022 by considering a single answer span from Y, taken either at random or as the first answer span, namely y¹. We denote the i-th answer from Y as yⁱ. We call this learning method the First Answer Loss (FAL). In terms of Y, it can be formulated as
L_FAL(θ; x, Y) = −log p_θ(y¹_s | x) − log p_θ(y¹_e | x)
Figure 6a illustrates this learning method. However, QRCDv1.2 task B considers a multi-answer MRC scenario, which leads to a discrepancy between training and testing when the FAL learning method is employed for fine-tuning. To this end, we define the MAL learning method, which takes multi-answer cases into consideration by optimizing for all answers altogether. Mathematically, this generalizes to any yⁱ from the set Y and takes the sum of the log-likelihood losses over the multiple answers, as shown in Equation (16):
L_MAL(θ; x, Y) = −Σ_{yⁱ∈Y} [log p_θ(yⁱ_s | x) + log p_θ(yⁱ_e | x)]    (16)
We show the MAL learning method in Figure 6b.
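The two objectives can be sketched as follows (shapes and helper names are illustrative; a real implementation operates on batches):

import torch.nn.functional as F

def span_log_likelihood(start_logits, end_logits, start_idx, end_idx):
    # log p(y_s | x) + log p(y_e | x) for one answer span, given the
    # per-token start/end logits of a single question-passage pair.
    log_s = F.log_softmax(start_logits, dim=-1)[start_idx]
    log_e = F.log_softmax(end_logits, dim=-1)[end_idx]
    return log_s + log_e

def fal_loss(start_logits, end_logits, answers):
    # First Answer Loss: optimize only y^1 from the answer set Y.
    s, e = answers[0]
    return -span_log_likelihood(start_logits, end_logits, s, e)

def mal_loss(start_logits, end_logits, answers):
    # Multi Answer Loss: sum the losses over every y^i in Y.
    return -sum(span_log_likelihood(start_logits, end_logits, s, e)
                for s, e in answers)

# answers is a list of (start, end) token positions, e.g. [(3, 7), (15, 18)].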
E.1 Implementation Details
To enhance LM predictions, we employ a post-processing approach. Elkomy and Sarhan (2022) proposed an effective non-maximum suppression post-processing approach at Qur'an QA 2022 (Malhas et al., 2022), along with operations for rejecting uninformative short answers. For all of our models, we used a learning rate of 2 × 10⁻⁵ and a batch size of 16.
E.2 Zero-answer Prediction
MRC for SQuAD v2.0-like datasets uses the null-answer [CLS] token probability to estimate the likelihood that a question has an answer within the supporting passage (Rajpurkar et al., 2018; Devlin et al., 2019). This works by taking the difference between the null-answer score of the [CLS] token and the score of the highest-scoring non-empty answer span, where φ is a general span extractor that operates on a question q and a passage p:
γ(q, p) = φ(q, p)_[CLS] − φ(q, p)_max
Upon calculating the scores for all samples, we normalize them into γ̄(q) and then apply a threshold value ζ to determine whether there is no answer. To identify unanswerable questions, we use a binary threshold function σ:
σ(q) = 1_{γ̄(q) > ζ}    (18)
F ζ Selection and ζ ⋆
In this work, we defined the ζ hyperparameter for zero-answer thresholding. This hyperparameter controls the proportion of samples that are considered to be zero-answer. Due to the small size of the dataset, we used a quantile method to set ζ; this method marks a proportion of the samples according to the statistics of the dataset. Task B is less sensitive to this parameter because almost 5% of the samples are zero-answer. In contrast, task A is highly sensitive to it because of its larger proportion of zero-answer cases compared to task B. Additionally, we are interested in finding the theoretical upper-bound performance for ζ; this is addressed by RQ3.
In Tables 3 and 4 we use ⋆ accompanied by ζ to refer to the optimal performance on the binary classification problem of has-answer vs. has-no-answer, as explained in Appendices D.2 and E.2. Figure 7 illustrates the thresholding effect on fine-tuned model performance for task A; this answers RQ3. As can be seen, the ζ hyperparameter cannot be set arbitrarily. Instead, we can adjust it by considering the outcomes obtained from trained models on the training data. To find the optimal threshold ζ⋆ for both tasks, we implemented a greedy optimization algorithm over all possible threshold levels produced by a given model; check the code for more details.
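Both the quantile method and the greedy search for ζ⋆ can be sketched as follows (the evaluation metric is a placeholder; the shared-task measures are not reimplemented here):

import numpy as np

def quantile_zeta(train_gammas: np.ndarray, zero_answer_ratio: float) -> float:
    # Quantile method: pick zeta so that roughly the expected proportion of
    # samples (e.g. ~5% for task B) is flagged as zero-answer.
    return float(np.quantile(train_gammas, 1.0 - zero_answer_ratio))

def greedy_zeta_star(gammas: np.ndarray, has_no_answer: np.ndarray, metric) -> float:
    # Greedily sweep every threshold level a given model can produce and
    # keep the one that maximizes the supplied metric.
    best_zeta, best_score = 0.0, -np.inf
    for zeta in np.unique(gammas):
        score = metric(gammas > zeta, has_no_answer)
        if score > best_score:
            best_zeta, best_score = zeta, score
    return best_zeta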
Figure 1: A sample from shared task A. We highlight the most relevant part in each Qur'anic segment.
Figure 2: A sample from shared task B. We highlight the ground truth answers in the Qur'anic passage.
Split them into two overlapping sets, such that confusing examples with the same passages are distributed among training and development with different answers. D (3) ood + hard: Only use it for the development set (removed from training); same as Keleg and Magdy (2022).
Figure 4: Diagrams for model architectures for task A: (a) the dual-encoder architecture and (b) the cross-encoder generic architecture for an input pair of a question and a passage with a predicted similarity score.
Figure 5: Generic architecture illustration of a LM for ranking MRC.
Table 1: Task A dataset relevance pairs distribution across training and development splits. We also include the distribution of answer types per split.
Table 2 depicts the distribution of dataset pairs and triplets for task B.
Table 2: Task B dataset pairs and triplets distribution across training and development splits. For question-passage pairs, we show the distribution of answer types.
Table 3: Dev split evaluation results for task A. For MAP, ζ is set to mark 15% of questions as unanswerable. ⋆ accompanied by ζ refers to applying the best ζ (see Appendix F). Average performance is reported over multiple runs of single models. Superscripts ⊚ and ⊗ in short form refer to dual-encoder and cross-encoder, respectively. Subscripts ∼ and ≈ denote direct fine-tuning and pipelined fine-tuning, respectively.
Table 4 summarizes the results for task B. Our best performing model over the standard split, (ELC M ≈), attained a pAP of 53.36% and 55.21% for single and self-ensemble models, respectively. Table 4 also presents results for the faithful validation split we defined previously; (ARBM ≈) is our best performing single model for the faithful split, achieving a pAP score of 54.19%. Both tables present comprehensive results for different question types, as well as the outcomes for a manually set threshold ζ and for ζ⋆, i.e., the threshold that yields the best performance (see Appendix F for more details about ζ and optimal ζ selection).
Table 4: Dev split evaluation results for task B. For pAP, ζ is fixed to 0.8. The post subscript identifies post-processing. ⋆ accompanied by ζ refers to applying the best ζ (see Appendix F). Average performance is reported over multiple runs of single models. Superscripts F and M in short form indicate the FAL and MAL methods, respectively. Subscripts ∼ and ≈ denote direct fine-tuning and pipelined fine-tuning, respectively. Underlined values mark the higher performance when comparing the two learning methods.
Table 5: Results on the hidden split for task A. ζ is set to mark 15% of questions as unanswerable.
Table 6: Results on the hidden split for task B. ζ is set to mark 5% of pairs as unanswerable.
Figure 3: Distribution of QRCDv1.2 over the 11 topics for task A questions and task B triplets.
Table 7: Description of the four categories introduced by Keleg and Magdy (2022) over the QRCDv1.1 dataset. We show the criteria for identifying each category and the expected behavior of a fine-tuned LM. We denote the complexity of each category using symbols: ¬ ¬ ¬ ¬ represents the most challenging set for learning systems, while ¬ refers to the least challenging set.
For duplicate question-answer or passage-answer pairs, choose only one sample for training and leave the rest for the development set. | 2023-12-05T14:04:21.890Z | 2024-01-23T00:00:00.000 | {
"year": 2024,
"sha1": "a3bd6467d14d571af6181614e35a3d1254c92bcd",
"oa_license": null,
"oa_url": "https://arxiv.org/pdf/2401.13060",
"oa_status": "GREEN",
"pdf_src": "ArXiv",
"pdf_hash": "0b69de76004fc119d242cf835df3d9e430d649e5",
"s2fieldsofstudy": [
"Computer Science",
"Linguistics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
235457287 | pes2o/s2orc | v3-fos-license | The dynamics of rainfall and temperature on peatland in South Sumatra during the 2019 extreme dry season
During the extreme dry season of 2019, massive fires broke out on peatlands in South Sumatera. This study examines the dynamics of rainfall and temperature in the peatlands of South Sumatera during the 2019 dry season as part of fire disaster mitigation efforts. The data used are in situ measurements from the Peatland Restoration Agency's measurement stations on two peatlands in South Sumatera. The results indicate that rainfall from July to October 2019 was very low; at one of the study sites, no rain fell at all in August. This shows that a rainfall anomaly occurred alongside the massive fires, so rainfall could be used as one of the fire control parameters on peatlands. The lack of rainfall in South Sumatera during this period was due to the positive Indian Ocean Dipole phenomenon in the Indian Ocean. The results also show that temperature did not have a clear pattern of relationship with fire events on peatlands.
Introduction
Indonesia has very large tropical peatlands, covering around 20 million hectares. These peatlands are spread across almost all parts of Indonesia, especially the islands of Sumatera, Papua, and Kalimantan. The peatland area on the island of Sumatera is around 6 million hectares, mainly in the South Sumatera, Riau, and Jambi provinces [1][2][3][4][5][6].
Peatlands burn in every dry season, so prevention is necessary. In the extreme dry season of 2019, massive fires on peatlands in South Sumatera burned 328,457 hectares [7]. One prevention method is monitoring the parameters related to peatland fire events; the parameters estimated to be most closely related to peatland fires include rainfall and temperature. Therefore, it is necessary to study the dynamics of rainfall and temperature through an accurate measurement system to prevent future fires.
So far, rainfall and temperature data on peatlands have consisted only of remote sensing data from satellite measurements [8][9][10][11][12][13]. In situ measurements are believed to be closer to the true value than remote sensing measurements. The Indonesian government, through the Peatland Restoration Agency, has set up several stations to measure in situ parameters related to peatland fires: rainfall, temperature, groundwater level, and soil moisture [14,15]. In South Sumatera, several of these stations have been operating since July 2017. This study uses rainfall and temperature data from these stations to assess whether these two parameters changed significantly during the 2019 dry season.
Data
The data used in this study come from in situ measurements at two stations belonging to the Indonesian Peatland Restoration Agency (BRG), located on peatlands in South Sumatera. The data are hourly rainfall and temperature measurements for the period 1 January 2019 to 31 December 2019. Station names and coordinates are shown in Table 1.
Data analysis
The rainfall and temperature data obtained were processed into daily averages and monthly averages. Daily average data are displayed as time series graphs, and monthly average data are shown in tabular form. The resulting graphs and tables were analyzed to assess whether there was a significant change in rainfall and temperature during the extreme dry season of 2019. If a parameter shows a significant change, it can be concluded that it has a close relationship with fire events on peatlands in South Sumatera and can be used as one of the controlling parameters for fire prevention on peatlands.
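As an illustration of this processing step, a short pandas sketch is given below (file and column names are hypothetical; whether rainfall is accumulated or averaged per period is an assumption of the sketch):

import pandas as pd

# Hourly station records with a timestamp column and the two measured variables.
df = pd.read_csv("SS1_2019.csv", index_col="timestamp", parse_dates=True)

daily = pd.DataFrame({
    "rainfall_mm": df["rainfall_mm"].resample("D").sum(),       # rainfall accumulates
    "temperature_c": df["temperature_c"].resample("D").mean(),  # temperature averages
})
monthly = pd.DataFrame({
    "rainfall_mm": df["rainfall_mm"].resample("MS").sum(),
    "temperature_c": df["temperature_c"].resample("MS").mean(),
})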
The dynamics of rainfall
The dynamics of daily rainfall were studied through the analysis of daily rainfall time series graphs and a monthly rainfall data table. The time series graphs of daily rainfall at the two measurement stations, SS1 and SL1, during 2019 are shown in Figure 1, where it can be seen that daily rainfall decreased drastically from July to October 2019. Table 2 shows the monthly rainfall at these two locations: monthly rainfall from July to October 2019 was much lower than in the other months, and in August there was almost no rain at all. This is what caused the peatlands to become very dry, leading to massive fires on the peatlands. The period from July to October 2019 is referred to as the extreme dry season. Figure 1 and Table 2 show a close relationship between rainfall and fire events on peatlands in South Sumatera. If rainfall can be monitored as a control parameter, it can serve as an alternative means of fire prevention on peatlands.
This extreme drought had a negative impact on the agriculture, water resources, forestry, and environment sectors. It was triggered by an anomalous sea surface temperature (SST) pattern in the Indian Ocean, in which the sea surface off East Africa was warmer than the sea surface southwest of Sumatera. This phenomenon, called the Indian Ocean Dipole (IOD), strengthened from April 2019 to December 2019 and caused very low rainfall in the dry season period from July to October 2019 [14,[16][17][18][19][20]]. In general, the 2019 dry season was drier than the 2018 dry season and the 1981-2010 climatological normal, although not as dry as the 2015 dry season, when a strong El Nino phenomenon occurred [18].
The Indian Ocean Dipole (IOD) is an ocean-atmosphere phenomenon in the equatorial region of the Indian Ocean that affects the climate in Indonesia and other countries around the Indian Ocean basin. As the name implies, the IOD is characterized by a Sea Surface Temperature (SST) anomaly between the 'two poles' of the Indian Ocean, namely the West Indian Ocean (50E-70E, 10S-10N) and the Southeast Indian Ocean (90E-110E, 10S-0S). Horizontal temperature variations at sea level are generally influenced by solar radiation and water masses. The difference in SST anomaly between the West and Southeast Indian Ocean regions is called the Dipole Mode Index (DMI) and is used to measure the strength of the IOD. A period in which the DMI is positive is referred to as a positive IOD period (IOD+); conversely, when the DMI is negative it is called a negative IOD period (IOD-) [21,22]. The DMI in 2019 had high values, as shown in Figure 2 [18].
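For illustration, the DMI can be computed from gridded SST anomaly data roughly as follows (a simple unweighted box mean is assumed; operational indices typically apply area weighting):

import numpy as np

def dipole_mode_index(sst_anom: np.ndarray, lat: np.ndarray, lon: np.ndarray) -> float:
    # DMI = mean SST anomaly over the West box (50E-70E, 10S-10N)
    #       minus the Southeast box (90E-110E, 10S-0).
    def box_mean(lat_min, lat_max, lon_min, lon_max):
        mask = ((lat[:, None] >= lat_min) & (lat[:, None] <= lat_max) &
                (lon[None, :] >= lon_min) & (lon[None, :] <= lon_max))
        return np.nanmean(np.where(mask, sst_anom, np.nan))

    west = box_mean(-10.0, 10.0, 50.0, 70.0)
    southeast = box_mean(-10.0, 0.0, 90.0, 110.0)
    return west - southeast  # positive values indicate IOD+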
In an IOD+ period, the waters in the Southeast Indian Ocean are generally colder than average, whereas the waters in the western Indian Ocean are warmer than average. As a result, convection (the initial process of cloud and rain formation) shifts from the East Indian Ocean towards the West and brings abundant rain to the eastern part of the African continent. Conversely, areas of the Eastern Indian Ocean "left behind" by convection (such as Indonesia) suffer from drought.
The characteristics of an IOD- period are the opposite of IOD+: SST in the Southeast Indian Ocean is warmer, while in the West it is colder. This results in drought in Eastern Africa and increased rainfall in Indonesia, especially western Indonesia, which is adjacent to the Indian Ocean.
The dynamics of temperature
The temperature data used are the hourly averages, processed into daily and monthly averages, from the two BRG stations SL1 and SS1 for the period 1 January 2019 to 31 December 2019. The daily average temperature is shown as a time series graph in Figure 3, and the monthly average temperature is shown in Table 3. Figures 3(a) and (b) show a slight decrease in the daily average temperature during the dry season from July to October 2019. Table 3 shows that at the SS1 location the temperature from July to September 2019 was slightly lower than in the other months, while at the SL1 location there was no significant change in temperature during the extreme dry season.
Relating temperature to the IOD+ event from July to October 2019, the sea surface temperature (SST) in the Southeast Indian Ocean was lower than in the West, so the lower SST is presumably related to the low daily average temperature of the peatlands in South Sumatera.
Using temperature as a fire control parameter on peatlands would be difficult, because the pattern of the relationship between temperature and the incidence of fire on peatlands is not clear. The relationship between temperature dynamics and fire events on peatlands in South Sumatera is not significant.
Conclusion
The analysis of the dynamics of rainfall and temperature in the extreme dry season of 2019 shows that rainfall changed very significantly, while temperature also changed but with an unclear pattern. The very significant change in rainfall indicates a close relationship between rainfall and the occurrence of fires on peatlands during the extreme dry season of 2019. The change in the value of these two parameters is due to the IOD+ phenomenon. For controlling fires on peatlands, the rainfall parameter is more suitable than temperature, because rainfall dynamics have a significant relationship with fire events on peatlands. | 2021-06-17T20:02:49.415Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "ca65cd0cebb059031d2fcbb9cd2e87164dc1aec8",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1940/1/012030",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "ca65cd0cebb059031d2fcbb9cd2e87164dc1aec8",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
224828878 | pes2o/s2orc | v3-fos-license | Comparative transcriptomics reveals hidden issues in the plant response to arthropod herbivores
Abstract Plants experience different abiotic/biotic stresses, which trigger their molecular machinery to cope with them. Besides general mechanisms prompted by many stresses, specific mechanisms have been introduced to optimize the response to individual threats. However, these key mechanisms are difficult to identify. Here, we introduce an in‐depth species‐specific transcriptomic analysis and conduct an extensive meta‐analysis of the responses to related species to gain more knowledge about plant responses. The spider mite Tetranychus urticae was used as the individual species, several arthropod herbivores as the related species for meta‐analysis, and Arabidopsis thaliana plants as the common host. The analysis of the transcriptomic data showed typical common responses to herbivory, such as jasmonate signaling or glucosinolate biosynthesis. Also, a specific set of genes likely involved in the particularities of the Arabidopsis‐spider mite interaction was discovered. The new findings have determined a prominent role in this interaction of the jasmonate‐induced pathways leading to the biosynthesis of anthocyanins and tocopherols. Therefore, tandem individual/general transcriptomic profiling has been revealed as an effective method to identify novel relevant processes and specificities in the plant response to environmental stresses.
INTRODUCTION
Plants are organisms subjected to direct and constant interaction with a broad range of stresses present in the environment. Exposure of plants to these stresses induces a disruption in the plant metabolism which leads to a reduction in their fitness and productivity (Rejeb et al., 2014). To cope adequately with these stresses, plants have developed specific mechanisms of resistance which allow them to detect precise environmental changes and respond to undesirable stress conditions. Among biotic cues, arthropod herbivores pose a widespread threat to plants that in the current context of climate change is becoming even more extreme. Climatic warming helps spread pest distribution, accelerate their life cycles, and increase the range of host species for many herbivores (DeLucia et al., 2012). Understanding the mechanisms of how plants are able to detect and recognize a stress, and act against it, is of prime importance in providing opportunities for the establishment of alternative strategies of control.
In this scene, the proper identification and characterization of the genes involved in plant response are crucial. Nowadays, the most popular and direct approach to decipher transcriptomic data is the gene differential expression analysis. Detection of differentially expressed genes (DEGs) represents a powerful way to perform a screening of plant defense-related genes (Schenk et al., 2000;Korth, 2003). The availability of these data is fundamental in accelerating the study of gene functions in plant defense responses. For this reason, gene expression databases have been generated to provide public access. Gene Expression Omnibus and ArrayExpress are two of the most popular repositories of high throughput gene expression data (Edgar et al., 2002;Parkinson et al., 2007). Nevertheless, the use of this information for broad gene expression comparisons remains not a trivial task. Inherent to their underlying technology, microarray data are not as comprehensive as RNA-seq data and not all genes are represented on microarrays. The processed data may not be directly comparable since expression abundance values are provided in different data formats. Furthermore, annotation of the genes can differ across different experiments, making automatic parsing processes not straightforward. Despite that, several approaches to identify functional modules or genes have pointed out the value of stored data for comparative transcriptomics analysis (Ruprecht et al., 2017;Vercruysse et al., 2020). In this scenario, the development of specialized transcriptomics databases where standards are implemented is highly required (Stoeckert et al., 2002;Rung and Brazma, 2013).
Some attempts have been done to create transcriptomics databases for the analysis of the plant immune response, mostly focused on pathogen experiments. For instance, ExPath (Chien et al., 2015) and PathoPlant (Bülow et al., 2004) are two transcriptomics databases for analyzing co-regulated genes in plant defense response. Other examples are the plant stress RNA-seq database Nexus, a stress-specific transcriptome database in plant cells (Li et al., 2018) and PlaD, a database where 2,444 public pathogenesis-related gene expression samples from Arabidopsis, maize, rice and wheat have been analyzed in a similar way to perform comparisons across the different samples (Qi et al., 2018). Regarding arthropod experiments, even less information has been reported, which essentially comes from individual analyses of the transcriptomic response of a plant to a herbivore. Additional useful databases related to the process of recognition of biotic agents have been developed, like the Plant Resistance Gene database 3.0 (PRGdb) (Osuna-Cruz et al., 2018). Nonetheless, despite the considerable amount of information obtained in various studies related to changes in plant gene expression, it is still not deeply understood how plants recognize a particular stress and respond in a rapid way. Deep transcriptomic analysis to an individual species together with a broad meta-analysis of the transcriptomic responses to related species could help go further in this direction.
The interaction between the two-spotted spider mite Tetranychus urticae and the model plant Arabidopsis thaliana has become a good system to adequately explore the response of the plant to a herbivore attack (Santamaria et al., 2020). Thus, starting from a comprehensive analysis of the transcriptomic Arabidopsis response to the spider mite, this work is focused on the capacity of comparative transcriptomics to discover additional key points in this response. Taking together our experimental results and the information generated by the meta-analysis, unknown processes and particularities of the plant response to the spider mite were revealed.
DEGs upon different times of spider mite infestation
The development of the early response of Arabidopsis plants to T. urticae infestation was assessed by the transcriptomic analysis at 30 min, 1, 3, and 24 h of infestation in leaves of the partially resistant Col-0 accession. Principal components analysis (PCA) showed good separation of the samples coming from the three biological replicates in time-related groups using the first two components ( Figure S1A). A first insight on the differential response along the time of infestation came from the variations in the percentage of reads per chromosome. This percentage decreased in chromosomes 1, 4, and 5, increasing in chromosomes 2 and 3 as well as in the mitochondrial and plastid chromosomes ( Figure S1B). These changes were not associated with variations in the amount of paired reads or the number of detected genes with at least one read mapped ( Figure S1C, D).
Differentially expressed genes were obtained for each time point (Data S1). The number of up-regulated genes was higher than the number of down-regulated genes for all the time points ( Figure 1A). An analysis of the overlaps of DEGs along time showed a remarkably high number of genes only up-regulated at 30 min, 629; a substantial set of genes upregulated in all time points, 524; and a minor number of genes only up-regulated at 24 h, 222. By contrast, downregulated genes did not present any particular pattern ( Figure 1B; Data S2). All detected DEGs were used to identify the enriched biological processes in the plant triggered during mite infestation. As expected, ontology terms related to defensive processes were predominantly found ( Figure 1C). These terms included those associated with the jasmonic acid signaling and the indole glucosinolate metabolism. The relationship between expression of DEGs and time of infestation is highlighted in a heatmap ( Figure 1D). Whereas many genes up-regulated in the four time points are grouped in cluster 1, a large group of genes specifically up-regulated at 30 min is present in cluster 4. A close correlation between RNA-seq and quantitative real-time polymerase chain reaction (RT-qPCR) results was observed for the 15 genes tested ( Figures 1E, S2).
Temporal regulation of gene modules upon spider mite infestation
The specificity in the up-regulation of many genes at different time points suggested the possibility of a concomitant dissimilarity in the activation of gene modules. To characterize the temporal similarities/differences in the regulation of specific gene modules, gene networks were performed using the NetworkAnalyst software ( Figure 2A). The obtained networks comprise a mix of interactions between induced and repressed genes, as well as additional interactions with nonregulated genes necessary to connect regulated nodes that share a common putative interactor (Data S3). From these networks, a time-associated pattern arose. Modules of genes related to signal perception and transduction, exocytosis enhancement, and jasmonate defense were rapidly altered, as well as modules of genes related to the control of the cell cycle and light and hormonal responses associated with growth and development. From 3 h of infestation, modules related to the production of secondary metabolites, such as terpenoids and flavonoids, were identified. Upon 24 h of infestation, alterations in genes involved in development were detected again. Further, groups of genes related to sugar transport, redox modulation, and protein folding were identified at the four times of infestation as a part of the connected networks. An analysis of the enriched biological processes for the DEGs at each time point largely agrees with the identified modules (Table S1).
Generation of a core gene framework involved in plant defense against T. urticae
In an attempt to simplify the complex response of the plant to the mite infestation, a new network was constructed based on the centrality features of the networks previously created. The 15 genes with the top betweenness values were selected from each time point (Figure 2A). Betweenness centrality is based on the shortest paths connecting network genes and a high value for a gene indicates that this node is a hub necessary to connect various branches of the network. As shown in Figure 2A, betweenness and degree, the number of connections of each node, exhibited a low correlation. Further, many of the genes with the top betweenness values were not differentially expressed upon mite infestation but were required to construct the connected network. From the analysis of the gene lists, three genes were identified with elevated values of betweenness at the four time points. These genes were UBQ13 and MYC2, which were always up-regulated, and BT2, a gene that did not alter its expression in response to the spider mite. Using these three genes as seed, a core gene framework was constructed using the GeneMANIA database (Figure 2B). This network represents an initial identification of the minimum needed gene modules involved in the establishment of the Arabidopsis response to T. urticae. As expected, a high representation of genes related to response to a stimulus, in particular to jasmonic acid, was found. Further, a large number of genes were found with a potential role in the regulation of the transcription process, such as transcription factors and transcriptional regulators linked to the ubiquitination pathway.
Data selection and analysis of Arabidopsis transcriptomes in response to herbivores
To get additional data useful for increasing the knowledge of the response of Arabidopsis to T. urticae, we extracted an initial collection of microarray and RNA-seq experiments regarding transcriptomic analyses of Arabidopsis plants exposed to different arthropod herbivores. Because of the high variability of the experimental conditions across the transcriptomic experiments, a subset of them was selected to extract more robust conclusions. To be comparable with our RNA-seq experiment, all the selected experiments used Arabidopsis plants of the ecotype Columbia-0 (Col-0), with a preferably vegetative stage of 4 weeks when infested. A summary of the final list of selected experiments is shown in Table S2. The final collection was composed of 28 experiments, 17 with microarray data and 11 with RNA-seq data. Experiments included different herbivores: lepidopterans, mites, aphids, leafminers, thrips, and hemipterans (Figure 3A). Most experiments used foliar tissue from 4-week-old plants. After this initial selection, DEGs were obtained for each experiment. Only data from the experiments of Liriomyza huidobrensis, Brevicoryne brassicae, and Frankliniella occidentalis insects needed to be re-analyzed, as processed data were available for the rest of the experiments. Normalization was performed to reduce the non-biological variability. The final list of DEGs showed high variability in the number of genes detected (Figure 3B). The number of DEGs varied from 127 in the case of the response to Myzus cerasi at 3 h to 2,416 in the case of the response to Pieris rapae at 24 h.
Gene Ontology term enrichment analysis
Once the experiments were selected, the lists of DEGs were compared to determine the similarities or differences present among them (Data S4). For this purpose, a heatmap with the DEGs and experiments was generated ( Figure 3C). The obtained results pointed out the existence of high variability in the plant response. None of the DEGs was differentially expressed in all the experiments, but several groups of DEGs were identified in the response to different species. For example, L. huidobrensis shared a partial common gene induction with T. urticae and P. rapae. Nevertheless, as the number of total DEGs was very large, further approaches were done to compare the modified gene expression among experiments.
As a first step to discover similarities in the global responses, the correlation in the expression of the genes that were differentially expressed in at least 10 experiments was analyzed (Data S5). From the 188 genes with any correlation, 124 showed a correlated expression with another gene of the same fully connected subnetwork. When a heatmap with the expression of these genes was performed, a robust pattern was detected ( Figure 3D). Correlation results were characterized by a general absence of altered expression in aphids accompanied to a common up-regulation for most genes in the other species. This module of coexpressed genes was mainly enriched in terms of biological processes associated with the jasmonic acid response and the metabolism of indole glucosinolates ( Figure 3E).
Clustering of experiments
Differential expression patterns shown in the heatmaps were quite compatible with an accurate clustering. For that reason, hierarchical clustering of the expression patterns was performed to determine which plant responses were more similar to each other. The four clusters option was selected for the hierarchical clustering ( Figure 4A). In cluster 1, all the experiments including aphids were grouped. Further, the experiments performed with the silverleaf whitefly Bemisia tabaci and with the thrip F. occidentalis at 24 h were also included. Cluster 2 was constituted by experiments with the lepidopterans Pieris brassicae, Mamestra brassicae, Spodoptera littoralis and P. rapae at 3 h, the thrip F. occidentalis at 48 h, and the mite Brevipalpus yothersi. In cluster 3, only the experiments performed with P. rapae at 6, 12, and 24 h were included. Finally, in cluster 4, the experiments with the spider mite T. urticae and the leafminer L. huidobrensis were grouped.
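A minimal sketch of such a clustering step (the linkage method, the distance metric, and the random input matrix are illustrative assumptions, not the settings actually used):

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# expr: genes x experiments matrix of log2 fold changes (random placeholder data)
expr = np.random.randn(2000, 28)

# Cluster the 28 experiments by the similarity of their expression profiles.
Z = linkage(expr.T, method="average", metric="correlation")
clusters = fcluster(Z, t=4, criterion="maxclust")  # cut the dendrogram into 4 clusters
print(clusters)  # one cluster label per experiment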
Contribution of experiments and DEGs to clustering
Principal component analysis was performed for visualizing the underlying relationships between the experiments and the DEGs in them. Experiments were used as the source of variation of the DEGs, trying to find those genes that have a special profile of expression through the experiments. These genes would be the most relevant to discriminate among the transcriptomic responses. For this purpose, a PCA biplot was depicted, showing both PCA scores plot with the gene classification and the PCA loading plot, with the weight of the experiments on the PCA ( Figure 4B). The PCA plot showed a quite high number of genes located near the origin and a low number of them far from it. In the case of the experiments, considering the 49.5% of the variability of the data explained by the two first components, the experiments performed with T. urticae, L. huidobrensis and P. rapae exhibited the most different transcriptomic response. Locations in the plot displayed notable similarities with the clustering classification, being the experiments of each cluster located near to each other in the plot. Therefore, according to the biplot, those genes in which classification is strongly positive in component 1 are more likely to be important in the transcriptomic response of the plant to T. urticae, L. huidobrensis and P. rapae. Further, if those genes also have a strongly negative value in component 2, they are more likely relevant genes in the transcriptomic response to P. rapae.
To obtain a higher precision in the analysis, the dimensions explaining the majority of the variability of the PCA were selected ( Figure S3A). Then, the experiments and top 100 genes with the highest contribution to these dimensions were extracted, being those genes relevant in the Arabidopsis response to these experiments ( Figure S3B, C; Data S6). The experiments with the highest contribution in the eight first dimensions of the PCA were those experiments performed with L. huidobrensis, T. urticae and P. rapae. This information was similar to that observed in the PCA biplot. The identifiers of the top 100 genes were also plotted in the PCA biplot to mark their position. All of them showed a prominent contribution in the first PCA dimension, being located on the right part of the biplot. To elucidate their relevance and specificity, an analysis of the differential expression of these 100 DEGs across the four clusters was performed ( Figure S4A). The number of relevant DEGs in each cluster and their intersections were calculated ( Figure S4B, C). Clusters 3 and 4 contained the highest numbers of genes with confidence intervals significantly higher than the log 2 FC mean, many of them shared by both clusters. The individual genes significantly deregulated in clusters 3 and 4 are compiled in Table S3. Three groups of genes could be found, those with significantly higher expression in cluster 4 (20 genes), clusters 3 and 4 (27 genes), or cluster 3 (26 genes). Also, four genes in cluster 3 had a significantly lower expression. Enriched biological processes for the three groups are mainly related to the jasmonic acid signaling pathway, with some particularities ( Figure 4C). Cluster 3 is enriched in genes associated with a direct killing response, cluster 4 in genes leading to the biosynthesis of terpenoids, and several shared induced genes in both clusters are related to the ethylene signaling pathway and the biosynthesis of anthocyanins. When a network was built using the Gene-MANIA tool in Cytoscape, most genes from the three groups were connected ( Figure 4D). However, whereas genes from cluster 4 had a predominant central position in the network, genes from cluster 3 were predominantly located at the periphery of the network or were unconnected with the rest of the nodes.
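For illustration, per-gene contributions to the retained dimensions can be computed along these lines (the squared-score convention used here is one common choice and is an assumption, not the study's stated formula):

import numpy as np
from sklearn.decomposition import PCA

# X: genes x experiments matrix of log2 fold changes (random placeholder data)
X = np.random.randn(2000, 28)

pca = PCA(n_components=8)
scores = pca.fit_transform(X)        # gene coordinates in the PCA space
loadings = pca.components_           # experiment weights per dimension

# Contribution of each gene, weighted by the variance each dimension explains;
# the 100 genes with the highest total contribution are retained.
contrib = (scores ** 2) * pca.explained_variance_ratio_
top100 = np.argsort(contrib.sum(axis=1))[::-1][:100]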
Specificities in the response to T. urticae
As the response of the plant to the different herbivores showed a broad range of putative similarities and specificities, the DEGs only found in the T. urticae experiments were analyzed (Data S7). Among these T. urticae-specific genes under the present data, 323 up-regulated and 228 down-regulated, many DEGs were time point specific (Figure 5A; Data S8). The highest number of up-regulated genes was found at 30 min upon infestation, and the 30 min, 1, and 24 h time points shared similar numbers of down-regulated genes. Interestingly, whereas up-regulated genes at 30 min were enriched in biological processes related to the perception and first steps of signaling, the first biological process enriched in the down-regulated genes at 30 min and 1 h was plant epidermis development (Figure 5B).
Finally, many of the results previously obtained were integrated to establish a preliminary model on the key biological processes involved in the response of the plant to the spider mite infestation. A comparison was performed between: (i) the genes involved in the basal network constructed by the spider mite RNA-seq data (Figure 2); (ii) the genes in which expression was correlated among herbivore experiments (Figure 3); (iii) the genes with a significant contribution to separate the cluster with T. urticae experiments from the rest of the clusters (Figure 4); and (iv) the genes only found deregulated upon spider mite infestation (Figure 5). The Venn diagram showed that only three out of 549 genes specifically deregulated in T. urticae and six of the correlated genes appeared among the 43 genes used to construct the basal network (Figure 6A; Data S9). Further, many of the genes contributing to the spider mite cluster classification showed a correlation among experiments and only one gene was also present in the basal network. From these results, a STRING-based network was built after six rounds of adding nodes to an initial set composed of the three specific genes that appeared in the basal network, the gene shared by the basal and cluster gene sets, and the nine genes that were uncorrelated among experiments and were significantly different from all the other clusters (Figure 6B). Three clusters of functionally related genes were obtained. Enrichment of the biological processes involved showed an expected defensive response related to the signal transduction mediated by jasmonic acid. Further, several metabolic pathways were enriched, such as those related to the production of flavonols and anthocyanins, the metabolism of aromatic amino acids, and the synthesis of tocopherols.
Figure 6: Prediction of the Arabidopsis enhanced responses to Tetranychus urticae infestation. (A) Venn diagram comparing the genes involved in the basal network constructed by spider mite RNA-seq data, the genes in which expression was correlated among herbivore experiments, the genes with a significant contribution to the cluster with the T. urticae experiments, and the genes specifically deregulated upon spider mite infestation. (B) STRING-based network after eight rounds of adding nodes to an initial set composed of the three specific genes that appeared in the basal network, the gene shared by the basal and cluster gene sets, and the nine genes not correlated that were significantly different only in the cluster with the T. urticae experiments. Enriched biological processes for the three putative subgroups of the network are included.
DISCUSSION
One of the major challenges in plant biology is to understand how plants rewire their molecular machinery to cope adequately with abiotic/biotic stresses. Here, we aimed to elucidate the usefulness of comprehensive meta-analysis to discover hidden gaps not covered by individual plant transcriptomics responses to herbivores. For that, we took advantage of the bioinformatics tools and expression databases publicly available. Despite inherent variability, a core of pathways triggered by the main stress may be found. In Arabidopsis, the Bla-2 accession is more resistant to the attack of T. urticae than the Kondara accession, but their responses are commonly based on the induction of the jasmonic acid hormonal pathway and the production of indole glucosinolate (IG) metabolites (Zhurov et al., 2014). Likewise, induction of the metabolic pathway for IG production was found in 19 Arabidopsis accessions upon infestation with different insects, although differences in the up-regulated genes were reported (Sato et al., 2019). Therefore, useful findings may be obtained from an in silico analysis of both, the time course experiment with T. urticae and the meta-analysis comparing transcriptomic data of the Arabidopsis response to different arthropod herbivores. Dissected information from individual modules is required to establish a final molecular model.
Module 1. Dissecting information from the time course experiment
Early responses upon 1 h spider mite infestation were previously described as involved in signaling and regulation of gene expression and mostly maintained until 24 h infestation (Zhurov et al., 2014). More specific information arises on the first regulatory steps concerning gene expression by taking an earlier time point. In fact, a substantial number of genes with a rapid and transient up-and down-regulation appeared upon 30 min mite infestation. As expected, this set of genes was enriched in both extra and intracellular receptors, in genes related to signaling by calcium levels and kinase/ phosphatase activities, and in genes involved in the vesicular transport of proteins.
To unravel the meaning of extensive changes in gene expression, the information coming from the differential expression analyses must be properly processed. Enrichment of Gene Ontology terms and network analyses are useful to establish key processes and molecular connections. Jasmonic acid signaling and IG production were pointed out as the most remarkable events in the response of Arabidopsis to the spider mite, with a certain role for salicylic acid (SA) signaling (Zhurov et al., 2014). As expected, these biological processes were enriched in our set of data along the time course. Network algorithms based on protein-protein interactions have been postulated as key tools to adequately provide a systems view of plant defense (Windram et al., 2014). In the present analysis, a large number of individual DEGs became connected when the NetworkAnalyst program was applied. Therefore, a snapshot of the interactions between proteins at different time points arises. These interactions permit the discovery of important genes connecting functionally related modules that were not up-or down-regulated upon a mite attack. Nodes with high connectivity and betweenness are enriched in conditional phenotypes and are positively related to the interaction with pathogen effectors and the modulation of plant immunity (Ahmed et al., 2018). These nodes are called hubs, defined as the highest connected central proteins in scale-free proteinprotein interaction networks (Vandereyken et al., 2018). Our centrality measures detected the TF MYC2, and the ubiquitinrelated proteins UBQ13 and BT2 as putative hubs connecting functional modules along with the response of Arabidopsis plants to spider mites. The relevance of MYC2 in the response to herbivory has been broadly documented, with a crucial role in the signaling pathway activated by jasmonic acid (Kazan and Manners, 2013). BT2 has been proposed as an essential component connecting and integrating multiple signaling routes, including the jasmonic acid pathway (Mandadi et al., 2009), and was identified as the most central element of the nitrogen use efficiency molecular network (Araus et al., 2016). These central features of BT2 may be explained by its scaffolding role. BT2 binds to calmodulin and interacts with CUL3 forming an E3 ubiquitin-protein ligase complex and with the general transcription activators GTE9 and GTE11 (Du and Poovaiah, 2004;Figueroa et al., 2005;Gingerich et al., 2005). Finally, UBQ13 is a ubiquitinencoding gene, and the ubiquitin system has been described as a signaling hub for the integration of environmental signals (Miricescu et al., 2018).
Starting from these three central proteins, the network obtained by adding interacting proteins to connect them could be established as a basal frame of the molecular plant response to the spider mite. As expected, this core frame englobes the response to a stimulus associated with the jasmonic acid signaling pathway and the regulatory mechanisms carried out by TFs and the ubiquitination system. Connections between nodes reflect the relevance of the three hub proteins and their interactions with other key proteins. For example, the interactions of MYC2 with the GA-related protein RGL3 or the cytokinin-signaling regulator AHP5 reflect the previously reported participation of MYC2 in the crosstalk of jasmonic acid with other hormonal signaling pathways (Jang et al., 2020). PUB10 and TIC are two proteins that negatively regulate MYC2 by ubiquitination or repression (Shin et al., 2012;Jung et al., 2015). Finally, several MYB transcription factors (TFs) are connected to MYC2 in the network. Members of the MYB and bHLH TF families act together regulating the biosynthesis of secondary metabolites. MYB21 interacts with MYC2 to control the expression of terpene synthase genes (Yang et al., 2020); MYB28, 29, 34, and 122 interact with MYC2, both being involved in the production of indole and aliphatic glucosinolates (Schweizer et al., 2013); and MYB24 together with a set of MYB-bHLH interacting partners regulate the biosynthesis of flavonoids and anthocyanins (Xu et al., 2014;Battat et al., 2019).
Module 2. Dissecting information from the transcriptomic meta-analysis
Individual information on one species may be significantly enriched with data from related species. The ultimate goal is to increase knowledge on several issues related to individual data, like the specificity in the response of the plant to a herbivore or the plant response patterns common to arthropod herbivores feeding on Arabidopsis. The observed specificity in the response was strikingly high. Regarding the two RNA-seq experiments with four times-associated samples, more than 500 genes were exclusively deregulated upon T. urticae infestation, a number that was even higher upon P. rapae infestation, nearly 900 genes. In a previous large-scale transcriptome analysis in Arabidopsis based on microarray data from 14 pathogen species, more than 25% of deregulated genes were species-specific (Jiang et al., 2017), confirming the enormous plasticity of the Arabidopsis response.
However, most responses trigger a set of common signaling pathways related to plant defense. The set of genes with a correlated expression in response to herbivores was enriched in genes related to jasmonate and nitrogen compound response or glucosinolate metabolism. These categories have been broadly associated with biotic stresses. As broadly known, jasmonic acid signaling is a conserved core pathway in herbivore-induced responses (Wang et al., 2019). The production of IG occurs in response to many biotic attacks, being secondary metabolites toxic to a broad range of microorganisms, nematodes and insects (Wittstock and Burow, 2010). Likewise, the chitin of phytopathogenic fungi, nematodes and arthropods is recognized by the plant, activating innate or adaptive plant defense responses (Jiang et al., 2019). These features support that the identified set of genes rightfully belongs to a basal signaling pathway triggered by herbivory. This common pathway would be modulated by additional inputs coming from the specificities in the perception of each herbivore species. Inputs are directly related to outputs. Our analysis clearly points to the association of several genes to specific responses. The expression patterns showed by these genes strongly build the most robust gene-cluster associations, which were found in the responses against P. rapae or the cluster formed by T. urticae and L. huidobrensis. Similarities between T. urticae and L. huidobrensis experiments suggest a common recognition and defense response by the plant, which could be in some way explained by their feeding features. Both species feed on the palisade and spongy mesophyll of the leaf causing cell death only of the consumed cells (Bensoussan et al., 2016;Weintraub et al., 2017).
Module 3. Compiling information to discover the underlying key molecular aspects
Virtually, mining of transcriptomes and secondary analyses should offer realistic clues on the particularities involving the plant response in an individual plant-herbivore interaction. However, there is not an optimal way to deal with the analysis of meta-transcriptomes due to the variability in the approaches and conditions used in the correction of experimental bias and the subjective interpretations of integrated data. Consequently, an intuitive assay based on the previous results of analysis emerges as the sub-optimal method to extract conclusions. Although uncertainties are likely found, this kind of analysis entails substantial contributions to robust species-based studies. In a first attempt to disentangle the principal features of the Arabidopsis response to T. urticae infestation, several considerations were taken into account to combine the transcriptomic analysis of the response to T. urticae with the transcriptomic meta-analysis of Arabidopsis responses to arthropod herbivores.
First, jasmonic acid signaling, response to chitin, and glucosinolate metabolism represent the master responses against herbivores in Arabidopsis plants. As these processes are the most enriched by the genes with a correlated expression among experiments, they could be included in the set of regulated genes with a low species specificity. Second, the predicted basal network for the Arabidopsis response to T. urticae comes from a previous selection of the most probable genes acting as hubs derived from the individual response to T. urticae and could be involved in the specificity of the response. Third, an elevated number of genes was only detected as deregulated upon T. urticae infestation, which could equally be involved in the particular pathways triggered by T. urticae. Fourth, the genes that are significantly contributing to the clustering of experiments have a reasonable probability to be involved in specific rewires in the transcriptional response. Thus, the 10 genes significantly more expressed in the cluster with the T. urticae experiments that were uncorrelated, and the three genes included in the predicted basal network and specifically induced by T. urticae are the best candidates to participate in the enhanced responses triggered by the specific perception associated with T. urticae herbivory.
Interestingly, these likely essential mite-regulated genes connect jasmonate and defensive responses with metabolic pathways leading to the production of anthocyanin-containing compounds and terpenoid-related metabolites. Connections between jasmonic acid response and anthocyanin synthesis are mediated by MYC2 and the induction of specific WD-repeat/ bHLH/MYB modules (Xu et al., 2014). These specific TFs, like PAP1 (MYB75), and the bHLH genes TT8 and GL3 were upregulated upon mite infestation. As expected, these activated complexes led to the up-regulation of several enzymes involved in the biosynthesis of anthocyanins, such as the dihydroflavonol reductase DFRA, the leucoanthocyanidin dioxygenases/anthocyanidin synthases LDOX and ANS, or the anthocyanin glucosyltransferases UGT75C1 and UGT79B1 (Saito et al., 2013). Further, two specific pathways in the biosynthesis of terpenoidrelated metabolites appeared as de-regulated. The first route leads to the synthesis of the herbivore-induced volatile C16-homoterpene TMTT (E,E-4,8,12-trimethyltrideca-1,3,7,11tetraene) from GGPP (geranylgeranyl diphosphate). TMTT influenced the foraging behavior of predatory mites when emitted from lima bean leaves infested by spider mites (de Boer et al., 2004). The enzymes involved in the two steps of the route, TPS04/GES and CYP82G1, were induced upon T. urticae attack, supporting a relevant role for TMTT in the Arabidopsis response. The second route connects tyrosine metabolism with tocopherol production. A key enzyme is the mite-induced tyrosine aminotransferase TAT3, which was previously shown to be upregulated by wounding and jasmonic acid (Yan et al., 2007). TAT3 catalyzes the reversible transamination from tyrosine to form 4-hydroxyphenylpyruvic acid (pHPP). pHPP can be converted to homogentisic acid, the aromatic precursor of tocopherols and plastoquinone. Tocopherols have been associated in Arabidopsis to an effective basal resistance against compatible Pseudomonas syringae and the activation of defenses when challenged with Botrytis cinerea (Cela et al., 2018;Stahl et al., 2019). The up-regulation of HPT1, APG1, and VTE2-2, enzymes involved in the biosynthesis of tocopherols from homogentisic acid, supports an undescribed relevant role of these compounds in the coordinated response to T. urticae infestation.
In conclusion, the combination of our own transcriptomic data with data from public repositories enables us to predict novel relevant processes and specificities involved in the Arabidopsis response to the spider mite. Dual individual/general analysis of transcriptomic responses should therefore be considered a robust tool to be integrated into biotechnological projects. In the next few years, new data from RNA-seq experiments and novel bioinformatics tools will allow the construction of more robust databases and better analyses. As a consequence, an exceptional expansion of knowledge on how crops recognize and respond to different biotic agents can be expected.
MATERIAL AND METHODS
Plant material and growth conditions
Arabidopsis thaliana L. Col-0 accession was used. Seeds were sown in autoclaved peat moss and vermiculite (3:2 v/v) and incubated for 5 d in the dark at 4°C. Plants were then grown in growth chambers (Sanyo MLR-350-H) under control conditions (23°C ± 1°C, >70% relative humidity, and a 16-h/8-h day/night photoperiod).
Spider mite maintenance and plant infestation
A colony of T. urticae, London strain (Acari: Tetranychidae), provided by Dr. Miodrag Grbic (UWO, Canada), was reared on beans (Phaseolus vulgaris) and maintained in growth chambers (Sanyo MLR-350-H) at 25°C ± 1°C, >70% relative humidity, and a 16-h/8-h day/night photoperiod. Three-week-old plants were infested with 20 T. urticae female adults per plant; the mites were carefully transferred to the leaf surface with a brush. Plant material was harvested after 0 h, 30 min, and 1, 3, and 24 h of infestation.
RNA-seq library preparation, sequencing, alignment, and DEG analysis
Total RNA was isolated and purified using the RNeasy Plant Mini Kit (74904, Qiagen), including the on-column DNase I (79254, Qiagen) digestion recommended by the manufacturer. RNA quantity and quality were assessed with a NanoDrop ND-1000 spectrophotometer.
Total RNA was sent to the Centre for Genomic Regulation (CNAG-CRG; Barcelona, Spain). Double-stranded cDNA libraries obtained from purified mRNA were sequenced using Illumina HiSeq 2000 high-throughput sequencing technology. More than 40 M paired-end reads were obtained for each sample (n = 3). Three biological replicates from three independent experiments were used. For each biological replicate, six rosettes were pooled and frozen in liquid nitrogen. Reads were mapped to the Arabidopsis reference genome (Ensembl release 39, TAIR10) using STAR aligner version 2.5.3a (Dobin et al., 2013) with ENCODE standard options for long RNA-seq. Mapped reads were quantified at the gene level with RSEM version 1.3.0 with default parameters (Li and Dewey, 2011). Differential expression analysis was performed with DESeq2 version 1.18 (Love et al., 2014) with default settings. Size factor calculation and dispersion estimation were performed with samples from all time points together. For hypothesis testing, the Wald test with the "contrast" function was used to compare groups of interest (always using time 0 as the reference group). Differentially expressed genes were defined as those showing an adjusted p-value <0.05 and a log2 ratio (fold change) higher than 1. Venn diagrams were generated using the Venny 2.1 tool (Oliveros, J.C., 2007-2015, https://bioinfogp.cnb.csic.es/tools/venny/index.html). Gene enrichment analyses were performed with the Bonferroni step-down test using the ClueGO package (Bindea et al., 2009) in Cytoscape (Shannon et al., 2013). Comparison of total DEGs across selected experiments was conducted using Instant Clue software (Nolte et al., 2018), which performs hierarchical clustering to classify the experiments and generates a heatmap to visualize similar DEG patterns. The datasets generated during the current study are available in the ArrayExpress repository, accession number E-MTAB-9448. Real-time RT-qPCR analysis for expression comparisons is described in Supporting Methods.
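As a rough illustration of the DESeq2 step described above, the following R sketch assumes an RSEM gene-level count matrix and a sample table with a time factor; the file name, sample labels, and contrast are illustrative, not taken from the study:

library(DESeq2)

# Assumed input: RSEM expected counts (genes x samples), rounded to integers
counts  <- round(as.matrix(read.delim("rsem_gene_counts.txt", row.names = 1)))
coldata <- data.frame(time = factor(rep(c("t0", "t24"), each = 3),
                                    levels = c("t0", "t24")),
                      row.names = colnames(counts))

dds <- DESeqDataSetFromMatrix(countData = counts, colData = coldata,
                              design = ~ time)
dds <- DESeq(dds)  # size factors, dispersion estimation, Wald test

# Wald test contrast against time 0, as described in the text
res <- results(dds, contrast = c("time", "t24", "t0"))

# DEG criteria from the text: adjusted p < 0.05 and log2 fold change > 1
degs <- subset(as.data.frame(res), padj < 0.05 & abs(log2FoldChange) > 1)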
Searches in transcriptomic databases and analysis of selected experiments
To examine the transcriptomic responses to biotic stresses mediated by phytophagous arthropods, we searched different public repositories for gene expression patterns under diverse biotic stress conditions. Microarray and RNA-seq experiments were collected from Expression Atlas (http://www.ebi.ac.uk/gxa), ArrayExpress (https://www.ebi.ac.uk/arrayexpress), or the National Center for Biotechnology Information Gene Expression Omnibus (https://www.ncbi.nlm.nih.gov/geo/), using "transcriptomics", "biotic stress", "Arabidopsis", or "herbivore" as keywords, with no restriction on publication date. Transgenic and resistant genotypes were excluded from the analysis. Analyses of microarray and RNA-seq data are described in Supporting Methods. Comparisons of DEGs across selected experiments were conducted using Instant Clue software (Nolte et al., 2018). The ExpressionCorrelation plugin for Cytoscape (http://www.baderlab.org/Software/ExpressionCorrelation) was used to identify correlated genes among experiments. The similarity matrix was computed using a threshold of 0.95 for the Pearson correlation coefficient.
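A base-R equivalent of the similarity-matrix step might look as follows; the input matrix (genes in rows, experiments in columns, log2FC values) and its file name are assumptions for illustration:

# Pearson correlation between gene profiles across experiments
expr   <- as.matrix(read.delim("deg_log2fc_by_experiment.txt", row.names = 1))
simmat <- cor(t(expr), method = "pearson", use = "pairwise.complete.obs")

# Retain gene pairs above the 0.95 threshold used in the text
diag(simmat) <- NA
idx <- which(simmat > 0.95, arr.ind = TRUE)
idx <- idx[idx[, 1] < idx[, 2], , drop = FALSE]  # one row per pair
correlated <- data.frame(gene1 = rownames(simmat)[idx[, 1]],
                         gene2 = colnames(simmat)[idx[, 2]],
                         r     = simmat[idx])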
Clustering and analysis of experiments and DEGs
To understand the similarities and differences in the response of A. thaliana to different herbivores, hierarchical clustering of the DEG lists was performed. To that end, Euclidean distances were calculated and clustered following Ward's linkage method with the hclust function in the "stats" package of R (v.3.5.2). Principal component analysis was also performed using the princomp function in R (v.3.5.2). To examine the classification of DEGs and the underlying relationship between experiments and DEGs, eigenvalues and contributions were calculated. Experiments were used as variables to identify genes whose expression responds specifically to particular experiments. The most important dimensions for the analysis were selected based on the explanation of over 70% of the total variation and the presence of an inflection point in the scree plot (Cattell, 1966). Using this information, the experiments with the highest contribution to the variability of gene expression and the top 100 DEGs responding most specifically to this variation in these dimensions were extracted. Their expression across the previously generated clusters was then analyzed to identify the most relevant DEGs. Because of the nature of the data, bootstrapped non-parametric bias-corrected and accelerated (BCa) confidence intervals of the log2FC for each DEG in each cluster were calculated with the percentiles 0.025 and 0.975 and 10,000 replications using the boot.BCa function in R. DEGs were considered to have a relevant behavior in a cluster when the mean log2FC of all genes across all experiments was not included within their bootstrapped confidence interval.
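The clustering, PCA, and bootstrap steps named above can be sketched in R as follows; 'expr' (a genes-by-experiments log2FC matrix), the gene ID, and the cluster column indices are placeholders, not values from the study:

# Hierarchical clustering of experiments (Euclidean distance, Ward's linkage)
d  <- dist(t(expr), method = "euclidean")
hc <- hclust(d, method = "ward.D2")
plot(hc)

# Principal component analysis, as in the text
pca <- princomp(expr)
summary(pca)  # proportion of variance per dimension, for the scree plot

# Bootstrapped BCa confidence interval of the log2FC of one DEG in one cluster
library(boot)
mean_fun <- function(x, i) mean(x[i])
b <- boot(expr["AT1G01010", cluster3_cols], mean_fun, R = 10000)
boot.ci(b, conf = 0.95, type = "bca")  # 0.025/0.975 percentiles, 10,000 reps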
Molecular networks
Several available tools were used to construct gene molecular networks. NetworkAnalyst is a platform that builds molecular networks based on the qualitative expression of a gene and the protein-protein interactions generated in the STRING database version 11.0 (Szklarczyk et al., 2019; Zhou et al., 2019). Using a confidence score higher than 900, requiring experimental evidence, and selecting the minimum connected network option, the significant genes were mapped to the corresponding molecular interaction database. In addition, the GeneMANIA tool for Cytoscape (Montojo et al., 2010) and the STRING database itself were used to construct protein-protein interaction networks of increasing complexity.
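For the STRING-based step, the Bioconductor STRINGdb package offers a programmatic route; the sketch below is an assumption-laden illustration (Arabidopsis taxon ID 3702, placeholder gene IDs), and note that the experimental-evidence-only filter described above is applied in NetworkAnalyst rather than here, where only the combined score is thresholded at 900:

library(STRINGdb)

string_db <- STRINGdb$new(version = "11", species = 3702,
                          score_threshold = 900, input_directory = "")

degs_df <- data.frame(gene = c("AT1G01010", "AT1G01020"))  # illustrative IDs
mapped  <- string_db$map(degs_df, "gene", removeUnmappedRows = TRUE)
string_db$plot_network(mapped$STRING_id)  # protein-protein interaction network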
SUPPORTING INFORMATION
Additional Supporting Information may be found online in the supporting information tab for this article: http://onlinelibrary.wiley.com/doi/10.1111/jipb.13026/suppinfo
Data S1. Differentially expressed genes in Arabidopsis at four times of mite infestation
Data S2. Lists of specific and shared up- or down-regulated genes corresponding to Figure 1B
Data S3. List of genes in networks built from differentially expressed genes in Arabidopsis at four times of mite infestation
Data S4. Differentially expressed genes in Arabidopsis upon infestation using different herbivores
Data S5. Differentially expressed genes in Arabidopsis with a correlated expression upon infestation using different herbivores
Data S6. List of genes in clusters with a significant deviation from the mean value
Data S7. Differentially expressed genes specifically deregulated in Arabidopsis upon infestation with Tetranychus urticae
Data S8. Lists of specific and shared up- or down-regulated genes corresponding to Figure
Figure S4. Analysis of confidence intervals for clustering-responsible genes. (A) Representation per cluster of the bootstrapped confidence intervals against the log2FC mean from the top 100 clustering-responsible genes. Confidence intervals significantly different from the log2FC mean for each cluster are colored in red. (B) Number of genes significantly different from the log2FC mean in each cluster. (C) Venn diagram showing the specific and shared significant genes for each cluster.
Table S1. Enriched biological processes upon 30 min, 1, 3, and 24 h of Tetranychus urticae infestation
Table S2. Selected transcriptomic experiments for the analysis of the response of plants to herbivore attack. Accession numbers are provided as Gene Expression Datasets Series (GSE) from the Gene Expression Omnibus (GEO) platform, as E-MTAB accession numbers from the ArrayExpress database, as SRP accession numbers from the Sequence Read Archive (SRA), or as NASCarray experiments from the Nottingham Arabidopsis Stock Centre's microarray database. An asterisk (*) indicates that a re-analysis of the data was performed
Table S3. Individual genes significantly deregulated in clusters 3 and 4
Table S4. Oligonucleotide sequences for quantitative real-time polymerase chain reaction analysis
Supporting Methods. Quantitative real-time polymerase chain reaction analysis and analyses of stored microarray and RNA-seq data | 2020-10-22T18:55:06.424Z | 2020-10-21T00:00:00.000 | {
"year": 2021,
"sha1": "627c03e84ee5a9692aef2ee419d2b07d9f2a33f4",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/jipb.13026",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "be38d480d26961dbb84c1bf24b637153490c1d2f",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
20429553 | pes2o/s2orc | v3-fos-license | Enhancement of Ca-contractions by catecholamines and temperature dependency in the depolarized taenia coli of the guinea-pig.
Guinea-pig taenia coli suspended in a depolarizing solution without Ca responded to Ca by a contraction. The effects of adrenaline, noradrenaline and isoprenaline on the contractile response were investigated at 20°C and at 37°C. Contractile response to Ca was not significantly affected by adrenaline and noradrenaline, but was reduced by isoprenaline at 37°C. At 20°C, all the three catecholamines inhibited Ca-contraction. The potency in producing the inhibition was isoprenaline, adrenaline, noradrenaline, the same sequence of potency as related to beta-receptor mediated responses. Treatment with a beta-receptor blocking agent markedly reduced or abolished the inhibitory effects, and consequently the Ca-contraction was enhanced by adrenaline and noradrenaline at 37°C. Treatment with an alpha-receptor blocking agent increased the inhibitory effects so that adrenaline and noradrenaline reduced the Ca-contraction at 37°C as well as at 20°C. The order of potency in producing the inhibition after alpha-receptor blockade was again isoprenaline, adrenaline, noradrenaline. It was concluded that the stimulatory effect is mediated by alpha-receptor activation and the inhibitory effect by beta-receptor activation. After beta-receptor blockade and at 37°C, adrenaline and noradrenaline applied after Ca caused further development of tension. The contractile response to adrenaline resembles the slow component of the ACh-induced contraction, regarding time course and susceptibility to removal of Ca from the extracellular medium. Adrenaline enhanced the Ca-contraction to the same extent before and after frequent exposures to ACh to deplete cell-bound Ca. These findings suggest that the enhancement of the Ca-contraction by adrenaline and noradrenaline is dependent on increased permeation of Ca ions across the cell membrane.
It is well known that catecholamines such as adrenaline, noradrenaline and isoprenaline cause relaxation of intestinal smooth muscle and that the relaxing action is mediated by both adrenergic alpha- and beta-receptor mechanisms (1)(2)(3)(4). The initial effect is suppression of the spontaneous discharge of the action potentials (3)(4)(5)(6), usually with hyperpolarization of the muscle membrane associated with an increase of the membrane conductance mainly due to potassium and to chloride (3,7). It has been established that Ca ions are essential for the action of the catecholamines (8,9). The effects of raising the external concentration of Ca on the membrane potential and membrane resistance resemble those of catecholamines (10,11). These observations have led to the view that the catecholamines may alter the process of Ca-binding in and Ca-removal from the membrane, resulting in a change in the distribution of Ca ions in the membrane, and thereby changing the permeability (9,10).
It is generally accepted that Ca ions are the final activator of the contractile system of smooth muscle, as in skeletal muscle, and it is assumed that the smooth muscle membrane may play a role in controlling the concentration of intracellular free Ca ions, analogous to the sarcoplasmic reticulum of skeletal muscle, since the cellular Ca exchanges rather quickly with the extracellular Ca (12), the sarcoplasmic reticulum is very poorly developed (13,14), and, owing to the small size of the cells (surface/volume ratio approaching 1:1), the possibility has been considered by Peachey & Porter (14) that the contractile elements may be activated sufficiently fast by diffusion of Ca across the cell membrane or by liberation of Ca from the membrane structure.
Thus, if the view presented by Bulbring and Tomita (9,10) that the amines change the process of Ca-binding in and Ca-removal from the membrane is correct, one can expect that the drugs may also directly affect the contractile responses to Ca of smooth muscle depolarized in a high-K medium. The aim of the present work was to investigate the effects of catecholamines on the responses to Ca of the depolarized taenia coli of the guinea-pig at different temperatures and to draw inferences about the movements of Ca from the mechanical changes produced by the catecholamines. It was found that the catecholamines enhanced the Ca-responses via alpha-receptor activation, and this effect could be due mainly to increased permeation of Ca ions across the cell membrane.
A preliminary account of some of these observations has been given (15).
The experiments were performed at 20±1°C and at 37±1°C. After equilibration with the Tyrode solution for approx. 10 min at 20°C and 60 min at 37°C, the preparations were transferred to a Ca-free, K-rich Tyrode solution in which Ca contamination is as low as 0.04 mM (referred to as K-Tyrode in this paper): the NaCl was replaced with equimolar KCl or K2SO4, the CaCl2, NaH2PO4 and NaHCO3 were omitted, and 5 mM Tris-maleate buffer (pH 7.4) was added. In most of the experiments, KCl-Tyrode was used as the depolarizing solution. Some experiments were performed with K2SO4 instead of KCl as the depolarizing solution. The results obtained under these different conditions were not qualitatively different.
The volume of the bathing fluid was 3.0 ml. Calcium chloride and drugs were added to the bathing fluid from concentrated solutions by rapid injection of a small volume. The maximum rate of rise in tension also increased, but less consistently.
Frequently, the tracing of tension development at 37°C showed a curve with a hump, as illustrated in Fig. 1b.
The hump made it difficult to measure the rate of rise in tension.
It can also be seen in Fig. 1 that, if the catecholamines change the process of Ca-binding in and Ca-removal from the membrane (9,10), leading to changes in the intracellular Ca ions available for the contractile elements, one can expect that the drugs would affect responses to these Ca concentrations to a great extent. Each exposure of the strips to Ca was limited to 3 min at 20°C and to 2 min at 37°C, although sometimes contractions did not reach a steady level. The tension developed within these periods, however, was found to be highly reproducible provided that 15-min (at 20°C) or 10-min (at 37°C) intervals between applications were allowed.
In each experiment, the sensitivity of the muscle strip to Ca was first stabilized (Table 1: differences between the mean values in a and b).
It is well known that adrenaline and noradrenaline activate both alpha- and beta-receptors, whereas isoprenaline almost solely activates beta-receptors, and that the inhibitory effect in the depolarized taenia coli is mediated by beta-receptor activation (16). In keeping with the concept that the Ca ions which activate smooth muscle contraction can be supplied from more than one site (18)(19)(20), the present finding may be interpreted as reflecting two sources of Ca ions which are differentially affected by changing the extracellular concentration of Ca. One source, responsible for the fast contraction, may be a cellular site, and another, responsible for the slow contraction, may be the extracellular space. Assuming this to be true, adrenaline appears to act through increasing Ca entry from the extracellular medium into the cells, whereas ACh appears to act through increasing the mobilization of Ca ions from both Ca pools. At this stage the contractile response to 2 mM Ca was observed (Fig. 5-a). The same muscle strip was then subjected to a second series of ACh exposures to deplete Ca in the same manner as described above. After this, adrenaline, 5 x 10-' M, was introduced to the bath, and then the response to the same dose of Ca was obtained (Fig. 5-b). In eight experiments, the average response in the presence of adrenaline was 125.0% of the reference contraction (ranging from 106 to 144%). The average value is comparable to that in strips unexposed to ACh (125.4%, see Table 1-f), which means that the stimulatory effect of adrenaline is not reduced after a possible depletion of the cellular Ca store. Therefore, the possibility that the enhanced contractile response to Ca in the presence of adrenaline may be due to a release of Ca ions from a cellular depot is remote.
DISCUSSION
The results presented in this paper confirmed the observations made by previous authors that catecholamines caused inhibition or a fall in tension of the K-contracture and of Ca-contractions of the guinea-pig taenia coli through beta-receptor activation (16,23).
In addition, the present results clearly indicate that adrenaline and noradrenaline can also cause enhancement of the Ca-contraction through alpha-receptor activation.
One of the possible reasons why the stimulatory effect has not been observed hitherto may be that the action of catecholamines on the Ca-contraction has not been investigated at a higher temperature (37°C). In fact, the stimulatory effect was strongly temperature-dependent, to the point where it was difficult to demonstrate the alpha-effect at a lower temperature
(20°C). The result could be explained by a reduction of the alpha-effect or an augmentation of the beta-effect at low temperature. If the alpha-effect is mediated by accelerated Ca entry due to increased permeation of this ion across the cell membrane, as suggested in the present paper, it could be effectively reduced when the Ca concentration gradient between the bathing solution and the tissue is lowered. Bauer, Goodford and Huter (24) observed that the tissue Ca content of the guinea-pig taenia coli increased after exposure to room temperature, and they suggested that the elevated Ca content was due to inhibition of a Ca extrusion mechanism which requires an energy supply. This may also be the case in the depolarized taenia coli and, consequently, the alpha-effect may be reduced. It has been suggested that some metabolic process is involved in the beta-effect, which supplies energy for the process of removing Ca ions from the environment of the contractile elements (9). Therefore, by assuming that the metabolic process related to the beta-effect is saturable and that it is saturated to a lesser extent at a low temperature, an augmentation of the beta-effect could be expected.
Two possible reasons for the difficulty in demonstrating the alpha-effect at a lower temperature have been mentioned. There may still be, however, additional underlying causes.
To fully comprehend the difference in the effects of catecholamines on Ca-contractions at different temperatures, detailed information concerning the genesis of the Ca-contraction and the temperature dependency of each step of that genesis is required, as well as knowledge of the mechanism of action of the catecholamines.
In similar experiments on rabbit uterus and rat seminal vesicle, contractile responses to adrenaline were obtained in polarized as well as in depolarized muscle preparations (25). The taenia coli of the guinea-pig is exceptional in that adrenaline produces the opposite effect through alpha-receptor activation before and after membrane depolarization.
In view of the importance of Ca inside the cell for regulating contraction, it is logical to assume that the stimulatory effect of adrenaline and noradrenaline may be due to an increase in the concentration of intracellular Ca ions which activate smooth muscle contraction. The catecholamines may possibly act by liberating Ca ions from a cellular store and/or by increasing the permeation of Ca ions across the cell membrane.
It has been suggested that ACh causes a liberation of cell-bound Ca (20)(21)(22) as well as an increase in the membrane permeability to the ions (26,27). The present findings are that (1) the ACh-induced contraction is composed of two components, a fast and a slow one, and (2) the fast component of the ACh-contraction has a much faster rate of rise than a Ca-contraction of about the same magnitude, and the tension is lost within less than 15 sec. If the present inference made about the movements of Ca from the mechanical changes produced by the catecholamines is correct, changes in fluxes of Ca would be observed in association with the increased or decreased contractile response to Ca. There are, however, no studies of Ca fluxes under these conditions available. Briggs and Melvin (29) observed that in the rabbit aorta, contractions by adrenaline were accompanied by increased 45Ca influx, and Grossman and Furchgott (30), in the guinea-pig atria, showed that adrenaline increased exchange of 45Ca in association with the increased force of contraction.
Many available reports suggest that intracellular Ca is an important regulator of membrane permeability to cations as well as of the contractile elements. Recently, it was found that high internal Ca caused an increase in the potassium permeability of the human red cell membrane (31,32). In polarized smooth muscle, catecholamines may also allow more Ca to enter the cells via the alpha-receptor mechanism, as in depolarized smooth muscle, and thereby increase Ca ions in the intracellular fluid. The increment of intracellular Ca ions would be expected to cause an increase in the membrane permeability to potassium in the way suggested by Bulbring and Tomita (9,10). Thus, hyperpolarization of the membrane and reduction of the membrane resistance would result (3,6,7,10).
"year": 1973,
"sha1": "a27bed5f8a08c01f1b2a441f9acc1591ccb9048b",
"oa_license": null,
"oa_url": "https://www.jstage.jst.go.jp/article/jphs1951/23/4/23_4_467/_pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "2454978c514d9cc92efde57a3647cab6c8956708",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
4424468 | pes2o/s2orc | v3-fos-license | Competence of Aedes aegypti, Ae. albopictus, and Culex quinquefasciatus Mosquitoes as Zika Virus Vectors, China
In China, the prevention and control of Zika virus disease has been a public health threat since the first imported case was reported in February 2016. To determine the vector competence of potential vector mosquito species, we experimentally infected Aedes aegypti, Ae. albopictus, and Culex quinquefasciatus mosquitoes and determined infection rates, dissemination rates, and transmission rates. We found the highest vector competence for the imported Zika virus in Ae. aegypti mosquitoes, some susceptibility of Ae. albopictus mosquitoes, but no transmission ability for Cx. quinquefasciatus mosquitoes. Considering that, in China, Ae. albopictus mosquitoes are widely distributed but Ae. aegypti mosquito distribution is limited, Ae. albopictus mosquitoes are a potential primary vector for Zika virus and should be targeted in vector control strategies.
Zika virus is a mosquitoborne flavivirus that poses a serious threat worldwide (1). Because cases of Zika virus disease in humans have been sporadic and symptoms mild, Zika virus has been neglected since its discovery in 1947 (2,3). The first major Zika virus outbreak was reported on Yap Island in Micronesia in 2007 (4). However, the Zika virus disease outbreak in French Polynesia during 2012-2014 surprised the public health communities because of the high prevalence of Guillain-Barré syndrome (5). In addition, the ongoing Zika virus epidemic in the Americas since 2015 was associated with congenital infection and an unprecedented number of infants born with microcephaly (6,7). In 2015, the Zika virus epidemic spread from Brazil to 60 other countries and territories; active local virus transmission (8) and cases of imported Zika virus disease are occurring all over the world (9,10). In view of the seriousness of the epidemic, the World Health Organization declared the clusters of microcephaly and Guillain-Barré syndrome a Public Health Emergency of International Concern (11).
Experimental studies have confirmed that Aedes mosquitoes, including Ae. aegypti, Ae. albopictus, Ae. vittatus, and Ae. luteocephalus, serve as vectors of Zika virus (12)(13)(14)(15). However, vector competence (ability for infection, dissemination, and transmission of virus) differs among mosquitoes of different species and among virus strains. Ae. aegypti mosquitoes collected from Singapore are susceptible and could potentially transmit Zika virus after 5 days of infection; however, no Zika virus genome has been detected in saliva of Ae. aegypti mosquitoes in Senegal after 15 days of infection (12,14). Ae. albopictus mosquitoes are a secondary vector for Zika virus transmission (16). In Italy, the population transmission rate is lower and the extrinsic incubation period is longer in Ae. albopictus than in Ae. aegypti mosquitoes (17). Transmission of Zika virus may also involve mosquitoes of other species such as those of the genera Anopheles and Culex; the virus had been detected in An. coustani and Cx. perfuscus mosquitoes from Senegal (18,19).
In February 2016, China recorded its first case of Zika virus infection in Jiangxi Province; the case was confirmed to have been caused by virus imported from Venezuela (20). Since then, 13 cases caused by imported Zika virus have been reported from several provinces (21); no evidence of autochthonous transmission has been found. In China, Ae. aegypti mosquitoes are found only in small areas of southern China, including Hainan Province and small portions of Yunnan and Guangdong Provinces (22). The predominant mosquitoes across China, especially in cities, are Ae. albopictus and Cx. quinquefasciatus (23,24); Ae. albopictus mosquitoes are the primary vector of dengue virus (family Flaviviridae) (25). Cx. quinquefasciatus mosquitoes are the primary vector for the causative organisms of St. Louis encephalitis, Rift Valley fever, lymphatic filariasis, and West Nile fever (26). The potential for Cx. pipiens mosquitoes to be Zika virus vectors (27) needs further confirmation. Because cases of Zika virus disease caused by imported virus have been reported in China, we investigated the potential vectors.
Mosquitoes
The Guangdong Provincial Center for Disease Control and Prevention collected Ae. albopictus and Cx. quinquefasciatus mosquitoes from different sites in the cities of Foshan (in 1981) and Guangzhou (in 1993) in Guangdong Province. In 2005, the China Center for Disease Control and Prevention collected Ae. aegypti mosquitoes from the city of Haikou in Hainan Province. All mosquitoes were maintained under standard insectary conditions of 27 ± 1°C, 70%-80% relative humidity, and a light:dark cycle of 16 h:8 h. To obtain enough individuals for the experiments, we collected eggs from mosquitoes of all 3 species and hatched them in dechlorinated water in stainless steel trays. The larvae (150-200/L water) were reared and fed daily with yeast and turtle food. Pupae were put into 250-mL cups and placed in the microcosm (20 cm × 20 cm × 35 cm cage covered with nylon mesh) until they emerged. Adults were kept in the microcosms and given 10% glucose solution ad libitum.
Zika Virus
Zika virus (GenBank accession no. KU820899.2), provided by the Guangdong Provincial Center for Disease Control and Prevention, was originally isolated from a patient in China in February 2016 and classified as the Asian lineage (28,29). The virus had been passaged once via intracranial inoculation of suckling mice and twice in C6/36 cells. In the laboratory at Southern Medical University (Guangzhou, China), C6/36 cells were infected by virus stocks at a multiplicity of infection of 1 and left to grow at 28°C for 5-7 days. The cells were suspended, separated into aliquots, and stored at -80°C. The frozen virus stock (3.28 ± 0.15 log10 copies/µL) was passaged once through C6/36 cells before the mosquitoes were infected. The fresh virus suspension (5.45 ± 0.38 log10 copies/µL) was used to prepare the blood meal.
Infection of Mosquitoes
We transferred 5-7-day-old female Ae. aegypti, Ae. albopictus, and Cx. quinquefasciatus mosquitoes to 500-mL cylindrical cardboard containers covered with mesh, where they were starved for 24-48 h. The infectious blood meal was prepared by mixing defibrinated sheep blood (Solarbio, Beijing, China) with fresh virus suspension at a ratio of 1:2. The blood meal was warmed to 37°C and transferred into a Hemotek blood reservoir unit (Discovery Workshops, Lancashire, UK). Mosquitoes were then fed by using the Hemotek blood feeding system. Quantitative reverse transcription PCR (qRT-PCR) was used to detect the virus concentration (copy level) in the blood meal before and after feeding. After 30 min of exposure to the infectious blood meal, mosquitoes were anesthetized with diethyl ether. Fully engorged females were transferred to 250-mL paper cups covered with net (10-15 mosquitoes/cup). The infected mosquitoes were provided with 10% glucose and maintained in an HP400GS incubator (Ruihua, Wuhai, China) at 28°C, 80% relative humidity, and a light:dark cycle of 16 h:8 h. The experiments were conducted according to standard procedures in a Biosafety Level 2 laboratory.
Zika Virus Infection in Whole Mosquitoes
To determine Zika virus infections in Ae. aegypti, Ae. albopictus and Cx. quinquefasciatus mosquitoes, we selected 18-30 mosquitoes at days postinfection (dpi) 0 (the day of the blood meal), 4, 7, 10, and 14. Each mosquito was placed in 50 µL TRIzol (Ambion, Life Technologies, Carlsbad, CA, USA) and homogenized in a tissue grinder (Kontes, Vineland, NJ, USA). Total RNA was extracted according to the TRIzol reagent manufacturer's protocol and dissolved in 20 µL RNase-free water.
Each 20-µL qRT-PCR reaction was amplified on a 7500 Real-Time PCR System (Applied Biosystems, Foster City, CA, USA) under the following conditions: 1 cycle at 50°C for 2 min and 95°C for 2 min; 40 cycles at 95°C for 15 s, 60°C for 15 s, and 72°C for 1 min. Zika virus RNA copies in each sample were quantified by comparing the cycle threshold value with the standard curve. The efficiency of this qRT-PCR system was evaluated by using a blank control, uninfected C6/36 cells, C6/36 cells infected with Zika virus or DENV, and mosquitoes infected with Zika virus or DENV; the results showed that its detection limit is 6.23 copies/µL of Zika virus and that its specificity is 100%.
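The standard-curve quantification mentioned above can be expressed as a simple linear fit of Ct against log10(copies) for the standards, inverted for unknowns; all numeric values in this R sketch are illustrative, not measurements from this study:

std <- data.frame(copies = 10^(1:6),
                  ct     = c(33.1, 29.8, 26.4, 23.0, 19.7, 16.3))
fit <- lm(ct ~ log10(copies), data = std)

# Invert Ct = b0 + b1 * log10(copies) to estimate copies from a sample Ct
ct_to_copies <- function(ct) {
  b <- coef(fit)
  10^((ct - b[1]) / b[2])
}
ct_to_copies(24.5)  # estimated Zika virus RNA copies for one sample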
Zika Virus Infection in Mosquito Tissues
To further analyze Zika virus tropisms and vector competence in Ae. aegypti, Ae. albopictus, and Cx. quinquefasciatus mosquitoes, we infected another batch of mosquitoes with Zika virus and then dissected the midgut, head, and salivary glands of each mosquito at dpi 0, 4, 7, 10, and 14 by using 18-30 mosquitoes per time point. The legs and wings of mosquitoes were removed and placed into cold phosphate-buffered saline. Each tissue was dissected and washed 3 times in phosphate-buffered saline and transferred to 50 µL TRIzol (30). Following the above-mentioned procedure, we extracted total RNA, and the NS1 region of Zika virus from samples was detected by RT-PCR. The viral RNA copies from the positive samples were quantified by qRT-PCR. For those mosquitoes with Zika virus-negative midguts by RT-PCR and qRT-PCR, which we considered to be uninfected, we did not further analyze the heads and salivary glands. Vector competence of mosquitoes of 3 species was evaluated by calculating infection rate (no. infected midguts/no. tested midguts), dissemination rate (no. infected heads/no. infected midguts), transmission rate (no. infected salivary glands/no. infected midguts), and population transmission rate (no. infected salivary glands/no. tested mosquitoes).
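The four rates defined above reduce to simple ratios; the sketch below writes them as an R helper with illustrative counts:

rates <- function(n_tested, n_midgut, n_head, n_saliv) {
  c(infection_rate        = n_midgut / n_tested,
    dissemination_rate    = n_head   / n_midgut,
    transmission_rate     = n_saliv  / n_midgut,
    population_trans_rate = n_saliv  / n_tested)
}
round(rates(n_tested = 30, n_midgut = 27, n_head = 20, n_saliv = 9), 3)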
Statistical Analyses
All statistical analyses were performed by using SPSS version 20.0 (IBM, Chicago, IL, USA). Logistic regression was used to compare the infection, dissemination, and transmission rates for different mosquito species at the same time or for the same species of mosquito at different times. p values were corrected with Bonferroni adjustments. The Zika virus RNA copy levels were log-transformed and then compared among mosquitoes of different species at the same time or among mosquitoes of the same species at different times by using post hoc Tukey honest significant difference tests.
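In R rather than SPSS, the comparisons described above might be sketched as follows; the data frames (midgut_df with a binary infected indicator, titer_df with copy numbers) are assumed placeholders:

# Logistic regression of infection status on species, Bonferroni-corrected
fit  <- glm(infected ~ species, family = binomial, data = midgut_df)
pval <- summary(fit)$coefficients[, "Pr(>|z|)"]
p.adjust(pval, method = "bonferroni")

# Tukey HSD on log-transformed Zika virus copy levels
aov_fit <- aov(log10(copies) ~ species, data = titer_df)
TukeyHSD(aov_fit)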
The amount of Zika virus from the mosquitoes with midgut infection was further tested by qRT-PCR. The trend for mean Zika virus copies in Ae. aegypti and Ae. albopictus mosquitoes was an increase with time after infection, but that for Cx. quinquefasciatus mosquitoes was a decrease (Figure 1, panel B). For Ae. aegypti mosquitoes, Zika virus copies increased quickly from dpi 0 to 4 (p<0.05 by Tukey honest significant difference test), then increased gradually. For Ae. albopictus mosquitoes, the trend for copy levels of Zika virus was similar to that for Ae. aegypti mosquitoes, but levels were slightly lower before dpi 7 (p<0.05). However, the copy levels were the same for mosquitoes of the 2 species at dpi 10 and 14 (p>0.05). For Cx. quinquefasciatus mosquitoes, the virus copy levels were low before dpi 7 and totally diminished afterward (Figure 1, panel B).
Vector Competence of Mosquitoes after Oral Challenge
The infection, dissemination, and transmission rates for Zika virus were assessed by detecting infection status of mosquito midguts, heads, and salivary glands. Another 414 mosquitoes (138 from mosquitoes of each species) were infected by Zika virus, and the midguts were measured; the overall infection rates were 89.86% for Ae. aegypti, 87.68% for Ae. albopictus, and 15.94% for Cx. quinquefasciatus mosquitoes (Table). At dpi 0, 100% of midguts were infected because of the undigested blood meal containing the virus, while no virus appeared in other tissues. High infection rates were maintained in Ae. aegypti and Ae. albopictus mosquitoes during the experimental period; no significant difference between Ae. aegypti and Ae. albopictus mosquitoes was found at dpi 4, 7, and 10 (z = 1.706, 1.777, 0.401; p>0.05) (Figure 2, panel A). At dpi 14, the infection rate for Ae. albopictus was higher than that for Ae. aegypti mosquitoes (z = 1.971; p = 0.04873). Compared with the infection rates for Ae. aegypti and Ae. albopictus mosquitoes, that for Cx. quinquefasciatus mosquitoes was significantly lower at dpi 4 (z = -5.081, -4.539; p<0.01) and 7 (z = -4.682, -4.264; p<0.01), and no midguts were positive for Zika virus at dpi 10 and 14 ( Figure 2, panel A).
The dissemination of Zika virus in the heads of Ae. aegypti mosquitoes started from dpi 4 and increased rapidly up to 100% after dpi 7 (Figure 2, panel B). The spread of Zika virus in the heads of Ae. albopictus mosquitoes was first detected at dpi 7, and the rate was lower than that for Ae. aegypti mosquitoes at the same time point (z = -3.832; p<0.05) (Figure 2, panel B). Peak dissemination occurred during dpi 4-7 for Ae. aegypti (z = 4.344; p<0.001) and 7-10 for Ae. albopictus (z = 3.543; p<0.001) mosquitoes. Overall, Zika virus infection was disseminated in 73.39% of midgut-infected Ae. aegypti mosquitoes but only 42.15% of midgut-infected Ae. albopictus mosquitoes (Table). Zika virus was not detected in the head tissues of Cx. quinquefasciatus mosquitoes.
The amount of Zika virus in mosquito midguts, heads, and salivary glands was measured by qRT-PCR. The Zika virus copies (log10) in midguts of Ae. aegypti, Ae. albopictus, and Cx. quinquefasciatus mosquitoes did not differ significantly at dpi 0 (p>0.05). For Ae. aegypti mosquitoes, the Zika virus copies (log10) in midguts rose rapidly to 5.96 ± 0.92 at dpi 4, higher than at dpi 0 (5.00 ± 0.34) (p<0.05). Levels then increased continuously over time and reached 6.82 ± 0.47 at dpi 14 (Figure 3, panel A). For Ae. albopictus mosquitoes, the increase in mean Zika virus copies was slow before dpi 7 and significantly lower than that for Ae. aegypti at the same time points (p<0.05). After that, the growth of Zika virus became rapid, and the Zika virus copies (log10) at dpi 14 reached 7.20 ± 0.48, exceeding those in Ae. aegypti mosquitoes (p<0.05) (Figure 3, panel A). However, the amount of Zika virus continued to decrease in Cx. quinquefasciatus mosquito midguts after infection (Figure 3, panel A).
Discussion
Because of the absence of vaccines and specific treatment, the major approach to prevention and control of Zika virus disease is vector control (31). Identification of the mosquito species that could transmit Zika virus and determination of the extrinsic incubation period of Zika virus will provide a guide for vector control. In this study, we demonstrated experimentally that Ae. aegypti and Ae. albopictus mosquitoes in China possess the ability to transmit Zika virus, whereas Cx. quinquefasciatus mosquitoes were not able to transmit the virus under our laboratory conditions. Our results demonstrate that Ae. aegypti mosquitoes could serve as vectors to spread Zika virus in China and that Ae. aegypti mosquitoes were better vectors than Ae. albopictus mosquitoes because the transmission rate was higher and the extrinsic incubation period was shorter for the former. The strong vector competence of Ae. aegypti mosquitoes could be associated with rapid Zika virus reproduction in the midgut during dpi 0-4, which enabled the viral particles to easily overcome the midgut barrier, be released into the hemolymph cavity, and invade the salivary gland (32). Our findings are consistent with those for Ae. aegypti mosquitoes from Singapore and Italy (12,17). Although the distribution of Ae. aegypti mosquitoes is very limited in southern China, ranging from latitude 22°N to 25°N (33), the high susceptibility of Ae. aegypti mosquitoes to Zika virus requires that authorities in China pay close attention to potential local epidemics of Zika virus in these regions.
Under the same experimental conditions, the whole-mosquito infection rates and midgut infection rates for Ae. albopictus and Ae. aegypti mosquitoes were similar, but the replication of Zika virus in the midgut was slower for Ae. albopictus mosquitoes. The dissemination and transmission of Asian genotype Zika virus by Ae. albopictus mosquitoes in China started on dpi 7 and 10, respectively, which indicated lower vector competence than that of Ae. albopictus mosquitoes from Singapore infected with East African genotype Zika virus from Uganda but higher than that of Ae. albopictus mosquitoes from the Americas infected with Asian genotype Zika virus from New Caledonia (13,34). Although the extrinsic incubation period was longer for Ae. albopictus than for Ae. aegypti mosquitoes, Ae. albopictus mosquitoes are widely distributed in China, especially in Guangdong Province, where dengue is often epidemic (35). Moreover, Ae. albopictus mosquito density and survival time have increased with urbanization (36,37). Taken together, these findings indicate that Ae. albopictus mosquitoes can potentially become the primary vector for Zika virus in China and need attention in the vector control strategy.
Cx. quinquefasciatus are common blood-sucking mosquitoes in China, especially in southern cities, and are the vector of Western equine encephalitis virus (38). However, in this study, at dpi 0, all Cx. quinquefasciatus mosquitoes had ingested the virus, but the infection rate and Zika virus copies gradually decreased and no virus was detected in any tissues after dpi 7. The few positive midgut samples before dpi 7 could have resulted from an undigested blood meal because Cx. quinquefasciatus mosquitoes are larger and might take more blood than Aedes mosquitoes. Our results illustrate that Cx. quinquefasciatus mosquitoes in China are not able to transmit Zika virus, a finding that is consistent with the Zika virus susceptibility of Cx. pipiens mosquitoes from Iowa, USA, and Cx. quinquefasciatus mosquitoes from Rio de Janeiro, Brazil (15,39). However, our results contradict those of Guo et al., which indicated that Cx. p. quinquefasciatus mosquitoes are potential Zika virus vectors in China (40). These contradictory results might come from different experimental conditions, virus strains, or mosquito species and need more study.
In our study, Zika virus from C6/36 cells or infected mosquitoes was sensitively and specifically identified by qRT-PCR. We used qRT-PCR to detect virus copies because the Zika virus strain isolated from the patient who imported the virus into China can infect C6/36, Aag2, and Vero cells but did not show obvious cytopathic effect, which could be associated with the patient's mild clinical signs. Furthermore, previous research proved that the viral copies calculated by qPCR were consistent with the PFU detected by plaque assay (41). Although passage of the Zika virus we used in C6/36 cells was relatively low, the preliminary result demonstrated the highest virus reproduction in C6/36 cells compared with Aag2 and Vero cells.
In conclusion, our findings indicate that in China, Ae. aegypti and Ae. albopictus mosquitoes are susceptible to Zika virus, whereas Cx. quinquefasciatus mosquitoes are not able to transmit the imported Zika virus. Comparatively, the vector competence of Ae. albopictus mosquitoes is inferior to that of Ae. aegypti mosquitoes, but considering their wide distribution, Ae. albopictus mosquitoes might become the primary vector for Zika virus in China. These updated findings can be used for Zika virus disease prevention and vector control strategy. | 2017-06-20T20:29:20.965Z | 2017-07-01T00:00:00.000 | {
"year": 2017,
"sha1": "2c268d469bb077565f6b5f23bae2654f934daab4",
"oa_license": "CCBY",
"oa_url": "https://wwwnc.cdc.gov/eid/article/23/7/pdfs/16-1528.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "20c6966fe0bfa97b76c9541fa017e24550781721",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
22383139 | pes2o/s2orc | v3-fos-license | Genotoxicity of a Low-Dose Nitrosamine Mixture as Drinking Water Disinfection Byproducts in NIH3T3 Cells
N-nitrosamines (NAms), which can arise as byproducts of disinfection agents, are reportedly found in drinking water, and their potential carcinogenicity is a concern; however, little research exists regarding the genotoxicity or carcinogenicity of NAms exposure as a low-dose mixture. The three most common NAms components in China's drinking water are N-nitrosodimethylamine (NDMA), N-nitrosodiethylamine (NDEA) and N-nitrosomethylethylamine (NMEA). Thus, we assessed the genotoxic and carcinogenic potential of these compounds and examined the cell cycle and gene expression. The data show that exposure to the NAms mixture doubled the revertants in the TA98 and TA100 S. typhimurium strains and increased the DNA double-strand breaks and the micronucleus frequency in NIH3T3 cells compared to a single exposure. After long-term NAms mixture exposure, malignant transformation of NIH3T3 cells and a significantly increased G2/M distribution were observed. Furthermore, P53, CDK1, P38, CDC25A and CyclinB expression was down-regulated in the NAms-mixture exposure group; however, the P21 and GADD45A genes were up-regulated. Interestingly, the CHK1/CHK2 and CDC25A genes showed two distinct responses, depending on the NAms concentrations. Thus, we observed mutagenic, genotoxic and carcinogenic effects after a low-dose NAms-mixture exposure in drinking water, and DNA repair and apoptosis pathways may contribute to these adverse effects.
Introduction
N-nitrosamines (NAms) are disinfection byproducts (DBPs) [1,2] found in drinking water, and their potential carcinogenicity is a concern [3]. Nine NAms compounds have been identified, as well as their molecular structures, physical traits, classifications and risks, and these data appear in Supplemental Table S1 and Figure S1 [4,5]. The International Agency for Research on Cancer (IARC) has classified eight NAms compounds as probable (Group 2A) or possible (Group 2B) human carcinogens [6]. The US Environmental Protection Agency (EPA), the Agency for Toxic Substances and Disease Registry and the Department of Health and Human Services also suggest that this group of NAms may be considered a human carcinogen that is hazardous at low concentrations. In addition, some developed countries have exposure limits for NAms. Ontario, Canada has set a drinking water standard for N-nitrosodimethylamine (NDMA) of 9 ppt [7]. The California Department of Health Services has set notification levels for NDMA, N-nitrosodiethylamine (NDEA) and N-nitrosodipropylamine (NDPA) at 10 ppt and seeks to decrease this to 3 ppt [8]. The National Institute for Public Health and the Environment (RIVM, Netherlands) proposed a provisional guideline value for NDMA in drinking water of 12 ppt. These actions are precursors to formal regulations; however, most countries have not developed guidelines for NAms exposure due to a lack of sufficient risk-assessment data.
Exposure to NAms has been shown to be associated with tumors in epidemiological studies of human and laboratory animals [9]. NDMA, NDEA and N-nitrosomethylethylamine (NMEA) are highly mutagenic compounds that are suspected human carcinogens, and it is estimated that NDEA as low as 0.2 ppt in drinking water is associated with a 10 −6 increased lifetime cancer risk [10]. NDMA is carcinogenic in experimental animals through several exposure routes, including the ingestion of drinking water. Most studies have focused on a single substance at a high concentration; however, many NAms coexist in drinking water, and individual toxicity may differ from a mixture exposure, which could be toxic at low doses [11,12]. Presently, humans and environmental species are exposed to an almost infinite number of possible chemical combinations; thus, evidence of low-dose exposures to mixtures of environmental chemicals is of interest. Therefore, we investigated NAms mixtures at low doses to understand the health risk of pollutants in drinking water.
For this study, we selected a dose-addition approach and the most sensitive transformation cell line (NIH3T3) and common NAms compounds (NDMA, NDEA and NMEA) to assess the genotoxic and mutagenic potential and the possible molecular mechanism underlying low-dose exposure to a NAms mixture in drinking water.
Database of exposure
Meta-analysis was performed to summarize average NAms concentrations in drinking water. PubMed, Web of Science and Google Scholar were used to review the existing literature (from 1980 to 2015) [13][14][15][16][17][18][19][20] regarding NAms exposure in China's drinking water. The search terms were "N-nitrosamines", "Disinfection Byproducts or DBPs", "Drinking Water" and "China or Chinese" in various combinations. We selected eligible full texts and contacted the authors to confirm information when necessary.
Cell viability assay
Cytotoxicity was measured using a CCK-8 assay kit (Kumamoto, Japan) [21]. We used 1-30,000-fold concentrations of NAms to measure toxicity in NIH3T3 cells. Ten thousand cells with five replicates were plated in 96-well microplates and cultured for 24 h at 37°C. Then, the cells were incubated for an additional 72 h in media containing different NAms concentrations. Subsequently, optical density (OD) was measured at 450 nm with a Bio-Rad microplate reader. Each experiment was repeated three times. The 50% lethal concentration (LC50) was calculated using a dose-response curve.
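One common way to obtain an LC50 from such a dose-response curve is a four-parameter log-logistic fit, for example with the R 'drc' package; the data values below are illustrative only:

library(drc)

dat <- data.frame(dose      = c(1, 10, 100, 1000, 10000, 30000),
                  viability = c(98, 91, 80, 61, 32, 12))
m <- drm(viability ~ dose, data = dat, fct = LL.4())
ED(m, 50, interval = "delta")  # dose giving a 50% response, i.e. the LC50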
Ames test
TA98, TA100, TA97 and TA102 Salmonella typhimurium strains were cultured (1 x 10^9 cells/ml) overnight, and 0.5 ml S9 mix or PBS, 0.1 ml NAms and 0.1 ml bacterial suspension were mixed in tubes and cultured for 1 h at 37°C with shaking (100 times/min). Then, 2 ml of top agarose was added to each tube and poured onto the underlying medium. The mixture was incubated for 48 h at 37°C before counting the revertant colonies. Each test was performed in triplicate with positive and negative controls, as shown in Table S2. A chemical was regarded as positive when the number of revertant colonies was at least twice that of the negative control [22].
Comet assay
A comet assay was performed as in previous studies [23]. NIH3T3 cells were treated with different concentrations of NAms for 24 h, and cell viability remained >75%. H2O2 (500 μg/ml) and DMSO (0.5%) were used as positive and negative controls, respectively. Cells were embedded in an agarose micro-gel and lysed. DNA was denatured and electrophoresed under alkaline conditions (pH 13) and stained with EB solution (20 μg/ml) for 10 min. At least 100 randomly selected cells were analyzed for each group, in triplicate, using fluorescence microscopy (Nikon, Japan). To quantify DNA damage, the percentage of tail DNA was calculated using a CASP image analysis system (CaspLab, Poland) [24].
8-OHdG assay
After exposure to NAms, the NIH3T3 cell supernatant was centrifuged at 3,000 rpm for 10 min. We added 50 μl of standard solution to the standard wells, 10 μl of sample and 40 μl of dilution buffer to the sample wells, and then 100 μl of HRP-conjugate reagent to each standard and sample well. The plate was then incubated for 1 h at 37°C and washed five times. Next, 50 μl of TMB and HRP chromogenic substrates were added to each well and incubated for 15 min in the dark at 37°C, and the reaction was then stopped with 50 μl of stop solution. OD was measured, and the 8-OHdG concentration was calculated. Each treatment was carried out in triplicate [25].
Cytokinesis-block micronucleus (CBMN) assay
A CBMN assay was performed following the Organisation for Economic Co-operation and Development test guideline (OECD TG 487) [26]. The NIH3T3 cells were exposed to different NAms levels for 40 h (1.5-2 normal cell cycles). Mitomycin C (1 μM) and 0.5% DMSO were used as positive and negative controls. At least 2,000 binucleated cells were scored per group under a fluorescence microscope (Nikon, Japan). Micronuclei (MNi), nuclear buds (NBUDs) and nucleoplasmic bridges (NPBs) were scored [26,27]. The experiments were repeated three times.
Cell colony formation assay
NIH3T3 cells were used due to their wide applicability in cell malignant transformation studies. We seeded the NIH3T3 cells into a 6-well plate (100 cells/well). After culturing for 24 h at 37°C, the cells were treated with different NAms concentrations, positive control (3-methylcholanthrene, 3-MCA), solvent control (0.5% DMSO) or negative control (distilled water) for 72 h. After washing twice with PBS, the cells were cultured for a further seven days at 37°C, with the medium refreshed every three days. Then, the cells were fixed with methyl alcohol and stained with 10% Giemsa, and the colonies with more than 50 cells were counted [21,28]. These counts were used to quantify colony-forming efficiency (CFE) and relative colony-forming efficiency (RCFE), calculated as follows: CFE = (number of colonies formed/number of cells plated) x 100%; RCFE = (CFE of the treated group/CFE of the negative control group) x 100%.
Cell transformation assay
The NIH3T3 cells were seeded at a density of 2,000 cells/dish (10 cm) and cultured for 24 h. Cells were treated as in the cell colony formation assay for 72 h. After rinsing with PBS, the cells were cultured for a further 14 days at 37°C, with the medium replaced every three days. The cells were then stained with 10% Giemsa, and the transformation frequency (TF) was calculated as follows [28]: TF = [total number of transformed colonies per treatment/(total cells plated per treatment x CFE)] x 100%
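The CFE, RCFE, and TF formulas above translate directly into R helpers; the counts used here are illustrative:

cfe  <- function(colonies, cells_plated) colonies / cells_plated * 100
rcfe <- function(cfe_treated, cfe_control) cfe_treated / cfe_control * 100
tf   <- function(transformed, cells_plated, cfe_pct)
  transformed / (cells_plated * cfe_pct / 100) * 100

cfe_ctrl  <- cfe(colonies = 62, cells_plated = 100)  # negative control
cfe_treat <- cfe(colonies = 48, cells_plated = 100)  # NAms-treated
rcfe(cfe_treat, cfe_ctrl)
tf(transformed = 6, cells_plated = 2000, cfe_pct = cfe_treat)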
Concanavalin A (Con A) agglutination
The malignant cells transformed by NAms were seeded (1,000 cells/dish) for the Con A agglutination assay, and untransformed cells served as negative controls. On day 14, the cells were harvested and adjusted to 10^4 cells/ml with PBS. Then, 100 μl of the single-cell suspensions and different concentrations of Con A were added to 24-well microplates for 10 min. Cell agglutination with Con A was observed by microscope (Nikon, Japan) [21].
Soft agar assay
A 3-ml aliquot of 1.2% agar in culture medium was plated in 60-mm dishes. Then, 1,000 transformed malignant or untransformed cells were mixed with 3 ml of 0.35% agar in medium and plated on the solidified bottom agar. When the top agar solidified, the dishes were transferred to an incubator and cultured for 30 days. Two or three drops of medium were added to each dish three times a week. After culturing for 30 days, the visible cell colonies were photographed and counted [29].
Cell cycle determination
NIH3T3 cells were seeded at a density of 3.2 x 10^4 cells and cultured for 24 h. The cells were treated with different concentrations of NAms for 72 h. After rinsing with cold PBS, the cells were fixed with cold 70% ethanol for 12-24 h. Then, the cells were rinsed twice with cold PBS and stained with 0.5 ml of staining solution (0.25% Triton X-100, 10 μg/ml PI, 100 μg/ml RNase) for 30 min in the dark. Measurements were performed with flow cytometry (BD, USA).
RNA extraction and real-time RT-PCR
RNA samples were extracted with TRIzol reagent (Invitrogen, Carlsbad, CA, USA) and quantified by Nanodrop (Thermo, Wilmington, DE). Total RNA was converted to cDNA and amplified using a SYBR Green PCR Kit (Qiagen, Germany). The primers were designed using Primer Express Software v2.0 (Applied Biosystems, Carlsbad, CA, USA) and synthesized by the Beijing Genomics Institute (BGI, China). All primer sequences are shown in the Supplemental material (Table S3). The RT-PCR reactions were performed with an ABI ViiA7 Sequence Detection Real-Time PCR System (Applied Biosystems, USA). The cycle threshold (Ct) values were used to quantify relative gene expression. The 18S ribosomal RNA gene (rRNA, Hs99999901_s1) was used as an internal control. The difference in each group's gene expression was calculated with the 2^-ΔΔCt method [12,30]. All experiments were run in triplicate.
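The 2^-ΔΔCt calculation reduces to a few lines of R; the Ct values below are illustrative, with 18S rRNA as the reference gene as in the text:

ddct_fold <- function(ct_target_trt, ct_ref_trt, ct_target_ctl, ct_ref_ctl) {
  dct_trt <- ct_target_trt - ct_ref_trt   # normalize treated sample to 18S
  dct_ctl <- ct_target_ctl - ct_ref_ctl   # normalize control sample to 18S
  2^-(dct_trt - dct_ctl)                  # relative expression (fold change)
}
ddct_fold(ct_target_trt = 24.1, ct_ref_trt = 12.0,
          ct_target_ctl = 26.3, ct_ref_ctl = 12.1)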
Western blot
Western blot analysis was performed as previously described [12]. The cells were washed three times with cold PBS, harvested, and lysed in lysis buffer (Beyotime Institute of Biotechnology, China) on ice for 15 min. The lysates were centrifuged at 12,000 × g at 4°C for 10 min, and the supernatants were collected. Protein concentration was determined by the BCA method. Proteins were separated on 8% SDS-polyacrylamide gels and transferred onto nitrocellulose membranes (Millipore, Bedford, MA, USA). After 4 h of blocking in TBST containing 5% skim milk, the membranes were incubated with primary antibodies (1:2000). The membranes were then washed and incubated with horseradish peroxidase (HRP)-conjugated secondary antibodies for 1 h at room temperature. Proteins were detected by enhanced chemiluminescence with ECL reagent (Beyotime Institute of Biotechnology, China) and visualized on an imaging system (ImageQuant LAS 4000 mini, USA). Signal densities were quantified with ImageJ 1.44 (National Institutes of Health, Bethesda, MD, http://rsbweb.nih.gov/ij/).
Statistical analysis
Data were analyzed with SPSS version 18.0 (IBM Corp., Armonk, NY, USA) and are presented as means ± standard deviations (SD), with p < 0.05 considered statistically significant. Results were plotted with GraphPad Prism software (version 5.0, GraphPad Prism Inc., San Diego, CA). Dunnett's t-test was applied for multiple comparisons of treatments versus controls.
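The snippet below illustrates how Dunnett's multiple-comparison test of treatments against a shared control can be run; it uses scipy.stats.dunnett (available from SciPy 1.11) rather than SPSS, and the triplicate measurements are invented illustration values.

import numpy as np
from scipy.stats import dunnett  # requires SciPy >= 1.11

rng = np.random.default_rng(0)
control = rng.normal(100.0, 5.0, size=3)    # e.g. viability of negative control
low_dose = rng.normal(97.0, 5.0, size=3)    # hypothetical treatment group
high_dose = rng.normal(80.0, 5.0, size=3)   # hypothetical treatment group

# One adjusted p-value per treatment group versus the shared control.
res = dunnett(low_dose, high_dose, control=control)
print(res.pvalue)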
Occurrence and concentration of NAms in China's drinking water
According to the literature [13][14][15][16][17][18][19][20], nine NAms compounds have been reported in China's drinking water. NDMA, NDEA and NMEA are the most common and occur at the highest concentrations; in our study, we therefore used 10, 5 and 5 ng/L, respectively. When examining the chemicals' toxic effects, cytotoxicity and proliferation rates were identified as potential confounding factors. To control for this, we measured the viability of NIH3T3 cells exposed to each compound as well as to a mixture of all three. Table S2 shows these data and the actual concentrations used in the biological experiments.
Cytotoxicity of NAms in NIH3T3 cells
Cell survival data appear in Figure 1. Survival decreased with increasing NAms concentration in both the single and the mixture exposure groups, and treatment with the NAms mixture reduced viability the most. Based on these data, the 1,000-fold mixture concentration was chosen as the optimal concentration, in accordance with the OECD's proposed genotoxicity dose requirements.
NAms mixture exposure increased Ames assay colonies
An Ames test was performed in the TA97, TA98, TA100 and TA102 S. typhimurium strains; as shown in Figure 2A, the 100-fold (20 × 10² ng/L) and 1,000-fold (20 × 10³ ng/L) NAms mixtures increased revertant colonies. The mutagenic index (MI) data appear in Supplemental Table S2. No differences in revertant colonies were observed for the three NAms alone, or for the mixture in the TA97 and TA102 strains (Supplemental Figure S2).
NAms mixture exposure caused chromosomal damage according to a CBMN test
A CBMN test was used to measure chromosomal damage after exposure of the NIH3T3 cell line to NAms alone or in a mixture. The mixture increased MNi in NIH3T3 cells, whereas single exposures had no influence (Figure 2B). NBUD and NPB frequencies were similar across groups (Supplemental Figure S4).
NAms mixture exposure can induce DNA double-strand breaks, as measured with a comet assay
A comet assay confirmed that exposure to the 1,000-fold (20 × 10³ ng/L) NAms mixture significantly increased the percentage of tail DNA (p < 0.05) in NIH3T3 cells compared with the controls; no such changes were observed with NAms alone (1- to 1,000-fold) or with low-dose mixed exposure (1- to 100-fold; Figure 2C). The 8-OHdG data agree with the comet assay (Supplemental Figure S3).
Colony formation assay
Cell colonies were counted as described in the Methods section, and the CFE (%) and RCFE (%) data appear in Figure 3A. RCFE for each treatment was expressed as a percentage of the CFE of the negative controls. Compared with the controls, RCFE (%) did not differ significantly among the NDMA, NDEA, NMEA and mixture exposures (p > 0.05).
NAms mixture exposure transformed NIH3T3 cells
Individual NAms did not induce NIH3T3 cell transformation; however, the mixture increased the TF in NIH3T3 cells (Figure 3B). Con A agglutination and the soft agar assay confirmed phenotypic changes in the transformed cells (Figures 3C and 3D). The untransformed NIH3T3 cells agglutinated only at a high concentration of Con A (100 µg/ml), whereas the NAms-transformed cells agglutinated at 25 µg/ml Con A (Supplemental Table S4). The negative controls failed to grow when suspended in soft agar, whereas the NAms-transformed cells did grow (Supplemental Table S5).
NAms mixture exposure leads to G2/M arrest by multiple gene regulation
Cell-cycle analysis showed significant G2/M arrest after 1,000-fold (20 × 10³ ng/L) NAms mixture exposure (Figures 4A and 4B). Because this concentration can induce both cell transformation and G2/M arrest, we speculated that specific genes regulate the effects of mixture exposure. Gene expression analysis showed that, compared with the single exposures, P53/CDK1/CDC25A/P38/CyclinB1 mRNA and protein were down-regulated in the 1,000-fold (20 × 10³ ng/L) NAms mixture exposure group, whereas the P21 and GADD45A genes were up-regulated (Figure 4C and Supplemental Figure S5). After 1,000-fold NAms mixture exposure, the CHK1/CHK2 genes were up-regulated and CDC25A was down-regulated (Figure S5).
Discussion
NAms are nitrogenous, non-halogenated DBPs generated mainly during chloramine disinfection [31,32], and they may be carcinogenic. Some countries therefore have guidelines for specific NAms components; however, these guidelines are based on single-NAms exposures. Better data on NAms mixtures may help establish safety limits for such exposures in drinking water, yet no reports on mixtures are available [33,34].
Furthermore, the genotoxicity of NAms is usually assessed at mg/L concentrations, which are 40-50 million times those present in drinking water. For example, NDMA has been studied in an Ames assay at 20 mM [35] and in a comet assay at 2.39 mM [36]. Previous genotoxicity studies of NAms-DBPs did not assess genotoxicity thoroughly across different end-points [35,36], and research may overlook potent adverse effects when predicting the effects of (or assessing the risk from) mixed chemicals at low doses. Thus, the genotoxicity and mutagenicity of NAms must be better characterized [37]. We therefore used the genotoxicity test battery proposed by the International Council for Harmonisation (ICH) [38], comprising Ames, comet, 8-OHdG and CBMN assays, to evaluate the genotoxicity of NAms individually and as mixtures in vitro. We found increased mutagenicity (revertant colonies), chromosome aberration and DNA double-strand breaks after exposure to the three-compound mixture at 1,000-fold (20 × 10³ ng/L) the actual NAms concentrations in drinking water, and at this concentration genotoxic and carcinogenic effects were noted. The transforming potential of NAms was measured using an in vitro cell transformation assay [39], and the 1,000-fold (20 × 10³ ng/L) NAms mixture induced malignant transformation of NIH3T3 cells. Thus, the mixture had greater genotoxicity and mutagenicity than its individual constituents.
To explore the genotoxicity and mutagenicity of NAms mixture exposure further, we conducted cell-cycle and gene-expression evaluations in the transformed cells. The data show that both the 100-fold and 1,000-fold NAms concentrations could lead to G2/M arrest; however, only the 1,000-fold concentration induced cell transformation. RT-PCR and western blot showed that p53, p21, CDC25A/B1, CHK1/2 and CDK1/2 expression was up-regulated at the 100-fold (20 × 10² ng/L) exposure but down-regulated after the 1,000-fold exposure. DNA damage from NAms included DNA adducts and activation of the DNA damage checkpoint. Exposure of NIH3T3 cells to NAms caused DNA damage and initiated DNA repair pathways, triggering cell-cycle arrest to repair the damage. However, the activated repair response was no longer effective once the exposure exceeded a threshold, producing malignantly transformed cells. Our study has some limitations: we did not elucidate how each NAms component contributes independently to toxicity as a carcinogen, so further study is needed. Furthermore, the gene-expression data remain uncertain, as activity varied with increasing concentration; additional RNA and/or protein data are required to consolidate these preliminary observations. This is the first genotoxicity and carcinogenicity study of low-dose NAms mixture exposure, and the data suggest that NAms are hazardous to public health; however, their toxicity and mechanisms must be clarified. Human epidemiological studies should be performed to understand NAms' adverse effects on public health and our environment. | 2018-04-03T01:25:55.844Z | 2017-08-18T00:00:00.000 | {
"year": 2017,
"sha1": "11d3ce2c88447be8b198a5157c13bb58324a7897",
"oa_license": "CCBYNC",
"oa_url": "https://www.medsci.org/v14p0961.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9afb9205af283dbdb25655dcbf5c54b7cec045b5",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
245672555 | pes2o/s2orc | v3-fos-license | Agreement between low-dose and ultra-low-dose chest CT for the diagnosis of viral pneumonia imaging patterns during the COVID-19 pandemic
Background Chest CT has an important role in the diagnosis and management of COVID-19 infection. A major concern in the radiologic assessment of these patients is the radiation dose. Research evaluating low-dose chest CT in the diagnosis of pulmonary lesions has shown promising findings. We therefore set out to determine the diagnostic performance of ultra-low-dose chest CT in comparison with low-dose CT for viral pneumonia during the COVID-19 pandemic. Results 167 patients underwent both low-dose and ultra-low-dose chest CT scans. Two radiologists blinded to the diagnosis independently examined the ultra-low-dose chest CT scans for findings consistent with COVID-19 pneumonia; in case of disagreement, a third senior radiologist made the final diagnosis. Agreement between the two CT protocols regarding ground-glass opacity, consolidation, reticulation, and nodular infiltration was recorded. On low-dose chest CT, 44 patients had findings consistent with COVID-19 infection. Ultra-low-dose chest CT had sensitivity and specificity values of 100% and 98.4%, respectively, for the diagnosis of viral pneumonia. Two patients were falsely categorized as having pneumonia on the ultra-low-dose CT scan. The positive and negative predictive values of the ultra-low-dose CT scan were 95.7% and 100%, respectively. There was almost perfect agreement between the low-dose and ultra-low-dose methods (kappa = 0.97; P < 0.001), as well as almost perfect agreement between the two scans regarding the diagnosis of ground-glass opacity (kappa = 0.83, P < 0.001), consolidation (kappa = 0.88, P < 0.001), reticulation (kappa = 0.82, P < 0.001), and nodular infiltration (kappa = 0.87, P < 0.001). Conclusion Ultra-low-dose chest CT is comparable to low-dose chest CT for the detection of lung infiltration during the COVID-19 outbreak while delivering a lower radiation dose. It can also be used instead of low-dose chest CT for patient triage in circumstances where rapid, widely available PCR testing is not an option.
Introduction
Chest CT scan has an important role in the diagnosis and management of COVID-19 infection caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) [1]. In previous studies, chest CT was introduced as a highly sensitive method to screen for COVID-19 pneumonia [2]. A meta-analysis showed that chest CT has a pooled sensitivity of 94% and a pooled specificity of 37% in the diagnosis of COVID-19 [3]. However, the studies included in this meta-analysis showed considerable heterogeneity that precludes conclusive results; causes of this heterogeneity include different methods of diagnosis confirmation (e.g., repeated RT-PCR testing), differences in the prevalence of infection across geographic locations, and other variables. Low-dose CT is a promising method shown to have acceptable diagnostic accuracy in the diagnosis of COVID-19 pneumonia [4][5][6]. In a previous study, low-dose chest CT was demonstrated to have a sensitivity of 86.7% and a specificity of 93.6% for the diagnosis of COVID-19 [4].
Non-enhanced chest CT has been proposed as an option to assess the possibility of COVID-19 infection in adults [7]. Since radiation dose is a main concern, especially when managing asymptomatic individuals, efforts have been made to reduce it [5,8]. To the best of our knowledge, no study has used ultra-low-dose chest CT for this purpose.
Ultra-low-dose chest CT has been proven effective for screening lung nodules [9]. Additionally, ultra-low-dose CT has been described as an effective method to reduce radiation dose as well as motion artifact, with a radiation dose equal to that of a chest X-ray [10].
This prospective study was performed to determine diagnostic performance of ultra-low-dose chest CT compared to low-dose chest CT in detecting lung infiltration during the COVID-19 pandemic.
Study design
A total of 167 patients were prospectively enrolled in the current study. The patients were candidates for coronary angiography or other elective surgeries at a hospital in Tehran, Iran. Each consecutively enrolled patient underwent chest CT twice: first a low-dose scan, then an ultra-low-dose scan shortly afterwards. Low-dose chest CT is recommended by the Iranian Ministry of Health for the management of patients with suspected COVID-19 pneumonia [11].
Chest CT protocols
Images were obtained with a single General Electric LightSpeed-4 scanner (GE, Milwaukee, WI, USA). Scanning parameters for the low-dose and ultra-low-dose protocols are presented in Table 1. The volume CT dose index (CTDIvol) was fixed for each protocol (Table 1). The dose-length product (DLP) varied with each patient's thoracic length, and the values for the low-dose and ultra-low-dose protocols were reported on the summary page of each CT scan. The effective dose (ED) was calculated as ED = DLP × k, where k [mSv/(mGy·cm)] was set to 0.014 for chest CT according to a 2008 report of the American Association of Physicists in Medicine [12].
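The ED conversion is a single multiplication; the sketch below applies it with an illustrative DLP value (the 10 mGy·cm input is hypothetical, chosen only because it yields the 0.14 mSv figure discussed later in the paper).

K_CHEST = 0.014  # mSv per mGy*cm, conversion factor for chest CT [12]

def effective_dose(dlp_mgy_cm, k=K_CHEST):
    """ED [mSv] = DLP [mGy*cm] x k, as defined in the text."""
    return dlp_mgy_cm * k

# Hypothetical DLP of 10 mGy*cm -> 0.14 mSv, roughly one chest radiograph.
print(effective_dose(10.0))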
Image analysis
Two radiologists (SA and GIA, with six and five years of experience in reporting chest CT, respectively) independently reviewed all ultra-low-dose CT images first and decided on positive or negative CT findings suggestive of COVID-19 pneumonia, based on previous reports [13]. A third radiologist (HBM) checked the results, and in case of disagreement, a final read-out was performed by a senior radiologist (MST) with 18 years of experience in chest CT reporting, who was blinded to the low-dose images. After one week, the same protocol was repeated for the low-dose scans, and the results were recorded.
Chest CT findings for each protocol were recorded by a single radiologist (SA). The chest CT score was calculated from a visual estimate of the percentage involvement of each lobe as follows: 0 for none, 1 for 1-25%, 2 for 26-50%, 3 for 51-75% and 4 for 76-100%. The sum of the five lobe scores was recorded as the total chest CT score (range 0-20) [14].
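A minimal sketch of this scoring scheme follows; the lobe-involvement percentages in the example are hypothetical.

def lobe_score(involvement_pct):
    """Map a lobe's visual involvement percentage to a 0-4 score."""
    if involvement_pct <= 0:
        return 0
    if involvement_pct <= 25:
        return 1
    if involvement_pct <= 50:
        return 2
    if involvement_pct <= 75:
        return 3
    return 4

def total_ct_score(lobe_involvements):
    """Total chest CT score (0-20): sum of the five lobe scores."""
    assert len(lobe_involvements) == 5  # five lung lobes
    return sum(lobe_score(p) for p in lobe_involvements)

# Hypothetical patient with patchy GGO in both lower lobes only.
print(total_ct_score([0, 0, 0, 30, 10]))  # 2 + 1 = 3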
Sample size
A former study reported the sensitivity of low-dose chest CT for diagnosing COVID-19 pneumonia as 86.7% [4]. Anticipating that the sensitivity of ultra-low-dose chest CT would be comparable to that of low-dose CT, with a precision of 10% and a COVID-19 prevalence of about 30% at the sampling location, the minimum required sample size was calculated as 150 patients.
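The calculation below reproduces this estimate under the assumption of a Buderer-type normal-approximation formula for the sensitivity of a diagnostic test; the text does not name the exact method used, so the formula itself is an assumption of this sketch.

from math import ceil

def n_for_sensitivity(sens, d, prevalence, z=1.96):
    """Minimum sample size to estimate sensitivity with precision d."""
    return ceil(z**2 * sens * (1 - sens) / (d**2 * prevalence))

# Anticipated sensitivity 86.7%, precision 10%, prevalence 30%.
print(n_for_sensitivity(0.867, 0.10, 0.30))  # 148, consistent with ~150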
Statistical analyses
Descriptive statistics, including frequency, percentage, mean, and standard deviation (SD), were used to describe the data. The diagnostic performance of ultra-low-dose chest CT was reported using sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). An inter-reader reliability analysis using the kappa statistic was performed to determine consistency between the two radiologists regarding the diagnosis of COVID-19. Cohen suggested that the kappa result be interpreted as follows: values ≤ 0 indicate no agreement, 0.01-0.20 none to slight, 0.21-0.40 fair, 0.41-0.60 moderate, 0.61-0.80 substantial, and 0.81-1.00 almost perfect agreement [15]. Bland-Altman and Passing-Bablok regression methods were used to describe the agreement between low-dose and ultra-low-dose CT scores [16]. For concordance coefficients, values less than 0.5 indicate poor agreement, values between 0.5 and 0.75 moderate, values between 0.75 and 0.9 good, and values greater than 0.90 excellent agreement [17]. Analyses were performed using the Statistical Package for the Social Sciences (SPSS) program, version 25.0.
Ethics
Institutional review board approval was obtained.

Results
Ultra-low-dose chest CT had sensitivity and specificity values of 100% and 98.4%, respectively, for imaging findings of COVID-19 pneumonia, with low-dose chest CT as the reference (Fig. 1). Two patients (1.2%) were falsely categorized as having pneumonia on the ultra-low-dose CT scan (Fig. 2). Upon further review, these two false-positive results were attributed to expiratory-phase imaging, which accentuated the lung markings and air-trapping at the lung bases, compounded by the increased noise of the ultra-low-dose protocol at the lung bases.
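These metrics can be reconstructed from the implied 2x2 table (low-dose CT as the reference standard): 44 true positives, 2 false positives, 0 false negatives and 121 true negatives among the 167 patients. The sketch below reproduces the reported values, including the kappa of 0.97 quoted in the abstract.

tp, fp, fn, tn = 44, 2, 0, 121
n = tp + fp + fn + tn                    # 167 patients

sensitivity = tp / (tp + fn)             # 1.000
specificity = tn / (tn + fp)             # 0.984
ppv = tp / (tp + fp)                     # 0.957
npv = tn / (tn + fn)                     # 1.000

# Cohen's kappa between the two protocols' positive/negative calls.
po = (tp + tn) / n                                            # observed agreement
pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2   # chance agreement
kappa = (po - pe) / (1 - pe)                                  # ~0.97, as reported
print(sensitivity, specificity, ppv, npv, round(kappa, 2))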
The inter-rater reliability for the two readers (SA and GIA) was kappa = 0.77 (P < 0.001) for ultra-low-dose and kappa = 0.81 (P < 0.001) for low-dose CT, indicating substantial and almost perfect agreement, respectively. Table 3 presents the agreement between COVID-19 features on low-dose and ultra-low-dose chest CT scans. As seen, almost perfect agreement between low-dose and ultra-low-dose scans was found for the diagnosis of ground-glass opacity, consolidation, reticulation, and nodular infiltration. None of the patients had a reverse halo sign on the low-dose protocol; however, a single reverse halo sign was reported on ultra-low-dose CT (Fig. 3). One patient had lymphadenopathy and four patients had pleural effusion on both low-dose and ultra-low-dose scans (kappa = 1, P < 0.001), with complete agreement. Table 4 shows the distribution and laterality of the lesions on low-dose and ultra-low-dose chest CT scans; there was good agreement between the two protocols in this respect.

Agreement between ultra-low-dose and low-dose CT scores
The CT score ranged between 1 and 16 in both protocols, with a mean of 3.3 (± 3.3) in the ultra-low-dose and 3.8 (± 3.4) in the low-dose protocol. The Bland-Altman analysis of agreement between the two CT score measurements (low-dose and ultra-low-dose) is displayed in Fig. 4. The mean difference between the two scores was 0.023, with 95% limits of agreement of −0.031 to 0.079 and 16/167 = 9.6% of data points outside the limits of agreement. Lin's concordance correlation coefficient of absolute agreement was 0.99. Passing-Bablok regression indicated that the overall correlation of CT score measurements between the low-dose and ultra-low-dose protocols was excellent, with no significant deviation from linearity (P > 0.20; Fig. 5). Pearson's correlation coefficient for the CT score between the two protocols was r = 0.99 (P < 0.001).
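For reference, the Bland-Altman quantities reported above are the mean of the paired differences and the mean ± 1.96 SD limits of agreement; the sketch below shows the calculation on a short series of hypothetical paired scores, not the study data.

import numpy as np

low = np.array([3, 5, 0, 8, 2], dtype=float)    # hypothetical low-dose CT scores
ultra = np.array([3, 4, 0, 8, 3], dtype=float)  # hypothetical ultra-low-dose scores

diff = ultra - low
mean_diff = diff.mean()
sd = diff.std(ddof=1)
limits_of_agreement = (mean_diff - 1.96 * sd, mean_diff + 1.96 * sd)
print(mean_diff, limits_of_agreement)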
With the ultra-low-dose protocol, the radiation dose metrics, including CTDIvol, DLP and ED, were approximately one-third to one-fourth of the corresponding values in the low-dose protocol (Table 5).
The effect of BMI
The sensitivity and specificity of the ultra-low-dose CT scan for diagnosing features consistent with COVID-19 infection were 100% and 97.3%, respectively, in patients with BMI below 30 kg/m², and 100% and 85.7%, respectively, in those with BMI above 30 kg/m².
Discussion
The aim of this study was to compare viral pneumonia findings between ultra-low-dose and low-dose chest CT during the COVID-19 pandemic. Since RT-PCR was not available, especially at earlier stages of the pandemic, lung CT had an integral role in the screening and diagnosis of COVID-19 pneumonia. A new screening and triage algorithm based on chest CT imaging of suspected patients has been proposed [18]. Many medical centers adhered to the proposed strategy, and chest CT imaging was used for the triage of suspected patients. This in turn translates into an increasing number of chest CT scans and consequently a higher radiation dose [19]. When this high number of CT acquisitions is coupled with repeated imaging to follow the progression or resolution of the lesions, patients are exposed to high cumulative radiation. Therefore, keeping the radiation dose as low as possible became a priority for patient safety. First, this goal was pursued with a low-dose chest CT approach (< 3 mGy) that has been recommended by some experts [20,21]. The evidence shows that low-dose chest CT had high sensitivity (96.6%) for triage of COVID-19 in a study by Bahrami-Motlagh et al. [22] of 163 patients with suspected COVID-19, in which 80 cases had positive RT-PCR results. As the next step, we intended to investigate ultra-low-dose chest CT and determine its sensitivity and specificity for triage purposes. Hence, we assessed the agreement between the two methods in diagnosing chest CT characteristics suggestive of viral pneumonia during the COVID-19 pandemic.
New CT scanners with innovations in dose-sparing technologies have the benefit of lowering the radiation dose level. The effective radiation dose of ultra-low-dose CT can be as low as 0.14-0.5 mSv, which is very similar to the effective dose of a chest radiograph (0.1 mSv) [6,8]. This very low radiation dose, combined with better image quality for the visualization of abnormalities, is advantageous compared with chest radiography. If the diagnostic value of ultra-low-dose CT proves satisfactory, the method could have significant clinical implications. In a study of a large sample of confirmed COVID-19 cases, Kuo et al. [23] reported that chest radiography had no role in the screening of asymptomatic patients. A comparative study of 56 patients with a mean age of 14 years and laboratory-confirmed COVID-19 by Das et al. [24] showed that suggestive abnormalities such as GGO and consolidation were detected in 46% of patients (26 cases) on chest CT, whereas chest radiography detected the abnormalities in only 11 patients (19.6%). Therefore, a modality that can provide image quality similar to chest CT and superior to chest radiography, but with a much lower radiation dose (i.e., ultra-low-dose chest CT), would be of interest to radiologists.
Fig. 4 Bland-Altman plot of the difference in CT score (ultra-low-dose score minus low-dose CT score) against the mean score of the two methods; cases over limit = 10 (5.99%); cases under limit = 6 (3.59%)
Fig. 5 Passing-Bablok regression method to assess agreement between low-dose and ultra-low-dose CT scores. The linearity test (test for deviation from linearity) had P > 0.20, and the Passing-Bablok line was "Y = X", with an R-square of 0.99
Table 5 Radiation dose results in the low-dose and ultra-low-dose protocols: CTDIvol (CT dose index volume), DLP (dose-length product) and ED (effective dose), each reported as mean (± SD) with P values
Our findings suggest that ultra-low-dose chest CT had very good diagnostic performance for lung infiltration suggestive of viral pneumonia during the COVID-19 pandemic compared with low-dose CT, with a sensitivity of 100% and a specificity of 98.4%. Most abnormalities comprised GGO and consolidation. Overall, CT scores were low, as the study population comprised asymptomatic patients admitted for elective procedures. According to the Radiological Society of North America Expert Consensus Statement, GGO with peripheral distribution accompanied by consolidation is considered a typical appearance of COVID-19 pneumonia [13]. In a similar study describing the diagnostic value of ultra-low-dose chest CT compared with standard-dose CT, Greffier et al. [25] found that 97 of 380 cases suspected of having viral pneumonia had CT patterns compatible with viral pneumonia. Ultra-low-dose CT had a sensitivity of 98.9% and a specificity of 99% for diagnosing a viral pneumonia pattern, with standard-dose CT as the reference standard. Similar to our study, the patients were recruited during the COVID-19 outbreak, between March and April 2020. Additionally, the CT patterns used to define viral pneumonia were the same as those used in the current study, including bilateral diffuse GGO, patchy consolidations, crazy paving, and other less frequent abnormalities. The mean effective radiation dose was 0.2 mSv for the ultra-low-dose and 1.6 mSv for the standard-dose CT protocol.
Introducing imaging methods with a lower radiation dose is promising for diagnosing COVID-19 pneumonia, since a major concern in performing chest imaging, besides accuracy, is the radiation dose. Low-dose and ultra-low-dose CT scans are useful methods that can be implemented in such settings [26]. A previous study examined low-dose CT for this purpose with promising results [4], reporting a sensitivity of 86.7% and a specificity of 93.6%. According to our findings, ultra-low-dose chest CT was a reliable method with good accuracy when compared with low-dose chest CT in the diagnosis of COVID-19 pneumonia. The mean effective radiation dose in the ultra-low-dose group was 0.14 mSv, which is similar to the radiation dose of an antero-posterior chest radiograph (0.14 mSv) [27].
There was one false-positive result in the ultra-low-dose group, detected as ground-glass opacity at the lung bases, which was not confirmed on the low-dose images. This finding was due to increased soft-tissue thickness and noise at the lung bases, accentuated by inadequate inspiration and obesity. We suggest using a higher tube current of 20 mA instead of 10 mA in patients with high BMI to overcome this limitation.
Our study has several limitations. We were not able to perform RT-PCR to confirm COVID-19 infection in patients with suggestive imaging findings, owing to a shortage of diagnostic kits at the time the study was conducted. However, this study focused on comparing imaging abnormalities between ultra-low-dose and low-dose CT scans, not on confirming the diagnosis of COVID-19. Our study population consisted mainly of asymptomatic candidates for elective surgeries during the COVID-19 outbreak, which resulted in fewer positive cases with less severe lung involvement and limits the generalizability of our results. A significant number of COVID-19 patients will require serial chest CT scans for various reasons. Unfortunately, follow-up ultra-low-dose scans were not obtained in our study; therefore, we could not evaluate their accuracy in follow-up compared with low-dose CT.
Conclusion
Ultra-low-dose chest CT is an accurate method, with less radiation than low-dose CT, for diagnosing lung infiltration during the COVID-19 pandemic in patients admitted for elective or semi-urgent medical/surgical procedures. This technique could be used instead of low-dose CT during outbreaks, when a high number of patients may require chest imaging and there is a shortage of diagnostic kits or uncertainty regarding the accuracy of laboratory tests. We suggest performing further studies to determine the accuracy of ultra-low-dose chest CT in comparison with laboratory diagnosis and its role in the follow-up of COVID-19 patients. | 2022-01-05T14:41:04.051Z | 2022-01-05T00:00:00.000 | {
"year": 2022,
"sha1": "a2be6b656c75c988ca07dad12dfeee9cc667c7be",
"oa_license": "CCBY",
"oa_url": "https://ejrnm.springeropen.com/track/pdf/10.1186/s43055-021-00689-6",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a2be6b656c75c988ca07dad12dfeee9cc667c7be",
"s2fieldsofstudy": [
"Medicine",
"Physics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
228917536 | pes2o/s2orc | v3-fos-license | A numerical Study on Processes of Charge and Discharge of Latent Heat Energy Storage System Using RT27 Paraffin Wax for Exhaust Waste Heat Recovery in a SI Engine
Habib Gürbüz¹, Durukan Ateş² (ORCID: 0000-0001-7069-8633 and 0000-0002-6604-7384)
¹ Department of Automotive Engineering, Faculty of Engineering, Süleyman Demirel University, 32200, Isparta/Turkey
² Graduate School of Natural and Applied Sciences, Süleyman Demirel University, 32200, Isparta/Turkey

Abstract
This paper presents numerical analyses of the heat charge and discharge processes of a latent heat thermal energy storage (LHTES) system designed to recover the exhaust waste heat energy of an SI engine. The charge and discharge behaviour of the phase change material (PCM), a paraffin wax commercially identified by the code RT27, was analyzed as a function of time. Two closed-loop fluid circulation systems comprising two heat exchangers (HEXs) were designed: one connected to the exhaust path of the SI engine for waste heat recovery, and the other used for charging and discharging the waste heat energy in the PCM. Cold water was used as the heat-carrier fluid to transfer the waste heat from the hot exhaust gases to the PCM. The numerical analysis used the exhaust gas temperature and flow rate of a single-cylinder, air-cooled SI engine with a stroke volume of 476.5 cm³, operated on gasoline at an engine speed of 1600 rpm and 1/2 throttle position. As a result, for the designed LHTES system and the numerical analysis performed for RT27 paraffin wax under the stated boundary conditions, the heat charge (melting) process was completed at 8000 s with a 93% liquid fraction, while the heat discharge (solidification) process was completed at 55000 s with a 15% liquid fraction.

Keywords: Latent heat energy storage, Serpentine tube heat exchanger, Exhaust waste heat recovery, SI engine.

Introduction
Even with today's engine technologies, approximately 30-40% of the fuel energy in vehicles with internal combustion engines (ICEs) is released to the atmosphere as waste heat [1][2][3]. At the same time, the carbon dioxide (CO2) emitted in ICE exhaust gases is a main source of greenhouse gas emissions, and the unburned hydrocarbons (UHC), carbon monoxide (CO), nitrogen oxides (NOx) and particulate matter (PM) emitted from ICEs pose health risks [4][5][6]. Engine efficiency could be increased by approximately 4-5% by converting the exhaust waste heat energy of ICEs into useful energy with appropriate waste heat recovery systems; with a suitable heat exchanger (HEX) design, a considerable part of the exhaust waste heat can be recovered without unduly increasing the exhaust gas back pressure [7,8]. Recently, latent heat thermal energy storage (LHTES) systems using PCMs as the storage medium for exhaust waste heat recovery have been widely used by many researchers [9][10] owing to their large energy storage capacity [11] and nearly constant operating temperature [12] within a narrow temperature interval [13]. A wide variety of PCMs with various melting temperatures, such as paraffin waxes, organic and inorganic compounds, and hydrated salts, are used in LHTES systems [14]. LHTES systems exploit the melting/solidification enthalpy of PCMs [15], and PCMs provide an opportunity to stabilize the thermal behaviour of the LHTES system through increased thermal inertia over the melting range [16]. Phase change temperature, stability, latent heat, and thermal conductivity should be considered when selecting a suitable PCM for the intended system [15]. The melting temperature of the PCM used for exhaust heat recovery is also a decisive selection factor; generally, PCMs with melting temperatures above 293 K are used for heat storage [17]. On the other hand, the low thermal conductivity of most PCMs limits the performance of LHTES systems, leading to much longer charging or discharging processes [18]. Another disadvantage of the solid-liquid phase change process concerns the mechanical stability of the PCM and the volume change that occurs during melting [19]. Therefore, much research has been done on the thermal performance of LHTES systems with different designs, operating conditions and PCMs. Sharifi et al. [20] investigated experimentally and computationally the melting and solidification of PCM surrounding a vertical heat pipe (HP) finned with metal foils in a vertical cylinder to increase the heat transfer surface area; the phase change was found to accelerate with the increased heat transfer surface area of the HP-metal foil combination. Pandiyarajan et al. [21] found that, in a thermal energy storage system using a finned-tube HEX and cylindrical PCM capsules, approximately 10-15% of the fuel energy could be stored as heat at different engine loads. Hatami et al. [22] optimized a finned-tube HEX to improve exhaust waste heat recovery in a diesel engine; fin height was found to affect the pressure drop, while the number of fins was effective in increasing the heat recovery performance. Tiari et al. [18] simulated the heat charging process of a finned, heat-pipe-assisted LHTES system using a transient two-dimensional model; decreasing the heat pipe spacing increased the melting rate and lowered the base wall temperature, and natural convection accelerated the melting process, shortening the total charging time by approximately 30%. Although rectangular, cylindrical, spherical and annular PCM container arrangements exist [23,24], a cylindrical PCM container, mountable both vertically and horizontally, is most common [12,25]. The literature shows that part of the exhaust waste heat energy can be converted into useful energy by using PCMs as the storage medium in LHTES systems. However, further experimental and computational work is required on appropriate PCM selection, HEX design, operating conditions and the overall structure of LHTES systems to optimize parameters such as charge/discharge time, phase change capability and thermal conductivity. In this paper, an LHTES system using PCM as the storage medium was designed to store and reuse the exhaust waste heat energy of an SI engine, with the paraffin wax commercially identified by the code RT27 as the PCM. Time-dependent computational fluid dynamics (CFD) analyses were performed for the heat charge (melting) and discharge (solidification) processes of the LHTES system, and time-dependent data and contour images of heat flux, mean temperature and liquid fraction were obtained and interpreted.
Material and Method
In the present paper, an LHTES system was designed to store the exhaust waste heat energy of an SI engine in a PCM for reuse. The model of the LHTES system is given in Fig. 1. Two closed-loop fluid circulation systems, comprising two HEXs, were planned for charging and discharging heat to and from the PCM. A U-tube copper HEX is positioned inside a rectangular-prism muffler mounted on the exhaust line of the SI engine for waste heat recovery. In the insulated container where the PCM is stored, two intertwined serpentine tube heat exchangers (STHEXs) are used for the heat charge and heat discharge processes of the PCM. One closed-loop fluid circulation system connects the outer serpentine tube and the U-tube copper HEX, storing part of the exhaust waste heat in the PCM. To transfer the heat stored in the PCM to the space to be heated, a second closed-loop fluid circulation system connects the inner serpentine tube and a cooling radiator. Water is used as the heat carrier in the closed-loop circulation systems and is circulated between the HEXs by electric pumps. A cylindrical container with a diameter of 230 mm, a height of 210 mm and a total internal volume of 8395 cm³ was designed for PCM storage. Inside the PCM container, the outer STHEX used for heat charging is made of copper pipe with a 10 mm outer diameter and a helix diameter and length of 150 mm each; it consists of 8 helical turns wound vertically on top of each other. The inner STHEX used for heat discharging has a 9 mm pipe outer diameter, a 75 mm helix diameter and a 150 mm helix length, with 16 helical turns wound vertically on top of each other. The exhaust muffler is a 190 × 160 × 56 mm rectangular prism with a 62.5-mm-long nozzle and diffuser at the inlet and outlet, connected to the exhaust line by cylindrical pipes of 43 mm inner diameter. A U-shaped copper pipe with 7 mm inner and 9 mm outer diameter is positioned along the exhaust gas flow to transfer the waste heat from the hot exhaust gases to the water. In the numerical analysis, the water inlet/outlet of the PCM container was defined as Hot_inlet/Hot_outlet for the heat charge period and Cold_inlet/Cold_outlet for the heat discharge period, and the surfaces of the STHEXs were defined as Hot_wall and Cold_wall, respectively. A User-Defined Function (UDF) was developed to calculate the water temperature cyclically in the closed-circuit liquid circulation system. The code computes the first time step with the initial boundary values and, at the end of that step, records the area-averaged temperatures on the "Water_outlet" and "PCM_outlet (hot or cold)" surfaces. At the beginning of the next time step, it transfers the values from the "Water_outlet" surface to the "PCM_inlet (hot or cold)" surface and the values from the "PCM_outlet (hot or cold)" surface to the "Water_inlet" surface. This cyclic calculation continues step by step until the numerical analysis is completed, for both heat charge and heat discharge. The design, boundary conditions and mesh generation steps of the exhaust HEX and PCM container used for the numerical model are given in Fig. 2.
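The production UDF is C code compiled into ANSYS-Fluent; purely to illustrate the cyclic boundary-value swap described above, the sketch below reproduces the bookkeeping in Python with a toy stand-in for the CFD solve. The "effectiveness" value and the initial temperatures are assumptions of this sketch, not quantities from the study.

def solve_time_step(t_water_in, t_pcm_in, effectiveness=0.3):
    """Toy stand-in for one flow time step of the coupled model: each stream
    moves a fixed fraction of the inlet temperature difference toward the
    other. The real values come from the Fluent solution."""
    dt = t_pcm_in - t_water_in
    return t_water_in + effectiveness * dt, t_pcm_in - effectiveness * dt

t_water_in, t_pcm_in = 293.0, 363.0  # K; illustrative initial boundary values
for step in range(5):
    # Area-averaged outlet temperatures recorded at the end of the step.
    t_water_out, t_pcm_out = solve_time_step(t_water_in, t_pcm_in)
    # Start of the next step: "Water_outlet" feeds "PCM_inlet (hot or cold)",
    # and "PCM_outlet (hot or cold)" feeds "Water_inlet".
    t_pcm_in, t_water_in = t_water_out, t_pcm_out
    print(step, round(t_water_in, 1), round(t_pcm_in, 1))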
Governing equations
The numerical simulations were carried out using a three-dimensional, time-dependent turbulent flow model, and the governing equations presented below were solved in a time-dependent manner using the ANSYS-Fluent 14.5 software. Eqs. 1-12 and their explanations follow the ANSYS-Fluent 14.5 User Manual [26]. In time-dependent problem solving, a number of film sub-time steps is used to advance the film time to the same physical flow time, and the film sub-time step is calculated by the transient equation specified in Eq. 1: Δts = Δt / N (1)
where Δt is the flow time step and N is the number of film sub-time steps. The general continuity equation, reflecting mass conservation for incompressible as well as compressible flows, is expressed as in Eq. 2: ∂ρ/∂t + ∇·(ρv⃗) = Sm (2)
where Sm is the mass added from another phase (in the case of a second phase), ρ is the density, and v⃗ is the velocity vector. The momentum equation is defined by Eq. 3: ∂(ρv⃗)/∂t + ∇·(ρv⃗v⃗) = −∇p + ∇·τ̿ + ρg⃗ + F⃗ (3)
where p is the static pressure, ρg⃗ is the gravitational body force, F⃗ represents external body forces, and τ̿ is the stress tensor, defined by Eq. 4: τ̿ = μ[(∇v⃗ + ∇v⃗ᵀ) − (2/3)(∇·v⃗)I] (4)
where μ is the molecular viscosity, I is the unit tensor, and the second term on the right accounts for the effect of volume dilation. In the solidification and melting analyses, a momentum sink (Eq. 5) is added to the momentum equation to account for the reduced porosity in the mushy zone: S = [(1 − β)² / (β³ + ε)] Amush (v⃗ − v⃗p) (5)
where β is the liquid fraction and ε is a very small number that avoids division by zero. v⃗ and v⃗p are the fluid velocity and the solid (pull/shrinkage) velocity, the latter arising from the withdrawal of solidified material from the domain. Amush is the mushy zone constant, set here to 10⁶. The energy equation is defined by Eq. 6: ∂(ρE)/∂t + ∇·(v⃗(ρE + p)) = ∇·(keff∇T − Σj hj J⃗j) + Sh (6)
where E is the total energy, keff is the effective thermal conductivity, J⃗j is the diffusion flux of species j and hj is the enthalpy of species j. Sh is the heat generated by chemical reaction and/or another defined heat source. The total heat content, or enthalpy H, of the material is the sum (H = h + ΔH) of the sensible enthalpy (h) and the latent heat content (ΔH). h is defined by Eq. 7: h = href + ∫ from Tref to T of cp dT (7)
where href is the reference enthalpy, Tref is the reference temperature, and cp is the specific heat at constant pressure. With the solidification (Tsolidus) and liquefaction (Tliquidus) temperatures defined, the liquid fraction β is determined from the temperature (Eq. 8): β = 0 for T < Tsolidus; β = (T − Tsolidus)/(Tliquidus − Tsolidus) for Tsolidus ≤ T ≤ Tliquidus; β = 1 for T > Tliquidus (8), where β is the liquid fraction, Tsolidus is the solidification temperature, and Tliquidus is the liquefaction temperature. The latent heat content can be written as ΔH = βL, where L is the latent heat of the material; ΔH thus varies between zero (for solid) and L (for liquid). Thus Eq. 9, a modified form of Eq. 6 for PCM analyses in which phase change occurs, is obtained: ∂(ρH)/∂t + ∇·(ρv⃗H) = ∇·(k∇T) (9)
where k is the thermal conductivity. The sink arising from the change in liquid fraction during melting and solidification is added to the turbulence equations, analogously to Eq. 5, to account for the damping of turbulence in the solidified material and the mushy region, as given in Eq. 10: S = [(1 − β)² / (β³ + ε)] Amush φ (10)
where S is the source term describing the reduction of the turbulence quantity, and φ is the turbulence quantity being damped (k, ε or ω). The RNG k-ε model used in the numerical analysis is derived using a statistical technique called renormalization group theory. The transport equations of the RNG k-ε model in general form are (Eqs. 11 and 12): ∂(ρk)/∂t + ∂(ρkui)/∂xi = ∂/∂xj(αk μeff ∂k/∂xj) + Gk + Gb − ρε − YM + Sk (11) and ∂(ρε)/∂t + ∂(ρεui)/∂xi = ∂/∂xj(αε μeff ∂ε/∂xj) + C1ε(ε/k)(Gk + C3εGb) − C2ερε²/k − Rε + Sε (12), where Gk is the production of turbulence kinetic energy due to the mean velocity gradients, Gb is the generation of turbulence kinetic energy due to buoyancy, and YM is the contribution of fluctuating dilatation in compressible turbulence to the overall dissipation rate. αk and αε are the inverse effective Prandtl numbers for k and ε, respectively, and Sk and Sε are user-defined source terms. k is the turbulence kinetic energy per unit mass, ε is the turbulence dissipation rate, μeff is the effective dynamic viscosity, ui is the velocity component, and Rε is an additional strain-rate term of the RNG model. C1ε and C2ε are model constants with values of 1.44 and 1.68, respectively. In the LHTES system, the paraffin wax commercially identified by the code RT27 was used as the PCM; its thermophysical properties in the solid and liquid phases are given in Table 1. The numerical analysis used the exhaust gas temperature and mass flow rate of a gasoline SI engine operated at an engine speed of 1600 rpm, a stoichiometric air-fuel mixture and 1/2 throttle position. The SI engine is a single-cylinder, air-cooled engine with a stroke volume of 476.5 cm³, a maximum output power of 13 HP and a torque of 25 Nm. Cold water was used as the heat-carrier fluid to transfer the waste heat from the hot exhaust gases to the PCM container. The thermophysical properties of the exhaust gas and water are given in Table 2, and the initial boundary conditions adopted for the exhaust gas, PCM and water in the heat charge and discharge analyses are given in Table 3.
The RNG k-ε model using at the numerical analysis is derived using statistical techniques called renormalization group theory. Transport equations for RNG k-ε model in the general form are the following (Eq.11 and 12): Where Gk is the production of turbulence kinetic energy due to the average velocity gradients defined in the turbulence generation model of the k-ε module. Gb is the generation of turbulence kinetic energy due to the effect of Buoyancy in the k-ε module. YM is the contribution of unstable expanding compressible turbulence to the total dispersion rate. αk and αε are inversely effective Prandtl numbers for ke and ε, respectively. Sk and Sε are userdefined resources. ke is the kinetic energy per mass, ε is the turbulence dispersion ratio, μeff is the effective dynamic viscosity, is the velocity vector, Rε is the gas constant. C1ε and C2ε are model constants and they get the values of 1.44 and 1.68, respectively. In the LHTES system as PCM, the paraffin wax commercially identified with code RT27 was used. Thermophysical properties in the solid and liquid-phases of PCM is given in Table 1. In numerical analysis, the exhaust gas temperature and the mass flow rate were used of gasoline SI engine which is operated under engine speed of 1600 rpm, stoichiometric air-fuel mixture, and 1/2 throttle position. SI engine a single-cylinder, air-cooled engine having a stroke volume of 476.5 cm 3 , maximum output power of 13 HP and torque of 25 Nm. To transfer the waste heat from the hot exhaust gases to the PCM container, cold water was used as the heat carrier fluid. Thermophysical properties of exhaust gas and water is given in Table 2. Also, in the analysis of heat charge and discharge periods, the initial boundary conditions accepted for exhaust gas, PCM and water is given in Table 3. Table 3. Initially boundary conditions for exhaust gas, water and RT27
Results and discussions
The main objective of this paper is to analyze the time-dependent change of heat flux, mean temperature, and liquid fraction of RT27 under the defined boundary conditions. Therefore, time-dependent 3D numerical analyses of the heat charge and discharge processes were performed for the designed cylindrical PCM container, and 2D contour images taken from the vertical mid-section of the PCM container at specified time intervals are presented to examine the temperature and liquid fraction during the heat charge and discharge processes in detail. Example 3D analysis results at 250 s and 1000 s during the heat charge and discharge processes of RT27 paraffin wax are given in Fig. 3. As seen in Fig. 4, in the heat charge process RT27 reached the maximum heat flux at 300 s, after which the heat flux decreased rapidly until approximately 1000 s. The first stage of the melting process is the conduction-dominated heat transfer mode: because the thinness of the liquid PCM layers allows the viscous force to dominate over buoyancy, an almost immobile liquid PCM structure prevails [13]. Therefore, the heat flux has a higher value at the beginning of the melting process for RT27. At approximately 3000 s, the heat flux for RT27 declined to a very low value and remained almost constant until the end of the analysis. At the beginning of the heat discharge process, the heat flux increased rapidly owing to the large temperature difference between the cold STHEX wall and the warmer RT27. The heat flux then decreased rapidly until 6000 s and remained almost constant from 15000 s to the end of the analysis. The time-dependent variation of the mean temperature during the heat charge process is given in Fig. 5.
As seen in Fig. 5, at the beginning of the melting process (in the range of 0-900 sec), due to the high-temperature difference between the hot STHEX wall and the colder RT27, the rate of melting and hence the increase in the liquid-fraction is higher. The liquid-fraction of RT27 in the PCM container is reached approximately 80% within the first 900 sec. After this time, the increase in the liquidfraction has progressed very slowly, reaching only a maximum of 93% in 8000.sec.
The liquid-fraction increased rapidly in the range of 0-900 sec as can be seen in Fig. 7. Initially, the melting that starts around STHEX and the increase in liquid-fraction spread around the SHTEX to the inner and edge regions of the PCM container. While the rate of increase of the liquidfraction slows down significantly in the range of 900-2000 sec, the liquid-fraction remained almost unchanging between the 2000-8000 sec. 8000.sec, it appears that there is still some RT27 in the bottom of the PCM container that does not change to the liquid phase. This amount of solid and/or mushy PCM at the bottom of the PCM container is around 7% as seen in Fig. 5. The amount residue solid and/or mushy PCM was already around 20% at 900.sec and about 10% at 2500.sec. As can be seen, the heat transferred to the PCM container after about 2500.sec did not greatly contribute to the completely melting and passing to the liquid phase of RT27. Time-dependent variation of mean temperature in process of heat discharge is given in Fig. 8. The mean temperature in the PCM container filled with RT27 decreased at approximately 316 K from the initial value of 363 K at approximately 900.sec. At the start of the solidification process (in the range of 0-900 sec), due to the high heat flux due to the high-temperature difference between the cold STHEX wall and the warmer RT27, the average temperature of RT27 in the PCM vessel decreased rapidly. After this period, the rate of decrease in the average temperature slowed down significantly and only could be reached 294 K at the 55000.sec. While the mean temperature decreases from 316 K to 307 K in the range of 900-6000 sec, decreases linearly from 307 K to 294 K in the range of 6000-55000 sec.
As seen in Fig. 9, initially the temperature gradients have distributed horizontally from the inlet rings of STHEX to the center and sides of the PCM container, then vertically to the upper regions of the PCM container. The distribution of temperature gradients was initially very high speed, as in Fig. 8, but slowed down after 900.sec.
As can be seen in Fig. 8, a period in which the liquidfraction almost does not decrease until about 900.sec at the beginning of the heat discharge (solidification) process occurred. In this period, solidification occurred only in a very small area region in around of STHEX. Because the mean temperature approximate 316 K in the PCM container was much higher than the RT27's solidification temperature of 297 K. The decrease in the liquid-fraction (solidification), the started effectively after about 1000.sec was reached about 60% until 6000.sec. At the end of the 55000.sec, the liquid-fraction of RT27 in the PCM container was declined a low value as 15%, and completed of the solidification process.
As seen in Fig.10, the significant decrease in the liquidfraction started at 1000.sec, and the areas indicated in the color scale, first orange, then green, and finally blue, indicating solidification from red to blue in the PCM container, increased. On the other hand, the solidification started from the lower part of the PCM as the input of cold water of STHEX and proceeded upwards. Also, not fully solidified some RT 27 remained in the upper part and the near regions to the edges of the PCM container.
Conclusions
The findings obtained from the numerical analyses of the heat charge (melting) and discharge (solidification) processes using RT27 paraffin wax in the LHTES system designed for exhaust waste heat recovery from a typical SI engine are summarized as follows. In the heat charge process of RT27, the maximum heat flux was reached at 300 s, after which the heat flux decreased rapidly up to 1000 s; at this stage, conduction heat transfer dominates. After 3000 s, the heat flux declined to a very low value and remained almost constant until the end of the analysis. In the heat discharge process of RT27, the heat flux initially increased rapidly owing to the large temperature difference between the cold STHEX wall and the warmer RT27; it then decreased rapidly until 6000 s and remained almost constant from 15000 s to the end of the analysis. At the beginning of the heat charge process, the heat flux, and hence the heat transfer rate, rose rapidly owing to the conduction-dominated heat transfer mode, so the mean temperature of RT27 and the liquid fraction also increased rapidly. In the later stages, as natural convection became dominant, the heat flux and heat transfer rate decreased, so the rates of increase of both the mean temperature and the liquid fraction of RT27 slowed. Although the mean temperature still rose slightly after 3000 s, the liquid fraction remained almost constant. At the end of the analysis (8000 s), the mean temperature of RT27 in the PCM container reached 336 K and the liquid fraction 93%; the analysis was terminated as the increases in temperature and liquid fraction had nearly stopped. At the start of the heat discharge (solidification) process, the high heat flux arising from the large temperature difference between the cold STHEX wall and the warmer RT27 caused the mean temperature of RT27 in the PCM container to fall rapidly. However, solidification could not begin until about 1000 s, as the temperature had not yet dropped to the solidification temperature of RT27 (297 K). At the end of the analysis (55000 s), the mean temperature of RT27 in the PCM container had declined to 294 K and the liquid fraction to 15%. As a result, the heat charge process was completed at 8000 s with a 93% liquid fraction, while the heat discharge process required a much longer time, 55000 s, to reach a 15% liquid fraction. | 2020-11-26T09:03:52.628Z | 2020-12-31T00:00:00.000 | {
"year": 2020,
"sha1": "9076a49bd915e438ed7667032a474b6441aa0c3d",
"oa_license": "CCBY",
"oa_url": "https://dergipark.org.tr/en/download/article-file/1312561",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "412af19551388639bf5b38a742ef27add5d432a4",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science",
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
236848846 | pes2o/s2orc | v3-fos-license | Some Aspects of Harmonisation of Ukraine’s Competition Legislation to EU Standards
In modern conditions, obtaining information about market dynamics, trends in demand, and alternative offers from competitors is vital to supporting the effective operation of enterprises. It is also common business practice to discuss legislative initiatives, non-confidential technical information, quality and safety standards, and various aspects of an industry. However, the direct or indirect exchange of information may be accompanied by various wrongful intentions of economic entities (for example, the elimination of competitors, the creation of entry barriers, or agreement on price levels, discounts, sales volumes, and the geographical division of the market). In Ukraine, there is currently no analogue of a full-fledged guide to information exchange between competitors, which determines the relevance of this study. The purpose of the study is to establish the regulatory and economic principles for assessing information exchange between enterprises operating in the relevant market, in the context of compliance with legislation on the protection of economic competition, and to analyse progressive international practice in stopping violations that take the form of information exchange distorting economic competition. Ukraine should adopt Guidelines for the Exchange of Information between Competitors (hereinafter "the Guidelines") to raise the awareness of the business community (including associations and chambers of commerce), lawyers, and society in general regarding the main aspects of compliance with competition law, in order to promote fair business activity, protect the competitive environment and, as a consequence, consumer welfare.
INTRODUCTION
Obtaining information, and in particular the exchange of certain data, may carry significant risks of distorting the competitive environment, which means that under certain conditions such actions may be treated by the competition authority as an anticompetitive practice. In this regard, the question remains where the line lies between lawful conduct and violation of legislation on the protection of economic competition. In practice, assessing the admissibility of information exchange is usually accompanied by many enforcement issues, including collecting relevant evidence of informal arrangements and, especially, proving a causal link between the information exchanged and changes in the relevant market. This is an issue that competition authorities often encounter, particularly in cases where a restriction of competition must be proven through its effects. Notably, from the standpoint of protecting an effective competitive environment, the most dangerous (anti-competitive) manifestation is the development of practical co-operation between enterprises as a result of data exchanges. Antitrust investigations increasingly often involve associations and consulting, marketing, and polling organisations, through which businesses can obtain individualised information about the market, market trends, and so on. For example, back in 2009, the European Commission proved a cartel conspiracy in the market for thermal stabilisers involving a Swiss consulting firm that ensured the functioning of the cartel. According to the investigation, the consulting firm acted as a driver for the exchange of market-sensitive information, collecting information and disseminating it to cartel members. This raises the question of what information, and in what circumstances, may create risks of infringing competition law.
In the European Union, the European Commission has issued Guidelines on the application of Article 101 of the Treaty on the Functioning of the EU to horizontal co-operation agreements (Consolidated version of the treaty… 2012), and the vast majority of EU member states have an internal explanatory document on this matter and relevant practice. European Union competition law makes provision for an exception: information exchange agreements that fall within Article 101 of the Treaty on the Functioning of the European Union (TFEU) may nevertheless escape prohibition if they create efficiencies for consumers, there is no alternative to information exchange for creating such efficiencies, and the parties to the information exchange agreement do not have significant market power. An analysis of current European legal practice indicates that agreements on the exchange of information between competitors considered by the European Commission are usually covered by Article 101 of the TFEU. However, information exchange agreements can be treated either as a separate violation of competition law or as part of another violation, such as a relevant agreement (for example, a cartel) between competitors. In cases where the exchange of information is part of another agreement between competitors, it must be evaluated together with the latter. This study focuses on the analysis of information exchange between competitors that can be qualified as an individual violation of competition law.
In December 2009, the Competition Bureau of Canada published a set of guidelines on co-operation between competitors (Government of Canada 2009). The documents address the exchange of information between competitors, both in the form of direct and indirect exchanges and through trade associations. The Guidelines state that, for the most part, such exchanges do not violate the law, as competitors usually avoid sharing information in order to maintain a competitive advantage. In some cases, however, an agreement that makes provision for the unilateral disclosure or exchange of information between competitors may distort competition by reducing uncertainty about competitors' strategies and reducing the commercial independence of each exchange participant. Thus, in assessing information exchange agreements between competitors, the Competition Bureau of Canada analyses the following factors: the nature of the information exchanged, the timing of the exchange, market power, the manner in which information is collected and disseminated, and any efficiencies that compensate for the anti-competitive effects of the exchange. In Mexico, constitutional changes took place in 2013, and the new Federal Law on Economic Competition (Comisión Federal… 2015) came into force in 2014. Since the discussion of the new Law on Economic Competition, the issue of information exchange has caused great uncertainty for companies. The Fair Trade Commission of Japan published a Guide to Trade Associations, harmonised with its antimonopoly law, in October 1995 (Information Exchanges… 2010). Although the Guide directly addresses the potential impact on competition through trade associations, the Fair Trade Commission of Japan's detailed assessment of information exchange within trade associations can be applied even outside that context. The Fair Trade Commission of Japan assumes that tacit conspiracy is certainly facilitated by the exchange of information, in particular information related to important competitive factors concerning the current or future business activities of the firms involved.
In Ukraine, the issue of information exchange is regulated only partially, within the framework of the assessment of anti-competitive risks in the establishment of business associations; namely, parameters are set whose observance means that the permission of the Antimonopoly Committee of Ukraine for concerted actions is not required. In particular, the Committee's standard requirements for establishing an association include an exhaustive list of data that the association may collect about its members (such as technical and scientific information, data on efficient technologies and cost-cutting tools, environmentally friendly technologies, industry problems and solutions, foreign experience, and information for co-operation with public authorities and other organisations).
Given the above issues, the main objectives of the study are as follows: (1) analysis of the factors that determine whether an exchange of information is inconsistent with competition legislation; (2) analysis of European practice in terminating violations in the form of exchange of information between competitors that has led, or may lead, to distortion of economic competition; (3) development of proposals on the qualification of violations of the legislation on protection of economic competition in the form of anticompetitive information exchanges between enterprises operating in the relevant market, taking into account the results of consideration of similar cases in the member states of the European Union.
Many scholars have addressed the specific features of information exchange among competitors in the context of compliance with competition legislation. They investigate the permissible and illegal forms of information exchange in the context of the protection of economic competition, criteria for assessing the content of the information exchanged, the risks faced by enterprises under investigation by the competition authority, and the specifics of cases of termination of violations of competition law in the form of anticompetitive information exchanges.
ANALYSIS OF EUROPEAN PRACTICE OF DETECTING AND TERMINATING VIOLATIONS OF COMPETITION LEGISLATION
A review of cases of detection and termination of violations of competition legislation in the form of illegal information exchanges between enterprises operating in the relevant market makes it possible to develop scientifically sound proposals on: the criteria of admissibility of data exchange in terms of preventing the elimination or distortion of economic competition; the factors that should deter market participants from similar wrongdoing in the future; and the powers the Antimonopoly Committee of Ukraine needs in order to detect, investigate, and terminate this category of offences.
EU competition law lacks clear rules governing the exchange of information between competitors. Article 101(1) of the Treaty on the Functioning of the European Union (TFEU) (2012), which prohibits agreements between undertakings incompatible with the internal market, decisions by associations, and concerted practices which may affect trade between the Member States and which have as their object or effect the prevention, restriction, or distortion of competition in the internal market, contains no explicit prohibition on concluding an information exchange agreement. Thus, the current regulation of information exchange agreements is based on the case-law of the European Commission and the European Courts on the application and interpretation of Article 101 of the TFEU. Article 101 of the TFEU can be applied to agreements, including information exchange agreements, which meet four conditions: an agreement between undertakings, a restriction of competition, an appreciable effect on competition, and an effect on trade.
In several documents, the European Commission has made recommendations on the legal assessment of such agreements. The first document of this kind was the Notice of the European Commission on Cooperation Agreements (1968) (Information Exchanges… 2010; Švirinas 2012). This Notice stated for the first time that the exchange of information between competitors may in some cases violate Article 101 of the TFEU, but that in each particular case it is necessary to assess information about the market (its structure) and analyse other important factors.
The Notice identified the following factors that may influence such an assessment, namely (Information Exchanges… 2010): only the exchange of information that may affect competition is relevant in accordance with competition rules; restrictions of competition are more likely in oligopolistic markets for homogeneous goods.
For example, if the exchanged information is used to restrict the ability of enterprises to operate freely in the market, and the exchange itself takes place in order to coordinate the actions of enterprises, it is likely to be regarded as restricting competition.
In the VII Report on Competition Policy of the European Commission (1978), a separate part covered information exchange agreements. The Report noted that the exchange of information does not constitute a restriction of competition as such; therefore, it is necessary to assess the impact of such an agreement on competition in the relevant market (Information Exchanges… 2010; Švirinas 2012). In its report, the European Commission identified three main criteria to be followed when considering such situations. Firstly, when assessing the consequences of such transactions, it is necessary to take into account, above all, the structure of the market. The structure of the market may affect the probability that these types of contacts will create incentives for coordinated behaviour between competitors (market participants). Increased transparency resulting from the exchange of information strengthens the interdependence between firms and reduces the intensity of competition in oligopolistic markets, as expanded market knowledge (i.e., transparency) allows participants to track competitors' strategies and respond quickly (and effectively) to each other's actions.
Secondly, the nature and extent of information exchange are important to assess the probability that this information may actually be used by the recipient to coordinate market strategies rather than to achieve more intense competition.
Thirdly, whether the exchange of information is private (a form of co-operation between enterprises that mainly improves the participants' own knowledge of market conditions) or has a broad public reach towards consumers, in which case it creates an opportunity to compare different offers and thereby increases competition (Information Exchanges… 2010).
Although the first policy statement of the European Commission on the evaluation of information exchange dates back to 1968, it was applied in practice only in the early 1990s, in the case assessing the admissibility of information exchange in the market of agricultural tractors in the UK (Information Exchanges… 2010; Commission Decision… 1992). In this case, the European Commission conducted a comprehensive assessment of the potential restrictive effects of an autonomous information exchange system. The decision of the European Commission was reviewed on appeal and by the European Court; both courts rejected the complaints and fully supported the approach of the European Commission (Information Exchanges… 2010). Following the UK case, the European Commission began to apply the principles stipulated in the various decisions in order to further clarify their scope. Notably, the decision in the UK case led to numerous appeals from businesses seeking individual exemption under Article 101(3) of the TFEU.
The Guidelines on the applicability of Article 101 of the Treaty on the Functioning of the European Union to horizontal co-operation agreements (Communication from the Commission… 2011) (hereinafter referred to as "the Guidelines") replaced the European Commission Notice on Cooperation Agreements of 1968. The 2001 version of the Guidelines lacked a separate chapter on the exchange of information: in paragraph 10, the European Commission clarified that the Guidelines did not regulate such agreements and that some types of horizontal agreements between competitors (for example, information exchange agreements) were considered separately. For example, it was stated that commercialisation schemes arising from joint sales raise two important issues, one of which is "a clear opportunity to exchange classified commercial information, in particular on marketing strategy and pricing" (Paragraph 146). Paragraph 150 of the Guidelines also states that the more concentrated the market, the more useful information on prices or marketing strategies becomes for reducing uncertainty, and the more incentive market participants have to share such information (Švirinas 2012).
After revising the 2001 Guidelines ten years later, the European Commission analysed information exchange agreements separately and in sufficient detail in paragraphs 55 to 110 of its Communication "Recommendations on the application of Article 101 of the Treaty on the Functioning of the European Union to horizontal co-operation agreements" (hereinafter referred to as "the Communication") (Švirinas 2012; Communication from the Commission… 2011; Commission's Guidelines… 2001). The rules on information exchange agreements set out in this Communication constitute the European Commission's most thorough study of the subject, which not only summarises the legal practice of the European Commission and the Court of Justice but also distils the European Commission's considered position. The relevant rules of the Communication are analysed in more detail below in the discussion of specific aspects of information exchange.
The specific Guidelines for the exchange of information between competitors should be considered in the broader context of the revision of the competition rules applicable to the various forms of co-operation between competitors (actual or potential). Drafts of three documents were published by the European Commission for comment from stakeholders. Firstly, two draft block exemptions were published, for research and development (R&D) agreements and for specialisation agreements, intended to replace the existing block exemptions, which expired on 31 December 2010. In addition, the European Commission published a carefully analysed and expanded version of the Guidelines for Horizontal Cooperation Agreements (2001). The European Commission has filled an important gap with a new section of competition legislation for assessing the exchange of information. Prior to that, there were no clear rules for assessing the admissibility of the exchange of information between competitors, except in the maritime sector, and only the Court's case law and the European Commission's decision-making practice could be relied upon. One of the greatest advantages of the Guidelines is that, for the first time, well-organised and clear evaluation conditions are offered (European rules… 2010). The Guidelines do not apply to forms of information exchange that aim to establish or facilitate prohibited price-fixing and market-sharing agreements. The assessment of such forms of information exchange within the framework of competition legislation is not particularly difficult: in principle, they will always be treated (and fined) as illegal cartels.
The European Commission has focused on assessing situations where the exchange of information exists as a fact, independently of any cartel, and where the main economic process is essentially the exchange of information itself (for example, the exchange of sectoral statistics, with or without an association). The document clearly states the position of the European Commission that the exchange of information often promotes competition, as companies gain a deeper understanding of the market, which can lead to significant efficiencies. However, some forms of information exchange increase market transparency to such an extent that companies become aware of the market strategies of their competitors, which may result in the coordination of their competitive behaviour. The European Commission considers that only in exceptional situations can such forms of information exchange be convincingly shown to be acceptable (European rules… 2010).
INFORMATION EXCHANGE: TYPES, IMPACT, LEGAL FRAMEWORK AND KEY THREATS TO THE COMPETITIVE ENVIRONMENT
As already mentioned, the impact of information exchange on competition must be assessed on a case-by-case basis. The probable negative impact of information exchange on the most important parameters of competition, such as prices, production volumes, product quality, product range, and innovation, should be considered. Therefore, this study investigates the most important parameters and features of the relevant market that must be analysed in establishing an anti-competitive impact of information exchange on economic competition.
In modern conditions of market relations, the ability to exchange information between competitors is of paramount importance for the effective implementation of economic activity in the market. Access to reliable information about the state of the market can allow companies to effectively plan and forecast their production and commercial activities, as well as invest in new production facilities or research and development, which, in turn, can improve quality and reduce prices for the offered goods and services and increase innovation (Sofia Competition Forum 2011; Information cartels… 2016; Sloan 2014).
Sharing information can also benefit consumers by lowering search costs, which directly improves consumer welfare. Consumers can make an effective choice only if they are well informed about the prices, characteristics, features of use, and quality of the various goods offered on the market. Obviously, one of the prerequisites for the development of effective competition for the benefit of consumers is to enable them to compare prices and commercial conditions offered by different suppliers of goods or services. The presence of such an opportunity indicates a certain level of market transparency, which, in turn, constitutes a necessary condition for the development of a competitive market process (Sofia Competition Forum 2011; Lourenço 2017; Behar-Touchais 2015; OECD 2014).
The exchange of market information, which increases market transparency, is necessary for effective competition as long as it does not create the conditions for concerted or coordinated behaviour of market participants. In this context, it is possible to identify legitimate sources of information about competitors. The openness of information, both for businesses and consumers, helps to increase transparency, which is one of the factors necessary to ensure market stability. At the same time, the artificial elimination of uncertainty about the actions of competitors, which is an inherent feature of competition, can in itself preclude normal competition. This is especially true for highly concentrated markets, where increased transparency allows companies to better predict or anticipate the behaviour of their competitors and adapt to it (Sattler 2012; Boychuk 2017; Skliar 2014). Thus, in the understanding of competition legislation, information exchange constitutes a form of horizontal co-operation between competitors through which they provide each other, directly or indirectly, unilaterally or bilaterally, with historical, current, or forecast data on important parameters of their business.
An analysis of the current practice of competition agencies indicates that business associations can play a leading role in the exchange of information between competitors. Despite the undeniably useful activities they carry out in terms of economic development, associations often function as centres for the accumulation and exchange of confidential commercial information between their members. Such co-operation between enterprises is subject to legal regulation under competition law, as it can lead to pro- or anticompetitive effects in at least three areas: the way in which individual enterprises are guided in making economic decisions; the way in which customers are guided in choosing the appropriate products; and the ways in which competitive pressure is exercised by participants in the relevant market (Sofia Competition Forum 2011; OECD 2016).
The issues of access to, exchange of, and use of information are crucial for building an effective business strategy. However, such a seemingly ordinary process can create antitrust risks for the company: the competition authority may assess the exchange and use of information as evidence of anti-competitive concerted actions. The exchange of information between competitors can serve various ends (coordination of prices for goods and services, division of markets, elimination of competitors from the market, restriction of market access for potential competitors, etc.); it can increase or restrict competition (Sofia Competition Forum 2011; OECD 2016).
First of all, when assessing the exchange of information as a potential violation of competition legislation, the following should be established: whether it is part of another form of prohibited horizontal co-operation between enterprises and is, in essence, a mechanism that facilitates or polices the implementation of an anti-competitive practice in the market, such as a cartel; or whether it is an independent form of co-operation that distorts competition on its own through the anti-competitive effects it causes or may cause. When the exchange of information between competitors takes place in the context of another form of prohibited horizontal co-operation between undertakings (for example, a cartel as the most serious breach of competition law), it must be analysed in the context of the investigation of that breach. For example, the exchange of information can serve to enhance the internal stability of a cartel by giving the participating companies the necessary level of market transparency, i.e., by helping them to monitor each other's compliance and to take appropriate measures and sanctions for non-compliance. The exchange of information can also be a mechanism to enhance the external stability of the cartel by giving participating companies the opportunity to monitor potential new entrants and to take concerted action to eliminate potential competitors. The advantage, in this case, is that the assessment of information exchange as a form of prohibited behaviour does not require additional economic analysis of its anti-competitive effects: the exchange of information is prohibited in itself in accordance with Article 101(1) of the TFEU. The exchange of information between undertakings should be considered as a separate form of prohibited conduct only when it is intended to prevent, restrict, or distort competition in the relevant market. Specific manifestations of prohibited anti-competitive behaviour of enterprises are defined, in particular, in Article 101(1) of the TFEU, namely: setting prices or other conditions of trade; division of markets or sources of supply; restriction or control of production, trade, technical development, or investment, etc. Therefore, information whose exchange can restrict competition is often related to the following parameters of competitors' economic behaviour: prices, volumes, suppliers and customers, opening or closure of production facilities, application of technologies and standards, etc. (Sofia Competition Forum 2011; OECD 2016).
In establishing whether an information exchange between competitors complies with the rules of competition law, it is important to analyse the terms of the agreement between the enterprises (Švirinas 2012). This condition is necessary to qualify an infringement under Article 101 of the TFEU or the corresponding article of national law on economic competition. If competitors exchange information without concluding an agreement within the meaning of Article 101 of the TFEU or the national law (i.e., in the form of an agreement, a concerted practice, or a decision of an association), the relevant articles cannot be applied. This is stated in the Communication: "the exchange of information may be addressed under Article 101 only if it is established by, or is part of, an agreement, a concerted practice or a decision of an association of undertakings" (paragraph 60) (Švirinas 2012).
For example, if a competitor's confidential information becomes available to businesses not directly but through the media or other third parties, this should not be construed as an agreement. Likewise, if a third party (a marketing agency) individually collects, systematises, and provides information to its customers, such activity should not be considered an agreement to exchange information even if the information originates from competing companies and competitors thereby learn about each other (Verkhovna Rada of Ukraine 2001). In the Communication, the European Commission states that a concerted practice may also take the form of a situation where only one company discloses strategic information to its competitor(s).
Therefore, if one competitor discloses certain information to another outside any formal exchange and the other becomes aware of it, the fact of an agreement may be established, unless the recipient clearly indicated that it did not wish to receive such information. For example, if an employee of one company emails information on the company's sales volume, and an employee of another company who receives this information by email neither responds to it in any way (for example, believing that the information is not important) nor replies that he or she does not want to receive such information, then, according to the logic of the European Commission, it can be stated that an agreement on the exchange of information has been concluded. This conclusion cannot be considered valid, as, according to experts, in this situation the act of coordination of actions is clearly lacking (Švirinas 2012; Bakalinska et al. 2017).
One of the rare cases where information was publicly declared and no de facto agreement was found was analysed in the Wood Pulp (cellulose) case. Businesses publicly announced price increases, the information spread very quickly among traders and buyers through the local media, and no agreement between competitors was identified. The European Commission took two factors into account when deciding on the existence of concerted action. Firstly, there was a direct and indirect exchange of information between enterprises, resulting in artificial transparency of price information in the market.
Secondly, the economic analysis indicated that the market was not purely oligopolistic, in which case parallel prices would have been possible. Most probably, the market was competitive: sellers dealt with a variety of products, competitors had different cost structures, they were located in different countries, and they would have had to set different prices in the absence of coordination. Therefore, the only explanation for the parallel setting of prices was, according to the European Commission, concerted action by the enterprises (Lourenço 2017). The European Commission Communication states that concerted action cannot be ruled out, for example, in a situation where a unilateral public announcement made by a company (for example, in a newspaper) provokes public statements from competitors, not least because competitors' strategic responses to each other's public statements can be a means of reaching a common understanding of a coordination plan (Švirinas 2012).
Following the position of the European Commission, a problematic situation arises when a company announces information on its website, for example regarding a reduction of prices for its services valid for one month, and a competitor, after reviewing this notice, announces a reduction in its own prices for the same period. From the explanations provided in the Communication, one thing is clear: the European Commission will need very little information to examine arrangements concerning the exchange of information for compliance with competition legislation in cases where the information of one competitor "reaches" another competitor and the latter does not state that it did not want to receive the information and does not want to receive it in the future (Švirinas 2012).
It is considered that an agreement can be identified only in cases where the association, as an intermediary for its members, collects confidential information from them and provides its members with access to it, or distributes the data among them. Therefore, if the third party through which the information was exchanged is an association of enterprises, the agreement is likely to take the form of a decision of the association. Obviously, the form of the agreement, whether a concerted (cartel) practice or a decision of the association, will not be decisive in this case. The European Commission assumes that a situation where a private association of companies X disseminates individualised information about potential future prices only to its members should be treated as an exchange of information aimed at restricting competition, and the European Commission will not even determine whether it is an agreement or a decision of the association (Paragraph 105 of the Communication) (Notice concerning agreements… 1968; Federal Trade Commission… 2000).
In addition to the types of information exchange specified above, the following should be distinguished:
I. Direct and vertical exchanges. Direct exchanges between competitors are the most obvious way to exchange data. Any agreement between competitors on this matter falls within the scope of Article 101 of the TFEU. Even the exchange of otherwise unobjectionable data directly between competitors is unlikely to conceal the anticompetitive nature of such agreements. Following the UK Agricultural Tractor case, the European Commission questioned the legality of the vertical exchange of information between producers and retailers, finding that such exchange was not objectionable if the information provided concerned only the retail sales of the producer concerned. Such an exchange of information violates Article 101 of the TFEU if: 1) it allows the identification of competitors' sales; 2) the information impedes the retail activities of dealers or parallel importers.
II. Dissemination of market data by independent third parties. In many cases, information on the market structure is disseminated by independent consultants whose activities include market monitoring and the collection, aggregation, and sale of industry data and market research for market participants. Although such studies may constitute a source of confidential information for market participants (in particular, market shares), the European Commission generally accepts the legitimacy of such activities for the following reasons: firstly, in such cases there is no real exchange of information between competitors, as the information is collected by the consulting company independently from the market and not directly from the participants, so one of the conditions for the application of Article 101 of the TFEU (i.e., the existence of an agreement between competitors) is not met; secondly, the information used for such market research is usually publicly available, and, as noted above, if the market itself is transparent, the exchange of information does not create any risk of collusion; thirdly, the use of specialised consultants to gather marketing information saves money, which increases the efficiency of the company's business.
However, if the results of a market study prepared by an independent consultant are jointly shared by market participants (i.e., there is an agreement between competitors to provide a joint mandate to the consultant), the consultant may play a role similar to a trade association and the risks of violating the competition law would be rather high (Information Exchanges… 2010).
Thus, each entity must independently determine the market policy that it intends to follow. For this reason, businesses are not allowed to establish any direct or indirect contacts with other operators that may influence the behaviour of competitors or disclose their own current or future conduct, if the object or effect of these contacts is to create conditions of competition that do not correspond to the normal conditions of the relevant market (Sofia Competition Forum 2011; Federal Trade Commission… 2000). The advantage of assessing the exchange of information as a separate violation of competition rules is that the collection of evidence is relatively simple; the main difficulties relate to the in-depth economic evaluation of the evidence gathered, which should confirm or refute the anti-competitive object or effect of a particular exchange of information between competitors (European rules… 2010).
STRATEGIC TYPES OF INFORMATION IN TERMS OF ANTI-COMPETITIVE EFFECTS
The qualification of a violation of competition legislation in the form of illegal information exchange depends on numerous parameters of the information, such as its content, nature, level of detail, age, and how often and in what way the information is exchanged. Below, the study considers these in more detail.
Strategic (Confidential) Commercial Information
The exchange of non-public and especially confidential information is problematic from the standpoint of compliance with competition legislation. For the purposes of competitive analysis, information that is equally available to all relevant market participants and consumers, including entities that do not take part in the exchange of information, is considered public. Businesses, for example, are required to publish their annual financial statements, which include cost and revenue data. Moreover, in carrying out their business activities, companies usually disclose information to their customers and consumers about prices, quality, features, and the use of goods and services. This information is freely accessible and can be obtained without any obstacles; as a result, access to it does not require the creation of a specific exchange system. This is the reason why competitors, as a rule, do not take part in coordination mechanisms for the exchange of such information. Next, the study considers the most important types of information that are strategic in terms of the occurrence of anti-competitive effects, in particular the implementation of a probable conspiracy:
Price
In some circumstances, the exchange of price information can have pro-competitive effects: for example, the exchange of information on current prices for inputs (for example, labour, raw materials) can reduce firms' search costs, which usually benefits consumers through lower sales prices. However, in the vast majority of cases, the exchange of price information has anti-competitive effects.
Quantity
The exchange of information on future or past volumes has effects similar to the exchange of information on prices. The Communication states that sales information can be strategic, but in practice it is difficult to detect such cases. For example, although turnover information related to the level of sales of a particular product may be strategic, the same information may not be strategic for a wider range of products. The aforementioned case against tractor manufacturers in the UK was one of the first in which the European Commission banned an exchange of information after analysing its effects in restricting competition. Eight major tractor manufacturers in the UK exchanged three types of information through their trade association: 1) sales in the industry, broken down by product, time period, and territory; 2) the total sales volume and market share of each individual manufacturer, broken down by product, period, and territory; 3) sales of dealers in the distribution network of each participant, with a breakdown of imports and exports in their territories (Capobianco 2004).
Demand
The exchange of individualised information on the level of demand is understudied. On the one hand, aggregate information on demand (for example, in the form of market research) can contribute to the development of the enterprise. On the other hand, individualised information about demand can restrict competition, depending on market characteristics. For example, in the Palaces Parisiens case, the French Competition Council fined six luxury hotels for repeatedly exchanging market information on the average price per room, revenue per room, and occupancy rate (calculated by dividing the number of rooms rented by the number of rooms available for the period).
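For clarity, the occupancy metric cited in the case can be written as a simple ratio. This merely restates the definition given above; the numerical illustration uses hypothetical figures, not data from the decision:

\[
\text{occupancy rate} = \frac{\text{rooms rented}}{\text{rooms available}}
\]

For example, a hotel that rents 180 of its 200 available rooms in a given period would report an occupancy rate of 180/200 = 90%.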
Costs
Cost information is as ambiguous as demand information: on the one hand, if aggregated, it can generate efficiencies through benchmarking; on the other hand, it can help allocate cartel quotas and thus facilitate coordination, since, in the presence of asymmetric costs, cartel members must redistribute production in favour of the most profitable member.
Research and Development (R&D), Technology
The exchange of R&D information may be caught by Article 101(1) of the TFEU, as it may substantially affect the innovative capacity of an enterprise, which constitutes one of the parameters of competition.
Investment Plans
By disclosing its investment plans, the company can thereby inform competitors of its intentions: for example, the announcement that the company plans to acquire a new facility to better serve customers in a particular industry; a press release stating that the company plans to invest in a recently acquired facility.
Individual Production, Production Capacity
The exchange of information on the volumes of products produced and sold eliminates uncertainty about the behaviour of competitors and thus facilitates monitoring of the market situation. As for individual notification of projected production volumes, by disclosing its production plans a company essentially discloses its intentions to competitors. The opposing view is that production plans are not binding; nevertheless, enterprises will adjust their output after one of their competitors announces a production cut, and even in the absence of such an announcement, information about production plans can contribute to collusion.
Orders and Deliveries
Supply information can be strategic because it gives a clear idea of a company's level of sales. Thus, in the Steel Beams case (Federal Trade Commission… 2014; InfoCuria 1993; Case 38907 Steel Beams 2006), the European Commission assessed a weekly exchange of information on orders and deliveries by individual companies in each member state. In its assessment, the European Commission considered the fact that the European market for these products was oligopolistic and the products were homogeneous.
General Information about the Business
The consequences of information exchange in this area should be assessed on a case-by-case basis, depending on the features of the exchange and the market. Thus, in the case of colour semiconductor manufacturers, the European Commission condemned the regular exchange of information on research and development, production, sales promotion, raw material supply, commercial management, data processing, and overall business strategy. The Commission concluded that the information exchanges violated the requirements of Article 101(1) of the TFEU, as the undertakings concerned formed an oligopoly in respect of a number of products covered by the agreements (Bovet 2011).
If one of the main criteria in evaluating information exchange agreements is the elimination of uncertainty about a competitor's behaviour, then in assessing the importance of the information's content the main question should be whether the analysed information makes it possible to predict a competitor's commercial behaviour and adapt to it, that is, to limit the independence of the enterprise in its decision-making and to limit competition in the relevant market. Notably, anti-competitive information exchanges are severely sanctioned under competition legislation not only in EU member states; in China and Brazil, for example, similar illegal actions are also subject to criminal prosecution. Next, this study considers examples of significant violations of EU competition legislation in which high fines were imposed on: flat glass manufacturers; suppliers of galvanised steel tanks; modelling agencies and associations; a group of mobile companies; importers of bananas; a group of tour operators; vegetable oil producers and an industry association; companies offering television services; car companies; waste management companies and associations; an industry association and dairy producers; and oil companies.
Case against T-Mobile Netherlands BV, KPN Mobile NV, Orange Nederland NV, Vodafone Libertel NV (2009)
The case against T-Mobile Netherlands BV (T-Mobile), KPN Mobile NV (KPN), Orange Nederland NV (Orange), and Vodafone Libertel NV (Vodafone) is important for understanding the concept of "concerted practice": establishing a causal link between concerted actions and the market behaviour of enterprises; evaluating evidence in accordance with the rules of national competition legislation; and determining whether a single meeting is sufficient to prove a violation or whether regular concerted action over a long period is required. The decision of the European Court is essentially a point of reference in the legal analysis of exchanges of information between competitors. Finally, the decision contains interesting but, in the opinion of European experts, controversial wording for the evaluation of information exchange.
The exchange of information between competitors has recently become one of the priorities for European competition authorities. Penalties imposed on banana importers, high-end cosmetics suppliers, school managers, hotels, and other companies have contributed to an increasingly conservative approach to the interpretation and application of competition legislation in this area. In many cases, fines are intended to deter companies and trade associations from benchmarking, gathering market information, publishing statistical performance data, and other activities that could lead to anti-competitive effects.
The Case Against Oil Companies and Industry Associations (Spain)
Spain's National Competition Committee imposed fines totalling over 32 million euros on five major oil companies. In the summer of 2013, after many complaints and reports pointing to insufficient competition in the gasoline distribution sector in Spain and higher retail prices than in neighbouring countries, the investigative department conducted various inspections at the offices of the five major oil companies and their industry association. Using the information obtained during these inspections and subsequent research, the competition authority prepared charges against the companies for coordinating information related to prices, customers, and commercial conditions, and for the exchange of official commercial information in the automotive fuel market (decision of the National Competition Committee of Spain dated February 20, 2015) (Annual report on competition policy… 2015). The Spanish National Competition Committee imposed a fine of 20 million euros on REPSOL, 10 million euros on CEPSA, 1.3 million euros on DISA, 800,000 euros on GALP, and 300,000 euros on MEROIL.
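As a simple arithmetic check on the figures reported above, the individual fines sum to

\[
20 + 10 + 1.3 + 0.8 + 0.3 = 32.4 \text{ million euros},
\]

which is consistent with the headline total of over 32 million euros.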
FEATURES OF INFORMATION EXCHANGE AND SPECIFIC ASPECTS OF THE PROCESS DUE TO THE CIRCUMSTANCES
The exchange of information is more likely to have a restrictive effect on competition if the undertakings involved in the exchange have sufficiently large market shares. The interpretation of a "sufficiently large" market share depends on each particular case; at present, there is no clear threshold to ensure legal certainty in this matter. Some European experts propose introducing a safe harbour for the exchange of genuinely aggregated information, that is, information that would not allow individualised company-level data to be identified, between competitors who together "do not cover" more than 60% of the market. The comprehensive factual assessment provided for by the European Commission's rules would then be carried out only if this threshold is exceeded (European rules… 2010; Sofia Competition Forum 2011). The properties of the information determine the features of the information exchange; therefore, the two are closely related.
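The safe harbour proposed by these experts can be expressed formally; the notation below is purely illustrative and is not drawn from any Commission text. If $s_i$ denotes the market share of the $i$-th participant in the exchange of genuinely aggregated data, the exchange would fall within the safe harbour when

\[
\sum_{i=1}^{n} s_i \leq 60\%,
\]

and the comprehensive factual assessment would be triggered only when the participants' combined market share exceeds this threshold.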
Frequency and Forms of Information Exchange
The frequency of information exchange is crucial for its assessment as a form of prohibited behaviour of companies. As a rule, the more frequent the exchange of information between competitors, the more favourable are the conditions for coordinating the market behaviour of enterprises. Frequent exchange of information facilitates the coordinated market behaviour of enterprises and reduces or even eliminates their willingness to compete with each other.
Public/non-Public Exchange of Information
If the exchanged information is available on the same terms to all buyers and competitors, and not just to the companies involved in the exchange, the probability that the exchange will lead to collusion in the market is reduced. As already mentioned, the exchange of public information is also unlikely to be a violation of competition rules (Behar-Touchais 2015; OECD 2014). In carrying out business activities, companies usually disclose and disseminate public information about their prices, characteristics, quality, use of products and services, etc. (Sofia Competition Forum 2011). This, however, is only true for genuinely public information. If the costs of collecting the data (for example, by sampling customers) are so high as to prevent other competitors and consumers from using them, it is possible that the market transparency achieved through the exchange will benefit only some companies, creating a risk that this transparency will lead to anti-competitive practices.
Direct and Indirect Exchange of Information
A very important component of assessing the exchange of information between competitors is the analysis of its mechanism: whether it is carried out as a direct exchange between enterprises, or indirectly within an association of enterprises or another structure that acts on their behalf or protects their economic interests. In practice, in most cases, information exchange takes place with the involvement of associations, as a result of which their activities are also subject to analysis in order to establish forms of prohibited conduct under competition legislation (New York Amends Credit… 2011).
As a general rule, the exchange of information between competitors should not be considered a breach of competition legislation if the association or other entity acting on their behalf does not function as: (1) a forum for meetings of cartel members; (2) organisation for the issuance of anti-competitive recommendations or forecasts for the market behaviour of its members; (3) a clearinghouse that reduces or eliminates the level of uncertainty about the functioning of competition in the market (Sofia Competition Forum 2011; New York Amends Credit… 2011).
Unilateral and Bilateral Exchange of Information
The exchange of information can be unilateral or bilateral, depending on whether companies provide their commercial information to competitors unilaterally or take part in a mutual exchange of such information. A situation in which only one company discloses confidential commercial information to its competitors is likely to be considered a violation of competition legislation (Sofia Competition Forum 2011; Østerud and Steen 2020). In addition, as already mentioned, if a company receives strategic data from a competitor (at a meeting, by mail, or in electronic form), it is presumed to have adapted its behaviour in the market accordingly. As shown in the case analysis above, this presumption can be rebutted if the company provides evidence that it clearly stated to its competitor that it did not want to receive data on its business activities (Posada and Frutos 2014).
In determining the impact of an agreement on competition, the following factors must be taken into account: the real conditions of the conclusion and implementation of the agreement, especially the economic context of the parties' behaviour; the type of goods or services; and the actual structure of the relevant market (Commission's Guidelines… 2001). Thus, the exchange of information poses two problems in terms of protecting economic competition: it can facilitate collusion, and it can restrict competition.
Collusion Facilitation
It is generally accepted that three elements must be present in order to sustain collusion: the ability to reach an agreement, the ability to monitor compliance with the agreement, and the ability to correct (punish) deviations from the agreement.
Restriction of Competition
The exchange of information can restrict competition at two levels: with respect to competitors who are not participants in the exchange, and between the participants themselves. Actual competitors who do not take part in the exchange are placed at a competitive disadvantage because they do not have access to the information exchanged, while potential competitors may face high barriers to market entry regardless of whether they decide to join the exchange. If they do not join, they will not be able to compete on equal terms with exchange participants, who benefit from more accurate and detailed data. If they do join, they will have to disclose their confidential information, enabling other competitors to take immediate action against them. Thus, as already mentioned, the exchange of information can have anti-competitive effects by increasing the external stability of the cartel.
An anti-competitive restriction may arise in the same market in which the information is exchanged, with respect to non-participating competitors; it may also arise with respect to third parties in a related market. For example, by exchanging information in an upstream market, vertically integrated companies may be able to increase the price of a key component for a downstream market in order to raise their competitors' costs there (Behar-Touchais 2015). There are also aspects of information exchange that restrict competition to some extent but are specific in nature due to particular circumstances. These include:
Exchange of Information in Procurement Markets
Procurement markets are subject to legal transparency requirements designed to avoid abuse by the public sector. However, full transparency of the procurement process and its results can facilitate collusion. In particular, it is easier to engage in collusion in open tenders, which facilitate communication between bidders, than in closed tenders, where bidders make a "best and final" offer.
Exchange of Information under the REACH Regulation
Regulation (EC) No 1907/2006 of the European Parliament and of the Council of 18 December 2006 concerning the Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH), establishing a European Chemicals Agency, imposes an obligation to share and jointly submit technical data, in particular data related to the inherent properties of substances. REACH is not a competition-free zone: the Regulation explicitly prohibits the exchange of information on market behaviour, in particular on production capacity, production or sales volumes, import volumes, or market shares (Article 25(2)). Although REACH only makes provision for the exchange of technical data, in some cases even such data may lead to restrictions of competition.
Information Exchange in the Context of Economic Concentration
Prior to or during discussions on joint venture opportunities, acquisitions, or mergers, the parties need to exchange information, especially during the due diligence procedure, in order to decide whether they wish to proceed with the proposed transaction. Therefore, the application of Article 101(1) of the TFEU should be less stringent here, provided that certain precautionary measures are taken. The reasonableness of the information exchange depends on several factors, such as the competitive sensitivity of the transaction (i.e., whether the parties are direct competitors); the competitive sensitivity of the information; and proximity to closing (the closer the transaction is to completion, the more necessary the exchange of information).
Information Exchange in the Context of B2B ("Business-to-Business") Platforms
For several years, starting in the 2000s, the European Commission also considered the exchange of information in the context of new forms of online trade. In particular, the European Commission carefully examined whether online trading systems can allow a participant to access confidential information about its competitors or their customers. The speed with which information spreads over the Internet and its global reach allow B2B and other online services to host virtual meetings in which competitive information can be exchanged (Sattler 2012). B2B marketplaces allow industrial buyers and sellers to conduct transactions online. On the one hand, they increase efficiency by integrating markets, reducing information retrieval costs, and improving inventory management, which ultimately leads to lower consumer prices. On the other hand, they can be an ideal place for collusion, by increasing transparency and facilitating the exchange of information (Rivas and Van De Walle De Ghelcke 2012).
Potential Pro-Competitive Result of Information Exchange
The existence of an anti-competitive object or effect of the exchange of information between competitors is an element of the infringement under Article 101(1) of the TFEU. In cases where the exchange of information is not part of a cartel between enterprises, the competitive assessment should include an assessment of its potential pro- and anti-competitive effects. As already mentioned, the exchange of information can in certain cases serve as a tool for maintaining a competitive market structure (Bovet 2011), in particular through investment decisions and organisational learning, product positioning, consumer benefits, mitigation of the "winner's curse", and market integration.
In many EU member states, competition authorities allow forms of "communication" between competitors where clear efficiency gains are established. In these cases, the compliance assessment under competition law balances the potential restrictive effects of interaction between competitors against the potential benefits to consumers. Another important requirement that an information exchange agreement must meet is the criterion of necessity: the exchange of information must be indispensable to achieving the efficiencies. In the Communication, the European Commission clarifies that the parties will need to demonstrate that the subject matter, aggregation, age, confidentiality, and periodicity of the data, as well as the coverage of the data exchanged, carry minimal risks and are indispensable for achieving the claimed efficiencies.
To qualify for exemption, the information exchange agreement must also satisfy two additional requirements: the pass-on of efficiency gains to consumers and the impossibility of eliminating competition in respect of a substantial part of the goods concerned (Paragraphs 103-104 of the Communication). The pass-on to consumers must be sufficient to outweigh the restrictive effects on competition caused by the information exchange: for example, when the market power of the parties to the information exchange agreement is low, it is more probable that the efficiency gains will be passed on to consumers to an extent that outweighs the restrictive effects on competition; conversely, the greater the market power, the less likely consumers are to benefit (Villani 2016; Wait 2011).
Responsibility for Anti-Competitive Exchange of Information between Competitors
If the assessment of the above exemption conditions indicates that the exchange of specific information between competitors is not allowed, it will be treated as a violation of Article 101 of the TFEU and/or the relevant article of national competition law (for example, as a violation of Article 6 of the Law of Ukraine "On Protection of Economic Competition"). The enterprises and associations of enterprises involved in the exchange of information are liable for the violation. In such cases, the competition authority must impose sanctions on the enterprises or associations of enterprises in an amount not exceeding 10% of the entity's total revenue from sales for the previous financial year (Ukraine provides for similar liability for anti-competitive concerted actions under the Law "On Protection of Economic Competition") (Rivas and Van De Walle De Ghelcke 2012).
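A minimal worked example of the 10% cap described above, using purely hypothetical figures (the revenue amount and currency are assumptions for illustration, not data from any case):

\[
F_{\max} = 0.10 \times R, \qquad R = 500 \text{ million UAH} \Rightarrow F_{\max} = 50 \text{ million UAH},
\]

where $R$ is the entity's total revenue from sales for the previous financial year and $F_{\max}$ is the maximum permissible fine.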
CONCLUSIONS
An analysis of European practice leads to the conclusion that anti-competitive information exchanges are most probable in markets that are transparent, highly concentrated (especially oligopolistic), simple, and stable, where new players rarely appear, including because of significant entry barriers. The enterprises involved in the exchange of information are in most cases homogeneous in terms of value, product range, and market share. Markets with such characteristics create favourable conditions for enterprises to conclude tacit agreements, monitor their implementation successfully, and apply sanctions for deviation from those agreements. Under such conditions, the competitive outcome of an information exchange depends both on the initial characteristics of the market where the exchange takes place and on the possible changes in these characteristics that may arise as a result of the exchange. Therefore, it is necessary to analyse not only the initial characteristics of the market in which the exchange of information takes place but also a forecast of the market situation without such an exchange.
The need for regulation such as the Guidelines is justified not only by the possibility of substantial anti-competitive effects (especially if the exchanges took place within the creation or maintenance of a cartel) but also by the importance of market information availability and the positive effects of information exchanges. This document should not duplicate the provisions of the Law of Ukraine "On Protection of Economic Competition"; rather, it should be recommendatory in nature, establishing legal principles that ensure certainty in cases of information exchange between competitors and encourage voluntary compliance with fair market rules. This is important for protecting the legal position of the competition authority when the relevant cases are considered in the courts. Therefore, the adoption of the Guidelines (recommendatory clarifications) in Ukraine is necessary for all target groups: the Antimonopoly Committee of Ukraine; companies, associations, and chambers of commerce; legal scholars, lawyers, and judges; and society in general. Given Ukraine's aspirations for European integration, it is important to consider the approaches adopted in the European Union when developing such a document.
Taking into account the experience of European countries and analysing the relevant documents, the following structure of recommendatory clarifications for application by the bodies of the Antimonopoly Committee of Ukraine is proposed: 1) a general part (purpose and scope); 2) analysis of the characteristics relating to the circumstances and mechanisms of information exchange, as well as analysis of the parameters of the exchanged information; 3) recommendations on the admissibility of information exchanges, with an emphasis on cases where there may be risks of distortion or restriction of competition.
"year": 2020,
"sha1": "f555f96d7dacd6690b9816963b9ac785d4020aea",
"oa_license": "CCBY",
"oa_url": "https://lifescienceglobal.com/pms/index.php/ijcs/article/download/7094/3713",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "54b1a24764bc3399fdc5cb0fbbc8c90077cb8783",
"s2fieldsofstudy": [
"Law",
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
Correlation between Air Quality and Wastewater Pollution
Air pollution is a universal concern that contributes to global warming and, more importantly, harms human body systems. This chapter focuses on the importance of air quality and describes the negative effects of emissions released to the atmosphere from both municipal and industrial wastewaters. It also outlines improvements in wastewater treatment plants intended to mitigate the impact of these emissions on the environment and human health. Urbanization and the concentration of industry in urban areas worsen air pollution by releasing pollutants and contaminants into the environment. The pollutants emitted from wastewaters include volatile organic compounds, greenhouse gases, and inorganic pollutants (heavy metals), which drive numerous atmospheric reactions whose products harm the whole environment and living organisms, including humans. Moreover, contaminants released into the air from municipal wastewater influents are among the main sources of the most threatening infections in humans and other animals. In conclusion, because urbanization and industrialization, the sources of wastewater pollution, continue to grow, wastewater treatment technology must prioritize the prevention of emissions to air before considering cost and effluent quality.
Introduction
The world recognizes air pollution as a detrimental issue that significantly affects public health, and its effects have been intensively studied and documented around the world [1,2]. Sustainable development in any society provides a good living standard for individuals; it also encompasses social progress and equality, environmental protection, conservation of natural resources, and stable economic growth [3]. Industrial and transportation emissions, and their regional and global burden on health, climate, and vegetation, have been well studied over the last few decades [4].
Health effects due to air pollution are a major concern for the World Health Organization. Air pollution not only causes toxicological effects on human health; it has also significantly degraded the environment in recent years [5,6]. Nowadays, wastewater treatment plants (WWTPs) are recognized as a significant threat to air quality because of the gases, chemical pollutants, and biological contaminants released into the environment directly from sewage wastewaters [7].
Moreover, municipal wastewater volumes have drastically increased; because of their household waste content and their drainage into trunk channels close to urban areas, they harm human health even more than they damage the environment.
Water pollution is a problem for humanity and aquatic life, and its increase accelerates climatic change [8]. For instance, various human activities, as well as the release of greenhouse gases by industries, greatly contribute to global warming, rising planetary temperatures, and degraded atmospheric air quality.
Environmental sustainability has developed across societies as a way to improve living standards for individuals. It aims to address the challenges facing the environment, the economy, and society without compromising humans or the environment in the future. Sustainability is also important for social progress and equality, environmental safety, preservation of natural resources, and economic growth [3].
The main contributors to air pollution act together to increase the risks to air quality. For instance, with population growth comes growing demand for gas, oil, and other energy sources, which has increased the number of refineries and petroleum wastewater treatment plants [9]. The pollutants are mostly chemicals present in items used by individuals: preservative compounds, dyes, hydrocarbons, proteins as nutrients, and so on. In recent decades, demand for synthetic chemical products has increased, and products are easily delivered to homes thanks to advertising and changes in lifestyle, such as internet availability and easy contact within society. In addition, growing urbanization around the world increases waste per individual; although this varies among countries, it remains one of the main drivers of ever-increasing liquid and solid wastes (municipal wastewater).
Fortunately, wastewater treatment plants (WWTPs) are familiar across societies, and purification is properly applicable to remediating municipal and industrial wastes; this aspect is well understood. However, treatment systems are implemented differently in different societies, even though all of them target the same goal of improving air quality and human health. Thus, all organizations, including the WHO and governments, work hard to limit the impacts of wastewater emissions and to improve air quality.
This chapter aims to provide a better understanding of the adverse impacts of wastewater effluents on air quality through emission processes that directly and indirectly affect human health via respiratory and skin diseases. It also aims to show how recent technologies can keep air emissions within acceptable limits; however, the risks associated with exposure to emissions from WWTPs remain uncertain and require more research, stronger regulatory frameworks, and safer design considerations.
Wastewater pollutants emitted to air
Generally, high concentrations of pollutants in the atmosphere result from unsustainable regional policy and a lack of affordable green technology transfer [10]. Emissions of pollutants and contaminants are diverse, and the variety of chemical pollutants allows emissions to be classified according to their etiological agents within different types of wastewater. The design of constructed sewage channels also affects the emission rate into the atmosphere: open wastewaters exhaust emissions more efficiently than closed or underground channels, because abiotic effects warm the water and stimulate volatilization and release. The following air pollutants originate from wastewater effluents and are easily released:
Hydrocarbons
Hydrocarbon pollutants are among the most serious emissions affecting all life forms [11]. Aliphatic and aromatic hydrocarbons are released into the air from industrial outlets, rather than from solid wastes, and are directly emitted into the air, particularly from petroleum industries [12]. In developed countries, industrial wastes now undergo several purification steps, such as conversion, separation, and treatment; yet hydrocarbon emissions persist during processing and adversely affect air quality. Beyond these process emissions, the transportation of refined or purified products in tanks and pipelines can also leak into water bodies, and additional hydrocarbon emissions occur through wastewater treatment plants [12]. According to Aljuboury et al., the effluents and outlets from petroleum industries contain pollutant products that are easily emitted [13]. Auxiliary emissions therefore arise when volatile organic compounds are stripped from contaminated wastewater in aeration basins, drains, and ponds, all of which are considered indirect emissions [12]. Hydrocarbon pollution is also sometimes caused by accidents during overseas transport, when crude oil and gases leak and spread over the water surface. Ultimately, these pollutants directly and indirectly reach humans, animals, and plants and adversely affect them [12].
Volatile compounds
Volatile compounds are chemical substances with low boiling points that are readily released into the air on contact. The concentration and identity of volatile compounds in wastewater, and their emission to air, vary according to the wastewater source, the transport system, the characteristics of the treatment plant, and the weather (physical) conditions. The aeration process, and the mechanism of oxygen diffusion in the treatment plant, determine the transfer characteristics between air and wastewater; organic substances in wastewater are either adsorbed, biodegraded, or volatilized [14].
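To make the air-water transfer idea concrete, the sketch below applies the standard two-film mass-transfer model to estimate a volatilization flux from an aeration basin. This is an illustrative calculation only: the model is general environmental-engineering theory, not material from reference [14], and every parameter value and variable name is a hypothetical placeholder.

```r
# Illustrative two-film estimate of volatilization flux from an aeration basin.
# All parameter values are hypothetical placeholders, not data from this chapter.
volatilization_flux <- function(C_liq, C_gas, H_cc, K_L) {
  # C_liq: bulk liquid concentration of the compound (g/m^3)
  # C_gas: bulk gas-phase concentration above the basin (g/m^3)
  # H_cc : dimensionless Henry's law constant (gas/liquid ratio at equilibrium)
  # K_L  : overall liquid-phase mass-transfer coefficient (m/h)
  K_L * (C_liq - C_gas / H_cc)  # flux in g/(m^2 * h)
}

# Example: a volatile compound at 0.5 g/m^3 in the basin, negligible in the air
flux <- volatilization_flux(C_liq = 0.5, C_gas = 0, H_cc = 0.2, K_L = 0.05)
flux * 24 * 1000  # g/day emitted from a hypothetical 1000 m^2 basin surface
```

The sign of the flux shows the direction of transfer: when the liquid concentration exceeds the equilibrium value implied by the gas phase, the compound volatilizes to air.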
The emission of volatile organics from municipal wastewater plants is a major problem for treatment systems. Various pollutants (solvents and chemicals) originating from municipal wastewater constitute a major source of VOCs. They occur in gaseous form, produce bad odors and toxicity, and harm the natural environment; air pollution results from the presence of VOCs [15]. In addition, VOCs are released during the composting of various organic wastes [16]. According to He and colleagues, volatile organic compounds are also released during the bio-drying of municipal solid waste; biodegradation of wastes produces these compounds in composting sites, and a large quantity of VOCs is released from the organic matrix during biological decomposition [17]. Because wastewater streams and landfills are closely connected, and leakage from landfill solid wastes into water streams is frequent, it is important to discuss the contribution of landfills to VOC emissions. Gases are produced in landfills when household chemical products vaporize on site [18]. In many countries, landfills are located close to municipal wastewaters, so solid wastes from these sites inevitably drain into them. Abandoned landfills have contained volatile organic compounds above permissible limits, and their release to wastewater, and from there directly to the atmosphere, is estimated to exceed permissible levels. Benzene, toluene, ethylbenzene, and xylene were the major volatile organic compounds detected in the air [19].
Petroleum, a major environmental pollutant, contains high concentrations of VOCs. Controlling the release of volatile organic compounds into the air is a major challenge for the petroleum and oil-refining industries [12]. Malakar and Saha (2015) concluded that high concentrations of VOCs derive from the effluent streams of petroleum industries and refineries [20]. Moreover, the fossil fuels used in (desalination) treatment plants are estimated to release about 16,000 tons of VOCs, so fuel use can be regarded as a constituent of the desalination process [21]. Beyond anthropogenic or municipal wastes, VOC emissions from industrial wastes have also been characterized, and the contributions of different industrial sectors are presented in Figure 1.
Greenhouse gases
Municipal wastewater treatment plants are among the minor sources of the greenhouse gases distributed in the atmosphere. Three major greenhouse gases (methane, carbon dioxide, and nitrous oxide) are readily and frequently emitted; they also arise as indirect emissions from energy generation [23,24]. Aerobic biological treatment plants produce large amounts of greenhouse gases because they require substantial energy for their various processes. The quantities of the resulting gases depend on the wastewater influent, off-site treatments, and the treatment processes within WWTPs [25]. According to the United States Environmental Protection Agency (USEPA), in 2018 emissions in the United States were approximately 81%, 10%, and 7% for CO2, methane, and N2O, respectively, with fluorinated gases making up the remaining ~3% [26] (Figure 2).
The wastewater effluents of refineries and petrochemical plants are clearly problematic for the environment and human health. They are also known to cause large-scale emission of greenhouse gases into the atmosphere. According to Li et al. (2016), wastewater treatment plants of refineries and petroleum industries account for around 0.40% of total greenhouse gas emissions in the United States [27]. Logistics-related emissions, which rely largely on fossil fuel consumption, are a main source of CO2 and other GHG emissions and a significant factor affecting environmental sustainability [28,29].
Airborne microbial contaminants
Another critical issue that negatively affects air quality is the presence of microorganisms in the atmosphere, termed microbial air pollution. Bio-aerosol emission from wastewater to the environment results from pollution of main sewage streams by human excreta (urine and feces), which contain many microorganisms, especially gram-negative bacteria [30]. The bacteria most commonly released from municipal wastewaters include mesophilic pathogens and psychrophiles, among them S. aureus, coliform bacteria, and Pseudomonas fluorescens [31], as well as Salmonella sp., Shigella sp., Pseudomonas aeruginosa, Clostridium perfringens, Bacillus anthracis, Listeria monocytogenes, Vibrio cholerae, Mycobacterium tuberculosis, Streptococcus faecalis, and Proteus vulgaris [32]. Notably, although many pathogenic microorganisms die or are removed, some can survive in sewage sludge for months [33]. Wastewater treatment plants also release aerosols into the atmosphere, causing health problems for plant workers and for people living in the surrounding areas. Droplets from WWTPs have been documented to carry ten to a thousand times more bacteria into the atmosphere than the water pollution sources themselves. The release and spread of microorganisms depend mainly on temperature, wind velocity, humidity, smog, and other factors; importantly, high humidity promotes microbial proliferation because it reduces the ability of solar radiation to eradicate microorganisms [34]. The presence or absence of microorganisms in wastewater is also related to the quality and climate of the site, and even in purified wastewaters it depends on the purification method employed [35]. During purification, microorganisms can reach the atmosphere via aerosols, particularly when the wastewater undergoes aeration using air diffusers and biological bioreactor chambers [36].
The basic nutrients in wastewater (N, C, and P) directly support microbial life, so their availability in wastewaters and other sites increases microbial activity. Abundant microbial communities are commonly detected worldwide, particularly in highly polluted zones. Microorganisms either contaminate the atmosphere themselves or participate in the degradation of chemical compounds, part of whose products are then emitted into the air as gases (volatile compounds).
Nitrogen oxides and sulfur oxides
Various processes in wastewater treatment plants lead to the production of sulfur oxides and nitrogen oxides. Nitrous oxide emission from wastewaters is a problematic contaminant that needs to be addressed, and a dramatic increase in N2O has been observed in recent years; although N2O is a threat, the emitted amount is smaller than that of other chemical pollutants [37]. In particular, the design of sewers and their operating conditions strongly facilitate N2O emission into the surrounding environment. Domestic wastewater, which originates from household activities, also contains high concentrations of various nitrogen forms in addition to phosphorus and other chemical pollutants. Plants that invest more effort in nitrogen removal emit less nitrous oxide into the air [37]. Nitrous oxide released into the atmosphere interacts with VOCs to form products such as tropospheric ozone [38]. The flux of atmospheric GHGs increases directly with nutrient input, and it differs from one type of wastewater to another [23]. For instance, N2O emissions are related to nitrification and denitrification processes driven by particular microorganisms: during denitrification of NO3 and NO2 through the metabolism of Nitrobacter, more N2O is emitted into the air (Figure 3).
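For readers unfamiliar with the microbial nitrogen pathway sketched in Figure 3, the standard textbook sequence is summarized below. These steps are general microbiology, not results of this chapter; N2O escapes mainly as an intermediate of incomplete denitrification and, to a lesser extent, during nitrification.

\[ \mathrm{NH_4^+ \xrightarrow{\text{nitrification}} NO_2^- \rightarrow NO_3^-} \qquad\qquad \mathrm{NO_3^- \xrightarrow{\text{denitrification}} NO_2^- \rightarrow NO \rightarrow N_2O \rightarrow N_2} \]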
The combustion of fuel to run these plants also leads to large-scale production of sulfur and nitrogen oxides [10]. Sulfur dioxide (SO2) emission remains a detrimental issue in many developing countries, especially from coal-fired power plants and the coal industry. It is a pollutant emitted directly from the source into the air, unlike tropospheric ozone, which is produced indirectly from combinations of chemical pollutants in the atmosphere. The burning of coal, wood, dung, and crop residues for domestic energy at home also contributes to ambient SO2 concentrations and adversely affects the children and adults exposed to high levels of this pollutant [39]. Sulfur oxides can form sulfuric and sulfurous acids in the presence of water vapor, leading to acid rain [40]; the precipitated acid rain in turn disturbs fresh water and vegetation on earth. Nitrogen oxides and sulfur dioxide are considered the biggest pollutant outputs of wastewater desalination plants [41], which are estimated to produce about 60,000 and 200,000 tons of NOx and SOx per year, respectively [21].
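The acid-forming chemistry mentioned above follows the standard textbook reactions; these equations are general chemistry rather than material from the cited sources.

\[ \mathrm{SO_2 + H_2O \rightarrow H_2SO_3} \qquad \mathrm{2\,SO_2 + O_2 \rightarrow 2\,SO_3} \qquad \mathrm{SO_3 + H_2O \rightarrow H_2SO_4} \]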
Heavy metals
Heavy metals top the list of inorganic pollutants, with a wide range of negative effects on organisms, plants, and humans [42]. They are released into the environment via different routes, including industry, domestic sources, mining, and agriculture [43]. Heavy metals are not degradable and accumulate in living systems [44]; thus, air pollution by heavy metals is significant even at low concentrations, and long-term accumulation threatens human health [45].
The effects on human health and the environment of exposure to the three most common heavy metal pollutants (mercury, lead, and cadmium) include the following. Mercury is one of the most toxic metals; it can harm several human body systems (the brain, heart, kidneys, and lungs) and lowers the immune response to foreign agents at all ages. Moreover, in children it affects the central nervous system and reduces the ability to think and learn [46]. The mercury cycle is particularly important for understanding how mercury from different sources within wastewater can reach the atmosphere, and specifically how the metal can be methylated. Many mercury compounds are widely used in industrial processes, and demand for the metal is high worldwide. Inorganic mercury concentrated in the bottom muds of water bodies is methylated by anaerobic bacteria of the genus Desulfovibrio (Figure 4).
Interactions among the atmosphere, the upper aerobic water layer, and the anaerobic sediment strongly affect the Hg cycle. Some anaerobic microorganisms at the bottom, such as Desulfovibrio, can convert free mercury into methylated forms that can be transported into the water and the atmosphere; methylated mercury is then subject to biomagnification. Volatile elemental mercury (Hg0) is readily distributed into the upper oxygenated water body and even the atmosphere. Ionic mercury can react with sulfide in the anaerobic sediment to form the poorly soluble HgS.
Methylated mercury is volatile and lipid-soluble, and mercury concentrations increase along the food chain (through biomagnification) as well as in the atmosphere. For example, direct Hg emissions into the atmosphere are estimated at 2,500 tons per year, accounting for about 31% of overall emissions [47].
Lead, a non-essential heavy metal, causes delayed neurodevelopment in children even at trace exposure levels. Other effects include cardiovascular, renal, gastrointestinal, hematological, and reproductive harm. Children six years old and under are most at risk, and no safe threshold for Pb is currently known. Lead frequently reaches human, animal, and plant bodies and accumulates as a toxic substance [48].
Cadmium is a toxic metal whose presence causes many problems, including pulmonary irritation, kidney disease, cancer, bone weakness, and prostate disease. Among Cd sources, food and cigarette smoke are the most common routes of exposure for the general population; about 90% of Cd exposure in non-smokers comes from dietary sources. Cadmium is toxic to plants, animals, and microorganisms. It can reduce plant growth rates and cause serious human diseases as it accumulates, mainly in the kidney and liver of vertebrates and in aquatic invertebrates and algae. Severe toxic effects on fish, birds, and other animals may include death or fetal malformations [49].
The occurrence of Pb, Hg, and Cd was studied by Du and his team in Heilongjiang, China; samples were collected monthly from 27 WWTPs during 2015. The results underscored the importance of removing heavy metals from wastewater, since those released to the environment ultimately harm human health [7].
Effects of polluted air on human health
According to an investigation published by the World Health Organization, an estimated 7 million people died as a result of exposure to air pollution in 2012 [50]. This represents one in eight global deaths, confirming air pollution as the world's largest single health risk; the mortality rate is much higher than that caused by malaria and AIDS. Air pollution is not the concern of a single nation or country; it increases daily with urbanization and industrialization, and because air is a natural resource without geopolitical boundaries, universal cooperation is the only way to overcome this critical issue [4]. Individuals are affected by emissions directly and indirectly, through inhalation of pollutants and through climate change (for instance, when solar radiation is trapped by gaseous and suspended particulate matter in atmospheric layers) [51]. Many people exposed to these emissions and microorganisms show signs of respiratory and digestive problems [34]. Bio-aerosols contain various microorganisms that can cause disorders of the respiratory system, the digestive system, and the skin, and they also degrade the quality of the surrounding air. Moreover, domestic sewage containing animal and human excreta has been found to carry the highest loads of microorganisms; these are usually treated and released by municipal wastewater plants, allowing various microorganisms to enter the atmosphere [52]. The spread of microorganisms in the atmosphere depends on the weather and the season [53]. Chronic obstructive pulmonary disease, acute lower respiratory illness, ischemic heart disease, and lung cancer account for most air pollution-related cases and deaths; inhalation of fine airborne particles (particulate matter) and ozone has been identified as the origin of these diseases [50]. Studies agree that green technology improvement correlates with environmental sustainability; for instance, Khan and colleagues (2020) advanced two hypotheses: "(1) greater environmental performance reduces health expenditure" and "(2) a country's environmental performance has a positive correlation with economic growth" [28].
How can air pollution from wastewater emissions be kept in check?
Earlier wastewater treatment plants were designed only to produce large volumes of purified effluent with cost-effective protocols, with no consideration of the emissions resulting from biological reactions. Today, global efforts aim to increase environmental sustainability by addressing the greenhouse gases (GHGs) and organic and inorganic compounds emitted directly into the atmosphere that disrupt air quality.
To minimize GHG emissions from wastewater treatment plants, the following recommendations can improve practical systems. To minimize N2O emissions, biological wastewater treatment plants should be operated at high solids retention times (SRT) to maintain low ammonia and nitrite concentrations. Moreover, large bioreactors are recommended, both to buffer high volume loading and to decrease the risk of transient oxygen depletion. N2O emissions can also be reduced if nitrous oxide stripping by aeration is limited, since microorganisms then have more time to consume it [19]. Alternatively, anammox processes can be used to remove ammonia: based on the metabolism of anammox bacteria, N2O is not directly produced [54], so anammox is a promising substitute for the conventional nitrification-denitrification process for reducing N2O emission in WWTPs.
Methane (CH4) emissions can be reduced to a minimum by properly covering sludge and sludge-disposal tanks to prevent gas leakage; the emitted CH4 can be captured by hoods and burned together with excess biogas in a torch [51]. Methane is usually produced within the WWTP itself and from its sources. Approximately 80% of methane is oxidized in the activated sludge tanks, which could be exploited to further reduce methane emissions from WWTPs to the atmosphere [55].
SRT control is a promising intervention for the biological reactor, which otherwise drives GHG emissions to the atmosphere. In an activated sludge system, especially at high SRT values, endogenous respiration of the biomass improves, which stimulates the oxidation of COD to CO2 and reduces sludge production. Lower sludge production implies a decline in CH4, and any reduction in CO2 release is correlated with reduced combustion [56].
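As a back-of-the-envelope illustration of the ~80% in-tank oxidation figure cited above, the snippet below computes the methane fraction that would still reach the atmosphere. The daily production value is invented for the example; only the oxidation fraction comes from the text.

```r
# Hypothetical illustration of the ~80% methane oxidation in activated sludge
# tanks cited above [55]; the daily production figure is a made-up example value.
ch4_produced <- 100   # kg CH4 per day formed within the plant (hypothetical)
f_oxidized   <- 0.80  # fraction oxidized in the activated sludge tanks (from text)
ch4_produced * (1 - f_oxidized)  # ~20 kg/day would still reach the atmosphere
```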
Conclusions
Air quality is no less important than any pandemic disease worldwide, owing to uncontrolled emissions from anthropogenic wastes. Reports indicate that wastewaters are among the most detrimental sources of environmental pollutants, particularly in the atmosphere, and emissions from wastewater to air increase directly with urbanization and industrialization, the main sources of wastewater pollution. Any addition of VOCs and GHGs to the air directly and indirectly harms the environment and human health. The priority of wastewater treatment systems must therefore shift from cost-effectiveness and effluent quality to emission prevention first, with other aspects following.

Author details

Karzan Mohammed Khalid, Faculty of Science, Soran University, Soran-Erbil, Kurdistan Region of Iraq. *Address all correspondence to: karzan.khalid@soran.edu.iq
"year": 2021,
"sha1": "28e64b443609d4eb380876f0fe263d78e64c087e",
"oa_license": "CCBY",
"oa_url": "https://www.intechopen.com/citation-pdf-url/74985",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "df4815eba34f5f9403b67317f3ad771b1e7ba03b",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
Brentuximab vedotin and chemotherapy in relapsed/refractory Hodgkin lymphoma: a propensity score–matched analysis
Key Points
• BV with or without chemotherapy does not increase CMR rates or PFS in R/R cHL but seems to increase PFS in patients with relapsed or stage IV disease.
• Sequential treatment with BV and chemotherapy is feasible and could spare salvage chemotherapy in a subset of fast-responding patients.
Several single-arm studies have explored the inclusion of brentuximab vedotin (BV) in salvage chemotherapy followed by autologous stem cell transplantation (ASCT) for relapsed/refractory (R/R) classical Hodgkin lymphoma (cHL). However, no head-to-head comparisons with standard salvage chemotherapy have been performed. This study presents a propensity score-matched analysis encompassing individual patient data from 10 clinical trials to evaluate the impact of BV in transplant-eligible patients with R/R cHL. We included 768 patients, of whom 386 were treated with BV with or without chemotherapy (BV cohort), whereas 382 received chemotherapy alone (chemotherapy cohort). Propensity score matching resulted in balanced cohorts of 240 patients each. No significant differences were observed in pre-ASCT complete metabolic response (CMR) rates (P = .69) or progression-free survival (PFS; P = .14) between the BV and chemotherapy cohorts. However, in the BV vs chemotherapy cohort, patients with relapsed disease had a significantly better 3-year PFS of 80% vs 70%, respectively (P = .02), whereas there was no difference for patients with primary refractory disease (56% vs 62%, respectively; P = .67). Patients with stage IV disease achieved a significantly better 3-year PFS in the BV cohort (P = .015). Post-ASCT PFS was comparable for patients achieving a CMR after BV monotherapy and those receiving BV followed by sequential chemotherapy (P = .24). Although 3-year overall survival was higher in the BV cohort (92% vs 80%, respectively; P < .001), this is likely attributable to the use of other novel therapies in later lines for patients experiencing progression, given that the studies in the BV cohort were conducted more recently. In conclusion, BV with or without salvage chemotherapy appears to enhance PFS in patients with relapsed disease but not in those with primary refractory cHL.
Introduction
Standard treatment for transplant-eligible patients with R/R cHL is salvage chemotherapy followed by high-dose chemotherapy and ASCT. [5,6] However, 30% to 40% of patients will relapse within 5 years after ASCT and subsequently have a poor prognosis. [1,7] Importantly, it has been shown that patients who achieve a CMR before ASCT have a better prognosis, with long-term post-ASCT progression-free survival (PFS) of ~70% to 80%. [1,4,8-11] BV is an antibody-drug conjugate composed of an anti-CD30 monoclonal antibody with a cytotoxic payload of monomethyl auristatin E. [12] In the first-line setting, BV in combination with adriamycin, vinblastine, and dacarbazine (BV-AVD) has been shown to improve PFS and overall survival (OS) in patients with advanced-stage disease compared with standard adriamycin, bleomycin, vinblastine, and dacarbazine (ABVD). [13,14] In the R/R setting, several single-arm trials have combined BV with salvage chemotherapy before ASCT. [16-24] These trials showed a high CMR rate before ASCT, and PFS and OS appear to be higher than in historical controls. [25] However, no randomized controlled trials (RCTs) investigating the addition of BV to salvage chemotherapy compared with chemotherapy alone in R/R cHL have been published to date. An individual patient data analysis provides more power for assessing the effect of novel treatments than a standard meta-analysis and can also detect interactions between outcome parameters and patient characteristics.
Therefore, we aimed to perform a large individual patient data analysis to investigate the effect of adding BV to salvage chemotherapy, vs chemotherapy alone, on pre-ASCT PET response, PFS, and OS in transplant-eligible patients with R/R cHL.
Literature search and data collection
We performed a literature search on PubMed and ClinicalTrials.gov to identify clinical trials investigating BV in combination with salvage chemotherapy (BV cohort) or salvage chemotherapy alone (chemotherapy cohort), followed by ASCT, in transplant-eligible patients with cHL with a first relapse or primary refractory disease after first-line (primary) treatment (supplemental Extended Methods, available on the Blood website). Ten studies were identified that met our inclusion criteria, and the investigators of all 10 studies provided individual patient data for inclusion in the analysis. Seven studies, published between 2017 and 2021, were included in the BV cohort, and 3 studies, published between 2010 and 2016, were included in the chemotherapy cohort (supplemental Figure 1; supplemental Table 1). We gathered pseudonymized individual patient data from case record forms or study databases through the corresponding authors and/or investigators of the studies. For this secondary use of data, a waiver of informed consent was obtained from the ethics committee of each participating center.
End points and definitions
The primary end point was 3-year PFS. A cutoff of 3 years was chosen because most relapses occur within 2 to 3 years and follow-up was limited for several studies. [7] Secondary end points included event-free survival (EFS), OS, and the pre-ASCT CMR rate. PFS was defined as the time from enrollment in the clinical trial to progressive disease (PD) or death from any cause, whichever occurred first. To eliminate bias in PFS arising from differences in study protocols, patients with stable disease (SD) after salvage treatment who did not proceed to ASCT were censored at the time of going off study. Patients who did not undergo ASCT but received BV monotherapy instead were censored at the end of salvage chemotherapy. EFS was defined as the time from enrollment to PD or death, or until the end of salvage therapy if patients could not proceed to ASCT because of toxicity or insufficient response (SD/PD) after salvage therapy. Patients with SD who received additional therapy before ASCT were counted as having an event. OS was defined as the time from enrollment to death from any cause.
CMR was defined as a Deauville score (DS) of 1 to 3 according to the 2014 Lugano criteria. [26] A partial metabolic response (PMR) was defined as a DS of 4 to 5 without progression or development of new lesions. In the ifosfamide, carboplatin, and etoposide (ICE)-gemcitabine, vinorelbine, and doxorubicin (GVD) study of Moskowitz et al, the pre-ASCT PET scans in the chemotherapy cohort were evaluated according to the International Working Group criteria, in which a positive scan was defined as uptake greater than the mediastinal or abdominal aortic blood pool (comparable with DS ≥3). [4,27] To harmonize response assessment, all positive PET scans from the ICE-GVD study were reassessed according to the Lugano criteria by a nuclear medicine physician (H.S.). [26]

The definition of primary refractory disease varied among studies, and not all studies collected relapse interval data. We defined primary refractory disease as not having achieved a complete response on first-line treatment, encompassing partial response, SD, and PD, irrespective of relapse interval. Bulky disease was defined as a tumor bulk of ≥5 cm. Early relapse was defined as a relapse interval of <1 year. Stage was defined according to the Ann Arbor criteria.
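The response definitions above can be summarized as a small helper function. This is our own illustrative encoding of the stated rules (Deauville 1-3 = CMR; 4-5 without progression or new lesions = PMR), not code from the studies; the function and argument names are invented.

```r
# Illustrative encoding of the response definitions stated above; names are
# ours, not the studies'. DS = Deauville score (1-5).
classify_response <- function(ds, progression_or_new_lesions = FALSE) {
  if (progression_or_new_lesions) return("PD")  # progression or new lesions
  if (ds %in% 1:3) {
    "CMR"          # complete metabolic response (2014 Lugano criteria)
  } else if (ds %in% 4:5) {
    "PMR"          # partial metabolic response, no progression
  } else {
    NA_character_  # DS outside 1-5: not classifiable
  }
}

classify_response(3)                                      # "CMR"
classify_response(4)                                      # "PMR"
classify_response(5, progression_or_new_lesions = TRUE)   # "PD"
```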
In the study of Santoro et al [5] (n = 59 patients), stage was not collected, but information about the number of lymphatic and extralymphatic sites allowed the identification of patients with stage I (1 lymphatic site) or stage IV disease (≥1 lymphatic and ≥1 extralymphatic site; the investigators confirmed that there were no patients with stage IE/IIE disease). However, stages II and III were combined for n = 24 patients because the infradiaphragmatic or supradiaphragmatic distribution was unknown. Primary treatment was categorized as ABVD; escalated bleomycin, etoposide, adriamycin, cyclophosphamide, vincristine, procarbazine, and prednisone (escBEACOPP); or other therapies. Patients initially treated with ABVD and later escalated to escBEACOPP were categorized under escBEACOPP.
Statistical analysis
Pearson χ2 or Fisher exact tests were used to compare categorical variables, and the Kruskal-Wallis rank-sum test was used to assess continuous variables. Survival outcomes were analyzed using the Kaplan-Meier method and pairwise log-rank tests. Univariable and multivariable Cox regression analyses were performed to assess the association between baseline characteristics and survival outcomes. Logistic regression was used to assess the association between baseline characteristics and binary response outcomes. Patients with missing data were excluded from analyses only when the missing variable was required for the specific analysis.
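Because the authors report performing the analysis in R (version 4.0.3, noted below), a minimal sketch of the named survival and regression methods using the survival package is given here. The data frame is simulated and all column names are our own inventions; this illustrates the analysis pattern only, not the paper's actual code or data.

```r
library(survival)

# Simulated placeholder data: one row per patient; all values are random,
# not study data, and all column names are ours.
set.seed(1)
n  <- 480
df <- data.frame(
  pfs_months = rexp(n, rate = 0.02),                    # follow-up time (months)
  pfs_event  = rbinom(n, 1, 0.35),                      # 1 = progression/death
  cohort     = factor(rep(c("BV", "Chemo"), each = n / 2)),
  refractory = rbinom(n, 1, 0.5),
  stage_iv   = rbinom(n, 1, 0.4),
  cmr        = rbinom(n, 1, 0.75)                       # pre-ASCT CMR (yes/no)
)

km <- survfit(Surv(pfs_months, pfs_event) ~ cohort, data = df)  # Kaplan-Meier
summary(km, times = 36)                                  # 3-year PFS per cohort
survdiff(Surv(pfs_months, pfs_event) ~ cohort, data = df)        # log-rank test
coxph(Surv(pfs_months, pfs_event) ~ cohort + refractory + stage_iv, data = df)
glm(cmr ~ cohort + refractory + stage_iv, family = binomial, data = df)
```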
A 1:1 propensity score matching analysis was performed to adjust for the effects of unbalanced covariates between the BV and chemotherapy cohorts. [28] We conducted matching based on baseline patient characteristics significantly associated with PFS. To ensure a robust distribution of patients within the matched data set, we repeated the matching process 2000 times as part of internal cross-validation. More detailed information about the matching procedure is provided in the supplemental Extended Methods.
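A minimal sketch of 1:1 propensity score matching on the six covariates named in the results is shown below, reusing the simulated data frame from the previous sketch. MatchIt is our assumed implementation (the paper does not name the package used), and the 2000-fold repetition described above is omitted for brevity.

```r
library(MatchIt)

# Extend the simulated df with the remaining covariates; values are random
# placeholders, and all names are ours.
df$bulky       <- rbinom(nrow(df), 1, 0.3)
df$extranodal  <- rbinom(nrow(df), 1, 0.3)
df$b_symptoms  <- rbinom(nrow(df), 1, 0.25)
df$esc_beacopp <- rbinom(nrow(df), 1, 0.2)
df$bv          <- as.integer(df$cohort == "BV")  # binary treatment indicator

# 1:1 nearest-neighbour matching on a logistic-regression propensity score
m <- matchit(bv ~ refractory + bulky + extranodal + stage_iv +
               b_symptoms + esc_beacopp,
             data = df, method = "nearest", distance = "glm", ratio = 1)
summary(m)                # covariate balance before and after matching
matched <- match.data(m)  # analogous to the 240 + 240 matched patients
```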
Statistical analysis was performed using R software version 4.0.3. A P value of <.05 was considered statistically significant.
There was an imbalance in primary refractory cases (55% vs 20% for the BV and chemotherapy cohorts, respectively) because a substantial number of patients were enrolled in the study of Josting et al [6] (225 of 382; 59%), which specifically excluded patients with primary refractory disease. Moreover, this study included more patients treated with escBEACOPP as primary treatment. An overview of study information, including treatment regimens and summarized patient characteristics, can be found in supplemental Tables 1 and 2.
Survival outcomes in the matched data set
The following variables were significantly related to PFS and were used for propensity score matching: R/R status, bulky disease, extranodal disease, stage IV, B symptoms (at the time of enrollment in the studies), and primary treatment with escBEACOPP (supplemental Extended Methods Table 2). The matched data set consisted of 480 patients, 240 each in the BV and chemotherapy cohorts, in which the patient characteristics were equally distributed, except for World Health Organization performance status 2, which was not significantly related to PFS (P = .6) or OS (P = .6; Table 2; supplemental Extended Methods Table 2).
†Bulky disease was defined as a single tumor bulk larger than 5 cm. ‡Primary refractory disease was defined as not having achieved a CR on primary treatment; ie, patients who had a PR, SD, or PD on primary treatment were considered primary refractory, independent of the relapse interval. NA, not applicable; WHO PS, World Health Organization performance status.
In the matched data set, 3-year PFS did not significantly differ between the BV and chemotherapy cohorts: 72.2% (95% CI, 67-78) vs 67.1% (95% CI, 61-73; P = .14), respectively (Figure 2A; supplemental Table 4). EFS was similar to PFS. However, 3-year OS was significantly higher for patients treated within the BV cohort: 91.9% (95% CI, 88-96) vs 79.5% (95% CI, 74-85) for the chemotherapy cohort (P = .00043; Figure 2C). Among patients with PD, significantly more died in the chemotherapy cohort (31 of 72; 43%) than in the BV cohort (19 of 65; 29%; P = .0011), whereas among patients without PD there was no significant difference in the number of deaths between the BV cohort (5 of 175; 3%) and the chemotherapy cohort (8 of 168; 5%; P = .4), suggesting that advances in later lines of therapy are most likely the cause of the improved OS in the BV cohort.
Subgroup analysis for survival between BV and chemotherapy cohorts
In the matched data set, we tested differences in 3-year PFS between the BV and chemotherapy cohorts for specific subgroups using univariable Cox regression (Figure 3). Patients with relapsed disease in the BV cohort had a significantly lower risk of PD than those in the chemotherapy cohort (hazard ratio [HR], 0.59; 95% CI, 0.37-0.93; P = .022). Similarly, patients with stage IV disease had a significantly lower risk of PD in the BV cohort (HR, 0.53; 95% CI, 0.32-0.88; P = .015). Patients with extranodal disease showed a trend toward better PFS in the BV cohort (HR, 0.65; 95% CI, 0.41-1.03; P = .067), but this was not significant. Exploratory multivariable subgroup analysis of R/R status and stage IV showed a trend toward better PFS in the BV cohort for patients who had both stage IV and relapsed disease (n = 97; HR, 0.50; 95% CI, 0.25-1.02; P = .058).
Pre-ASCT PET responses in the whole cohort
Of 10 studies, 9 had PET-computed tomography (CT) data available.
Overall, n = 225 patients from the study of Josting et al were excluded from the chemotherapy cohort because responses were assessed using conventional CT scans. Consequently, the chemotherapy cohort comprised 157 patients with available PET data. The CMR rate in the whole BV cohort was 76% vs 80% in the chemotherapy cohort (P = .30; Table 3). The overall response rates (ORRs) based on PET were not significantly different between the BV and chemotherapy cohorts. However, when patients from the study of Josting et al, in which the ORR was based on conventional CT, were included, the BV cohort displayed a significantly higher ORR of 89% compared with 79% in the chemotherapy cohort (P < .001; Table 3).
In subgroup analysis, patients with relapsed disease exhibited higher CMR rates compared with patients with primary refractory disease.However, no significant differences in CMR or ORR rates were observed between the BV and chemotherapy cohorts within these subgroups (Table 3).
In the study of Moskowitz et al within the chemotherapy cohort, patients with a PMR or SD after ICE treatment underwent sequential GVD treatment. This sequential therapy converted PMR/SD to CMR in 21 patients (of whom 15 were included in the matched cohort). To ensure a comprehensive assessment, we recalculated the CMR rate after ICE only, excluding these patients from the CMR count. This adjustment yielded a CMR rate of 67% for the total matched chemotherapy cohort. Comparing the CMR rate of 76% in the BV cohort with the 67% after ICE only in the chemotherapy cohort revealed a notable significance in both univariable (P = .025) and multivariable analysis (P = .0017; Table 3). This distinction was particularly pronounced among patients with relapsed disease, in whom the CMR rate was significantly higher in the BV cohort than in the chemotherapy cohort. Conversely, in primary refractory patients, no significant differences in CMR rates were observed between the 2 cohorts (Table 3).
Slightly more patients underwent ASCT in the BV cohort (335 of 386; 87%) than in the chemotherapy cohort (324 of 382; 85%), but this was not significant in univariable (P = .38) or multivariable analysis adjusted for baseline characteristics (P = .06). Among relapsed patients, a significantly higher percentage underwent ASCT in the BV cohort than in the chemotherapy cohort (90% vs 86%; multivariable P = .012; Table 3). Among patients who underwent ASCT, those achieving a CMR (n = 398) before ASCT had a 3-year PFS of 78.3% (95% CI, 74-83), which was significantly higher than for those who underwent ASCT after a PMR (n = 57), with a 3-year PFS of 64.2% (95% CI, 53-78; P = .01), or after SD (n = 8), with a 3-year PFS of 37.5% (95% CI, 15-92; P = .0004; Figure 4A). In all patients who received a transplant after obtaining a CMR, there was no difference in 3-year PFS between the BV and chemotherapy cohorts (P = .92; data not shown). Notably, after ASCT, OS was significantly lower for patients with SD than for those with a CMR (P = .0042), whereas no difference in OS was observed for patients with a PMR vs a CMR (P = .286; Figure 4B).
Influence of BV dose and salvage chemotherapy schedule
Within the whole BV cohort (unmatched data set; n = 386), subgroup analysis showed a nonsignificant trend toward higher PFS (HR, 0.72; 95% CI, 0.50-1.04; P = .079) in studies that used BV with a combination of chemotherapeutic agents (eg, dexamethasone, high-dose cytarabine, and cisplatin; ICE; or etoposide, methylprednisolone, cisplatin, and cytarabine [ESHAP]) vs a single agent (eg, bendamustine or gemcitabine; supplemental Table 6). [16,17,21,24] The use of a sequential schedule (ie, BV monotherapy followed by chemotherapy), the number of BV cycles, and the cumulative BV dose had no impact on 3-year PFS or the pre-ASCT CMR rate between studies in the BV cohort, suggesting that more cycles of BV do not improve CMR rates or PFS. Two studies applied BV maintenance after ASCT (11% of the total number of patients). [17,19] However, not all patients received BV maintenance, and many received fewer than the intended number of maintenance cycles because of toxicity or other reasons, which limits any analysis of the effect of BV maintenance (supplemental Table 2). [17,19]

Outcomes of sequential treatment

Three studies followed a sequential approach: 2 studies in the BV cohort used 2 to 4 cycles of BV monotherapy, allowing patients with a CMR to proceed directly to ASCT, whereas patients with positive PET scans received additional ICE salvage chemotherapy before ASCT; 1 study in the chemotherapy cohort used 2 cycles of ICE, and patients without a CMR received additional GVD chemotherapy before ASCT. [4,21,24] Subgroup analysis showed no significant differences in 3-year PFS between patients achieving CMR with 1 line of therapy (BV monotherapy or ICE only) and those requiring 2 lines of therapy (BV-ICE or ICE-GVD) to achieve a CMR (P = .24; Figure 4C-D). OS also showed no significant differences between these groups (P = .62; supplemental Table 7).
Discussion
In this matched analysis of individual patient data from prospective single-arm clinical trials, we investigated the effect of adding BV to salvage chemotherapy followed by ASCT in transplant-eligible patients with R/R cHL. We found no statistically significant differences in PFS, EFS, or the pre-ASCT CMR rate for patients treated with BV with or without chemotherapy compared with patients treated with salvage chemotherapy only. However, patients with relapsed disease and those with stage IV disease had significantly better PFS and EFS when BV was added to the salvage treatment. Although OS was significantly better in the BV cohort, this may be influenced by the period in which the BV studies were conducted (2015-2021) compared with the chemotherapy cohort studies (2010-2016), during which novel therapies became available in later lines of treatment. The disparity in survival outcomes between patients with primary refractory disease and those with relapsed disease could potentially be explained by the antitumor mechanism of action of BV. BV elicits its antitumor effect through the cytotoxic warhead monomethyl auristatin E, a substrate for the multidrug resistance pump P-glycoprotein. [31] It has been shown that BV-resistant cell lines have elevated P-glycoprotein, which is also known to occur after exposure to other cytotoxic agents such as doxorubicin. [32,33] Thus, tumor cells that are able to resist first-line chemotherapy might use the same mechanism to convey resistance to BV. Because patients with primary refractory disease are more likely to be resistant to chemotherapy, this might explain why they could also be resistant to BV. [36] Patients with stage IV disease had improved PFS in the BV cohort vs the chemotherapy cohort. This may be attributed to a larger total tumor volume necessitating intensified treatment, which could be achieved by augmenting standard chemotherapy with BV. In subgroup analyses of the Echelon-1 trial, stage IV was also associated with better PFS in patients treated with BV-AVD compared with ABVD, suggesting a similar effect in the R/R setting. [13,14] Our analysis showed that patients treated with a sequential approach who achieved a PMR after BV or ICE only, yet converted to a CMR after subsequent salvage chemotherapy with ICE (after BV) or GVD (after ICE), exhibited survival outcomes comparable with those of patients directly achieving a CMR. This highlights the feasibility of a sequential approach, potentially sparing chemotherapy in rapid responders.
Emphasizing the significance of attaining a CMR before ASCT, our study suggests that improving survival in patients with a PMR could be accomplished by inducing a CMR through additional salvage chemotherapy or immunotherapy before ASCT. [4,21,24] Our analysis is limited by missing variables in certain studies, partially mitigated by our matching method. Consequently, not all patients could be included in specific (multivariable) analyses.
Although our analysis approach addresses inherent differences in trial populations and design as much as possible, it is essential to emphasize several significant distinctions in design. A large portion of patients in the chemotherapy cohort lacked response assessment by PET, restricting the comparison of pre-ASCT CMR rates between the BV and chemotherapy cohorts. Unfortunately, we could not evaluate the impact of BV maintenance, because only a limited number of patients received BV maintenance in our cohort and the number of maintenance cycles varied widely across patients for various reasons, precluding a proper analysis. Additionally, assessing the impact of radiotherapy was hindered by varying protocols among the studies.
Although some studies universally applied pre-ASCT radiotherapy to patients with extranodal and bulky disease, others selectively used it on residual lesions either before or after ASCT. [4,16,24] Generally, the PFS, OS, and CMR rates in the chemotherapy cohort appear favorable compared with real-world data. [7,37] However, the studies in our analysis included only transplant-eligible patients, who are known to have better outcomes than patients who are older or unfit. Furthermore, the study of Josting et al specifically excluded patients with primary refractory disease. Although our analysis minimizes bias through matching and the inclusion of prospective trials, caution is warranted in generalizing to real-world scenarios. Therefore, the observed results should be interpreted with caution and cannot replace an RCT. Nonetheless, this is, to our knowledge, the largest matched analysis based on individual patient data in R/R cHL incorporating recent clinical trial data; it therefore serves as a benchmark for future (single-arm) studies exploring novel therapies or regimens that aim to replace high-dose chemotherapy/ASCT with novel drugs.
Preliminary results of an ongoing phase 2b RCT comparing BV-ESHAP with ESHAP alone in a cohort of 150 patients indicate a higher CMR rate in the BV-ESHAP group. [38] However, the limited sample size of the study may impede subgroup analyses of risk factors. In addition, this study evaluates substituting BV maintenance therapy for ASCT in patients with a CMR after salvage treatment. Although this investigation could provide valuable insights into the potential replacement of ASCT with maintenance therapy, it may complicate the direct comparison of long-term outcomes between the BV-ESHAP and ESHAP arms.
Emerging novel therapies, including immune-checkpoint inhibitors, are gaining attention in the relapsed/refractory setting. [36] Exploring a similar individual patient data analysis for studies combining chemotherapy with checkpoint inhibitors vs BV plus chemotherapy or chemotherapy alone could offer valuable insights. The evolving landscape, in which BV is increasingly used in newly diagnosed patients, raises questions about its retreatment efficacy in the salvage setting. [13] However, retreatment with BV in patients with multiple relapses has shown persistent efficacy. [40] Preliminary findings from an extensive ongoing RCT comparing nivolumab-AVD with BV-AVD demonstrated favorable outcomes for the nivolumab-AVD arm. [41] This outcome might prompt a shift toward integrating checkpoint inhibitors into first-line treatment, thereby reinstating the use of BV in the salvage setting. Consequently, our results remain pertinent for future treatment contexts. As novel therapeutic options move to earlier lines of therapy, such as the use of checkpoint inhibitors in the first or second line, studying the sequencing effects of these agents becomes increasingly crucial, ideally through prospective clinical trials. However, it is essential to acknowledge the lack of universal global access to these novel (and often expensive) agents, a consideration that should also be addressed in guidelines outlining the optimal treatment of patients with R/R cHL.
In summary, our study indicates that the addition of BV to chemotherapy did not enhance CMR rates or PFS in the overall population of patients with R/R cHL compared with standard salvage chemotherapy. However, notable PFS improvements were observed in patients with relapsed or stage IV disease undergoing salvage treatment that includes BV. Moreover, a sequential approach involving BV monotherapy followed by salvage chemotherapy is both viable and has the potential to reduce the need for salvage chemotherapy in certain patients. In the absence of RCTs, this propensity score-matched analysis on individual patient data offers valuable insights in the treatment landscape for patients with R/R cHL.
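The propensity score matching referenced in this summary can be illustrated with a minimal sketch. The Python snippet below is not the authors' analysis code: the covariate names, the synthetic data, and the 1:1 nearest-neighbour scheme are assumptions chosen purely for illustration, and real analyses typically add calipers and match without replacement.

```python
# Minimal sketch of 1:1 nearest-neighbour propensity-score matching.
# Column names and data are hypothetical, not taken from the study.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

COVARIATES = ["age", "stage_iv", "refractory", "bulky_disease"]  # assumed covariates

def match_cohorts(df: pd.DataFrame) -> pd.DataFrame:
    """Pair each treated patient (cohort == 1) with the control patient
    closest in estimated propensity score (matching WITH replacement here;
    real analyses usually match without replacement and use a caliper)."""
    model = LogisticRegression(max_iter=1000)
    model.fit(df[COVARIATES], df["cohort"])
    df = df.assign(pscore=model.predict_proba(df[COVARIATES])[:, 1])

    treated = df[df["cohort"] == 1]
    control = df[df["cohort"] == 0]

    nn = NearestNeighbors(n_neighbors=1)
    nn.fit(control[["pscore"]])
    _, idx = nn.kneighbors(treated[["pscore"]])
    matched_controls = control.iloc[idx.ravel()]
    return pd.concat([treated, matched_controls])

# Tiny synthetic data set so the sketch runs end to end.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "age": rng.integers(18, 65, 200),
    "stage_iv": rng.integers(0, 2, 200),
    "refractory": rng.integers(0, 2, 200),
    "bulky_disease": rng.integers(0, 2, 200),
    "cohort": rng.integers(0, 2, 200),
})
print(len(match_cohorts(df)))  # matched set: treated patients plus their matches
```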
Figure 2. Kaplan-Meier survival analyses on the matched cohort. Kaplan-Meier curves showing the PFS, EFS, and OS in the BV and chemotherapy cohort in the matched data set (panels A, B, and C), and corresponding analyses stratified for patients with relapsed (panels D, E, and F) or primary refractory disease (panels G, H, and I). PR, partial response.
Figure 3. Forest plot of the association between baseline characteristics and differences in PFS between the BV and chemotherapy cohorts. HRs are shown for univariable Cox regression on subgroup analyses of baseline characteristics for PFS comparing the BV and chemotherapy cohorts. An HR of <1 corresponds to a higher PFS in the BV cohort compared with the chemotherapy cohort. CR, complete response; PR, partial response; yr, year.
Table 1. Baseline patient characteristics in the whole data set. Patient characteristics are measured at time of enrollment in the studies, that is, at time of relapse or primary refractory disease, unless indicated otherwise.
Table 2. Patient characteristics in the matched data set. *For 24 patients in the chemotherapy cohort from the trial by Santoro et al, stage at relapse was not recorded, but stages I and IV were deduced from the number of involved lymph node sites, extranodal sites, and bone marrow involvement. It was not possible to distinguish between stage II and III disease because no data were available on the spatial distribution of nodal sites (ie, infradiaphragmatic and/or supradiaphragmatic). †Bulky disease was defined as a single tumor bulk larger than 5 cm. NA, not applicable; WHO PS, World Health Organization performance status.
Table 3. Pre-ASCT response rates and patients who underwent ASCT | 2024-03-21T06:17:54.328Z | 2024-03-19T00:00:00.000 | {
"year": 2024,
"sha1": "b6909edb59e88abe3e159cc1c46167f1b2daf6f2",
"oa_license": "CCBYNCND",
"oa_url": "https://ashpublications.org/bloodadvances/article-pdf/doi/10.1182/bloodadvances.2023012145/2218553/bloodadvances.2023012145.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a97c6d2796b42b4d267b5ced8caf83e10581ee4b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
8947079 | pes2o/s2orc | v3-fos-license | Both α1- and α2-adrenoceptors in the Insular Cortex Are Involved in the Cardiovascular Responses to Acute Restraint Stress in Rats
The insular cortex (IC) is a limbic structure involved in cardiovascular responses observed during aversive threats. However, the specific neurotransmitter mediating IC control of cardiovascular adjustments to stress is yet unknown. Therefore, in the present study we investigated the role of local IC adrenoceptors in the cardiovascular responses elicited by acute restraint stress in rats. Bilateral microinjection of different doses (0.3, 5, 10 and 15 nmol/100 nl) of the selective α1-adrenoceptor antagonist WB4101 into the IC reduced both the arterial pressure and heart rate increases elicited by restraint stress. However, local IC treatment with different doses (0.3, 5, 10 and 15 nmol/100 nl) of the selective α2-adrenoceptor antagonist RX821002 reduced restraint-evoked tachycardia without affecting the pressor response. The present findings are the first direct evidence showing the involvement of IC adrenoceptors in cardiovascular adjustments observed during aversive threats. Our findings indicate that IC noradrenergic neurotransmission acting through activation of both α1- and α2-adrenoceptors has a facilitatory influence on pressor response to acute restraint stress. Moreover, IC α1-adrenoceptors also play a facilitatory role on restraint-evoked tachycardiac response.
Introduction
Stressful situations occur during real or perceived threats to homeostasis or well-being. Stressors include either interoceptive changes (e.g., blood volume or osmolality changes) or environmental threats that may be physical (e.g., hypoxia) or psychological (e.g., presence of a predator). During stress a spectrum of physiological responses is evoked to maintain the physiologic integrity of the organism [1]. The physiological responses to stress are mainly characterized by autonomic nervous system alterations, an increase in plasma catecholamine levels and activation of the hypothalamus-pituitary-adrenal (HPA) axis [1,2]. Autonomic responses include increases in both blood pressure and heart rate (HR) [3,4]. Furthermore, cardiovascular changes during stress are accompanied by a resetting of the baroreflex toward higher arterial pressure values, thus allowing simultaneous blood pressure and HR increases [5][6][7][8].
Several central nervous system areas, including the prefrontal cortex, have been described as part of the brain circuitry involved in cardiovascular adjustments during stress [3,4,9,10]. In rats, two regions of the prefrontal cortex involved in the control of cardiovascular function are the insular cortex (IC) and the medial prefrontal cortex (MPFC) [11,12]. It has been described that the IC is involved in cardiovascular control [13][14][15] and baroreflex modulation [16][17][18][19]. Furthermore, previous results from our group demonstrated that bilateral microinjection of the unspecific neurotransmitter blocker CoCl2 into the IC of rats reduced both cardiovascular and behavioral responses evoked by either conditioned (contextual fear conditioning) or unconditioned (acute restraint stress) aversive stimuli [2,20]. These results provided the first evidence of a role of the IC in cardiovascular adjustments during stress. However, due to the nonselective blockade of local neurotransmission caused by CoCl2 [21,22], the specific neurotransmitter involved in the IC modulation of cardiovascular responses to stress is yet unknown.
Central noradrenergic circuitry is activated shortly after a stressful event [1]. Accordingly, an enhanced release of noradrenaline after stress has been identified in several limbic brain regions including the central (CeA) and medial (MeA) amygdaloid nuclei, bed nucleus of the stria terminalis (BNST), lateral septal area (LSA), hippocampus and prefrontal cortex [1,[23][24][25][26]. Noradrenergic terminals in the prefrontal cortex originate mainly from the locus coeruleus and play an important role in the regulation of cortical function [27][28][29][30]. We have previously reported that noradrenergic neurotransmission within the IC is involved in the modulation of baroreflex activity [31]. Also, microinjection of noradrenaline into the IC causes elevation of blood pressure and bradycardia [14]. Despite the above evidence, the involvement of IC noradrenergic neurotransmission in the control of cardiovascular function during stress situations has never been investigated. Therefore, given the involvement of IC noradrenergic neurotransmission in cardiovascular control, we hypothesized an involvement of IC α-adrenoceptors in cardiovascular responses elicited by acute restraint stress in rats. To test this hypothesis, we investigated the effect of bilateral microinjections into the IC of selective α-adrenoceptor antagonists on restraint-evoked pressor and tachycardiac responses.
Ethical approval and animals
Experimental procedures were carried out following protocols approved by the Ethical Review Committee of the School of Medicine of Ribeirão Preto (process number: 167/2007), which complies with the guiding principles for research involving animals and human beings of the National Institutes of Health. Fifty-seven male Wistar rats weighing approximately 250 g were used in the present experiment. Rats were housed in plastic cages in a temperature-controlled room (25°C) at the Animal Care Unit of the Department of Pharmacology, School of Medicine of Ribeirão Preto. Rats were kept under a 12 h:12 h light-dark cycle (lights on between 06:00 am and 6:00 pm) and had free access to water and standard laboratory food, except during the experimental period.
Surgical preparation
Five days before the experiment, the rats were anesthetized with tribromoethanol (250 mg/kg, i.p.). After local anesthesia with 2% lidocaine, the skull was surgically exposed and stainless steel guide cannulas (26 G) were implanted bilaterally in the IC, using a stereotaxic apparatus (Stoelting, Wood Dale, Illinois, USA). Stereotaxic coordinates for cannula implantation in the IC were selected from the rat brain atlas of Paxinos and Watson (1997) and were: antero-posterior = +11.7 mm from interaural, lateral = 4.0 mm from the medial suture and dorso-ventral = −4.5 mm from the skull. Cannulas were fixed to the skull with dental cement and one metal screw. After surgery, the animals were treated with a poly-antibiotic preparation of streptomycins and penicillins (i.m., 0.27 mg/kg, Pentabiotico®, Fort Dodge, Campinas, SP, Brazil) to prevent infection, and with the non-steroidal anti-inflammatory flunixine meglumine (2.5 mg/kg, i.m.; Banamine®, Schering Plough, Cotia, SP, Brazil) for post-operative analgesia.
One day before the experiment, rats were anesthetized with tribromoethanol (250 mg/kg, i.p.) and a catheter (a 4 cm segment of PE-10 heat-bound to a 13 cm segment of PE-50, Clay Adams, Parsippany, NJ, USA) was inserted into the abdominal aorta through the femoral artery, and later on used for arterial pressure and HR recording. The catheters were tunneled under the skin and exteriorized on the animal's dorsum. After surgery, the animals were treated with the non-steroidal anti-inflammatory flunixine meglumine (2.5 mg/kg, i.m.) for post-operative analgesia.
Measurement of Cardiovascular Responses
On the day of the experiment, the arterial cannula was connected to a pressure transducer and pulsatile arterial pressure was recorded using an HP-7754A amplifier (Hewlett Packard, Palo Alto, CA, USA) and an acquisition board (Biopac M-100, Goleta, CA, USA) connected to a personal computer. Mean arterial pressure (MAP) and HR values were derived from pulsatile arterial pressure recordings and were processed online.

Drugs

The adrenoceptor antagonists were dissolved in artificial cerebrospinal fluid (ACSF; …; 2.5 mM CaCl2; pH = 7.4). Urethane (Sigma, St. Louis, MO, USA) and tribromoethanol (Sigma) were dissolved in saline (0.9% NaCl). Flunixine meglumine (Banamine®, Schering Plough, Brazil) and the poly-antibiotic preparation of streptomycins and penicillins (Pentabiotico®, Fort Dodge, Brazil) were used as provided.
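As an illustration only (not the authors' acquisition software), MAP and HR can be derived offline from a sampled pulsatile pressure trace roughly as follows; the sampling rate, peak-prominence threshold, rat HR bounds and the synthetic test signal are all assumptions.

```python
# Sketch: derive MAP and HR from a pulsatile arterial pressure signal.
import numpy as np
from scipy.signal import find_peaks

def map_and_hr(pressure: np.ndarray, fs: float) -> tuple[float, float]:
    # MAP here is the time average of the pulsatile signal over the window.
    mean_ap = float(np.mean(pressure))
    # HR: count systolic peaks; the minimum inter-beat distance assumes
    # HR below ~480 bpm, safe for rats (~300-450 bpm at rest/stress).
    peaks, _ = find_peaks(pressure,
                          distance=int(fs * 60 / 480),
                          prominence=10)  # 10 mmHg prominence, heuristic
    duration_min = len(pressure) / fs / 60.0
    return mean_ap, len(peaks) / duration_min

# Crude synthetic "rat" pulse: 360 bpm oscillation around 100 mmHg.
fs = 1000.0
t = np.arange(0, 10, 1 / fs)
pressure = 100 + 20 * np.sin(2 * np.pi * 6 * t)
print(map_and_hr(pressure, fs))  # ≈ (100.0 mmHg, 360 bpm)
```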
Drug injection into the insular cortex
The needles (33 G, Small Parts, Miami Lakes, FL, USA) used for microinjection into the IC were 1 mm longer than the guide cannulas and were connected to a hand-driven 2 μl syringe (7002-KH, Hamilton Co., Reno, NV, USA) through PE-10 tubing. Needles were carefully inserted into the guide cannulas without restraining the animals. After a 30 s period, the needle was removed and inserted into the second guide cannula for microinjection into the contralateral IC. Drugs were injected in a final volume of 100 nl [2,20].
Experimental procedure: acute restraint stress
On the trial day, animals were brought to the experimental room in their home cages. Animals were allowed one hour to adapt to the conditions of the experimental room, such as sound and illumination, before cardiovascular recordings were started. The experimental room was temperature-controlled (25°C) and acoustically isolated from the other rooms. Constant background noise was generated by an air extractor to minimize sound interference within the experimental room. Baseline values of MAP and HR were recorded for at least 30 min. Subsequently, independent groups of animals received bilateral microinjection into the IC of vehicle (ACSF, 100 nl) or different doses of either the selective α1-adrenoceptor antagonist WB4101 (0.3, 5, 10 or 15 nmol/100 nl) or the selective α2-adrenoceptor antagonist RX821002 (0.3, 5, 10 or 15 nmol/100 nl) [14,31,32]. Ten minutes later, rats were submitted to acute restraint stress by placing them into a plastic cylindrical restraint tube (diameter = 6.5 cm, length = 15 cm), which was ventilated by holes (1 cm in diameter) that comprised approximately 20% of the tube surface. The restraint session lasted 60 min, after which the rats were returned to their home cages [20,33]. Each rat was submitted to only one restraint session in order to avoid habituation. Experiments were performed during the morning period in order to minimize possible circadian rhythm interference.
Histological determination of the microinjection sites
At the end of the experiments, animals were anesthetized with urethane (1.25 g/kg, i.p.) and 100 nL of 1% Evans blue dye was injected into the IC as a marker of the injection site. They were then submitted to intracardiac perfusion with 0.9% NaCl followed by 10% formalin. Brains were removed and post-fixed for 48 h at 4°C, and serial 40 μm-thick sections were cut using a cryostat (CM1900, Leica, Wetzlar, Germany). Sections were stained with 1% neutral red for light microscopy analysis. The placement of the microinjection needles was determined by analyzing serial sections and identified according to the rat brain atlas of Paxinos and Watson (1997).
Statistical Analysis
Statistical analysis was performed using Prism software (GraphPad, San Diego, CA, USA). The results are presented as mean ± S.E.M. Student's t-test was used to compare basal values of MAP and HR before and after pharmacological treatments. Time-course curves of MAP and HR changes were compared using two-way ANOVA (treatment vs time), with repeated measures on the second factor.
When interactions between the factors were observed, one-way ANOVA followed by Bonferroni's post-hoc test was used to compare the effect of the treatments. Nonlinear regression analysis was performed to investigate the dose-effect relationship of treatment with increasing doses of WB4101 and RX821002 on cardiovascular responses to restraint stress.
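As a hedged sketch of the nonlinear regression step, an Emax-type inhibition model is one common choice for such dose-effect curves; the model form and the response values below are illustrative assumptions, not data from this study.

```python
# Sketch: fit a dose-effect curve for an antagonist with scipy.
import numpy as np
from scipy.optimize import curve_fit

def emax_inhibition(dose, e0, emax, ed50):
    # Response falls from e0 toward (e0 - emax) as dose increases.
    return e0 - emax * dose / (ed50 + dose)

doses = np.array([0.3, 5.0, 10.0, 15.0])        # nmol/100 nl, doses used in the study
delta_map = np.array([30.0, 18.0, 12.0, 10.0])  # hypothetical restraint-evoked ΔMAP (mmHg)

params, _ = curve_fit(emax_inhibition, doses, delta_map, p0=[30.0, 25.0, 5.0])
e0, emax, ed50 = params
print(f"E0={e0:.1f} mmHg, Emax={emax:.1f} mmHg, ED50={ed50:.1f} nmol")
```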
Results
A photomicrograph of a coronal brain section depicting bilateral microinjection sites in the IC of a representative rat is presented in Figure 1. A diagrammatic representation showing microinjection sites of vehicle, WB4101 and RX821002 into the IC, and of WB4101 and RX821002 into structures surrounding the IC, is also presented in Figure 1.
Discussion
The results of the present work provide the first direct evidence for the involvement of IC adrenoceptors in cardiovascular responses observed during aversive threats. We have shown that bilateral microinjection of the selective α1-adrenoceptor antagonist WB4101 into the IC reduced restraint-evoked pressor and tachycardiac responses in a dose-dependent manner. Moreover, IC treatment with the selective α2-adrenoceptor antagonist RX821002 dose-dependently reduced the MAP increase observed during restraint stress without affecting the tachycardiac response.
Restraint stress is well accepted in the literature as an unconditioned and unavoidable aversive stimulus that elicits neuroendocrine and cardiovascular responses, the latter being characterized by sustained elevations of blood pressure, HR and sympathetic activity that last throughout the restraint period [34][35][36][37]. The IC receives an organized representation of visceral information and is also highly interconnected with subcortical limbic and autonomic-related regions. Based on this combination of sensory input and limbic connectivity, it has been described as an important cortical center for the integration of autonomic and behavioral responses during aversive threats [13]. Accordingly, we have demonstrated that CoCl2-induced acute bilateral inhibition of IC neurotransmission greatly attenuated both pressor and tachycardiac responses evoked by acute restraint stress [20]. However, due to the nonselective blockade of local neurotransmission caused by CoCl2 [21,22], the possible neurotransmitter involved was not identified.
It has been shown that a diverse array of physical (e.g., immune challenge, hypoglycemia, hypotension, and cold exposure) and emotional (e.g., immobilization, electric shock, loud noise, and restraint stress) stressors activate brain noradrenergic mechanisms [1,23,26,[38][39][40][41][42]. Noradrenergic neural terminals have been identified in the IC [30]. This IC innervation originates mainly from noradrenergic cells grouped in the locus coeruleus (noradrenergic cell group A6) [27,28,30]. The present work has demonstrated that blockade of local α1-adrenoceptors by bilateral microinjection of WB4101 into the IC reduced both pressor and tachycardiac responses evoked by restraint stress. These results corroborate the effects observed previously following CoCl2-induced acute bilateral inhibition of IC neurotransmission [20], thus suggesting that local α1-adrenoceptors mediate, at least in part, the IC influence on cardiovascular responses to restraint stress. Interestingly, blockade of local α2-adrenoceptors caused by microinjection of RX821002 into the IC also reduced the restraint-evoked pressor response, but without affecting the tachycardiac response. Therefore, the present data suggest that control of cardiac function during restraint stress by IC noradrenergic neurotransmission is due to selective activation of local α1-adrenoceptors, whereas control of blood pressure during stress seems to be mediated by coactivation of local α1- and α2-adrenoceptors.
The presence of specific noradrenergic mechanisms within the IC controlling restraint-evoked pressor and tachycardiac responses indicates that different neuronal pathways originating in the IC are involved in the control of vascular and cardiac functions during stress. The existence of specific central nervous system circuitries controlling autonomic activity to different organs provides the structural substrate for specific local IC noradrenergic neurotransmission mechanisms modulating cardiovascular adjustments during restraint [43]. Indeed, it has been demonstrated that several brain regions selectively modulate stress-evoked blood pressure and HR responses [44][45][46][47]. The presence of specific noradrenergic mechanisms in the central nervous system modulating vascular and cardiac responses to stress has also been reported [33,48]. Therefore, our results corroborate previous evidence of selective neural substrates controlling vascular and cardiac function during aversive threats.
Noradrenaline is released in several central nervous system regions, including the prefrontal cortex [23,26], shortly after the onset of a stressful situation [1]. Since noradrenaline acts through G protein-coupled receptors, which rapidly transfer their activation to downstream effectors, the rapid rise in its level is quickly translated into behavioral and physiological responses. This profile of fast release and action can explain why effects of local IC treatment with adrenoceptor antagonists are already observed during the early phase of restraint stress. However, it has been proposed that sustained and adaptive components of the stress responses (e.g., consolidation of the memory associated with the stressor) are mediated by mechanisms in the brain that affect gene expression and cell function [1]. A main mediator of these latter effects is corticosteroids acting through glucocorticoid receptors [1]. Since previous studies have demonstrated a role of the IC in memory formation for aversive threats and hypothalamus-pituitary-adrenal axis control [2,4,9], further studies are necessary to investigate a possible role of the IC in later consequences of restraint stress. Tachycardiac and pressor responses during stress are sympathetically mediated, since they are abolished after the blockade of β- and α-adrenoceptors, respectively [5,49,50]. Moreover, treatment with a parasympathetic blocker increases the tachycardiac response evoked by psychological stress [33,51,52], thus suggesting the simultaneous activation of cardiac parasympathetic and sympathetic activity during psychological stress. It has been reported that the IC modulates sympathetic nervous activity through a mandatory synapse in the ventrolateral medulla [53,54]. An IC control of cardiac parasympathetic activity has also been shown [31,55]. Therefore, activation of IC α1-adrenoceptors could facilitate the restraint-evoked tachycardiac response by stimulating facilitatory inputs to sympathetic medullary neurons and/or by stimulating inhibitory inputs to vagal neurons. Connections from the IC to sympathetic medullary neurons could also be the neural substrate for the facilitatory influence of IC α1- and α2-adrenoceptors on the pressor response to stress.
Table 1. Effect of bilateral microinjections into the IC of increasing doses (0.3, 5, 10 and 15 nmol/100 nl) of the selective α1-adrenoceptor antagonist WB4101 on mean arterial pressure (MAP) and heart rate (HR) baseline.
The baroreflex stimulus-response curve resets toward higher blood pressure values during aversive threat [5,6]. It has been proposed that such changes in baroreflex activity play a facilitatory role in stress-evoked cardiovascular responses [3,7]. We have previously demonstrated that IC noradrenergic neurotransmission acting through activation of α1-adrenoceptors modulates baroreflex activity in a manner similar to that observed during stress [31]. Therefore, activation of IC α1-adrenoceptors could facilitate cardiovascular responses to restraint stress through its modulation of baroreflex activity. However, since IC treatment with a selective α2-adrenoceptor antagonist does not affect baroreflex activity [31], it is possible that IC control of the restraint-evoked pressor response through this adrenoceptor occurs by mechanisms independent of the baroreflex.
An antero-posterior organization of IC control of cardiovascular function has been proposed. Predominantly depressor responses have been reported following stimulation of rostral regions of the IC [13]. However, stimulation of the posterior IC elicits either a pressor response associated with tachycardia (rostral sites within the posterior IC) or a depressor response followed by bradycardia (caudal sites within the posterior IC) [13]. Despite this evidence, a possible regionalization of IC control of cardiovascular adjustments to stress has never been reported. The injection sites within the IC in studies investigating the role of this cortical region in cardiovascular control during stress (including the present study) have been centered within rostral regions of the IC [2,20]. Therefore, further studies are necessary in order to investigate a possible rostro-caudal organization of IC control of cardiovascular function during aversive threat.
IC treatment with adrenoceptor antagonists did not affect either MAP or HR baseline values. Therefore, although the present study supports the hypothesis that IC noradrenergic neurotransmission plays an important role in modulating cardiovascular responses to restraint stress, this neurotransmission is not involved in the tonic maintenance of cardiovascular function. These results corroborate our previous data demonstrating no changes in cardiovascular parameters after blockade of either glutamatergic receptors or adrenoceptors in the IC [16,31]. However, the present results contrast with data from other groups that observed increased arterial pressure and HR following microinjection of the neuronal blocker lidocaine into the IC of unanesthetized rats [53]. Lidocaine blocks both local synapses and fibers of passage [56]. Therefore, since local IC pharmacological treatment with agents that selectively inhibit synapses without affecting fibers of passage (e.g., CoCl2) does not affect basal cardiovascular parameters [20], it is possible that the effects observed previously after local lidocaine treatment are due to the inhibition of fibers passing through the IC and targeting other brain regions. Furthermore, it is important to mention that other studies did not identify effects of local lidocaine microinjection or IC lesion on cardiovascular baseline parameters [17,18], thus supporting our finding that the IC plays no role in the tonic maintenance of cardiovascular function.
In conclusion, the present results show that noradrenergic neurotransmission in the IC modulates cardiovascular adjustments during restraint stress in a complex way. Our data provide evidence that IC noradrenergic neurotransmission acting through activation of both α1- and α2-adrenoceptors has a facilitatory influence on the pressor response during acute restraint stress. Moreover, IC α1-adrenoceptors also play a facilitatory role in the restraint-evoked tachycardiac response. Table 2. Effect of bilateral microinjections into the IC of increasing doses (0.3, 5, 10 and 15 nmol/100 nl) of the selective α2-adrenoceptor antagonist RX821002 on mean arterial pressure (MAP) and heart rate (HR) baseline. | 2016-05-02T18:17:47.175Z | 2014-01-03T00:00:00.000 | {
"year": 2014,
"sha1": "23929a68a5b882fbc43bc7b8de5030399c67ce74",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0083900&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "23929a68a5b882fbc43bc7b8de5030399c67ce74",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
219689476 | pes2o/s2orc | v3-fos-license | Can Polyphenols be used as Natural Preservatives in Fermented Sausages?
Abstract This study was aimed at the investigation into the influence of polyphenols on fermented sausages produced with and without nitrite addition, during storage which lasted for 280 days. Three types of sausages were produced and formed the three experimental groups: C – the control – sausages of usual composition containing nitrites; N+P - sausages with nitrites and polyphenols; and P - nitrite-free sausages with added polyphenols. The proximate chemical composition of all groups was in the range with that of dry fermented sausages. P sausages contained 0.3 mg nitrites per kg, while C and N+P contained 54.8 mg/kg and 52.2 mg/kg, respectively. Polyphenol-enriched sausages had significantly lower peroxide and TBARS values than C sausages. In all sausages lactic acid bacteria counts reached 8.9-9.9 log cfu/g, but decreased during storage to 4.3-4.8 log cfu/g at the end of the storage period. Micrococcaceae counts remained stable: 3.5-3.9 log cfu/g. In P and N+P sausages a significantly lower number of Pseudomonadaceae was observed than in the control. The lightness of C and P sausages was similar (L=50.2 and L=49.5, respectively), while N+P sausages were darker (L=42.5). C and N+P sausages had similar redness (a*=14.5 and a*=13.2, respectively) and yellowness (b*=5.9 and b*=6.4, respectively), but the values which correspond to redness and yellowness were lower in P sausages (a*=8.0 and b*=4.6). Sensory characteristics of all products were found to be very similar. The flavour of polyphenol-enriched sausages was considered to be better. The most dominant polyphenol in sausages was kaempferol-3-O-glucoside followed by quercetin, luteolin-7-O-glucoside, catechin and syringic acid. Nitrite-free polyphenol-enriched sausages reached the same shelf life as conventional sausages containing nitrites did, which is a promising result implying that polyphenols might be used as natural preservatives and nitrite substitutes. Simultaneous use of nitrite and polyphenols is questionable due to their interactions which should be further studied.
INTRODUCTION
Fermented sausages are produced from ground meat and fatty tissue, with the addition of table salt, additives, sugar, spices and some other ingredients such as starter cultures, fibers, carbohydrates etc. The technological process of production includes preparation of the stuffing, filling into casings, smoking and drying, followed by ripening that includes physical, chemical and enzymatic processes that enable shelf life and provide sensory properties to the product [1]. Fermented sausages are meat products of high quality and are truly appreciated among consumers. According to the regulations [2], dry fermented sausages should contain less than 30% moisture and more than 20% meat proteins, while the collagen content in meat proteins should be less than 15%. The safety and shelf life of dry fermented sausages are based on the decrease in pH- and aw-values during the fermentation and drying processes, respectively, which prevents spoilage and growth of pathogenic microorganisms [3][4][5]. Characteristic sensory properties of the product, such as colour, flavour and texture, are formed through the ripening process. A typical red colour is formed by myoglobin reduction, as well as the formation of nitroso-myoglobin in sausages treated with curing salts containing nitrite and/or nitrate [1]. The flavour is influenced by fermentation, proteolysis and lipolysis, resulting from the activities of sausage microbiota and tissue enzymes, when organic acids, peptides, amino acids, amines, fatty acids, peroxides and aldehydes are released [6][7][8]. These compounds jointly contribute to the typical flavour of fermented sausages, but free fatty acids are especially prone to oxidation, which could lead to rancidity and spoilage of the product [1,6].
In order to provide microbiological safety, lipid stabilization and the slowdown of oxidation processes throughout sausage ripening, as well as colour and flavour formation, nitrites play an irreplaceable role in contemporary meat processing. However, nitrites are precursors for harmful N-nitrosamines, which are formed in the reactions between nitrites and amines that are released during sausage ripening [9]. N-nitrosamines are reported as compounds harmful to human health with a carcinogenic potential [10,11], which prompted research in the direction of reducing the use of nitrites in meat products or finding suitable replacements. However, this is not an easy task due to the multiple significance of nitrite in meat products, so an appropriate substitute should act as an antimicrobial, antioxidant, colouring and flavouring agent simultaneously [12].
Polyphenols are secondary metabolites of plants which play important physiological roles protecting them from microorganisms and ultraviolet radiation. These compounds include flavonoids (anthocyanins, flavanols, isoflavonoids, flavonols, and flavanones) and phenolic acids. It has been proved that they can exert a series of biological effects including antioxidant, antimicrobial, anti-carcinogenic and anti-inflammatory actions [13]. Owing to these properties, the use of polyphenols in meat products could provide a double effect. On the one hand, they could play a role as natural preservatives, and on the other hand they could act as functional ingredients having a consumer health-promoting potential [14].
The aim of this study was to investigate the influence of polyphenols on physicochemical, chemical and microbiological processes, as well as on sensory properties of fermented sausages produced with and without nitrite addition.
Sausage stuffing was prepared in a bowl chopper, stuffed into collagen casings 55 mm in diameter, and subjected to smoking, drying and ripening processes, in the following conditions: fermentation - 2 days at a temperature of 26°C and a relative air humidity (RH) of 90%; smoking - occasionally for 3 days at 22 to 24°C; drying and ripening at 15°C while RH gradually decreased from 90% to 75% over the following 30 days. The total production process lasted 35 days. The products were stored at a temperature of +15°C and RH of 75% for 280 days.
Six sausages were randomly taken from each experimental group and samples were investigated in duplicate. The investigation was conducted during the production period (stuffing and the end-product) and during storage on days 0 (at the beginning of storage), 30, 70, 100, 130, 190, 220, 250 and 280.
Physicochemical and chemical analysis
Physicochemical analysis included the determination of water activity (aw) with an aw-meter (FAst/1, GBX Scientific Instruments) according to ISO, 2004, and pH value measurement with a Testo 205 pH meter (Testo AG, Lenzkirch, Germany) according to the reference method [15]. The chemical composition of the sausages was determined by measuring the moisture, protein, hydroxyproline, fat, table salt, ash, nitrite and nitrate contents using standard methods [16][17][18][19][20][21][22][23]. The collagen/protein ratio (the relative content of collagen in meat protein) was calculated as follows: collagen content (%) x 100 / protein content (%).
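The ratio formula above translates directly to code; a minimal sketch with illustrative (not measured) values:

```python
# Collagen/protein ratio, exactly as defined in the text above.
def collagen_protein_ratio(collagen_pct: float, protein_pct: float) -> float:
    return collagen_pct * 100.0 / protein_pct

# Hypothetical inputs; the result falls within the 5.9-7.7 range reported later.
print(collagen_protein_ratio(1.5, 22.0))  # ≈ 6.8
```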
Lipid oxidation was determined through the acid number [24], peroxide value [25] and TBARS value according to Tarladgis [26] and Holland [27]. Proteolysis index (PI) was calculated according to the method described by Careri et al. [28].
Instrumental colour analysis
The colour of the sausages was determined instrumentally (ChromaMeter CR-400, Minolta Co. Ltd, Tokyo, Japan), using a D-65 light source, a 2° standard observer angle and an 8 mm aperture in the measuring head, according to the CIE L*a*b* system (L* - lightness, a* - redness, b* - yellowness). The results were obtained as the average value of three measurements on the cross-section surface of each sample.
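A small sketch of this post-processing step follows: averaging triplicate L*a*b* readings and, as an optional extra not reported in the paper, the CIE76 colour difference ΔE*ab between two group means. The triplicate readings are hypothetical; the day-0 group means in the last line are taken from the Results.

```python
# Average triplicate CIE L*a*b* readings and compute a CIE76 colour difference.
import numpy as np

readings = np.array([[50.1, 14.4, 5.8],
                     [50.4, 14.7, 6.0],
                     [50.1, 14.4, 5.9]])   # three (L*, a*, b*) measurements, hypothetical
sample_mean = readings.mean(axis=0)

def delta_e76(lab1, lab2) -> float:
    """CIE76 colour difference: Euclidean distance in L*a*b* space."""
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

# e.g. control (C) vs nitrite-free (P) sausages at day 0, using reported means.
print(delta_e76([50.2, 14.5, 5.9], [49.5, 8.0, 4.6]))  # ≈ 6.7, a clearly visible difference
```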
Sensory analysis
Sensory evaluation was performed by six panellists trained according to standard procedure [31], by means of quantitative descriptive analysis using a 5-point scale with scores from 5 (excellent) to 1 (unacceptable). Sausages with scores of 2.0 and higher for each attribute tested (colour, texture, cross-section appearance and flavour) were considered acceptable.
Polyphenols content analysis
Polyphenol content determination in the sausage extracts included two steps: the extraction of phenolic compounds and the HPLC-MS-MS analysis.
Extraction of phenolic compounds was performed as follows: 30 g of sausage samples were mixed with 120 mL of methanol and water (80/20, v/v) containing 20 mg/L of butylated hydroxytoluene (BHT). The system was homogenized using a rod dispenser (T18 Digital Ultra-Turrax, IKA®-Werke GmbH & Co, Germany) for 1 min at 6,000 rpm and centrifuged at 4,000 rpm for 10 min, and the supernatant was recovered. The operation was repeated twice, and the collected extract was then concentrated by rotary evaporator (Heidolph, Germany) until reaching 50 mL, which was used for the extraction of phenols by solid-phase extraction (SPE). An ODS-C18 SPE cartridge (AccuBOND II ODS-C18, Agilent Technologies, 500 mg), previously activated with 10 mL of methanol and 10 mL of water, was loaded with the obtained water extract. The elution of phenolic compounds was performed with 10 mL of methanol. After solvent removal under vacuum, the phenolic compounds were solubilized in 1 mL methanol and passed through a 0.2-µm-pore-size RC filter (Merck KGaA, Germany). The extract was submitted to HPLC-MS/MS analysis.
HPLC-MS-MS analysis was performed as follows: 15 working standards, ranging from 1.53 ng/mL to 25.0·10³ ng/mL, were prepared by serial 1:1 dilutions of the standard mixture with a mixture of distilled water and methanol (1:1). Prepared extracts and standards were analysed using an Agilent Technologies 1200 Series high-performance liquid chromatograph coupled with an Agilent Technologies 6410A Triple Quad tandem mass spectrometer with electrospray ion source, controlled by Agilent Technologies MassHunter Workstation software - Data Acquisition (ver. B.03.01). Five microlitres were injected into the system, and compounds were separated on a Zorbax Eclipse XDB-C18 (50 mm × 4.6 mm, 1.8 μm) rapid resolution column held at 50°C. The mobile phase, consisting of 0.05% aqueous formic acid (A) and methanol (B), was delivered at a flow rate of 1 mL/min in gradient mode (0 min 30% B, 6 min 70% B, 9 min 100% B, 12 min 100% B, re-equilibration time 3 min). The eluted components were detected by MS, using the following ion source parameters: nebulization gas (N2) pressure 50 psi, drying gas (N2) flow 10 L/min and temperature 350°C, capillary voltage 4 kV, negative polarity. Data were acquired in dynamic SRM mode, using optimised compound-specific parameters (retention time, precursor ion, product ion, fragmentor voltage and collision voltage). For all compounds, peak areas were determined using Agilent MassHunter Workstation Software - Qualitative Analysis (ver. B.03.01). Calibration curves were plotted and sample concentrations calculated using Microsoft Excel software.
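The calibration arithmetic described above (serial 1:1 dilutions, linear area-to-concentration curve) can be sketched as follows; the detector response model and peak areas are invented for illustration, and the real calculations were done in Excel.

```python
# Sketch: serial 1:1 dilution series plus a linear calibration curve.
import numpy as np

top = 25.0e3                                   # ng/mL, highest standard
stds = top / 2.0 ** np.arange(15)              # 15 standards, down to ~1.53 ng/mL

# Fake detector response: area proportional to concentration plus noise.
areas = 12.0 * stds + np.random.default_rng(0).normal(0, 5, stds.size)

# Least-squares calibration line: area = slope * concentration + intercept.
slope, intercept = np.polyfit(stds, areas, 1)

def conc_from_area(area: float) -> float:
    return (area - intercept) / slope

print(conc_from_area(1200.0))                  # ng/mL for a hypothetical sample peak
```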
Statistical analysis
Statistical analysis of the results was conducted using the software GraphPad Prism version 6.00 for Windows (GraphPad Software, USA). Two-way analysis of variance (ANOVA) was used to determine the significance of the differences between experimental groups. When significant interactions were found, the data were evaluated by one-factor analysis of variance (ANOVA) with Tukey's multiple comparison test. Statistical significance was considered at the level of P < 0.05.
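A minimal sketch of this pipeline, using Python instead of Prism purely for illustration; the column names and synthetic data are assumptions, and this simple factorial ANOVA ignores any within-sausage correlation that the original analysis may have handled differently.

```python
# Sketch: two-way ANOVA (group x storage day) followed by Tukey's HSD.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Synthetic long-format data: one measurement (e.g. TBARS) per row.
rng = np.random.default_rng(0)
data = pd.DataFrame({
    "group": np.repeat(["C", "N+P", "P"], 30),
    "day": np.tile(np.repeat([0, 130, 280], 10), 3),
    "value": rng.normal(1.0, 0.2, 90),
})

model = ols("value ~ C(group) * C(day)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))         # main effects and interaction

# If the interaction is significant, compare groups pairwise:
print(pairwise_tukeyhsd(data["value"], data["group"], alpha=0.05))
```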
Chemical composition
The results of the investigation into the chemical composition of the sausages are shown in Table 1. At the end of production, moisture (29.4-33.0%), fat (40.4-42.3%) and protein (21.6-23.2%) contents, as well as the collagen/protein ratio (5.9-7.7), were very similar among experimental groups. Significant differences were found in the contents of nitrites and nitrates. P sausages contained nitrites in traces (0.3 mg/kg) in the stuffing, which is significantly lower than in C and N+P sausages (54.8 and 52.2 mg/kg, respectively). Nitrates were present in the stuffing of P sausages in the amount of 8.7 mg/kg, while in C and N+P sausages the nitrate content was 38.2 and 34.4 mg/kg, respectively. In the end-products, both nitrite and nitrate contents decreased compared to the stuffing. P sausages contained 0.2 and 0.3 mg/kg nitrites and nitrates, respectively. There was no significant difference in the nitrite content between C (11.4 mg/kg) and N+P sausages, but concerning the nitrate content, N+P sausages had almost twice as high a content of nitrates (22.02 mg/kg) as did C sausages (13.78 mg/kg). a,b,c = different letters indicate differences (P < 0.05) between experimental groups for investigated parameters, separately for the stuffing and the end-products. C = control - a sausage of usual composition containing nitrites; N+P = sausage produced with nitrites and polyphenols; P = nitrite-free sausage with polyphenols
Physicochemical and oxidative changes
The changes in a w -and pH-values as well as proteolysis indexes in the fermented sausages are shown in Figure 1.
During the 35-day production period, the aw value declined from 0.92-0.93 in the stuffing to values that ranged from 0.82 (P sausages) and 0.83 (N+P sausages) up to 0.85 (C sausages) and remained mainly unchanged during storage. A more intensive aw value drop was observed in polyphenol-containing sausages (N+P and P), where the lowest value was observed in P sausages after 190 days of storage (0.79), but after 280 days there was no difference between experimental groups (0.83).
The pH value decreased from 5.57 (N+P and P sausages) and 5.61 (C sausages) in the stuffing to 5.32-5.34 (N+P and P, respectively) and 5.40 (C sausages) after 35 days of production. During storage, the pH value gradually increased in all sausages, but it was significantly lower in polyphenol-containing sausages compared to the control, reaching 5.41 (N+P), 5.46 (P) and 5.48 (C) after 280 days. The results of the investigation into lipid oxidation changes during production and storage of the sausages are shown in Figure 2. The acid value was higher after the production period in polyphenol-enriched sausages.
Microbial changes
The results of the microbiological investigation during the production of the sausages are shown in Figure 3, and during the storage time in Figure 4. The results showed that the most abundant were lactic acid bacteria (LAB), rising from 5.0-5.1 log cfu/g (in the stuffing) up to 8.9 log cfu/g in P sausages and to 9.8 and 9.9 log cfu/g in C and N+P sausages, respectively, on day 14 of production (Figure 3). There was no difference in LAB counts between experimental groups through the remainder of the storage period. The counts of Micrococcaceae were rather similar in all sausages during production (3.5-3.9 log cfu/g), except on day 7, when N+P sausages contained significantly lower numbers (5.1 log cfu/g) than C and P sausages (5.9 and 6.0 log cfu/g, respectively). Enterobacteriaceae and Pseudomonadaceae, which were present in the stuffing in counts of 2.0-2.8 log cfu/g and 2.9-5.2 log cfu/g, respectively, were not detected after 14 days and after 28 days of production, respectively. In P and N+P sausages, significantly lower numbers of Pseudomonadaceae were observed on days 14 (3.33 and 3.68 log cfu/g, respectively) and 21 (1.87 and 2.75 log cfu/g, respectively) compared to the control (3.98 log cfu/g on day 14 and 2.94 log cfu/g on day 21).
During storage, the LAB count decreased from 8.9-9.0 log cfu/g to 4.3-4.8 log cfu/g after day 280, being similar in all sausages (Figure 4). Exceptions were observed on days 30 (6.6 log cfu/g in N+P and 6.7 log cfu/g in P sausages) and 190 (4.5 log cfu/g in N+P and 4.7 log cfu/g in P sausages), when LAB counts were significantly lower in polyphenol-containing sausages compared to the control sausages (7.8 and 5.8 log cfu/g, respectively). The Micrococcaceae count decreased from 3.5-3.9 log cfu/g to 2.0 log cfu/g after 130 days, after which they were no longer detected until the end of the storage time. Salmonella spp. and Listeria monocytogenes were not detected in any of the production and storage phases.
Colour parameters and sensory evaluation
The results of instrumental colour measurement according to the CIE L*a*b* system are presented in Figure 5. At the beginning of the storage period, C and P sausages had similar lightness (L* = 50.2 and 49.5, respectively), while N+P sausages were significantly darker (L* = 42.5). At the same time, C and N+P sausages had similar redness (a* = 14.5 and 13.2, respectively) and yellowness (b* = 5.9 and 6.4, respectively), while the values corresponding to redness and yellowness were lower in P sausages (a* = 8.0 and b* = 4.6). The results of sensory evaluation are shown in Figure 6. The colour of P sausages was rated significantly lower during the first 30 days (4.4) compared to C and N+P sausages.
Polyphenol contents in sausage extracts
The results of the investigation of polyphenol contents in sausage extracts are shown in Table 2. The most dominant in N+P and P sausages was kaempferol-3-O-glucoside (23.7 and 33.0 ng/g, respectively), followed by quercetin (14.7-15.2 ng/g), luteolin-7-O-glucoside (8.6-12.1 ng/g), catechin (6.8-7.5 ng/g) and syringic acid (5.4-4.0 ng/g). It was observed that the total content of polyphenols detected in the sausage extracts grew during the storage period, rising from 96.9 to 152.2 ng/g in P sausages and from 71.9 to 438.4 ng/g in N+P sausages. The total content of polyphenols detected in sausage extracts was 1.7 (end-product), 2.0 (day 30) and 5.2 (day 70) times higher in N+P sausages compared to P sausages.
Chemical composition
All the investigated chemical parameters were in the range typical for dry fermented sausages [1,4] and met the regulation requirements [2] concerning moisture and protein content, as well as the collagen/protein ratio (Table 1). The nitrite content in the stuffing of the P sausages was significantly lower than in C and N+P sausages, as they were produced without nitrite addition. The presence of nitrates in the stuffing of C and N+P sausages could be explained by fast oxidation of nitrites to nitrates after their addition [9]. The detection of nitrates in the stuffing of P sausages could have resulted from their natural presence in spices [32], which were added to the sausages during production. In the end-products, both nitrite and nitrate contents decreased compared to the stuffing in all sausages because of their complex reactions with the sausage matrix compounds [9]. Interestingly, N+P sausages had almost twice as high a content of nitrates as did C sausages, which could be attributed to reactions between phenolic compounds and nitrites that have also been observed by other authors [33], but the nature of these reactions in fermented sausage matrices should be further investigated.
Physicochemical and oxidative changes
The recorded aw value decline (Figure 1) was usual for dry fermented sausages [1], reaching values far below 0.90, which is important for product safety [33], and remained mainly unchanged during storage. A more intensive aw value drop was observed in polyphenol-containing sausages (N+P and P), which could be attributed to the simultaneously more intense pH decrease in these sausages (Figure 1). Namely, the decrease in pH value reduces the water-binding capacity of meat, which results in more intense water release and sausage drying [35]. Lower pH values in polyphenol-enriched sausages were also reported by Moawad et al. [36], which was explained by more intense inhibition of spoilage microorganisms, which could otherwise contribute to a higher pH value due to their proteolytic activity. In our study, the highest proteolysis index after the production period was observed in C sausages, which is in accordance with those data. Moreover, the microbiological investigation in our study showed significantly higher counts of proteolytic bacteria (Pseudomonadaceae) in C sausages on days 14 and 21 of the production period (Figure 3). During storage, the pH value increased simultaneously with the proteolysis index (Figure 1), which could be attributed to the activity of tissue proteolytic enzymes, as spoilage bacteria were not detected anymore, which is normal during the ripening of meat products [28]. In our investigation, a higher proteolysis index during storage was observed in polyphenol-enriched sausages, but it was within the range typical of fermented sausages [36] and represents a measure of product maturity, which is desirable in dried meat products [28].
After the production period, the acid value, which indicates the intensity of lipid hydrolysis, was higher in polyphenol-enriched sausages, and this remained so up to day 90 of storage (Figure 2). Afterwards, the highest acid value was observed in the control group. As free fatty acids released via lipid hydrolysis contribute to the flavour of fermented sausages [6,7,8], these changes could not be considered undesirable. On the other hand, peroxide and TBARS values, which indicate lipid oxidation, showed different patterns in the first (up to day 130) and second period of storage (from day 130 to day 280). In the first period, the lowest peroxide value and highest TBARS value were observed in P sausages, but as the highest amount of aldehydes in these products was far below the sensory rancidity limit of 2.2 mg MAL/kg [37] and is typical of dry fermented sausages [36], such findings do not represent a significant disadvantage. After 130 days of storage, polyphenol-enriched sausages had significantly lower peroxide and TBARS values than C sausages. Additionally, in this period P sausages had significantly lower peroxide and TBARS values than N+P sausages, probably due to the chemical reactions between nitrites and polyphenols [33], which decreased the antioxidant activity of polyphenols in these sausages.
Microbial changes
As the main changes in fermented sausage microbiota occur in the first several weeks [1], the microbiological investigation was conducted every 7 days during the production of the sausages (Figure 3), while during the storage time the investigation was carried out on a monthly basis (Figure 4). The results showed that in all sausages the most abundant were lactic acid bacteria (LAB). Although P sausages contained significantly lower counts of LAB on day 14, which could be attributed to a mild antimicrobial effect of polyphenols [37], on other days there was no difference in LAB counts between experimental groups, and the counts were all within the range optimal for the fermentation process in sausages [1]. The counts of Micrococcaceae were rather similar in all sausages during production, except on day 7, when N+P sausages contained significantly lower numbers than C and P sausages. Micrococcaceae are among the useful microorganisms in fermented sausages and play an important role because of their peroxidase activity and aroma formation [1]. Concerning spoilage bacteria, Enterobacteriaceae were not detected after day 14 and Pseudomonadaceae after day 28 of production. In P and N+P sausages, significantly lower numbers of Pseudomonadaceae were observed on days 14 and 21, which could be attributed to the antimicrobial effect of polyphenols on Pseudomonadaceae already described by other authors [38].
During storage, the LAB count slightly decreased and was similar in all sausages (Figure 4). Exceptions were observed on days 30 and 190, when LAB counts were significantly lower in polyphenol-containing sausages, which could be the result of the antimicrobial activity of polyphenols [38]. Micrococcaceae were not detected after day 220 of storage in any of the experimental sausage groups. Despite differences between experimental groups in some phases during storage, the changes in microbiota were in the range characteristic of fermented sausages [1]. Concerning pathogenic bacteria, Salmonella spp. and Listeria monocytogenes were not detected in any of the production and storage phases, which confirms the safety of all sausage groups tested [4].
Colour parameters and sensory evaluation
At the beginning of the storage period, C and P sausages had similar lightness (Figure 5), while N+P sausages were significantly darker. At the same time, C and N+P sausages had similar redness and yellowness, while these parameters were significantly lower in P sausages. Such results could be explained by faster red colour formation - the red-purple pigment nitroso-myoglobin is obtained in reactions between nitrite and myoglobin [9]. In fermented sausages produced without additives, as in traditional production, the stable red colour is formed through the slow process of myoglobin reduction and stabilisation [32]. During storage, the lightness equalized in all experimental groups on days 130, 250 and 280. Redness was significantly lower in polyphenol-containing sausages during the whole storage period. Although the N+P sausages contained nitrites, the redness of these sausages was lower than that of C sausages, which was probably due to the partial loss of nitrites in reactions with polyphenols [33]. The significantly higher yellowness in C sausages during storage could be attributed to more intense lipid oxidation (see Figure 2) in these sausages, where peroxides and other oxidation products degraded red pigments, increasing the yellowness [38].
The results of instrumental colour measurements were confirmed by sensory evaluation (Figure 6). The colour of P sausages was rated significantly lower during the first 30 days compared to C and N+P sausages, but from day 70 to 130 it received higher ratings because of its more intense red colour, which could also be seen in the increased a* value on day 70 observed by instrumental colour measurement (Figure 5). Concerning other sensory parameters, all products were similarly rated in all storage phases. Decreasing scores for all sensory properties, especially after day 130, resulted mostly from oxidative changes, which affected primarily the flavour, colour and cross-sections of the products. However, polyphenol-containing products were acceptable up to day 280 of storage, being rated on average about 3.0 for all sensory attributes, while C sausages were rated lowest because of their flavour, which received a score of 2.0. The better flavour of polyphenol-enriched sausages could be attributed to the antioxidative role of these compounds [13], which was confirmed by the lower parameters of lipid oxidation in this study (Figure 2). Additionally, sausages from our experiment were highly rated for all examined attributes on day 190 of storage, which is important as dry fermented sausages are usually stored up to 180 days [39]. Thus, this study confirmed the possibility of a prolonged storage period for dry fermented sausages, even for nitrite-free sausages enriched with polyphenols.
Polyphenol contents in sausage extracts
The investigation into polyphenol contents in sausage extracts showed that the most dominant was kaempferol-3-O-glucoside (kaempferol-3-O-Glc), followed by quercetin, luteolin-7-O-glucoside (luteolin-7-O-Glc), catechin, syringic acid and others, which is characteristic of grape [40], used as the source of polyphenols in our study. It was observed that the content of polyphenols grew during the storage period, which could be explained by the increase in the concentrations of sausage compounds along with the release of water during drying [1]. Interestingly, at the end of production and after 30 days of storage the extract of N+P sausages contained twice as high, and after 70 days even five times higher, total contents of polyphenols than P sausages, although the same amounts of grape powder were added to their stuffing. This could be explained by the proneness of polyphenols to bind to proteins and build insoluble complexes [41], which limited polyphenol extractability from P sausages.
Concerning N+P sausages, it should be taken into account that, owing to the reactivity of polyphenols with nitrites, certain soluble derivatives are formed [33] which could be extracted from the sausages, but this should be confirmed by further studies.
CONCLUSIONS
The addition of polyphenols did not affect the fermentation and drying processes in fermented sausages, and products of standard chemical composition were obtained. Nitrite-free sausages contained nitrites in traces. Sausages with both nitrites and polyphenols had higher nitrate contents than the control, which indicates that reactions between polyphenols and nitrites occurred. Polyphenol-enriched sausages had significantly lower peroxide and TBARS values than conventional ones. However, those produced with both nitrites and polyphenols showed higher lipid oxidation than nitrite-free polyphenol-enriched sausages, which indicates a lower antioxidative potential in the sausage matrix due to interactions between nitrites and polyphenols. Microbiological processes in all experimental groups were typical of fermented sausages concerning lactic acid bacteria and Micrococcaceae counts, and polyphenol-enriched sausages had lower counts of spoilage bacteria (Pseudomonadaceae). Although instrumental colour measurement registered differences in lightness, redness and yellowness, the sensory properties of all groups of sausages were highly rated during most of the storage period and remained above the discriminating level up to day 280. The most dominant polyphenol compound in the sausages was kaempferol-3-O-Glc, followed by quercetin, luteolin-7-O-Glc, catechin and syringic acid.
Nitrite-free polyphenol-enriched sausages reached the same shelf life as conventional sausages, which is a promising result suggesting that polyphenols can be used as potential nitrite substitutes. On the other hand, the simultaneous addition of polyphenols and nitrites to fermented sausages is questionable because of indications that interactions between nitrites and polyphenols diminish the positive, especially antioxidative, roles of both ingredients. Because of the complexity of these reactions, further studies should be conducted to establish their nature in fermented sausage matrices. | 2020-06-11T09:04:48.059Z | 2020-06-01T00:00:00.000 | {
"year": 2020,
"sha1": "c3a7ae2a11d77aebdbcd9c2026931a7c88de89ff",
"oa_license": "CCBY",
"oa_url": "https://content.sciendo.com/downloadpdf/journals/acve/70/2/article-p219.pdf",
"oa_status": "GOLD",
"pdf_src": "DeGruyter",
"pdf_hash": "f097b0fad846aec6468452f8ecc2208c9b702252",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
157547324 | pes2o/s2orc | v3-fos-license | Reproducing Home: Arab Women's Experiences of Canada
Birth is more than physical reproduction; through reproductive traditions and birthing processes, social reproduction is manifested. In other words, these traditions and ceremonies highlight the values of social life. Migration to a new country may affect the ability of women and their families to perform reproductive rituals; hence, examination of women's birth stories may demonstrate the tensions between the challenges and benefits of migration. This shifting ground of multiple identities in turn contextualizes the process of acculturation as migrants strive to adapt to their new country while maintaining cultural and ethnic identities. In this article, we study experiences of reproduction to examine how Arab immigrant women shape their Canadian identities while balancing connections with their families “back home” and their ethnic/cultural identities.
Introduction
Childbirth is more than the physical reproduction of the family, for it also functions as social production, in the reproduction of cultural values and belief systems (Franklin & Ragoné, 1998;Jordan, 1993). As such, the rituals associated with pregnancy, birth and the post-partum period symbolize ideologies of gender, healing, and religion as well as forming individual and group identities.
We suggest that the process of reproduction may also be fertile ground for the study of the interplay between acculturation and maintenance of migrants' traditions and belief systems. Acculturation is defined here as the process whereby immigrants who are members of minority communities incorporate and practice traditions of the dominant group (Berry, 1980;Snowden & Hines, 1999). Of course this process is mitigated by actual and perceived opportunities for integration, as well as intergroup heterogeneity (e.g. variations based on individual behaviors, characteristics, etc.), with respect to both the migrant and dominant cultures and communities (Rudmin & Ahmadzadeh, 2001;Vega & Rumbaut, 1991). Furthermore, the process of acculturation is not unilateral, for migrant communities also influence and contribute to the dominant group's socio-cultural beliefs and practices (Phillips, 2005).
Migration can be a stressful process, especially for women who give birth to children in a new cultural context, and who do not have the means or knowledge to fulfill their obligations to arrange for traditional rites and rituals (Deitrick, 2002;Groleau, Soulière & Kirmayer, 2006). On the other hand, immigrant women may welcome the opportunity to participate in "new" reproductive technologies, and may welcome the diminished responsibility of celebrating birth as a social event weighed down with "old", unnecessary traditions. This article will compare and contrast Arab women's reproductive experiences in their home countries with those in Canada, as well as examine the factors that shape women's abilities and desires to participate in traditional pregnancies, birthing processes, and post-partum practices. To add historical depth to this discussion, comparisons are made between the experiences of recent and non-recent migrants. The examination of the sociocultural framework of reproduction and immigrant women's experiences of the reproductive process highlights both challenges and opportunities associated with migration and living in Canada.
Methodology
This study was conducted in a large city in Western Canada with a population of approximately one million. According to the most recent 2001 census, 17.8% of this city's population consists of immigrants and 14.6% of visible minorities. The Arab community makes up 6.7% of this visible minority population (Statistics Canada, 2001). The results presented here are part of a larger study examining adult women's migration experiences. Both recent immigrants who have been residing in Canada for less than ten years and non-recent immigrants who have been residing in Canada for more than ten years were recruited, in order to compare changes over time in immigrant women's experiences. A female Arab research assistant assisted with recruitment of participants, translation when required during the interview process, and transcription of interviews.
This study was designed to be community-based, and has been conducted in collaboration with a local community organization composed of immigrant women from various countries. It was with the help of this organization that we located research assistants (one from each participating community), who were also instrumental in the development of the research design (e.g. defining appropriate questions and how to best phrase them). One goal of the study is to be able to provide this community organization, and other interested parties, with information that can be used for advocacy, and in the development of culturally appropriate health education programs (e.g. food prescriptions/proscriptions, traditional knowledge and care of mothers' postpartum that could be incorporated into medical practices) both for participating communities and health providers.
Six focus group interviews were conducted, three with recent and three with non-recent immigrants. This number of focus group interviews was found to be adequate for saturation (Glaser & Strauss, 1967;Morgan, 1997). Each focus group consisted of five to eight women. Discussions were elicited easily; therefore, groups of this size were adequate for collecting a range of opinions, as well as allowing all the participating women to be heard. Thirty-six Arab women participated. In conjunction with these interviews, demographic information was collected, and acculturation and body image measurements were administered. The findings from these focus group interviews are the focus of this article.
All women gave informed consent prior to participation in the focus group interviews, and they had the option of ceasing participation at any point, requesting that their comments not be utilized after it was over, or of not participating in a particular portion of the focus group interview. Interviews were conducted in women's homes or community centers, and took approximately two and a half hours to complete. Helen Vallianatos and a research assistant were present at all interviews; the former took notes and asked probing questions and the latter facilitated the discussion, using predetermined interview questions. The Human Research Ethics Committee of the Department of Anthropology, Faculty of Arts at the University of Alberta approved the study.
Interviews were audio taped, translated into English where required, and transcribed. Data analysis from the interviews was content-based (DeVault, 1990). In other words, the data were examined for patterns in what women said, rather than doing a narrative analysis of the content. This method is useful in exploring themes. Qualitative data were coded by reviewing all cases. Codes were formulated through a line-by-line analysis of concepts that were identified in the data. Comparative analysis led to the development of categories. This level of analysis examined how women used the codes defined in the first stage. Themes were developed from the categories that emerged from the data, and by comparing these concepts to those reported in the literature. Helen Vallianatos conducted the data analysis.
Characteristics of Participants
A description of the characteristics of the study's participants is provided in order to contextualize their experiences of migration and reproduction. A total of fifteen non-recent and twenty-one recent Arab women participated in the focus group interviews. The non-recent immigrant women had been in Canada for an average of twenty-three years, whereas the recent participants had lived in Canada for an average of three and a half years.
Compared with the recent immigrant women, nonrecent migrants were about a decade older on average (32 and 41 years respectively). All of the women were, or had been married; three were divorced. Household size varied from two (among those who were married without children) to seven members. Household income was variable, although just over half of the participants were living in low-income households. In Canada, a poverty line per se does not exist. Instead, low-income households are defined according to community and household size. Part of the financial difficulties faced by immigrants is the lack of recognition of foreign credentials, resulting in underemployment. According to various Arab informants, "We [in our city] have the most educated cab drivers," a statement that reveals the extent of underemployment in this immigrant community.
All of the participants were Muslim except one, and all self-identified as Arab. Their countries of origin were diverse: Egypt, Iraq, Syria, Jordan, Palestine, but the majority were from Lebanon. The majority of participants had come to Canada on family reunification visas. In order to have some measure of assessing their various degrees of acculturation, an acculturation scale measuring language usage patterns was used. This four-item questionnaire, where responses are scored on a 5-point scale, was modified from one previously used among Spanish-speaking immigrants (Norris, Ford & Bova, 1996;Wallen, Feldman & Anliker, 2002). Unquestionably, language use alone does not measure all degrees and varieties of acculturation: previous research has found that language preference and use can be used to provide an appropriate estimate of acculturation (Marin, Sabogal, Marin, Otero-Sabogal & Perez-Stable, 1987;Norris, Ford & Bova, 1996;Wallen, Feldman & Anliker, 2002). Nonrecent immigrants scored higher on this scale, indicating greater ease with, and frequency of English in everyday life. This suggests that non-recent immigrant women felt relatively more comfortable interacting with non-Arab Canadians.
Comparisons of Reproductive Rituals between Canada and "Home"
In this section, we compare and contrast reproductive rituals practiced in Canada with what those women recalled from their home countries. All women could recount traditions associated with pregnancy, birth and the post-partum period, even if they had only experienced pregnancy and birth in Canada. However, most women could contrast their own personal experiences of reproduction both in Canada and at home, having had some of their children before migrating to Canada. Three interrelated themes on women's reproductive experiences emerged from the interviews, labeled as follows: 1) social support networks; 2) medical technology and personnel; and 3) feeding the mother. As each of these themes is discussed, the viewpoints of recent and non-recent migrants are compared.
Theme 1: Social Support Networks
Many of the discussions concerning women's lives during pregnancy and the post-partum period focused on the traditions of social support evident throughout women's reproductive experiences. This was especially emphasized in the conversations with non-recent immigrants, one of whom explained, "You treat yourself, everyone around you treats you". Women recalled being pampered throughout pregnancy, demonstrated not only by the amount of food offered to them, but also by receiving special foods or foods that they craved. Extended family networks ensured that mothers were well cared for and helped first time expectant mothers prepare for birth. This continued into the post-partum period, when mothers recalled doing nothing but sleeping and caring for their newborn infant for the first couple of weeks, while female family members rallied to ensure domestic tasks were completed.
All women who reminisced about birthing in their home countries pointed out that one of the greatest challenges they faced in Canada was the lack of social support available for mothers, especially during the postpartum period. During these times, the comfort and care provided by families were especially missed. This was a source of the greatest tension, as it highlighted differences in how womanhood and birth are viewed in Canada. Neolocal residence patterns are the norm in Canada, reflecting emphasis on individuality and reliance on self. Families are often dispersed, and are not available to provide much help during the reproductive process due to both time and distance limitations. Birth in Canada is arguably a medical event, occurring primarily under the expert control of medical professionals, and in hospitals where medical technology is readily available for use (Bourgeault, Declercq & Sandall, 2001;Davis-Floyd, 1992;Davis-Floyd & Sargent, 1997;Daviss, 2001;Jordan, 1993;Wrede, Benoit & Sandall, 2001). In contrast, although Arab women reported that medical systems and interventions were commonly used in their home countries, birthing was to a greater degree a social event, marked with celebrations and women, ensconced in supportive social systems, could take at least some time to rest and bond with their infant.
Due to their inability to divide labor, and to lean on the support of others, some women perceived the Canadian birth context as detrimental to their own and their infants' well-being. This is exemplified in the following comparison made by a non-recent immigrant: In Beirut, my mother-in-law bought me lemons and homemade soup, and she used to make me special foods because I was pregnant and it was good for me and the baby. My first son was born in Lebanon; he was 10 lbs, but my other kids were born smaller, like Hana was one month early and she was 4 lbs. That is because I did not like the food here; even if I found everything [that could be found at home], the food still was not good. This woman's description of the important support she received from her mother-in-law also highlights the symbolic value of food. The above quotation also suggests that food can demarcate challenges of adjusting to life in Canada. The mother's dislike of the food, even if Arabic food was found and purchased, is symbolic of missing home and everything that home represents. The reliance on others for help in provisioning and preparing special food perceived as a requirement for a healthy pregnancy is further explored below. Before turning to explore in greater depth the care women receive as new mothers, represented by food items and the sharing of food preparation, an investigation into the medicalization of birth in women's home countries is in order.
Theme 2: Medical Technology and Personnel
In the past two decades, much research has been conducted on the medicalization of reproduction, including childbirth. This process of medicalization may be understood as the process by which the language and ideology of scientific medicine come to predominate in explanations of human behavior and biology. In relation to childbirth, emphasis falls on the increasing use of technology in birthing, on "operationalizing" childbirth, on treating it as a "disease" to be fixed (Davis-Floyd, 1992). This process of medicalization of women's reproduction is commonplace in North America, thus birth in Canada can be categorized as a medical event (Davis-Floyd & Sargent, 1997;Daviss, 2001). However, it is a mistake to conceptualize reproduction and childbirth as either a medical or social event. Both, to varying degrees, are important in formulating women's experiences; they are not mutually exclusive. This is especially clear when discussing birth with recent immigrants.
Recent immigrant women further contextualized the meaning of birth in their discussions concerning medical and social aspects of reproduction in Canada and their home countries. Having more recently experienced birth in their home countries, they pointed out that medical interventions in the birthing process are global. However, both positive and negative contrasts can be made, as shown in the following exchange: Woman 1: I was more comfortable during my pregnancy in Lebanon. Here I was tired all the time. But during delivery, here is much better because here you can have your mother or sister beside you in the delivery room, but back home, no one is allowed in the room with you. No, here is much better to have the baby, really. Woman 2: But the doctors and doctor visits, the doctors' care is much better there. Woman 1: Yeah, that's true. Woman 2: Like when you're pregnant back home you go to the doctor right away, you don't need to make any appointment and wait, and the same doctor you go to is the same doctor who does the ultrasound. The machines are in the same office as the checkup and everything. Woman 3: Generally being pregnant there is more comfortable, but here it's much better at the time of delivery. Woman 2: Well I'm pregnant right now and I feel tired a lot too. Woman 4: Like for the delivery, the nurses back home are not as nice as the nurses here. They yell at you if you're in pain, Wallah, they really do.
Note in this exchange that being tired was associated with living in Canada. As mentioned in the previous section, this reflects the fact that women have smaller social networks in Canada, hence they have fewer people on whom they could count to help on a daily basis -a pattern that has previously been reported for Iranian and other immigrant communities (Ali, 2002;Dossa, 2004;Lock, 1991). This is exacerbated by the relatively fast pace of the Canadian lifestyle, where long work hours are required to stay financially solvent, let alone to get ahead and rise in socioeconomic status (Vallianatos, Ramos-Salas & Raine, 2005). Constant work further impedes women's abilities to find others who have the time to help, so the pampering during pregnancy and the postpartum period experienced in home countries are idealized in immigrant women's imaginations. Furthermore, hospital stays for routine births are a day or two, so women must return to their homes and their household duties, with no one to help.
The above exchange also highlights a major difference in medical practices and interactions with medical personnel. Access and ongoing care were described as being better at "home" since women could see their care-provider whenever they wished, and were ensured that the same provider who was advising them throughout their pregnancy would be present at the birth. Building a personal relationship with one's care provider fosters trust, and in turn helps to make women's reproductive experiences more positive. This is usually not the case in recent years in Canada. Instead, the obstetrician on call attends the birth, and it is becoming common for obstetrical practices to consist of a group of doctors. For women this means that even for routine visits during pregnancy, they may see whichever partner is available, consequently not developing a strong personal connection with one doctor. The next section examines the reproductive process as a whole, and food practices in particular, to develop an understanding of the social significance of reproduction.
Theme 3: Feeding the Mother
"When people know you're pregnant, they all want to feed you." (Non-recent Immigrant Woman)
As this quote shows, women recalled pregnancy as being a special time back home, where they were sheltered, given special food, and encouraged to rest and take care of themselves. In contrast, because of lifestyle pressures in Canada, immigrant women often felt lonely, and missed the care provided by extended family members and friends in their home country. Recent migrants in particular spoke of these differences, as exemplified by this quotation: "They make the food for you, and you are always resting. Friends and family make you soup and other food. Here you have to do everything yourself, it's much harder." Food cravings are a common experience shared by pregnant women around the world (e.g. Coronios-Vargas, Toma, Tuveson & Schutz, 1992;Demissie, Muroki & Kogi-Makau, 1998;Vallianatos, 2006). Folk knowledge often emphasizes the importance of satisfying cravings, for if not met, it is believed that something may happen to the fetus. An example of this is shown in the following exchange: Woman 4: One advantage is that because here everything is available, when a woman craves it's easy to find whatever she wants. Back home, if you crave watermelon in winter, you can't find it.
Researcher: So what happens if you can't meet your cravings? Woman 1: You wait until you find it and eat it, you just keep craving. Woman 6: We believe if you don't get what you craved for, the shape of the food you craved will be on the baby's body.
In the above exchange, it is also clear that there are benefits to living in Canada while pregnant. Seasonality has little impact on food availability, as food items from around the world are imported, although food purchased out of season may be more expensive. Nevertheless, the plethora of products available in grocery stores from around the world ensures that cravings may be satisfied.
The study's participants also reported food prescriptions for the post-partum period. These dietary practices are believed to enhance breastfeeding by ensuring that adequate quantities of high-quality breast milk were produced. These foods included 'hot' foods, necessary for bringing the body back into equilibrium. Non-recent immigrants recalled: Woman 1: After you have the baby, they give you a lot of milk and hot stuff to drink so the milk will come. Here the first thing they bring you in the hospitals is cold water! It was weird, back home they give you hot drinks. {Aside as group discusses why they would be given cold items in Canada; no one knew} Woman 2: We give her eggs with garlic and cumin after she delivers, for the milk to come. Woman 3: Also chicken soup for one month. Woman 2: And sawdah (liver) which is good for the blood.
The discussion of the necessity of giving new mothers 'hot' items seems to be folk knowledge based on traditional Islamic medical procedures. Islamic medicine is a humoral medical system, in which health is defined as the balance of humors. Imbalance can result from an individual's activities, including dietary and physical activity patterns, and must be corrected in order to maintain health. Also shaping the balance of humors are environmental factors, such as the seasons of the year, and individual characteristics, such as personality and different stages in life, including pregnancy, birth, and lactation (Ullmann, 1978). Pregnancy is considered a 'hot' condition, and birth releases this heat from women's bodies. Consequently, women must be given 'hot' foods or drinks that are strengthening, and protect against illness caused by being too 'cold'. Giving 'hot' foods also ensures that adequate milk will be produced, and that the breast milk will be of good quality. Although women did not have a deep understanding of humoral medical systems, it does seem to have permeated folk wisdom, demonstrated by the conversation above. In this context, it is surprising that women in Canada are given cold items during labor and after birth.
Breast-feeding has been shown to be a common practice; most of the women studied breast-fed for at least a few months, and often for two years. Social support was provided to mothers in their home countries, in order to successfully initiate and to encourage continuation of breastfeeding, as reported by one recent immigrant woman: "Back home, they all try to help you to breast-feed as long as possible. They try to make everything comfortable for you to breast-feed, and they make all the foods that bring more milk." Breast-feeding is a skill that takes time to learn, and even for multiparous women, time is required to develop the relationships with their new infant. The lack of a large support network in Canada means that women often feel harried to meet all their household tasks, leaving only a day or two to recover from birth and establish a routine. This lack of social space for breast-feeding, in conjunction with difficulties mothers face in implementing postpartum traditions, has been reported to negatively affect breastfeeding initiation and continuation rates (Groleau, Soulière & Kirmayer, 2006). Therefore, immigrant Arab women's remembrances of the reproductive process in their home countries emphasized the quality and extent of care they received. This was symbolized in the provision of special foods that ensured their own, and their infants' wellbeing. The importance of taking care of new mothers was also represented by the time they were given to recover from birth, and to settle into a new routine with their infants. Women received the most help with activities concerning food, as female relatives and friends took over food provisioning not just in providing special foods for the new mother, but also helped meet the dietary needs of the family as a whole.
Discussion
Reproduction is a rite of passage experienced by the vast majority of women who participated in this study. Not only is this a physical event women experience, but it is shaped by cultural values, affecting women's expectations and views of their reproductive experiences. The process of reproduction is also a social one, as it propagates not only new community members, but social values as well. In other words, the rituals associated with the reproduction process demonstrate fundamental societal worldviews. The suggestion being made is that because the reproductive process is intimately linked with sociocultural values and worldviews, an investigation comparing and contrasting immigrant women's reproductive experiences in their new and home countries is useful in highlighting tensions that are frequently part of the migration process.
Analysis of focus group interviews conducted with recent and non-recent Arab immigrant women showed this tension in the challenges they faced in birthing in Canada. The most common element missing in the Canadian context was a large support network composed of friends and family who would look after pregnant and lactating mothers, especially in the weeks following the birth event. Women recalled being pampered in their home countries while pregnant and lactating. This treatment was symbolized by the provisioning of food, in particular special foods. This was often not the case in Canada, where lifestyle changes resulted in fewer opportunities for socialization. The negative impact of the faster pace of life on women's abilities to perform traditional customs was exacerbated by financial constraints. Nevertheless, the study's participants spoke of the importance of continuing these traditions, not only because these practices shaped women's well-being, but also as a means of connecting with their homeland and living according to their ethnic and cultural identity. To participate in these traditions was a way of reproducing "home".
Challenges faced by immigrant women were balanced with benefits perceived to come with living in Canada. The medical system in Canada was overall highly regarded. Despite the complaint of lack of personal relationships with doctors, women appreciated the public health system and the consequent accessibility of care. Hospitals were clean and friendly, and staff were helpful and accommodating (e.g. religious dietary proscriptions were respected), allowing women to feel comfortable in this environment. Furthermore, for a small number of women, moving to Canada meant escaping the responsibilities that come with social reproduction, and they welcomed the opportunity to not participate in traditional reproductive rituals and ceremonies.
In sum, women's role as mothers is in large part manifested in reproduction, in their endeavors of reproducing children and society. Moving to a country where they are a visible minority group is accompanied by a variety of social, economic, and political stresses. These tensions may negatively affect their ability to cope with adjusting to life in Canada, and adversely affect their health and well-being (e.g. Dossa, 2004; Stewart et al., 2006). Reproducing "home" through continuance of traditions may help to alleviate some anxieties, but in addition, efforts to address the issues female migrants face need further investigation and action (cf. Stewart et al., 2006). Advocacy with medical and governmental bureaucracies to increase awareness of Arab immigrant women's needs and development of culturally appropriate ways to help these women cope with the challenges they face in Canada are required. In addition, the education of medical and governmental administrators and staff members in the diverse cultural interpretations of health and reproduction is also necessary in order to provide culturally appropriate care.
Funding for this project was provided by POWER (Promotion of Optimal Weights through Ecological Research), a New Emerging Team research grant provided by the Canadian Institutes of Health Research -Institute of Nutrition, Metabolism and Diabetes, in partnership with the Heart and Stroke Foundation of Canada. We would like to thank Shaymaa Rahme, BSc, whose work as a research assistant was extremely valuable, as was the help and advice provided by Yvonne Chiu and her colleagues, at the Multicultural Health Brokers Cooperative.
"year": 1970,
"sha1": "f1f90ffd87ab5bbf1f2f6a1396c509f88f7b97ec",
"oa_license": "CCBYSA",
"oa_url": "https://doi.org/10.32380/alrj.v0i0.207",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "01f3c429f95bfefd75eee4e779e27aae24d4f6d5",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Sociology"
]
} |
The Impact of the COVID-19 Pandemic on Affect, Fear, and Personality of Primary School Children Measured During the Second Wave of Infections in 2020
In relation to the COVID-19 pandemic outbreak, a large body of research has identified a negative impact on individuals' affectivity, frequently documented by increased prevalence of anxiety and depression symptoms. For children, this research was less extensive, was mainly based on caregivers' reports and neglected personality assessment. In order to measure the impact of the pandemic, and the fears it caused, on primary school children's affect and personality, 323 (180 boys and 143 girls) Italian third, fourth and fifth graders were assessed between October and November 2020, namely during the second wave of COVID-19 infections in Italy, with validated self-reports of affect (Positive and Negative Affect Scale for Children, PANAS-C), fear of COVID-19 (Fear of COVID-19 Scale, FCV-19S) and personality (junior Temperament and Character Inventory, jTCI). In comparison with PANAS-C and jTCI normative scores collected prior to the pandemic, data obtained from children in 2020 showed unchanged affect scores in the overall sample, a decrease of Positive Affect in girls, and a decrease in the Harm Avoidance and an increase in the Self-Transcendence scales of personality. Fear of COVID-19 scores were positively correlated with Negative Affect scores and negatively predicted by children's personality profile of resilience (calculated using scores on the Harm Avoidance and the Self-Directedness scales of personality). These results suggested that Italian primary school children, especially boys, maintained their pre-pandemic levels of affect (or restored them after the first COVID-19 wave) and partially diverged from the typical development of personality in an apparently positive sense, namely toward more courageous/optimistic and spiritual profiles. This sort of children's post-traumatic growth might also be attributed to children's family and education systems, which should continue to be supported to promote and maintain community mental health.
INTRODUCTION
After the outbreak of the worldwide COVID-19 pandemic in early 2020 and the consequent public health policies put into action to contain the waves of infections, a large body of research has documented a worsening of public mental health. Various systematic reviews and meta-analyses reported increased emotional distress and increased risk for psychiatric disorders among the adult general population during 2020 (1)(2)(3)(4)(5)(6)(7)(8)(9). Less research has explored the impact of the pandemic emergency on the emotional well-being of children; the relevant reviews nonetheless reported a negative psychological impact related to COVID-19 (10)(11)(12)(13)(14)(15)(16). Although the risk of death from COVID-19 is negligible for children and adolescents, they can nevertheless be as susceptible as adults to the psychological impact of the pandemic and its response measures (e.g., obligation to stay at home, interruption of attendance at both regular school and extracurricular activities, physical distancing).
Most of the studies included in the above-mentioned reviews found that individuals' levels of anxiety and depression were the most frequently used indicators of psychological distress, both in adults and children. Anxiety and depression are two forms of human suffering which have distinct and overlapping features. According to the model of Clark and Watson (17), they may share a component of general emotional distress, which can be labeled as negative affect (NA), and are differentiated by the levels of positive affect (PA), which is characteristically lower in depression than in anxiety. This model, together with the resulting scale for measuring positive and negative affectivity (i.e., the Positive and Negative Affect Scale, PANAS) (18), has been widely used with both adults and younger people (19)(20)(21)(22)(23). Positive and negative affect, considered as the set of transient and enduring evaluative feelings experienced by a person in response to salient events/conditions (24), can therefore be regarded as critical markers of the psychological condition of persons also with respect to the impact of the COVID-19 related crisis. It seems therefore particularly important to assess the levels of positive and negative affect in the population during the COVID-19 pandemic and to compare them with the normative levels collected before the pandemic. This pre- vs. during-pandemic comparison, which has been performed for measures such as anxiety, depression and psychological well-being [e.g., (25)(26)(27)], has not been frequently carried out so far on affect scores. A study on a thousand full-time adult workers during the early stages of the pandemic in Germany revealed that their positive and negative affectivity did not change between December 2019 and March 2020, but decreased between March and May 2020 (28). A smaller study on adolescents (n = 34) and their parents (n = 67) conducted in the Netherlands revealed that adolescents' positive or negative affectivity did not change between 2018-19 and March 2020, while parents reported significantly more negative affect in March 2020 in comparison to 2018-19 (29). The only existing study that assessed positive and negative affect of children (n = 34) during the pandemic (April-July 2020) and compared these scores with data collected prior to the pandemic (n = 101) did not find any difference in affect scores (30). The scarce information on this important aspect of individuals' mental health during the pandemic, in particular for children, urgently calls for a wider investigation.
In relation to negative affectivity, a salient emotion experienced by many persons during the pandemic is the fear of COVID-19. A self-report measure on this feeling was indeed developed in early 2020, the Fear of COVID-19 Scale (FCV-19S) (31). In this tool, for which factor analyses generally indicated a unidimensional structure, people are asked to evaluate both the physical and mental components of their fear of COVID-19. The FCV-19S has been extensively used since its introduction and made it possible to estimate the distribution of scores in separate samples (32), compare scores between samples of different countries (32,33), and compare scores of the same population obtained at different time points (e.g., during the first vs. second wave of COVID-19) (34). The FCV-19S was originally developed for adults, but it was also employed in youth samples, in particular in adolescents (35)(36)(37). Nevertheless, given its small number of items, the relatively simple form of its statements and of the 5-point response scale in which respondents indicate their level of agreement with the statements, the FCV-19S was also administered to children as young as 7 years of age (38). Similarly to what is done in adults, it would thus be useful to further explore the depth and prevalence of fear of COVID-19 using the FCV-19S in children of different countries and during different phases of the pandemic, with the aim of providing children with the best environmental and psychological support in relation to this specific emotional sequela of the pandemic.
An overarching aspect taken in consideration in many studies on the affective repercussion of the pandemic crisis is personality. Most of these studies assessed individuals' personality traits in combination with other measures, with the aim of linking specific traits to various outcomes of interest, such as the level of distress, the way of perceiving the emergency, the form of behavioral adjustments to the emergency, and the degree of compliance to safety rules [e.g., (39)(40)(41)(42)(43)(44)]. These studies were all carried out on adult samples. Although adults' personality is relatively stable, referring to "individual differences in characteristic patterns of thinking, feeling and behaving" (45), a number of studies have investigated whether the pandemic crisis has come to significantly change the overall personality profile of the population. Studies on healthy adults' self-reports of personality collected during the pandemic did not give a definite answer to this question: most found that the scores collected during the pandemic with instruments such as the Brief HEXACO Inventory (46), the International Personality Item Pool's IPIP-NEO (47), the reduced Temperament and Character Inventory (48), the Personality Inventory for the DSM-5-Brief Form (49), or the various versions of the Big Five Inventory (50-53) remained stable (i.e., remained within one standard deviation of the normative means) in comparison with those collected before it [e.g., (39,(54)(55)(56)(57)]. Other studies, however, found that scores changed beyond one standard deviation from the normative means [e.g., (58,59)], or found significant changes in the pre- vs. during-pandemic comparisons of scores: for example, significant changes were observed, using the Big Five Inventory-2 questionnaire (53), in the neuroticism and extraversion traits of the big-5 model of personality in a sample of 2,137 U.S. citizens who were tested before (early February 2020) and during (second half of March 2020) the pandemic outbreak in the U.S. (60). In yet another study, significant changes were observed in all the big-5 traits of personality in 480 alleged healthcare workers when using linguistic analyses of their social media data collected before (February 2020) and during (between February and April 2020) the pandemic (61).
Childhood is an important period of life for the development of an individual's personality, because in this period the interaction between individuals' inborn traits and their personal life events increasingly organizes the course of children's action, emotion and cognition and their subsequent personality development (62). The personality of children may therefore undergo important developmental changes; nevertheless, the evaluation of the psychological impact of the COVID-19 related crisis on children's personality can be performed by detecting possible changes in the typical development of personality. Such changes can be monitored, for example, on the basis of age-appropriate normative scores (collected prior to the pandemic) of instruments for personality assessment such as the Big Five Questionnaire for Children (63) or the junior Temperament and Character Inventory (64). Yet, the question of whether the psychological impact of the COVID-19 related crisis may have changed the typical personality development of children has not been answered so far. It would also be useful to replicate in children the studies that highlighted which personality dimensions were associated with the health outcomes previously investigated in adults, such as well-being and anxiety/depression.
Moreover, as evidenced in many studies on adults (44,(65)(66)(67)(68), even for children a key factor impacting the individual ability to cope with the distress caused by the pandemic could be linked to the personality aspect of resilience. In particular, in the seven-dimension model of personality measured by the Temperament and Character Inventory (TCI) (69) or its junior version (jTCI) (64), high and low resilience profiles can be effectively measured by focusing on the two dimensions of harm avoidance (a temperamental trait reflecting the tendency to avoid behaviors due to intense response to aversive stimuli expressed as fear of uncertainty, quick fatigability, shyness of strangers, and pessimistic worry) (70) and self-directedness (a character trait referring to self-determination, self-acceptance, responsibility and reliability, and to being able to control, regulate, and adapt behavior in accordance to one's own goals and values) (70), which are respectively negatively and positively related to resilience (71)(72)(73). Thus, besides considering personality for either trying to monitor its possible changes after the pandemic or for investigating its general association with the affective impact of the pandemic, focusing on children's resilience profiles may also help explain in a more specific way why the COVID-19 related crisis has affectively impacted some individuals differently from others.
In sum, in our study data on positive and negative affect, fear of COVID-19 and personality were collected in a sample of Italian primary school children. All data were collected through children's self-reports while they were at school. For affect, the Positive and Negative Affect Scale for Children (PANAS-C) (74,75) was used, for the fear of COVID-19 the Fear of COVID-19 Scale (FCV-19S) (31,76) and, for personality, the junior Temperament and Character Inventory (jTCI) (64,77). Assessment was carried out between October and November 2020, during the second wave of the pandemic in Italy. The main aim of the study was (i) to compare normative PANAS and jTCI data (collected on independent samples of children before the pandemic) with data obtained during the pandemic period. In particular, in the period of assessment, Italian children had just returned to school after more than 6 months of school closure and the country was facing the ascending phase of the second pandemic wave without certainties about the degree of its sanitary, economic and social impact. The secondary aim of the study was (ii) to assess the levels of fear of COVID-19 in these same children and to link their levels of fear, positive affect and negative affect with their personality characteristics. This was done first by correlating the PANAS-C and FCV-19S with the jTCI scores and then by assessing the differences in PANAS-C and FCV-19S scores in two separate children groups: one with a low-resilience personality profile and the other with high-resilience. In this way, the present study tried to address some relevant questions that have partially or completely been overlooked in the literature so far: were affect and personality profiles of primary school children assessed during the second wave of the pandemic different from those collected in age-matched children before the pandemic? How were the primary school children's personality characteristics in 2020 related to children's levels of fear of COVID-19, positive affect and negative affect, also considering the aspect of high and low resilience?
Participants
Twenty-one classes from 14 primary schools of the North-East of Italy (Friuli-Venezia Giulia region) participated in the assessment of the present study, as the initial stage of a subsequent attentional and self-regulation training program. A total of 361 third, fourth, and fifth graders were assessed. After excluding the data from 38 children (18 questionnaires were not complete, three questionnaires had been completed by children with intellectual disabilities, 17 jTCI reports had no valid responses for control items), the final sample consisted of 323 children (grade: 103 third, 75 fourth, 145 fifth; sex: 180 boys, 143 girls).
Affect
Positive and negative affect were measured with the Italian version of the Positive and Negative Affect Scale for Children (PANAS-C) (74,75). This tool was originally developed and validated on 9- to 12-year-old children, but it was also used for third graders [e.g., (78,79)]. It is the child version of PANAS, the most frequently used scale to assess positive (PA) and negative affect (NA) in adults (18). In PANAS-C, respondents are asked to rate on a 5-point Likert scale (ranging from 1 = never to 5 = always) how often during the last weeks they have experienced each of the 30 listed positive or negative moods, which are expressed in the tool as adjectives or very short expressions. In the Italian version, the PA score is the sum of scores for 11 items and the NA score is the sum of scores for 13 items. Example items are "Active" (for PA) and "Afraid" (for NA). Both the original and the Italian validation of PANAS-C showed two clearly differentiated factors (PA and NA) and good internal consistency reliability (alpha ≥ 0.85). For data collected for the present study in 2020, Cronbach's alphas were: 0.84 for PA and 0.87 for NA.
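A minimal Python sketch of this scoring scheme is given below for illustration only (the study's analyses were run in R, and the item-to-scale assignments used here are hypothetical placeholders, not the actual Italian PANAS-C scoring key):

```python
import pandas as pd

# Hypothetical item-to-scale mapping: the Italian PANAS-C sums 11 items into
# Positive Affect (PA) and 13 items into Negative Affect (NA); the item
# indices below are placeholders, not the published scoring key.
PA_ITEMS = [f"item_{i}" for i in (1, 4, 5, 8, 10, 12, 15, 18, 21, 25, 28)]
NA_ITEMS = [f"item_{i}" for i in (2, 3, 6, 7, 9, 11, 13, 14, 17, 20, 23, 26, 29)]

def score_panas_c(responses: pd.DataFrame) -> pd.DataFrame:
    """Sum 5-point Likert responses (1 = never ... 5 = always) into PA/NA scores."""
    scores = pd.DataFrame(index=responses.index)
    scores["PA"] = responses[PA_ITEMS].sum(axis=1)  # possible range: 11-55
    scores["NA"] = responses[NA_ITEMS].sum(axis=1)  # possible range: 13-65
    return scores
```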
Personality
Personality was assessed with the Italian version of the junior Temperament and Character Inventory (jTCI) (64,77). This is the child version of the widely known TCI personality inventory (69). It was developed and validated on 9- to 12-year-old children, and consists of 108 true/false items. Respondents are asked to express their general concordance/discordance with each statement. The jTCI has four temperament scales (Novelty Seeking, NS; Harm Avoidance, HA; Reward Dependency, RD; Persistence, P) and three character scales (Self-Directedness, SD; Cooperativeness, C; Self-Transcendence, ST). Temperament scales model the inborn neurobiological tendencies toward early emotions and the resultant behavioral reactions to distinct environmental stimuli. Character scales model, at the intra-, inter- and trans-personal levels of the individual, the result of the interaction between temperament traits, socio-cultural influences, life events and intentional training. Example items are: "I get tense and worried in unfamiliar situations" (HA), "I often try new things for fun or thrills" (NS), "I don't open up much even with friends" (RD), "I work long after others give up" (P), "I feel strong enough to master everything somehow" (SD), "I take good care not to hurt somebody with my actions" (C), "I believe in a higher force connecting all living beings" (ST). Cronbach's alpha for data collected for the present study in 2020 was 0.63 for NS (18 items).
Fear of COVID-19
The fear of COVID-19 was measured with the Italian version of the Fear of COVID-19 Scale (FCV-19S) (31,76). This tool consists of seven items with a five-point rating scale (ranging from 1 = strongly disagree to 5 = strongly agree) and was developed for adults; nonetheless, it has been used in children as young as 7 years old (36)(37)(38). Example items are "I am very afraid of the coronavirus-19" and "I cannot sleep because I'm worrying about getting (or having) coronavirus-19". As the FCV-19S is recognized as a unidimensional measure (32), a total score is provided, with higher scores corresponding to greater fear of COVID-19. The FCV-19S showed good internal consistency (seven items; Cronbach's alpha = 0.80) when applied to children in our study. This was consistent across grades (alpha = 0.82 for third graders, alpha = 0.76 for fourth graders, alpha = 0.79 for fifth graders). Results of a confirmatory factor analysis using the diagonally weighted least squares method on data of our study [χ2(14) = 32.3, p < 0.01; Root Mean Square Error of Approximation (90% Confidence Interval) = 0.06 (0.03; 0.09), Comparative Fit Index = 0.99, Standardized Root Mean Square Residual = 0.07] revealed an acceptable fit for the seven-item single-factor construct (80).
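The internal-consistency values reported for this and the other questionnaires follow the standard Cronbach's alpha formula, alpha = k/(k - 1) * (1 - sum of item variances / variance of total scores). A self-contained Python implementation of this textbook formula (an illustration, not the authors' own code) is:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)        # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
```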
Procedure
The study was carried out between October 13, 2020 and November 6, 2020. In this period Italy was experiencing the second wave of COVID-19 infections, which peaked on November 13, 2020 with 40,902 new daily cases and 550 daily deaths (81). In the Friuli-Venezia Giulia region, where the study took place, the restrictions applied in the initial weeks of the study (until November 6, 2020) were: compulsory face masks in public areas, distance learning in high schools and universities, no service after 12 a.m. for bars serving food and restaurants. After November 6, tighter restrictions were introduced: stay-home mandate between 10 p.m. and 5 a.m., closure of shopping malls during weekends and holidays, 50% capacity reduction on public transport, closure of indoor recreational and cultural venues, closure of indoor gyms, pools and leisure venues, and prohibition of non-professional contact sports (82). People had been informed by mass media that the pandemic was going to get worse before it got better.
Paper questionnaires were administered by school instructors to their pupils during teacher-led classes. The teachers had been previously instructed by researchers, during an online group meeting, in the procedure to be followed for administering the questionnaires: they had to read the instructions of each questionnaire to the class, explain any word/expression that the children had asked to clarify and refrain from suggesting any response to their students during the filling of the questionnaires.
Parents of all participants provided written informed consent for their children's inclusion in the study. The study was approved by the Ethics Committee of the University of Udine and all procedures performed in the study were in accordance with the ethical standards of the 1964 Helsinki declaration and its later amendments. Finally, all data were analyzed anonymously and data confidentiality was ensured.
Statistical Analysis
Data analysis was conducted using R, version 3.6.3 (83). Power analysis was performed with G*Power, version 3.1 (84). Missing values in participants' responses were found to be <2% and were imputed with the mean score of the whole sample for the corresponding item.
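The item-mean imputation described above can be sketched as follows (assuming responses are stored one column per item in a pandas DataFrame; the study's own analyses were run in R):

```python
import pandas as pd

def impute_item_means(responses: pd.DataFrame) -> pd.DataFrame:
    """Replace each missing response with the sample mean of that item,
    mirroring the imputation described in the text (missingness was < 2%)."""
    return responses.fillna(responses.mean())
```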
Primary analysis involved (i) testing the difference between the distribution of the PANAS-C and jTCI data obtained in October-November 2020 and the distribution of data from the PANAS-C and jTCI datasets obtained during the validation of these questionnaires in Italy (74,77). The difference was tested using robust independent samples t-tests separately for boys, girls, and boys and girls together. Bonferroni correction for multiple comparisons was applied in each separate group.
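The text does not specify which robust t-test was used; one common choice is Yuen's trimmed-means test, which SciPy (version 1.7 or later) exposes through the trim argument of ttest_ind. The sketch below is therefore an assumption-based illustration rather than the authors' actual procedure; it runs one such test per subgroup and applies the Bonferroni correction:

```python
from scipy import stats
from statsmodels.stats.multitest import multipletests

def robust_group_comparisons(pre, during, trim=0.2):
    """Yuen-type trimmed-means t-tests with Bonferroni correction.
    `pre` and `during` map a subgroup label (e.g. 'girls') to 1-D score arrays."""
    labels, tvals, pvals = [], [], []
    for label in pre:
        # trim=0.2 discards 20% of observations from each tail (Yuen's test)
        t, p = stats.ttest_ind(pre[label], during[label],
                               equal_var=False, trim=trim)
        labels.append(label); tvals.append(t); pvals.append(p)
    reject, p_bonf, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")
    return {lab: (t, pb, rej) for lab, t, pb, rej in zip(labels, tvals, p_bonf, reject)}
```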
The PANAS-C validation dataset included data of fourth and fifth graders collected in 2014 (n = 331, 51.7% boys). The jTCI validation dataset included data of third, fourth and fifth graders collected in 2010-2011 (n = 238 after removing data without valid responses for control items, 52.1% boys). For the jTCI, data from a group of fifth graders (n = 101, 46.5% boys) collected by our research group in February 2019 (i.e., about 1 year before the COVID-19 pandemic outbreak) in schools of the same area (about 30 km away) as those in which data were collected for the present study in 2020 were also used. Fifth graders' jTCI normative data collected in 2010-2011 were thus compared with jTCI data collected in 2019 to verify if any change had occurred with time. Participant demographic characteristics of all these samples are detailed in Table 1.
Secondary analysis involved (ii) descriptive statistics for FCV-19S scores, Pearson's correlation of jTCI scores with PANAS-C and FCV-19S scores, and the comparison of PANAS-C and FCV-19S scores between low-resilience (LR) and high-resilience (HR) personality profiles groups of the 2020 dataset. LR and HR groups were obtained, as done in a previous work of our research group (85), by partitioning the whole sample on the basis of individual HA and SD scores from the jTCI questionnaire, since, as mentioned in the Introduction, these two scales have been reported as the most influential TCI scales on adults' self-reports of resilience (HA inversely and SD directly related to resilience) (71-73).
The partitioning procedure was performed using the k-means algorithm (86) on the participants' standardized HA and SD scores. Comparison of PANAS-C and FCV-19S scores between LR and HR groups was performed using robust independent samples t-tests.
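A possible Python counterpart of this partitioning step is sketched below; the cluster-labelling rule is an assumption based on the stated directions of the HA- and SD-resilience relationships, not a detail given in the text:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def resilience_profiles(ha, sd, seed=0):
    """Partition children into two clusters via k-means on standardized Harm
    Avoidance (HA) and Self-Directedness (SD) scores; the cluster with lower
    mean HA and higher mean SD is labelled high-resilience (HR)."""
    X = StandardScaler().fit_transform(np.column_stack([ha, sd]))
    labels = KMeans(n_clusters=2, n_init=10, random_state=seed).fit_predict(X)
    # Composite (HA - SD) per cluster: low HA and high SD => low composite.
    composite = [X[labels == c, 0].mean() - X[labels == c, 1].mean() for c in (0, 1)]
    hr_cluster = int(np.argmin(composite))
    return np.where(labels == hr_cluster, "HR", "LR")
```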
Sample size was determined by voluntary study participation in 2020 (n = 323) and by the normative sample sizes of the PANAS-C (n = 331) and jTCI (n = 238). A statistical power analysis was therefore performed in terms of sensitivity: the effect sizes observed in the current study were compared with the Minimum Detectable Effects (MDEs) obtained from a desired minimum statistical power of 0.80, an α level of 0.05, and the sample sizes employed in each pre- vs. during-pandemic comparison. This power analysis revealed that the study design was generally sensitive enough to detect the differences of interest (e.g., in PANAS-C for PA in girls; see Tables 2, 3). For all tests, effects are reported as significant at p < 0.05.
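Sensitivity analyses of this kind can also be reproduced outside G*Power; for instance, the minimum detectable effect for the PANAS-C contrast (n1 = 331 vs. n2 = 220) can be obtained with statsmodels, as in this illustrative snippet:

```python
from statsmodels.stats.power import TTestIndPower

# Minimum detectable effect (Cohen's d) for a two-sample t-test with the
# sample sizes of the pre- vs. during-pandemic PANAS-C comparison.
mde = TTestIndPower().solve_power(effect_size=None, nobs1=331,
                                  ratio=220 / 331, alpha=0.05, power=0.80)
print(f"minimum detectable effect size d = {mde:.2f}")
```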
Affect
The comparison of fourth and fifth graders' data collected before the pandemic (n = 331, 51.7% boys) with fourth and fifth graders' data collected in 2020 (n = 220, 59.1% boys) generally showed no differences in positive and negative affect, except for a difference in girls' positive affect [t(198.5) = 2.5, p Bonf = 0.02]: in 2020 girls self-reported a significantly lower positive affect than girls in 2014 (see Table 2, Figure 1A).
Personality
The comparison of jTCI data collected before the pandemic with jTCI data collected in 2020 showed, in the overall sample, a significant decrease in Harm Avoidance (mainly driven by boys' scores) and a significant increase in Self-Transcendence (see Table 3, Figure 1B). It is worth noting that no difference was observed between fifth graders' jTCI data collected in 2010-2011 and fifth graders' jTCI data collected in 2019 (for all scales, in boys/girls/boys and girls: |t| < 2.4, p Bonf > 0.12).
Fear of COVID-19
There were extremely few missing values (0.25% of the total number of responses). Participants' average score (boys: 11.6 ± 3.4, girls: 12.7 ± 3.2, boys and girls: 12.1 ± 3.4, see Table 4, Figure 2A) was close to that obtained in a sample of 340 girls during the second wave of COVID-19 in Iran (M = 12.1 for third graders, M = 12.8 for fourth graders, M = 10.6 for fifth graders; data collected from July to November 2020, n = 340, 100% girls, age: 10.1 ± 1.7 years) (38), but lower than scores obtained during the first wave of COVID-19: in Canadian children the average FCV-19S score was 14.1 ± 5.7 (data collected between April and May 2020, n = 144, 51.4% boys, age: 9 to 12 years) (36), in Turkish children/adolescents was 18.9 ± 6.3 (data collected from April to June 2020, n = 381, 50.4% males, age: 15.4 ± 2.4 years) (37) and in Italian adults was 16.9 ± 6.1 (data collected from 18 March to 21 March 2020, n = 249, 8.0% men, age: 34.5 ± 12.2 years) (76).
Table 5 depicts the correlation matrix of PANAS-C, FCV-19S and jTCI measures. The exploration of the relationship between affectivity and personality showed that: positive affect (PA) was positively correlated with P, SD and ST, as well as negatively correlated with HA; negative affect (NA) was positively correlated with NS and HA, as well as negatively correlated with P and SD. The exploration of the relationship between fear of COVID-19 and personality showed that fear was positively correlated with HA and negatively correlated with SD. Moreover, correlation analysis showed a positive relationship between negative affect and fear of COVID-19 (see also Figure 2B).
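A correlation matrix such as Table 5 reduces to a single pandas call; the snippet below illustrates the computation on synthetic stand-in scores (random numbers, not the study's data):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 323  # sample size of the 2020 assessment
# Synthetic stand-in scores; real values would come from the scored questionnaires.
df = pd.DataFrame({
    "PA": rng.normal(40, 7, n),
    "NA": rng.normal(25, 8, n),
    "FCV19S": rng.normal(12.1, 3.4, n),
    "HA": rng.normal(9, 4, n),
    "SD": rng.normal(14, 3, n),
})
corr = df.corr(method="pearson")                 # full Pearson correlation matrix
print(corr.loc["FCV19S", ["NA", "HA", "SD"]])    # fear vs. affect and personality
```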
Low and High Resilience Profile Groups
Based on individuals' standardized HA and SD scores, the whole group of children assessed in 2020 was partitioned into a low-resilience (LR; n = 135, 51.9% boys) and a high-resilience (HR; n = 188, 58.5% boys) group. In comparison with children in the HR group, children in the LR group self-reported significantly higher negative affect, lower positive affect, and higher fear of COVID-19 scores (see Table 6, Figure 2C).
DISCUSSION
This study investigated primary school students' self-reports during the second wave of COVID-19 in Italy. Three questionnaires were used: one for assessing students' temperament and character dimensions of personality (jTCI), one for positive and negative affect (PANAS-C), and one for fear of COVID-19 (FCV-19S). Data analysis focused on: (i) comparing the affect and personality scores obtained during the pandemic with same-graders' scores obtained before the pandemic (during the validation of the affect and personality questionnaires in Italy); and, in the data collected during the pandemic, (ii) describing the distribution of fear of COVID-19 scores, correlating affect and fear of COVID-19 with personality scores, and comparing affect and fear of COVID-19 scores between a low-resilience and a high-resilience profile group.
In the pre- vs. during-pandemic comparison of affect scores, no differences were found in terms of positive and negative affect in the overall sample (boys and girls). A significant difference between data collected before and during the pandemic, however, was found in girls' positive affect: in 2020 girls self-reported a significantly lower positive affect than girls in 2014. There are few studies that collected primary school children's self-reports during the COVID-19 pandemic and that could compare their data with those collected prior to the pandemic (30, 87–90). The only existing study that carried out this comparison using children's self-reports of affectivity (30) found that positive or negative affect scores collected in 34 healthy children (age: 11.9 ± 1.2 years) by using the shortened 10-item PANAS-C in California from 22 April to 29 July 2020 did not differ from data collected in other pediatric studies conducted prior to the pandemic (n = 101); nonetheless, the same children assessed in that study during 2020 reported significantly greater state anxiety (measured with the State Anxiety Inventory for Children) (91) compared to children assessed prior to the pandemic. It is therefore possible that measures of children's affectivity, such as the PANAS-C, might not capture the psychological impact of the COVID-19 pandemic on children that has instead been reported in other pre- vs. during-pandemic studies in terms of anxiety (87, 88), depression or post-traumatic symptoms (89). Two of these three studies (87, 88), however, included samples of children and adolescents up to 17 years without distinguishing children from adolescents in the analyses, whereas various studies [e.g., (92, 93)] and reviews (12, 16) reported greater severity of anxiety, depression and stress symptoms in adolescents than in primary school children during 2020. It is worth noting that a study on 166 fourth graders (84 boys and 82 girls) carried out in Korea in September and October 2020 (90) detected unchanged levels of life satisfaction, measured with the Satisfaction with Life Scale (94), with respect to data collected in 2018 and 2019.
In the pre-vs. during pandemic comparison of personality scores performed in the present study, a significant change was observed in the overall sample in harm avoidance (decreased in 2020) and self-transcendence (increased in 2020) scores. In the biopsychosocial model of personality, on which the Temperament and Character Inventory is based, harm avoidance is the dimension of temperament linked to worry/pessimism, fear of uncertainty, shyness and fatigability (64,77,95). Although temperament should bear a greater stability throughout life compared to character (96), among the temperamental traits harm avoidance is considered to be the most susceptible to mood and anxiety (97,98), as well as to experiences such as trainings [e.g., (99,100)] or therapy [e.g., (101,102)]. In our study, a decreased level of harm avoidance in the overall sample was observed in comparison with pre-pandemic data, which was mainly due to the decrease of scores in boys. This means that in this dimension of temperament, children self-reported in 2020 a generally healthier profile than children assessed in 2010-2011. This result seems to be in contrast with the increase of children's anxiety and depression symptoms which were generally reported, although not consistently [e.g., Ravens-Sieberer et al. (88) observed no significant increase in the prevalence of depressive symptoms before vs. during the pandemic], in the previous literature focusing on the pandemic period. When comparing the results of the various studies on the impact of the pandemic, an important issue concerns when and where these studies were carried out, because the environmental conditions during the different phases/waves of the pandemic could have differently influenced the mental condition of people that were exposed to them: for example, children in our study were experiencing the second wave of COVID-19 in Italy, but were back to school in September 2020 after their schools had remained closed since the outbreak of the pandemic in February 2020, and could therefore find themselves in a different condition than their German or Swedish peers who returned to school in May 2020 or who did not experience school closures (103). That being said, the change in Italian children's harm avoidance may look like a positive rebound in terms of optimism, courage and energy after the possibly traumatic experience of the first wave of COVID-19 and the hard lockdown imposed in Italy. The fact that this change remained within one standard deviation from normative scores suggests, however, that children's personality did not change dramatically from pre-pandemic levels and, in particular, toward excessive and unhealthy fearlessness and imprudence profiles.
The observed decrease in harm avoidance scores from pre-pandemic levels was accompanied by an increase from pre-pandemic levels in the character trait of self-transcendence, although no correlation was found between these two variables. In the biopsychosocial model of personality, self-transcendence is the dimension of character linked to fantasy/daydreaming, transpersonal identification and spiritual acceptance (64, 77, 95). Changes in adults' self-transcendence have repeatedly been observed in response to training/therapy and medical treatment (104–108). Self-transcendence and spirituality are recognized as useful coping strategies for managing stressful life events (109, 110), and it is therefore possible that children in our study drew on their spiritual resources in response to the pandemic crisis to develop resilience. This possibility can be encompassed within the concept of post-traumatic growth, defined as "positive change experienced as a result of the struggle with trauma" (111), one of whose domains is precisely spiritual change: various meta-analytic studies revealed that post-traumatic growth is in general positively associated with spirituality in adults and children (112–115). During the COVID-19 emergency, large portions of the population had to simultaneously confront, directly or not, confinement, illness and death. Such an experience may have stimulated the development of spirituality/self-transcendence, understood as the discovery or making sense of the experience itself: this healing process can pass through an initial crisis, as reported for example in a study on adults during the first days of the COVID-19 lockdown in Italy, where 1,250 adults self-reported significantly worse levels of mental health and lower levels of spiritual well-being in comparison with pre-pandemic normative data (116). As seen for harm avoidance, the change in self-transcendence observed in our study also remained within one standard deviation from normative levels, which can be interpreted as a significant but not dramatic modification of character maturity (at the transpersonal level).
In our study, children's fear of COVID-19 was also assessed and, despite the paucity of other observations of this measure in children, the participants' average score was similar to that obtained by other studies during the second wave of COVID-19 (in Iranian girls) and lower than those obtained during the first wave (in Canadian children, Turkish children/adolescents and Italian adults). A significant decrease from the first to the second wave in fear of COVID-19 scores (assessed with the same scale used in our study) has been observed, for example, in adult Slovakians (34). This can be viewed as the result of individual and institutional adaptation to the pandemic after the initial emergency response. Importantly, in our study children's fear of COVID-19 scores were positively correlated with harm avoidance scores and negatively correlated with self-directedness scores. As already mentioned, these two scales have been reported as the Temperament and Character Inventory scales most influential on adults' self-reports of resilience [harm avoidance negatively and self-directedness positively related to resilience, (71–73)]; thus, in the present study, children with a weaker resilience profile self-reported higher fear of COVID-19 scores than children with a stronger resilience profile. In the analysis of the two resilience profile groups, it was also observed that children in the low-resilience profile group self-reported significantly higher negative affectivity and lower positive affectivity than children in the high-resilience profile group.
Other salient results coming from the correlations between the study variables were: harm avoidance directly related to negative affect and inversely related to positive affect; persistence [the temperament trait linked to determination to achieve a goal despite frustration or fatigue, (64,77,95)] and self-directedness directly related to positive affect and inversely to negative affect; self-transcendence directly related to positive affect. These results seem to confirm that children with lower personality tendency toward worry/pessimism, fear of uncertainty, shyness and fatigability (trait of harm avoidance) and higher personality tendency to self-identification as an integral part of the universe as a whole (trait of self-transcendence) were likely to live with more positive and less negative feelings than children with the opposite features of personality. Results indicate also that the same condition of experiencing more positive and less negative feelings was also related to personality traits of maturity, autonomy and reliability (trait of self-directedness), as well as of diligence and determination (trait of persistence).
The present study has some strengths, in comparison with similar studies, as well as several limitations. The strengths include (i) the fact that children's self-reports, rather than proxy reports, were used and (ii) that these self-reports were obtained in the classroom, rather than online. The limitations include that (i) pre- vs. during-pandemic differences in the study measures have been related exclusively to the pandemic, whereas other individual and contextual factors may have influenced these differences; (ii) differently from the jTCI (for which the normative dataset and a dataset collected in 2019 were used as pre-pandemic datasets), for the PANAS-C it was not possible to obtain a dataset collected immediately before the pandemic that could confirm the normative dataset collected in 2014; (iii) pre- vs. during-pandemic differences in the study measures were observed using different groups of children, which seems the best way to assess an average change in a population (comparing it with a normative sample) but, because assessment is performed at the group level, cannot take into account longitudinal changes in individual children; (iv) the FCV-19S is a tool developed and validated for adults, although children in our study completed it easily (very few missing responses) and results seemed to be consistent with those obtained using the other study measures; (v) results were obtained in Italy immediately before the peak of the second wave of COVID-19 infections, and it is not possible to know to what extent these results are generalizable to other periods and countries, as previously discussed.
In conclusion, our findings suggest that Italian primary school children, exposed to the first wave of COVID-19 and the hard lockdown imposed in Italy during spring 2020, and assessed during the ascending phase of the second wave of the pandemic in Italy, had affect scores generally in line with normative data collected prior to the pandemic and personality profiles denoting increased levels of courage/optimism and spirituality in comparison with the typical, pre-pandemic, profiles of children's personality.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors upon request.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Local Ethics Committee of DILL, University of Udine. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin. | 2022-01-17T14:18:52.606Z | 2022-01-17T00:00:00.000 | {
"year": 2021,
"sha1": "b4f6c042039838f00dab0987d93e9d63601b15ac",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "b4f6c042039838f00dab0987d93e9d63601b15ac",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
265343658 | pes2o/s2orc | v3-fos-license | Antibacterial Efficiency of Tanacetum vulgare Essential Oil against ESKAPE Pathogens and Synergisms with Antibiotics
Medicinal plants with multiple targets of action have become one of the most promising solutions in the fight against multidrug-resistant (MDR) bacterial infections. Tanacetum vulgare (Tansy) is one of the medicinal plants with antibacterial qualities that deserve to be studied. Thus, this research takes a closer look at the tansy extract's composition and antibacterial properties, aiming to highlight its potential against clinically relevant bacterial strains. In this respect, the antibacterial test was performed against several drug-resistant pathogenic strains, and the observed activity was correlated with the main isolated compounds, demonstrating the therapeutic properties of the extract. The essential oil was extracted via hydrodistillation, and its composition was characterized via gas chromatography. The main isolated compounds known for their antibacterial effects were α-Thujone, β-Thujone, Eucalyptol, Sabinene, Chrysanthenone, Camphor, Linalool oxide acetate, cis-Carveol, trans-Carvyl acetate, and Germacrene. The evaluation of the antibacterial activity was carried out using the Kirby–Bauer and binary microdilution methods on Gram-positive and Gram-negative MDR strains belonging to the ESKAPE group (i.e., Enterococcus faecium, Staphylococcus aureus, Klebsiella pneumoniae, Acinetobacter baumannii, Pseudomonas aeruginosa, and Enterobacter spp.). Tansy essential oil showed MIC values ranging from 62.5 to 500 μg/mL against the tested strains. Synergistic activity with different classes of antibiotics (penicillins, cephalosporins, carbapenems, monobactams, aminoglycosides, and quinolones) has also been noted. The obtained results demonstrate that tansy essential oil represents a promising lead for developing new antimicrobials active against MDR strains alone or in combination with antibiotics.
Introduction
The therapy of infectious diseases is going through a major crisis due to the extent of the phenomenon of antibiotic resistance (AR), which has reached alarming proportions. The World Health Organization (WHO) list (27 February 2017) [1] of priority pathogens for research and development of new antibiotics comprises three priority levels: critical, high, and medium. The critical level includes the multidrug-resistant (MDR; resistant to three or more classes of antibiotics) ESKAPE Gram-negative pathogens A. baumannii (carbapenem-resistant), P. aeruginosa (carbapenem-resistant), K. pneumoniae (resistant to third-generation cephalosporins), and Enterobacter spp. (resistant to third-generation cephalosporins). The high level includes the ESKAPE Gram-positive pathogens Enterococcus faecium (vancomycin-resistant) and S. aureus (methicillin-resistant, vancomycin-intermediate and resistant). These pathogens are often isolated from clinical settings associated with life-threatening nosocomial infections [2].
The design of new (classes of) antibiotics is difficult to achieve.Although efforts are made to stop and reduce the phenomenon, the monitoring studies do not report the positive effects of these policies, as the incidence of multidrug and pandrug-resistant strains is continuously increasing [3].New antibiotics that entered the market in recent decades are variations of pre-existent antibiotics discovered up to the 1980s [4].The lack of progress in antibiotic development highlights the need to explore innovative approaches to treating bacterial infections.
The list of plants with curative effects against a wide variety of chronic and infectious diseases has expanded, especially in recent decades.Analytical methods have identified diverse compounds with therapeutic effects, especially against bacterial infectious diseases, in the context of the expansion of the phenomenon of antibiotic resistance [5,6].In vitro tests have shown that some plant species that are used to treat non-infectious diseases (e.g., Panax ginseng, Achillea sp., Cichorium intybus L., Cynara cardunculus L., Foeniculum vulgare Mill., which regulate liver functions [7], or plants that regulate the nervous system, such as Ginkgo biloba L., Panax ginseng C.A. Mayer, Scutellaria baicalensis Georgi [8]) also have antibacterial activity [9][10][11][12][13][14].More than a quarter of the medicines administered in industrialized countries are obtained directly or indirectly from plants.The antimicrobial action of plant extracts results from their complex chemical composition, the compounds acting synergistically and having multiple action targets.These preparations are generically referred to as "non-antibiotic" antibacterial agents [15] and were proven to be active against a large spectrum of bacterial and fungal strains [16].
The use of essential oils (EOs), resulting from the secondary metabolism of plants, has been known since Neolithic times when they were extracted via pressing.The Egyptians used to extract oils by infusing a plant into a fatty substance.By boiling, the flavor evaporates and is fixed in the fat [17].The oldest medicine book, "Shennong's Herbal", dating back to 2700 BC, contains instructions for using 365 medicinal plants belonging to Chinese folklore.Hippocrates, the father of modern medicine, documented the medical benefits of fumigation with aromatic oils in treating plague.There are 12 different types of EOs mentioned in the Holy Bible.In 1990, in the book "L'Aromatherapie Exactement", the medical properties of more than 270 EOs are described, representing a start for many studies [18].
The main chemical compounds found in the composition of EOs include terpenes, terpenoids, alkaloids, and phenolic compounds. EOs are obtained from raw vegetable material by steam distillation or dry distillation. Yields vary greatly depending on the pre-processing techniques applied to the plant parts and on agronomic factors (climate, soil, harvest interval) [19,20]. In general, the increase in drying temperature is proportional to the decrease in the EO content [21]. Thus, drying plants at 30 °C leads to concentration losses of 16%, while drying at higher temperatures of 60 °C leads to 65% losses [22]. Therefore, the obtaining process is directly correlated with the quality of the EOs.
Tansy (Tanacetum vulgare) is an aromatic plant that combines the smell of Achillea millefolium with Artemisia absintum, Piper nigrum, Salvia mirzayanii, and Eucalyptus sp.due to some common aromatic compounds.Aromatic plants are rich in essential oils.The term "essential oil" was used by Paracelsus von Hohenheim, who called the effective component of a medicine "Quinta essential".Conventional methods of obtaining essential oil include cold pressing, distillation, and solvent extraction.Supercritical and subcritical carbon dioxide (CO 2 ) extraction methods are more advanced and used industrially.The most efficient method, both financially and in terms of working time, is via water vapor distillation, which we conducted in this work.
Although Tanacetum vulgare is a plant that has been studied a lot recently due to its multiple therapeutic properties, including anti-inflammatory, antioxidant, and antibacterial activities, it still presents novelties due to its chemical compounds that vary according to environmental conditions (e.g., soil pH, light exposure, humidity, pollution) [23][24][25].According to the World Checklist of Selected Plant Families, Anthos, Flora Iberica, and other databases, it has been attributed to numerous names correlated with its action/benefits, such as the gate of heaven, herb of the Mother of God, herb of the air, herb of worms, triac plant, medicinal of St. Anastasia, moss herb, St. Mark's weed, bitter weed, St. Teresa's pen, frankincense, and mugwort.However, it has been destroyed by herbicide or incineration in recent years, being considered an invasive plant.
In Romania, Tansy has a wide distribution, including all geographical areas.The best harvesting period is considered to range from July to August.The plant used herein was selected from an area with abundant growth in the Sohodol Valley area, Gorj County, Romania.
This study aims to highlight the potential of the EOs extracted from locally collected T. vulgare to be used as a source of antibacterial formulations or compounds.In this respect, EOs have been extracted via hydrodistillation, their main chemical constituents have been identified via gas chromatography-mass spectrometry (GC-MS), and their antibacterial properties have been investigated against clinically relevant Gram-positive and Gramnegative MDR bacterial strains, demonstrating promising activity.Thus, this study seeks to emphasize the importance of local plants for developing antimicrobial formulations against ESKAPE pathogens.
Results
The essential oil yield was 0.43% (w/w) after 6 h of distillation.The refractive index was determined using an Abbe Zeiss refractometer, being 1.4756.The density measured with an ISOLAB pycnometer was 0.95783 g•mL −1 .
Further, approximately 80 compounds were identified via GC-MS analysis (Figure 1) and are summarized in Table 1 in order of elution. The chemical and active properties of Tansy depend on several factors (e.g., plant distribution, geographical location, and environmental factors) [26], which influence the yield and quality of the plant. Thus, we compared the main compounds obtained by us with those reported in the specialized literature from different geographical areas: Poland, Serbia, and Canada. The working methods for obtaining the essential oil and identifying the compounds were similar, but the reported compositions differed, suggesting chemotypic variability within the Tansy species. Table 2 summarizes these differences.
In the quantitative assay of the antimicrobial activity of the tansy EOs, all tested strains proved to be very susceptible to the tested EOs in the mass concentration range of 62.5–500 µg/mL (Figure 2). The results obtained showed that Tansy had a remarkable antibacterial effect: the essential oil inhibited the growth of almost all tested bacteria at concentrations of 62.5 µg/mL or lower, whereas a higher mass concentration (125 µg/mL) was needed to inhibit the growth of P. aeruginosa.
The T. vulgare extract had a different effect on the activity of the tested antibiotics, depending on the bacterial strain and the antibiotic.Table 3 summarizes the obtained results regarding the synergism of tansy EOs with the tested antibiotics.Each experiment was repeated three times, the diameter of the inhibition zone being the arithmetic mean of these values.The tolerance was ±0.5 mm.Graphic representation of the antibacterial activity of tansy EOs against the MDR strains.On the ordinate is written the value of the optical density of the cultures measured indirectly viaoptical density at 620 nm with a UV-vis spectrometer (Lambda25, PerkinElmer, Inc., Waltham, MA, USA), and on the abscissa is written the mass concentration of the Tansy extract (µg/mL).The best activity was recorded against the S.aureus strain isolated from wound secretions (31.25 µg/mL).M+ positive control (bacterial culture without extract addition) and M− (sterile culture medium).Experiments were performed in triplicate (n = 3), and results were expressed as means ± SD.Statistical analyses were performed using GraphPad Prism 7. If the probability of the difference was less than p < 0.05, it was deemed to be statistically significant.
The T. vulgare extract had a different effect on the activity of the tested antibiotics, depending on the bacterial strain and the antibiotic. Table 3 summarizes the obtained results regarding the synergism of tansy EOs with the tested antibiotics. Each experiment was repeated three times, the diameter of the inhibition zone being the arithmetic mean of these values, with a tolerance of ±0.5 mm. As shown in Table 4, the Tansy EOs exhibited a synergistic effect with all tested antibiotics in the case of the A. baumannii and K. pneumoniae strains and with at least one antibiotic from each tested class in the case of the P. aeruginosa strains (Table 3). Plant extracts have multiple mechanisms of action on microorganisms. To better understand their bactericidal or bacteriostatic mechanisms, we evaluated the influence of the extract in association with different classes of antibiotics: aminoglycosides, β-lactams, and quinolones. In particular, the serial microdilution method was used to quantify the synergistic effect of tansy extract with antibiotics. Antibiotics in combination with the Tansy extract demonstrated enhanced antibacterial activities (Table 5; abbreviations: FEP = Cefepime; AMC = Amoxicillin + Clavulanic Acid; TOB = Tobramycin; Tansy EO = 125 µg/mL).
As observed in Table 5, regarding the synergism of the T. vulgare extract with the antibiotics, the plant extract had a different effect on the activity of the associated antibiotic, also depending on the bacterial strain. The zones of inhibition had values between 14 and 30 mm. Each experiment was repeated three times, the diameter of the inhibition zone being the arithmetic mean of these values, with a tolerance of ±0.5 mm.
The synergism of the plant extract with the antibiotic was evident for all classes of antibiotics inhibiting bacterial growth.The potentiation of antibacterial activity of antibiotics in combination with plant extracts can be correlated with the synergism of the specific mechanisms of action of antibiotics and extracts on multidrug-resistant bacteria.Specifically, in combination with the extract, antibiotics have regained their properties as bacterial inhibitors.Moreover, the synergistic action of Tansy extract with antibiotics on MDR strains suggests the complexity of the action of the compounds resulting from secondary metabolism.
Discussion
The list of plants with curative effects against a wide variety of organic and infectious diseases has expanded, especially in recent decades, as analytical methods have identified compounds with therapeutic effects, especially against bacterial infectious diseases, in the context of the expansion of the phenomenon of antibiotic resistance.In vitro tests have shown that some plant species used to treat non-infectious diseases have antibacterial activity.More than a quarter of the medicines administered in industrialized countries are obtained directly or indirectly from plants [15].
The antimicrobial action of plant extracts results from their complex chemical composition, the compounds acting synergistically and having multiple action targets.These preparations are generically referred to as "non-antibiotic antibacterial agents" [15] and were proven to be active against a large spectrum of bacterial and fungal strains [16].
In this context, the main objective of this study was to evaluate the antibacterial effect of Tansy EO as well as its potential synergy with antibiotics.In this regard, the first step assumed the identification of the bioactive compounds of T. vulgare extract, comparing them with those reported by other authors.The variation in the concentration of the compounds is correlated with the site from which the plant was harvested [23].In more detail, the essential oil composition indicates plant adaptation to habitat conditions and stress conditions: drought, radiation, high temperature, heavy metal content, and predators [28].Thus, essential oils change according to environmental conditions [29].
The compounds identified in this study had approximately the same concentrations as those reported in the cited literature, with some differences that could influence the higher antibacterial activity.Thus, Eucalyptol, trans-β-Ocimene, cis-Pinocamphone, cis-Carveol, (z)-Linalool oxide acetate (pyranoid), α-Fenchene, trans-Carvyl acetate, β-Copaene, Cedrol, and Junelol were not identified in these studies [23,24,27].Therefore, this article completes the knowledge regarding T. vulgare EO, identifying and quantifying a series of previously disregarded compounds.
Eucalyptol is an important monoterpene in this study, found in a proportion of 2.47%, being widely used in the pharmaceutical industry, having anti-inflammatory properties, and killing leukemic cells in vitro [30].Antibacterial activity has been reported in several studies [31,32], the main mechanism of action being the penetration of the bacterial cell membrane and its lysis [33].Trans-β-Ocimene is a terpene that establishes tri-trophic interactions [34], which was identified in a proportion of 0.07%.In addition to its quality as a food additive, it has several therapeutic properties, including anti-inflammatory, antifungal, and antiviral effects [35,36].Cis-Pinocamphone is a monoterpenoid found especially in Xylopia aromatica and Aloysia gratissima.Cis-Carveol is a monoterpene that has the ability to cause the death of the bacterial cell via the disintegration mechanism of the cell membrane [37].α-Fenchene is a monoterpene with a camphor-like odor.The presence of this compound in plants has a role in increasing antioxidant activity [38].The greatest spread was found in Artemisia sp., hence the similarity of the smell of the two plants [39].
Trans-Carvyl acetate is a terpene ester with a mint flavor with a role in increasing antibacterial activity [40].β-Copaene is a sesquiterpene used as a food additive and flavor due to its antioxidant properties [41].Cedrol is a sesquiterpene found in conifers used in traditional medicine, especially due to its antimicrobial properties [42].Juneol is a sesquiterpene identified in high concentrations in Bursera graveolens, a tree that belongs to the same family (Burseraceae) as frankincense and myrrh with several therapeutic properties, including antibacterial [43].
Concerning identified concentrations, the most abundant compounds were carvyl acetate (34.44%) and β-thujone (30.26%).Carvyl acetate is a flavonoid predominantly found in Mentha spicata L. with strong inhibitory potential against Gram-positive and Gram-negative bacteria and pathogenic fungi [44].The quality of Mentha longifolia (L) EOs is proportional to the carveol content [45].On the other hand, α-Thujone and β-thujone are the two stereoisomeric forms by which thujone is represented in the ketone group of bicyclic monoterpenes [46].
Plants that have significant thujone content, such as Artemisia sp., Achillea sp., Thuja sp., and Salvia sp., are often used to flavor foods and beverages.Absinthe, for example, is a well-known alcoholic drink called absinthe after the name of the plant used due to the flavor obtained by adding the extract of Artemisia absinthium L. Thujone has long been thought to cause adverse psychoactive effects.However, recent research has made convincing arguments for correctly identifying substances with possible side effects in absinthe, such as ethanol and other compounds used in the adulteration process [47].In addition, α-Thujone is the active ingredient of wormwood oil and other drugs and has been reported to have antinociceptive, insecticidal, and anthelmintic activity [48].The toxicity of thujone has been studied extensively, neurotoxicity being the main toxic outcome [49].Regarding the degree of toxicity of the two forms of thujone, α-Thujone has a higher toxicity than β-thujone.As thujone is found in many medicinal plants to eliminate the supposed negative effect of this compound, HMPC (Committee on Herbal Medicinal Products) and EMA (European Medicines Agency) were able to set the maximum daily dose limit for thujone, which was set between 3.5 and 5.0 mg/person [50].
Thujone formation is not a universal phenomenon in plants.The indirect precursor of thujone, sabinene, is one of the most widespread monoterpene compounds in EOs.Various external and internal factors influence the accumulation of thujone.The isomerization of thujones can be explained by the presence of intermediates such as sabinol, sabinone, and related compounds in EOs [51].The thujone pathway exists in many sabinene-containing species, but the expression of the corresponding genes is repressed due to different metabolic interactions, leading to the lack of thujone (for example, plants of the Lamiaceae family) [52].
It has been shown that α, β-Thujone (70% of α-thujone and 10% of β-thujone) also have anti-cancer effects against placental choriocarcinoma cells by inducing apoptosis via the mitochondrial-mediated intrinsic pathway [53].A study by Pudełek et al. [54] highlighted the properties of α-Thujone as a potent attenuator on the proliferation and viability of glioblastoma multiforme cells, as this compound displayed anti-invasive and pro-apoptotic effects on glioblastoma multiforme cells.
Further, the obtained extract was tested on Gram-positive and Gram-negative bacterial strains that have been selected to exhibit multiple drug resistance to current antibiotics, i.e., beta-lactams, aminoglycosides, and quinolones.In order to evaluate the antibacterial effects and the potential synergism with antibiotics, qualitative and quantitative assays have been performed.T. vulgare extract was observed to work together with antibiotics, achieving more significant antimicrobial effects.The combination of conventional antibiotics, against which bacterial strains have been resistant, with plant extracts can restore the effectiveness of treatment due to the synergistic antibiotic-extract effect [55].
The microorganisms analyzed were strains with multiple antibiotic resistance and reference strains. To evaluate the effects of the association of plant extracts with antibiotics, we determined the MIC values of the plant extract as a reference point for defining the interactions with antibiotics. The association of antibiotics with the volatile oil extract from T. vulgare was investigated for possible synergistic interactions. In the "chessboard" (checkerboard) method, synergism is manifested by the increased sensitivity of the microorganism in the simultaneous presence of both antimicrobial agents, which is reflected by changes in the values of the zones of inhibition [56]. This can be explained by the ability of Tansy extract to enhance bacterial cell membrane permeability [57], thus allowing higher amounts of antibiotics to enter pathogenic cells and consequently destroy them.
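For readers who want to quantify such checkerboard results, the conventional summary statistic is the fractional inhibitory concentration index (FICI). The sketch below uses the standard FICI ≤ 0.5 synergy cutoff, which is a general convention rather than a criterion stated in this paper (the authors report zone-of-inhibition changes instead), and all MIC inputs are invented.

```python
# Generic sketch of checkerboard quantification via the FICI; not this paper's
# own readout. MICs (ug/mL) are measured alone and in the combination well.
def fici(mic_a_alone: float, mic_b_alone: float,
         mic_a_combo: float, mic_b_combo: float) -> tuple[float, str]:
    index = mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone
    if index <= 0.5:
        verdict = "synergy"
    elif index <= 4.0:
        verdict = "no interaction / additivity"
    else:
        verdict = "antagonism"
    return index, verdict

# e.g., antibiotic MIC 64 alone vs. 8 in combo; EO MIC 125 alone vs. 31.25:
print(fici(64, 125, 8, 31.25))  # -> (0.375, 'synergy')
```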
The antimicrobial action of plant extracts is the result of all chemical compounds acting synergistically, with multiple targets of action. The minimum inhibitory concentrations of the Tansy oil extract for the bacterial strains tested varied in the range of 62.5–500 µg/mL. In the quantitative assay, the most susceptible to the tested EOs were the S. aureus strains (for which the MIC was 125 µg/mL for S. aureus 732 and S. aureus 735 and 62.5 µg/mL for S. aureus ATCC 25923, compared to the 59 µg/mL obtained by Coté et al. [24] for S. aureus ATCC 25923). For Gram-negative bacteria, the MIC values were approximately the same: for P. aeruginosa 11I it was 125 µg/mL; for P. aeruginosa 3162 it was 250 µg/mL; and for P. aeruginosa ATCC 27853 it was 62.5 µg/mL, like the value obtained by Chiavari-Frederico et al. [58]. For Gram-negative strains of K. pneumoniae isolated from urinary tract infections (UTI), the MIC values were between 125 and 250 µg/mL, similar to those obtained by Gadisa and Tadesse [59].
Obtaining and Characterization of Tansy EOs
Tansy plants were harvested between July and August 2021 from the Sohodol Valley area, Gorj County. The plants were gathered from predominantly granitic soil, bordered to the southwest by a narrow, deep valley traversed by the Sohodol River. This entire area stands as an open clearing amidst a vast expanse of woodland, spanning approximately 10 hectares. The central reference coordinates are 45°11′13.7″ N, 23°08′04.0″ E, with altitudes ranging between 550 and 700 m. This delineated area lies on the left slope of the Sohodol Valley. Only young inflorescences, unaffected by pests or mechanical factors, were used. The essential oil from the dried flowers of T. vulgare was obtained in a Neoclevenger distillery purchased from Pellet Lab, Houston, TX, USA. Extraction of essential oils via steam entrainment is the most widely used process. Volatile oils vaporize at temperatures below the boiling point of water and are carried to the distillery's upper outlet by the steam; their low density and insolubility in water facilitate their separation.
The air-dried plants were introduced into the distillery boiler, which has a capacity of 20 L. The ratio of distilled water to dry plants was 6 L to 3 kg. The total working time was about 6 h. We obtained 20 mL of essential oil, which was stored in a dark-colored bottle at 4 °C. The chemical compounds in the essential oil of T. vulgare were identified using gas chromatography (Shimadzu GC-2010 Plus) coupled with a mass spectrometer (GC-MS) and fitted with a quartz capillary column. Helium (1.0 mL/min) was used as the carrier gas. A volume of 1 µL of each sample was injected into the GC (detector temperature, 280 °C). Qualitative analysis used electron impact ionization (ionization energy, 70 eV). Essential oil constituents were identified using a digital library of mass spectral data (NIST 8.0) and literature comparisons of retention indices [60]. Quantitative GC-FID analysis was performed using a Shimadzu GC-2010 Plus instrument equipped with the same quartz capillary column under the same conditions as GC-MS, except that N2 was used as the carrier gas. The temperature of the FID detector was 250 °C. The relative concentration of the compounds was calculated by measuring the chromatographic peak areas without applying any correction factor. The applied software was AFT (Advanced Flow Technology), Fathom 9 version. The relative percentage values of the separated compounds were calculated from the peak areas in the FID chromatograms. Constituents were identified by comparing their GC retention times on apolar and polar columns with those of reference compounds and by comparing retention indices, calculated relative to the C6–C40 n-alkane series, with commercial mass spectral libraries. Alkanes (C6–C40) were used as reference points to calculate relative retention indices [61–63].
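Two small computations implied by these methods can be made explicit: the linear (van den Dool–Kratz) retention index of a peak bracketed by consecutive n-alkanes, and the relative percentage composition from uncorrected FID peak areas. The sketch below is illustrative only; the retention times and compound areas in the worked example are invented.

```python
# Sketch of retention-index and relative-percentage calculations (illustrative).
def linear_retention_index(t_peak: float, t_n: float, t_n1: float, n: int) -> float:
    """n = carbon number of the n-alkane eluting just before the peak;
    t_n and t_n1 are the retention times of alkanes Cn and Cn+1."""
    return 100 * n + 100 * (t_peak - t_n) / (t_n1 - t_n)

def relative_percentages(peak_areas: dict[str, float]) -> dict[str, float]:
    # Uncorrected FID areas: each compound's share of the total area, in %.
    total = sum(peak_areas.values())
    return {name: 100 * area / total for name, area in peak_areas.items()}

# e.g., a peak at 12.80 min between C10 (12.10 min) and C11 (13.45 min):
print(round(linear_retention_index(12.80, 12.10, 13.45, n=10)))  # -> 1052
```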
Antibacterial Activity of T. vulgare EOs
For this study, we selected 2 Staphylococcus aureus, 2 Acinetobacter baumannii, 3 Pseudomonas aeruginosa, and 4 Klebsiella pneumoniae bacterial strains that were multidrug resistant, isolated from patients with different infections, hospitalized at the C.C. Iliescu Institute of Cardiovascular Diseases in Bucharest and previously characterized for their resistance phenotypes and genotypes.S. aureus ATCC 25923 and P. aeruginosa ATCC 27853 were used as control reference strains.The antibiotic susceptibility assay of the studied strains was performed using the standardized Kirby-Bauer method, according to CLSI, 2018 edition [64].
For the qualitative assay of the antibiotic–tansy EO synergisms, bacterial suspensions were made in sterile physiological water from 24-h bacterial cultures at a density of 1.5 × 10⁸ CFU/mL according to the McFarland 0.5 turbidity standard. Each bacterial inoculum was inoculated on a plate with Müller-Hinton medium using the "cloth" seeding technique. After drying the plates, antibiotic discs alone, as well as discs impregnated with a volume of 10 µL of tansy EOs, were placed at equal distances. For testing the tansy EOs alone, sterile paper disks impregnated with a volume of 10 µL of tansy EOs were used. The plates were incubated for 18–24 h at 35 ± 2 °C. After incubation, the diameters of the growth inhibition zones were measured and expressed in mm. An increase of ≥5 mm in the diameter of the inhibition zone of the antibiotic in the presence of tansy EOs was the indicator of a synergistic effect.
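The ≥5 mm criterion translates directly into a simple decision rule; a minimal sketch follows, with invented zone diameters.

```python
# Sketch of the disk-diffusion synergy call described above; measurement
# tolerance of +/- 0.5 mm follows the text, and the data are illustrative.
def synergy_call(zone_ab_mm: float, zone_ab_plus_eo_mm: float,
                 threshold_mm: float = 5.0) -> bool:
    return (zone_ab_plus_eo_mm - zone_ab_mm) >= threshold_mm

print(synergy_call(14.0, 21.0))  # True: +7 mm with tansy EO added
print(synergy_call(18.0, 20.5))  # False: +2.5 mm only
```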
We used the serial microdilution method for the semi-quantitative assessment of the synergistic effect of Tansy extract with antibiotics. A conventional method (according to EUCAST/2010) to obtain different concentrations of antimicrobial agents for MIC determination is to prepare a stock solution of 10,240 mg/L; by adding 19 mL of liquid Müller-Hinton broth to 1 mL of the stock solution, a final concentration of 512 µg/mL is obtained. In this study, the antibacterial activity was quantitatively determined using binary serial microdilutions in a liquid culture medium (Müller-Hinton nutrient broth) in 96-well microplates according to CLSI 2006 [65] and 2008 [66]. For this purpose, 10 binary serial dilutions, from 500 µg/mL down to 1.953 µg/mL, were performed in a 9:1 ratio of nutrient broth to tansy EOs + distilled water. Bacterial suspensions of 1.5 × 10⁸ CFU/mL were prepared according to the McFarland 0.5 turbidity standard. The final volume per well was 200 µL, and the volume of bacterial suspension was 15 µL. Wells 11 and 12 corresponded to the positive (bacterial culture) and negative (sterile culture medium) controls, respectively. The plates were incubated overnight at 37 °C. The minimum inhibitory concentration (MIC) was determined macroscopically, by establishing the lowest concentration at which no microbial growth or turbidity of the medium was observed, and via spectrophotometric reading of the absorbance at 620 nm.
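The dilution series and the MIC read-out described here can be sketched as follows; the concentration series matches the text (500 µg/mL halved stepwise to ≈1.953 µg/mL), while the OD620-based growth cutoff is our assumption for illustration.

```python
# Sketch of the binary serial-dilution series and a spectrophotometric MIC call.
from typing import Optional

def twofold_series(top: float = 500.0, steps: int = 9) -> list[float]:
    return [top / 2**k for k in range(steps)]

def mic_from_od(series_ug_ml: list[float], od620: list[float],
                od_blank: float, margin: float = 0.05) -> Optional[float]:
    """Lowest concentration with no growth above blank + margin.
    Both lists must be ordered from highest to lowest concentration."""
    mic = None
    for conc, od in zip(series_ug_ml, od620):
        if od <= od_blank + margin:
            mic = conc  # this well is still inhibited
        else:
            break       # first well showing growth ends the inhibited run
    return mic

print(twofold_series())  # [500.0, 250.0, 125.0, 62.5, ..., 1.953125]
```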
Conclusions
Our results have shown that the bioactive compounds from Tansy EOs are a rich source for developing new pharmaceuticals useful in the prophylaxis and treatment of bacterial infections produced by MDR Gram-negative and Gram-positive bacteria, as such or in combination with commonly used antibiotics. The antimicrobial action of Tansy EOs is justified by their complex chemical composition, the secondary metabolites potentially responsible for this activity being monoterpenes (Sabinene, Eucalyptol, trans-β-Ocimene, Linalool, Filifolone, Thujone, Camphor, cis-Carveol, Sabinol isovalerate), terpenoids (Chrysanthenone, Terpinen-4-ol), sesquiterpenes (Germacrene D, Cedrol, Cubenol, Ledol, Globulol, Artemisia ketone, Spirojatamol), and trans-Carvyl acetate. Moreover, Tansy essential oil showed MIC values ranging between 62.5 and 500 µg/mL against the tested strains, displaying synergistic activity with different classes of antibiotics. In conclusion, the presented results demonstrate that Tansy EOs hold promise for developing new antimicrobial formulations active against MDR bacterial infections, being a natural source of valuable bioactive compounds.
Figure 1. Chromatogram of EOs obtained from T. vulgare. The peaks indicate the concentration of the compounds.
Figure 2. Graphic representation of the antibacterial activity of tansy EOs against the MDR strains. The ordinate shows the optical density of the cultures, measured at 620 nm with a UV-Vis spectrometer (Lambda 25, PerkinElmer, Inc., Waltham, MA, USA); the abscissa shows the mass concentration of the Tansy extract (µg/mL). The best activity was recorded against the S. aureus strain isolated from wound secretions (31.25 µg/mL). M+ = positive control (bacterial culture without extract addition); M− = negative control (sterile culture medium). Experiments were performed in triplicate (n = 3), and results are expressed as means ± SD. Statistical analyses were performed using GraphPad Prism 7; differences with p < 0.05 were deemed statistically significant.
Table 1 .
Chemical compounds of essential oil from T. vulgare, listed in order of elution, as identified via GC-MS analysis.
Abbreviations: MH = monoterpene hydrocarbons; MO = oxygenated monoterpenes; SH = sesquiterpene hydrocarbons; SO = oxygenated sesquiterpenes; O = others.Values are mean ± standard deviation of three different samples of T. vulgare, analyzed individually in triplicate.Retention time identification is based on comparing retention time with standard compounds; MS identification is based on comparing mass spectra.The amounts were calculated using calibrated curves with pure standard compounds.
Table 2 .
The main chemical compounds isolated and identified in T. vulgare essential oils compared to similar studies.
Table 3 .
Growth inhibition zone diameters obtained for beta-lactam antibiotics tested in association with Tansy EO against the Gram-negative bacterial strains.
Table 4 .
The number of Gram-negative strains out of the total number of strains from each species for which a synergism with Tv EOs was noted.
Table 5 .
Influence of Tansy extract on the antibiotic resistance profile of Gram-positive and Gram-negative strains. | 2023-11-22T16:39:43.571Z | 2023-11-01T00:00:00.000 | {
"year": 2023,
"sha1": "a78643eb4e91c471879ce5d6b2688bbd7ae7191e",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-6382/12/11/1635/pdf?version=1700229769",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "664612d8c4793b71ff193746a1c4ee195ed94e1c",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
32626620 | pes2o/s2orc | v3-fos-license | Retrospective Study of ALK Rearrangement and Clinicopathological Implications in Completely Resected Non-small Cell Lung Cancer Patients in Northern Thailand: Role of Screening with D5F3 Antibodies
Introduction
Lung cancer is one of the leading causes of cancer-related death. Non-small cell lung carcinoma (NSCLC) accounts for 85 percent of lung cancer cases (Ettinger et al., 2006). In the recent era, multidisciplinary approaches are used for treatment, including surgery, radiotherapy, chemotherapy, immunotherapy and targeted therapy. The only potentially curative treatment is anatomical resection with systematic lymphadenectomy; however, it is achievable only for early-stage NSCLC. In advanced cases, targeted therapy has recently taken a vital role as first- or second-line treatment. Targeted therapy involves drugs directed at specific gene mutations or abnormal rearrangements, such as erlotinib or gefitinib for EGFR-mutant tumors.
ALK rearrangement in lung cancer defines a unique NSCLC category characterized by ALK gene inversion or translocation. ALK-rearranged lung cancer shows a striking response to treatment with small-molecule inhibitors of ALK (Shaw and Solomon, 2011). However, the incidence of NSCLC patients harboring the EML4-ALK fusion is very low (Kim et al., 2011; Paik et al., 2011).
There are three different methods to determine ALK status: detection of protein overexpression by immunohistochemistry (IHC), detection of gene rearrangement by in situ hybridization (ISH) (Kim et al., 2011), and reverse transcription-polymerase chain reaction (RT-PCR) analysis (Marchetti et al., 2013). ISH is more demanding to perform than IHC but provides a more reliable quantification of the genomic alteration. FISH has been regarded as the most reliable method for detecting ALK rearrangement; however, the fluorescent signal rapidly fades over time, and consequently FISH is not routinely done in clinical practice. Furthermore, with FISH it is difficult to assess the overall morphology and tumor heterogeneity (Yoo et al., 2010). RT-PCR also has many disadvantages in its clinical application (Marchetti et al., 2013). In the past, the choice of method for detecting ALK rearrangement was controversial. Some studies have addressed the discordance between FISH and IHC assays (Boland et al., 2009). Kim et al. (2011) observed a good correlation between results obtained using IHC and FISH in a large-scale, single-institution study. Recently, many studies have reported the efficacy of IHC for detecting ALK rearrangement. Sholl et al. (2013) reported that ALK IHC using the clone 5A4 was 93% sensitive and 100% specific as compared with FISH using the Vysis ALK Break Apart FISH Probe Kit. To et al. (2013) demonstrated that IHC can effectively detect ALK rearrangement in lung cancer and might provide a reliable and cost-effective diagnostic approach in routine pathology laboratories for the identification of suitable candidates for ALK-targeted therapy. ALK rearrangement has been intensively studied, yet the correlation between clinicopathologic features and the prognostic implications of anaplastic lymphoma kinase (ALK) gene rearrangement in non-small cell lung cancer (NSCLC) has not been settled. Thailand has a high prevalence of NSCLC; therefore, the Northern Thailand Thoracic Oncology Group (NT-TOG) aimed to identify the incidence of ALK rearrangement and prognostic factors for ALK positivity in completely resected NSCLC in Northern Thailand. Furthermore, we attempted to assess the diagnostic role of IHC compared with the FISH method.
Case enrollment
We reviewed the clinical characteristics and histopathological specimens of patients diagnosed with NSCLC who underwent complete anatomical resection with systematic mediastinal lymphadenectomy at Maharaj Nakorn Chiang Mai Hospital (Department of Surgery, Faculty of Medicine, Chiang Mai University), Chiang Mai, Thailand, from January 2008 to December 2012. Patients who did not receive a curative resection or who had a history of other cancers, pre-surgical chemotherapy or radiotherapy were excluded from this study. Formalin-fixed, paraffin-embedded (FFPE) tissue sections were examined from 267 patients. Clinicopathologic information was reviewed from the patient medical recording system. Histopathologic examination was performed by the same highly experienced pathologist. Pathologic staging was determined according to the IASLC TNM staging classification of NSCLC (Goldstraw, 2009). Histologic subtypes of lung cancer were determined according to the World Health Organization classification (Travis, 2004) and the International Association for the Study of Lung Cancer/American Thoracic Society/European Respiratory Society (IASLC/ATS/ERS) International Multidisciplinary Classification of Lung Adenocarcinoma (Travis et al., 2013). Visceral pleural invasion (VPI), intratumoral blood vessel invasion (IVI), intratumoral lymphatic invasion (ILI), and neural invasion (NI) were defined as previously described (Tantraworasin et al., 2013). Overall survival was measured from the date of complete resection of the lung cancer until the time of death, and disease-free survival was measured from the date of surgery until recurrence or death. Patients with an unknown date of death or recurrence were censored at the time of the last follow-up. Disease-free and overall survival rates were compared according to ALK rearrangement status. Patients were divided into two groups, an ALK-positive group and an ALK-negative group. This study was reviewed and approved by the Research Ethics Committee, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand.
Construction of the tissue microarray
Representative core tissue sections were taken from formalin-fixed, paraffin-embedded (FFPE) blocks and arranged in new recipient tissue microarray paraffin blocks, as previously described (Yoo et al., 2010; Ozdemir et al., 2013). Serial sections were cut; hematoxylin and eosin staining, IHC, and FISH were performed.
Immunohistochemistry (IHC)
ALK immunohistochemistry with the D5F3 antibody was performed on a Ventana automated immunostainer (Ventana Medical Systems, Tucson, AZ) with the OptiView detection system, as previously described (Paik et al., 2011). The results from this method were reported as positive or negative. A positive result referred to strong positive cell staining (dark-brown color), as shown in Figure 1, and a negative result referred to no, mild, or moderate positive cell staining.
Fluorescence in situ hybridization (FISH)
FISH was performed on the FFPE tumor tissues using a break-apart probe specific to the ALK locus (Vysis LSI ALK Dual Color break-apart rearrangement probe; Abbott Molecular, Abbott Park, IL) according to the manufacturer's instructions. Dual-probe hybridization was performed on three-micrometer-thick FFPE sections using the LSI ALK Dual Color Probe, which hybridizes to the 2p23 band with Spectrum Orange (red) and Spectrum Green signals on either side of the ALK gene breakpoint (Abbott Molecular). 4′,6-diamidino-2-phenylindole (DAPI) II was applied for nuclear counterstaining. The signals for each probe were evaluated under a microscope equipped with a triple-pass filter (DAPI/Green/Orange; Abbott Molecular) and an oil-immersion objective lens. A FISH-positive result was defined as more than 15% of tumor cells presenting split signals or an isolated red signal (IRS) (Kim et al., 2011), as shown in Figure 3. The FISH tests were performed blinded to the IHC results for ALK (Paik et al., 2012).
Statistical analysis
The data were analyzed using the Stata statistical package (Release 11, 2011; Stata Corporation, College Station, TX). The sensitivity, specificity, accuracy, positive predictive value, negative predictive value, and positive likelihood ratio of IHC compared with FISH were calculated. Continuous data were presented as mean and standard deviation or as median with interquartile range, according to the data distribution, and were analyzed using Student's t-test or the Wilcoxon rank-sum test. Categorical data were presented as frequency and percentage and were analyzed using Fisher's exact test. Univariable and multivariable risk regression analyses were used to control for confounding factors and to identify risk factors for ALK positivity. Cut-off points for some variables, such as age, were determined by the maximum likelihood method to achieve the best discrimination between ALK-FISH-negative and ALK-FISH-positive patients. Risk factors with p values <0.1 in the univariable analyses, along with other potential clinical confounders associated with ALK positivity, were included in the multivariable analysis. The Cox proportional hazards model was used to estimate hazard ratios (HR) for disease-free and overall survival in patients with ALK positivity. All p values were two-sided, and p < 0.05 was considered significant.
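The diagnostic performance measures listed above can be made concrete with a short calculation. The following Python sketch uses a hypothetical 2×2 table whose counts were chosen to reproduce the metrics reported in the Results; the individual counts are an assumption, not values taken from the study's raw tables.

```python
# Illustrative check of the IHC-vs-FISH diagnostic metrics.
# The 2x2 counts below are reconstructed, not taken from the paper's raw
# tables: they were chosen to reproduce the reported sensitivity (80.0%),
# specificity (94.9%), PPV (38.1%), NPV (99.2%) and LR+ (15.8).
tp, fp, fn, tn = 8, 13, 2, 244  # assumed counts among the 267 specimens

sensitivity = tp / (tp + fn)               # P(IHC+ | FISH+)
specificity = tn / (tn + fp)               # P(IHC- | FISH-)
ppv = tp / (tp + fp)                       # positive predictive value
npv = tn / (tn + fn)                       # negative predictive value
lr_pos = sensitivity / (1 - specificity)   # positive likelihood ratio
balanced_acc = (sensitivity + specificity) / 2

print(f"Sensitivity {sensitivity:.1%}, specificity {specificity:.1%}")
print(f"PPV {ppv:.1%}, NPV {npv:.1%}, LR+ {lr_pos:.1f}")
# ~87.5%; the reported "accuracy" matches this balanced accuracy
print(f"Balanced accuracy {balanced_acc:.1%}")
```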
Results
In this study, 267 completely resected NSCLC cases were included. The patients comprised 156 (58.4%) men and 111 (41.6%) women. The histology was adenocarcinoma in 165 (61.8%) patients, squamous cell carcinoma in 72 (27.0%) patients, and other NSCLC in 30 (11.2%) patients. The pathologic stage was I in 37 (13.9%) patients, II in 47 (17.6%) patients, and III in 45 (16.9%) patients. At the time of the analysis, the mean follow-up time was 32.4 months, and 55% (146/267) of the NSCLC patients were still alive. The results of both methods of ALK analysis are shown in Table 1. Twenty-two (8.2%) of the 267 specimens were IHC-positive for ALK, with intense cytoplasmic staining, whereas 10 (3.8%) of the 267 specimens were FISH-positive. The sensitivity, specificity, accuracy, positive predictive value, negative predictive value, and positive likelihood ratio of IHC were 80.0%, 94.9%, 87.5%, 38.1%, 99.2%, and 15.8, respectively, as shown in Table 2.
Clinicopathological data and treatment outcomes of the ALK-FISH-positive (AP) group and the ALK-FISH-negative (AN) group are compared in Tables 3-6. The mean age of the AP group was significantly lower than that of the AN group (51.8 versus 62.7 years, p < 0.001), especially when 55 years was used as the cut-off point (p < 0.001). Thirty percent of patients in the AP group (three of 10: one with a first-degree relative and two with second-degree relatives affected) had a family history of lung cancer, compared with only 5.8% in the AN group (p = 0.023). There were no statistically significant differences between the two groups in other patient characteristics, histopathologic data, treatment modalities, or clinical outcomes, including overall survival and tumor recurrence. Risk regression analysis demonstrated that age less than 55 years (RR 9.4, 95% CI 2.07-42.58, p = 0.004) and a family history of lung cancer (RR 7.9, 95% CI 2.12-29.41, p = 0.002) were significant risk factors for ALK positivity, as shown in Table 6. A multivariable analysis using a Cox proportional hazards model compared overall survival and tumor recurrence between ALK-positive and ALK-negative patients. After adjusting for nodal involvement and tumor staging, ALK positivity was not associated with overall survival (HR 0.8, 95% CI 0.24-2.37, p = 0.504); however, it was a significant adverse prognostic factor for tumor recurrence (HR 3.2, 95% CI 1.30-8.11, p = 0.012), as shown in Table 7.
Discussion
The main results of our study are as follows: 1) IHC with a mouse monoclonal antibody (Ventana clone D5F3) can be used to screen for ALK rearrangement before confirmation by FISH, owing to its high specificity, high negative predictive value, and high positive likelihood ratio (LR+); 2) patients with completely resected NSCLC aged less than 55 years had a higher risk of ALK rearrangement than those aged 55 years or more; 3) patients with completely resected NSCLC and a family history of lung cancer had a higher risk of ALK rearrangement than those without such a history; 4) ALK rearrangement was not prognostically significant for overall survival but was associated with tumor recurrence in completely resected NSCLC of any stage.
A gold standard for determining ALK rearrangement has not been established. Currently, there are two well-established methods for diagnostic analysis in clinical practice: IHC and FISH (Savic et al., 2008). The advantage of the FISH method is the availability of a validated kit with standard procedures, such as the Abbott Vysis ALK Break Apart FISH Probe Kit (Abbott Molecular Inc., FDA-approved), which is reliable for use in clinical trials (Yi et al., 2011; Marchetti et al., 2013); however, FISH remains technically challenging and costly. IHC is easy to use and inexpensive but lacks dedicated kits and standard procedures. Minca et al. (2013) reported that IHC with the D5F3 antibody demonstrated 100% sensitivity and specificity (95% CI, 0.86 to 1.00 and 0.97 to 1.00, respectively) for ALK detection in 249 specimens. Although IHC in our study did not show 100% sensitivity or specificity and had a low positive predictive value, its negative predictive value and positive likelihood ratio were very high (99.2% and 15.8, respectively). Therefore, IHC with D5F3 is a valuable screening tool before testing with the FISH method. Paik et al. (2012) studied 735 completely resected NSCLC patients and found that ALK rearrangement was not prognostically significant for disease-free or overall survival, consistent with our findings. However, their study also reported some different results: ALK-rearranged lung cancer showed a lower tumor stage (T1) in NSCLC (p=0.020), whereas it tended to harbor lymph node metastasis in adenocarcinoma (p=0.090). Furthermore, they found that ALK rearrangement was more frequently observed in women, adenocarcinoma, and never-smokers among surgically resectable NSCLC patients, although no gender difference was observed in the adenocarcinoma or never-smoker subgroups. Our study found that ALK rearrangement is a prognostic factor for tumor recurrence in completely resected NSCLC, which has not been previously reported; adjuvant chemotherapy may be beneficial in this setting.
Our study found that patients with ALK rearrangement were significantly younger, consistent with a previous report (Zhong et al., 2013). Furthermore, several recent studies have found that ALK rearrangements are more frequent in young women (Paik et al., 2012; Li et al., 2013; Zhong et al., 2013). Shaw et al. (2009), in contrast to our results, found that patients with ALK rearrangement were more likely to be men (p=0.039). Previously, many studies had detected ALK rearrangement in both smokers and nonsmokers and suggested a lack of association between smoking history and the presence of ALK rearrangement (Rikova et al., 2007; Koivunen et al., 2008; Shinmura et al., 2008). However, more recent studies have found that ALK rearrangement is strongly associated with a never/light smoking history (Paik et al., 2012; Conde et al., 2013; Li et al., 2013; Zhong et al., 2013). Besides young women and never/light smokers, adenocarcinoma is also significantly more common among patients with ALK rearrangement (Paik et al., 2012; Conde et al., 2013; Li et al., 2013; Martinez et al., 2013); likewise, in our study, adenocarcinoma predominated, accounting for 63.6% of ALK-rearranged cases. Our study also found that a family history of lung cancer is a risk factor for ALK rearrangement, which has not been previously reported; ALK rearrangement may therefore have a hereditary component.
In conclusion, the incidence of ALK positivity in completely resected NSCLC in Northern Thailand is 8.2% by the IHC method and 3.8% by the FISH method. IHC with the clone D5F3 antibody can be used as a screening tool. Age less than 55 years and a family history of lung cancer are risk factors for ALK-FISH positivity. Moreover, ALK rearrangement is a prognostic factor for tumor recurrence in completely resected NSCLC of any stage.
Figure 1. Slide of tissue microarray with ALK positivity by IHC (dark-brown color).
Figure 2. A) Poorly differentiated adenocarcinoma with ALK positivity by IHC. B) Micropapillary type of invasive adenocarcinoma strongly ALK-positive by IHC.
Table 7. Multivariable Hazard Ratios (HR) and 95% Confidence Intervals (CI)
These results suggest that ALK rearrangement may be a prognostic factor for tumor recurrence in completely resected NSCLC of any stage.
References

Takeuchi K, Choi YL, Soda M, et al (2008). Multiplex reverse transcription-PCR screening for EML4-ALK fusion transcripts. Clin Cancer Res, 14, 6618-24.
Tantraworasin A, Saeteng S, Lertprasertsuke N, et al (2013).
To KF, et al (2013). Detection of ALK rearrangement by immunohistochemistry in lung adenocarcinoma and the identification of a novel EML4-ALK variant. J Thorac Oncol, 8, 883-91.
Travis WD, Brambilla E, Muller-Hermelink HK, Harris CC (eds) (2004). World Health Organization Classification of Tumors. Pathology and Genetics of Tumours of the Lung, Pleura, Thymus and Heart. IARC Press, Lyon.
Travis WD, Brambilla E, Riely GJ (2013). New pathologic classification of lung cancer: relevance for clinical practice and clinical trials. J Clin Oncol, 31, 992-1001.
Wong DW, Leung EL, So KK, et al (2009). The EML4-ALK fusion gene is involved in various histologic types of lung cancers from nonsmokers with wild-type EGFR and KRAS.
"year": 2014,
"sha1": "268c7c5f1573456d0ffb17da76484ca71bbd1596",
"oa_license": "CCBY",
"oa_url": "http://society.kisti.re.kr/sv/SV_svpsbs03V.do?cn1=JAKO201418342937178&method=download",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "c8659a9e366e67180dbb39c6ce1e3d5651869325",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Multiobjective-Optimization MoS2/Cd-ZnIn2S4/CdS Composites Prepared by In Situ Structure-Tailored Technique for High-Efficiency Hydrogen Generation
Photocatalytic water splitting for hydrogen production provides a new avenue to produce clean chemical fuels. However, developing high-efficiency photocatalytic materials still remains a challenge. Herein, multiobjective-optimization MoS2/Cd-ZnIn2S4/CdS (MS/CZIS/CS) composites are successfully constructed by an in situ structure-tailored technique. Benefiting from the synergistic feature integrating sulfur vacancies, a II-type CZIS/CS heterojunction, and a Schottky-type MS/CS heterojunction, such composites not only effectively steer photogenerated carrier transfer but also markedly expedite the surface reaction kinetics of the hydrogen reduction reaction. As a result, an optimal hydrogen evolution rate of 11.49 mmol g−1 h−1 is achieved over the MS/CZIS/CS catalysts, which is approximately 4.79 times higher than that of pristine ZIS (2.40 mmol g−1 h−1). This work provides new insights for the steering of carrier transfer and the design of high-efficiency multiobjective-optimization photocatalysts.
photocatalyst. However, just as a coin has two sides, each approach inevitably has its disadvantages. For example, doping with heteroatoms would unavoidably change the atomic structure of ZIS and possibly form recombination centers; [18] cocatalyst loading could potentially weaken the light absorption owing to the "light shielding" effect and reduce the number of proton reduction reaction sites; [19] while the inexpedient construction of heterojunctions would also result in stacked interfaces, high carrier recombination, and diminished active sites.
Very recently, multiobjective-optimization systems have inspired wide attention as a means to effectively improve the photocatalytic performance of ZIS. [20] Such an approach takes full advantage of the synergistic effects and merits of each optimization method. Majhi et al. first synthesized tetragonal β-Bi2O3 nanoplates through a hydrothermal method and then prepared dual Z-scheme Bi2S3/β-Bi2O3/ZIS nanomaterials by a facile reflux route. [21] Consequently, such ternary composites achieved fast electron channelization, enhanced charge carrier separation, and prolonged lifetimes, finally resulting in excellent visible-light activity. Nevertheless, such coupled systems still suffer from unsatisfactory photocatalytic efficiency, complex multistep processes, long processing periods, and high costs. That is, realizing the synergistic effect of multiobjective-optimization methods while maintaining high hydrogen evolution efficiency still remains a great challenge.
Herein, MS/CZIS/CS composites were successfully constructed by an in situ structure-tailored technique (Scheme 1). The multiobjective-optimization composites integrate sulfur vacancies (Sv), a II-type CZIS/CS heterojunction, and a Schottky-type MS/CS heterojunction. Specifically, the Sv tends to narrow the bandgap, promote light harvesting, and boost carrier separation; the II-type CZIS/CS heterojunction is helpful for electron/hole separation; and the Schottky-type MS/CS heterojunction is good for charge transfer, hydrogen reduction reaction kinetics, and surface charge utilization. As a result, compared to pristine ZIS, the multiobjective-optimization MS/CZIS/CS composites exhibited an approximately 4.79-fold enhancement of photoactivity.
Preparation and Structural Characterization of MS/CZIS/CS Composites
Here, tetragonal CdMoO4 (CMO) nanoparticles 0.5-2 μm in diameter, prepared by a hydrothermal method, were employed as the key structure-tailoring body (Figure S1, Supporting Information). Subsequently, the in situ decomposition of CMO promotes the instantaneous formation of Sv, the II-type CZIS/CS heterojunction, and the Schottky-type MS/CS heterojunction. The obtained samples are denoted CMO-x/ZIS (x = 0.5, 1, 2, 3, 4, 5, 6, 20 at%), where x represents the molar percentage of CMO relative to ZIS. The crystalline structure of the as-prepared samples was determined by powder X-ray diffraction (PXRD). As shown in Figure 1A, all the diffraction peaks of pristine ZIS can be ascribed to hexagonal ZIS (JCPDS No. 72-0773). [22] The diffraction peaks at 21.6°, 27.7°, and 47.2° correspond to the (006), (102), and (110) crystal planes, respectively. Moreover, the diffraction signals of CMO-x/ZIS exhibit the same characteristic peaks as pristine ZIS. No characteristic peaks of CMO are detected, even in the CMO-20/ZIS composite (Figure S2A, Supporting Information), demonstrating the phase transition or complete decomposition of CMO under the water-bath conditions. Nevertheless, the typical diffraction peak at 47.2°, corresponding to the (110) crystal plane, progressively shifts to lower 2θ angle (inset in Figure 1A and Figure S2B, Supporting Information), indicating the possible incorporation of doped heteroatoms with a larger ionic radius into the crystal lattice of ZIS. [23] As depicted in Figure 1B and S3, Supporting Information, the (110) crystal configuration mainly consists of [Zn-S]4 tetrahedral coordination of Zn and S atoms; namely, Zn atoms are potentially replaced by heteroatoms.
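The inference from the (110) peak shift can be illustrated with Bragg's law: a shift to lower 2θ corresponds to a larger interplanar spacing, and hence lattice expansion by a larger-radius dopant. In the Python sketch below, the Cu Kα wavelength and the 47.2° peak position come from the text, while the shifted peak position is an assumed value for illustration only.

```python
import math

WAVELENGTH = 1.5406  # Cu K-alpha wavelength in angstrom, as used for PXRD here

def d_spacing(two_theta_deg: float, wavelength: float = WAVELENGTH) -> float:
    """Interplanar spacing from Bragg's law: n*lambda = 2*d*sin(theta), n = 1."""
    theta = math.radians(two_theta_deg / 2)
    return wavelength / (2 * math.sin(theta))

# Pristine (110) reflection near 47.2 deg (from the text); the doped peak
# position below is an assumed small shift, not a measured value.
d_pristine = d_spacing(47.2)
d_doped = d_spacing(47.0)
print(f"d(110) pristine: {d_pristine:.4f} A, doped: {d_doped:.4f} A")
print(f"Lattice expansion: {100 * (d_doped - d_pristine) / d_pristine:.2f}%")
```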
Subsequently, the high-resolution Mo 3d XPS spectra indicate that the two peaks at 235.4 eV (Mo 3d3/2) and 232.3 eV (Mo 3d5/2) obviously shift to lower binding energies of 233.0 and 229.9 eV, indicating that the Mo6+ valence state is completely converted to Mo4+ during the water-bath process. [24,25] This result further confirms the complete decomposition of CMO during the water-bath process (Figure 1C). Specifically, a new peak at 226.3 eV (S 2s), attributed to Mo-S bonds, confirms the formation of MS in the CMO-3/ZIS composite. [26] Subsequently, the ionic difference was calculated to further analyze the possible heteroatom. [29] Combined with the EPR spectra, the Cd heteroatom, with its larger electronegativity and ionic radius, gives rise to sulfur vacancies in CMO-3/ZIS (Figure 1D). [30] This result further confirms the successful incorporation of the Cd heteroatom into the crystal lattice of ZIS. Additionally, excessive Cd2+ ions were investigated by XRD in detail. Most diffraction peaks match well with cubic CS (JCPDS No. 80-0019) when only CMO and thioacetamide (TAA) are used during the water-bath process, implying the possible formation of CS (Figure S4, Supporting Information). Specifically, no obvious diffraction peaks corresponding to CS and MS are observed in CMO-3/ZIS, possibly resulting from their low content, ultrafine size, and amorphous features. [31,32] To further elucidate the composition and chemical states, the CMO, ZIS, and CMO-3/ZIS samples were analyzed by XPS. Clearly, the XPS survey spectra of CMO-3/ZIS verify the presence of Zn, In, S, Cd, Mo, and O elements (Figure S5, Supporting Information), in good agreement with the nominal composition. [33-35] Additionally, the corresponding Zn 2p, In 3d, and S 2p peaks clearly shift to higher binding energies, implying a strong interaction between the different components. Figure 1E shows two strong, symmetrical characteristic peaks at 405.2 and 411.9 eV, corresponding to the spin-orbit split Cd2+ 3d5/2 and Cd2+ 3d3/2 states. [36] The high-resolution O 1s XPS spectrum of pristine CMO shows two peaks at 530.3 and 531.8 eV, corresponding to lattice oxygen and adsorbed O-containing species (Figure 1F). [37] In the CMO-3/ZIS sample, the two peaks shift to higher binding energies of 531.8 and 533.2 eV, assigned to adsorbed O-containing species and surface H2O, further confirming the complete decomposition of CMO.
Figure 2A,B shows high-resolution transmission electron microscopy (HRTEM) images of as-prepared CMO-3/ZIS. Obviously, intimate contact among ZIS, CS, and MS can be observed (Figure S7A, Supporting Information). Moreover, the interplanar spacing of 0.32 nm can be ascribed to the (102) crystal plane of hexagonal ZIS, [38] while the lattice fringe spacings of 0.34 and 0.22 nm match well with the (111) crystal plane of cubic CS [39] and the (103) crystal plane of hexagonal MS. [40] Similarly, Figure S7B, Supporting Information, also confirms the coexistence of CS and MS. Field-emission scanning electron microscopy (FESEM) images show that ZIS exhibits flower-like hierarchical nanowalls composed of numerous cross-linked nanosheets (Figure 2C). [41] Upon coupling MS and CS with CZIS, the morphology of CMO-x/ZIS gradually becomes irregular (Figure 2D and S8, Supporting Information). Additionally, energy-dispersive spectrometry (EDS) mappings confirm the homogeneous distribution of Zn, In, S, Cd, Mo, and O elements (Figure 2E and Table S2, Supporting Information). Specifically, the molar ratio of Zn:In:S is close to the theoretical value of 1:2:4, indicating that the main ZIS phase is stable. Taken together, the above results confirm that ternary MS/CZIS/CS composites were synthesized successfully.
Photocatalytic Hydrogen Evolution Performance
To verify the relationship between the multiobjective-optimization structure and photoactivity, the hydrogen evolution activities of the ZIS and CMO-x/ZIS catalysts were evaluated in a Pt-free cocatalytic system under visible-light irradiation. Clearly, the cumulative hydrogen production of all samples increases linearly with irradiation time, revealing the excellent stability of photocatalytic water splitting (Figure 3A). Moreover, the CMO-x/ZIS catalysts unsurprisingly exhibit higher hydrogen evolution activity. Among them, the CMO-3/ZIS catalyst displays the highest activity, producing 1.5 mL of hydrogen within 3 h. As shown in Figure 3B, pristine ZIS exhibits poor photoactivity (2.40 mmol g−1 h−1), whereas the MS/CZIS/CS composites show extraordinary hydrogen evolution performance. Strikingly, a maximum hydrogen evolution rate of 11.49 mmol g−1 h−1 is observed over the CMO-3/ZIS catalyst, approximately 4.79 times higher than that of pristine ZIS. The apparent quantum efficiency (AQE) values of pristine ZIS and CMO-3/ZIS were measured under monochromatic irradiation. As presented in Figure 3C and Table S3, Supporting Information, the corresponding AQE tracks the light absorption. Moreover, CMO-3/ZIS affords an AQE of 6.17% under monochromatic irradiation at 420 nm, obviously higher than that of pristine ZIS (AQE = 1.77%). Nevertheless, excessive Cd doping, overloaded MS cocatalyst, and massive heterointerfaces decrease the hydrogen evolution activity owing to fewer active sites, stronger light shading, and more recombination centers. Additionally, the photocatalytic stability of CMO-3/ZIS was evaluated by cycling experiments. As shown in Figure 3D, the hydrogen production after four cycles shows little deterioration, indicating that CMO-3/ZIS has excellent photostability. Besides, PXRD results indicate that a clear peak shift of pristine ZIS can be found after the photocatalytic reaction, implying serious photocorrosion of pristine ZIS (Figure S9A, Supporting Information), whereas all the diffraction peaks of CMO-3/ZIS remain unchanged after the reaction, further verifying its high photostability (Figure S9B, Supporting Information).
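As a rough consistency check of the reported rate, the evolved gas volume can be converted to moles with the ideal gas law. The sketch below is a back-of-envelope estimate: the ~1.5 mL figure quoted above is rounded, so the result lands near, rather than exactly at, the reported 11.49 mmol g−1 h−1.

```python
# Ideal-gas estimate of the hydrogen evolution rate.
R = 8.314        # gas constant, J mol^-1 K^-1
P = 101_325.0    # assumed ambient pressure, Pa
T = 283.15       # K; the reactor was held at 10 C (see Experimental Section)

v_h2 = 1.5e-6    # m^3 of H2 evolved (~1.5 mL, approximate figure from the text)
mass_g = 2e-3    # g of photocatalyst used per run
hours = 3.0

n_h2 = P * v_h2 / (R * T)            # mol of H2 via the ideal gas law
rate = n_h2 / mass_g / hours * 1e3   # mmol g^-1 h^-1
print(f"Estimated rate: {rate:.1f} mmol g^-1 h^-1")  # ~10.8
```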
Photoelectric Dynamics Behaviors
The optical properties of the as-prepared photocatalysts were investigated by ultraviolet-visible diffuse reflectance spectroscopy (UV-vis DRS). As shown in Figure 4A and S10, Supporting Information, the absorption edges of ZIS, CS, and MS are approximately 527, 571, and 938 nm, respectively. Excitingly, the light absorption of CMO-x/ZIS is evidently enhanced in the 500-800 nm range, consistent with the optical color of the different samples (Figure S11, Supporting Information). Moreover, an obvious redshift of the absorption edge can be observed in the visible-light region. Generally, stronger light absorption and a redshifted absorption edge are beneficial to photocatalytic activity. Subsequently, the bandgaps of all samples were obtained using the Kubelka-Munk function. The bandgaps of ZIS, CS, and MS are determined to be 2.35, 2.18, and 1.32 eV, respectively (Figure S12, Supporting Information). Furthermore, the CMO-x/ZIS composites have narrower bandgaps than pristine ZIS (Figure 4B and Table S4, Supporting Information). Generally, a narrow bandgap has a considerably positive influence on carrier excitation and photocatalytic activity.
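The bandgap extraction mentioned above is typically done by transforming the reflectance data with the Kubelka-Munk function and extrapolating the linear region of a Tauc plot. The sketch below is one possible implementation: it assumes a direct-allowed transition and picks the linear region crudely, whereas a careful analysis would select that region by inspection.

```python
import numpy as np

def tauc_bandgap(wavelength_nm, reflectance, direct=True):
    """Estimate an optical bandgap from diffuse-reflectance data via the
    Kubelka-Munk function F(R) = (1 - R)^2 / (2R) and a Tauc plot."""
    wavelength_nm = np.asarray(wavelength_nm, dtype=float)
    reflectance = np.asarray(reflectance, dtype=float)
    h_nu = 1239.84 / wavelength_nm                 # photon energy, eV
    f_r = (1.0 - reflectance) ** 2 / (2.0 * reflectance)
    exponent = 2.0 if direct else 0.5              # (F(R)*h*nu)^2 for direct-allowed
    y = (f_r * h_nu) ** exponent
    # Crude choice of the linear region: the steepest upper part of the curve.
    mask = y > 0.5 * y.max()
    slope, intercept = np.polyfit(h_nu[mask], y[mask], 1)
    return -intercept / slope                      # x-intercept = bandgap, eV
```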
The transient photocurrent response is crucial for analyzing photogenerated carrier separation and photostability. Clearly, the photocurrent density increases sharply upon visible-light irradiation and then remains stable (Figure S13, Supporting Information), indicating that all samples are very sensitive to visible light. [42] To be specific, pristine ZIS shows a photocurrent density of 76.61 mA cm−2, somewhat higher than those of the CMO-x/ZIS catalysts. Nevertheless, the deterioration rate of the photocurrent density for pristine ZIS reaches 61.23%, revealing its severe photocorrosion (Figure 4C and Table S5, Supporting Information). Interestingly, the deterioration rate declines significantly to 8.91% for CMO-3/ZIS and 3.31% for CMO-5/ZIS, further confirming the markedly improved photostability. [43] Besides, charge migration is another major factor in photocatalytic performance. Electrochemical impedance spectroscopy (EIS) curves suggest that the CMO-x/ZIS catalysts, with their smaller arc radii, possess lower interfacial resistance and faster charge transfer (Figure 4D). [44] Photogenerated carrier recombination is considered another bottleneck limiting the photon conversion efficiency. Here, steady-state photoluminescence (PL) spectra were collected to evaluate the radiative recombination process. Obviously, the emission intensity of CMO-x/ZIS is lower than that of pristine ZIS (Figure 4E), revealing that carrier recombination is evidently suppressed. [45] Specifically, CMO-3/ZIS exhibits the weakest emission peak, consistent with its optimal photocatalytic hydrogen production performance. Furthermore, the charge nonradiative recombination lifetime (τ1), free-exciton interband recombination lifetime (τ2), and the average lifetime (τave) can be fitted from the time-resolved photoluminescence (TRPL) decays of pristine ZIS and the CMO-x/ZIS catalysts. As displayed in Figure 4F and Table S6, Supporting Information, the CMO-x/ZIS catalysts exhibit a significant increase in τ1 and τ2, indicating inhibited nonradiative recombination and faster interband charge transfer. That is, the multiobjective-optimization system can significantly prolong the charge lifetime and inhibit photogenerated carrier recombination. [46] The wavelength-dependent surface photovoltage (SPV) spectra track the corresponding absorbance spectra. As shown in Figure 4G, pristine ZIS shows weak signal intensities, indicating a low surface carrier concentration. On the contrary, a remarkable response is achieved in the MS/CZIS/CS composites, which is beneficial to the subsequent proton reduction. [47] Specifically, CMO-3/ZIS displays the largest visible-light-induced photovoltage and the highest surface charge concentration. Additionally, linear sweep voltammetry (LSV) curves demonstrate that the CMO-x/ZIS composites exhibit much lower onset overpotentials than pristine ZIS (Figure 4H). The overpotentials at a current density of −0.10 mA cm−2 were −1.41, −1.39, −1.38, and −1.37 V for ZIS, CMO-1/ZIS, CMO-3/ZIS, and CMO-5/ZIS, respectively, revealing that a smaller potential barrier for hydrogen evolution can be realized in the MS/CZIS/CS system. [48] In short, the above results indicate that the CMO-x/ZIS catalysts possess enhanced photoelectric dynamics during the photocatalytic process.
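The lifetimes τ1, τ2, and τave quoted above are commonly extracted by fitting the TRPL transient with a double exponential and computing an intensity-weighted average. The paper does not state its exact fitting procedure, so the following sketch shows one conventional approach.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, tau1, a2, tau2):
    """Double-exponential decay often used to model TRPL transients."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

def fit_trpl(t_ns, counts):
    """Return (tau1, tau2, tau_ave) from a biexponential fit; tau_ave is the
    intensity-weighted average lifetime, a common convention."""
    p0 = (0.7 * counts.max(), 1.0, 0.3 * counts.max(), 10.0)  # rough guesses
    (a1, tau1, a2, tau2), _ = curve_fit(biexp, t_ns, counts, p0=p0, maxfev=10_000)
    tau_ave = (a1 * tau1**2 + a2 * tau2**2) / (a1 * tau1 + a2 * tau2)
    return tau1, tau2, tau_ave
```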
Proposed Photocatalytic Hydrogen Evolution Mechanism
To further investigate the carrier concentration and charge transfer path during the photocatalytic process, the band structures of ZIS, CS, and MS were studied by Mott-Schottky (M-S) and ultraviolet photoelectron spectroscopy (UPS) measurements. As displayed in Figure 5A-C, ZIS, CS, and MS all display positive slopes in the M-S curves, confirming their n-type semiconductor character. [49] The carrier concentration of CMO-3/ZIS is calculated to be 2.9 × 10^23 cm−3, approximately 1.2 times higher than that of pristine ZIS (Table S7, Supporting Information). The larger number of accumulated charges is beneficial to hydrogen evolution, in good agreement with the photocatalytic water-splitting performance and the SPV results. Additionally, the flat-band potentials (Efb) of ZIS, CS, and MS are calculated to be −0.24, −0.03, and −0.01 V (vs RHE, pH = 7.0), respectively. Generally, the conduction band potential (ECB) is approximately 0.1 eV more negative than the Efb value for n-type semiconductors. [50] Hence, the ECB values of ZIS, CS, and MS are determined to be −0.34, −0.13, and −0.11 eV (vs RHE, pH = 7). Finally, the valence band potentials (EVB) of ZIS, CS, and MS are calculated to be 2.01, 2.05, and 1.21 eV, respectively.
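For reference, the donor density and flat-band potential discussed above follow from the slope and intercept of the linear region of the Mott-Schottky plot. The sketch below assumes the capacitance is normalized per unit area in SI units and that the relative permittivity of the semiconductor (not given in the text) is supplied by the user; the small kT/e term is neglected.

```python
import numpy as np

E_CHARGE = 1.602e-19   # elementary charge, C
EPS0 = 8.854e-12       # vacuum permittivity, F m^-1

def mott_schottky(potential_v, capacitance_f_per_m2, eps_r):
    """Donor density (cm^-3) and flat-band potential (V) from the linear
    region of a Mott-Schottky plot:
    1/C^2 = (2 / (e * eps_r * eps0 * Nd)) * (V - Efb), neglecting kT/e."""
    inv_c2 = 1.0 / np.asarray(capacitance_f_per_m2) ** 2
    slope, intercept = np.polyfit(potential_v, inv_c2, 1)  # positive for n-type
    n_d = 2.0 / (E_CHARGE * eps_r * EPS0 * slope)          # donors per m^3
    e_fb = -intercept / slope                              # x-intercept, V
    return n_d * 1e-6, e_fb                                # cm^-3, V
```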
To understand the redox potentials, UPS tests were conducted to analyze the work functions, Fermi levels, and interfacial charge transfer. [51] Usually, the work function (Φ) of a semiconductor is defined as the difference between the vacuum level (Evac) and the Fermi level (EF), i.e., Φ = Evac − EF. [52] As shown in Figure 5D-F and S14, Supporting Information, the work functions of ZIS, CS, and MS are estimated to be 4.72, 4.98, and 5.06 eV; that is, the Fermi levels of ZIS, CS, and MS decrease gradually (Figure 5G). After contact, the three components effectively generate built-in space-charge layers owing to the differences in EF, and thermal equilibrium leads to Fermi level realignment. Consequently, the II-type CZIS/CS heterojunction and the Schottky-type MS/CS heterojunction are generated by the potential differences between the components. As shown in Figure 6A, the photogenerated electrons spontaneously transfer from CZIS to CS, driven by the internal electric field of the II-type CZIS/CS heterojunction, and finally to MS, driven by the internal electric field of the Schottky-type MS/CS heterojunction. Consequently, the accumulated electrons reduce protons to H2 on the surface of MS. Meanwhile, the photoexcited holes are automatically extracted from CS to CZIS and finally scavenged by the triethanolamine (TEOA) sacrificial agent. That is, the multiobjective-optimization MS/CZIS/CS composites provide a directional charge transfer pathway, ultimately boosting photocatalytic water splitting into hydrogen.
To verify the above mechanism, electron spin resonance (ESR) measurements were performed to validate the charge migration process. Here, •O2− and •OH radicals were detected using DMPO as the radical trapping agent. As shown in Figure 6B, ZIS and CMO-3/ZIS exhibit no DMPO-•O2− response in the dark. Under visible-light irradiation, however, the signal intensity of DMPO-•O2− gradually increases with irradiation time, suggesting that electrons are generated in both ZIS and CMO-3/ZIS. Importantly, the signal intensity of CMO-3/ZIS is obviously higher than that of the ZIS catalyst, confirming effective charge transfer and sufficient charge utilization. It should be noted that the CB potentials of CS and MS are not sufficiently negative to generate •O2− free radicals; that is, the higher carrier separation efficiency of ZIS results in more •O2− free radicals owing to the synergistic effect of the multiobjective optimization. Similarly, there are no DMPO-•OH peaks for pristine ZIS and CMO-3/ZIS in the dark (Figure 6C), whereas four characteristic peaks with an intensity ratio of 1:2:2:1 appear within 10 min of visible-light irradiation. Moreover, the signal intensity of CMO-3/ZIS is about four times higher than that of pristine ZIS. The likely cause is that the VB potentials of both ZIS and CS are sufficiently positive to produce •OH free radicals. Consequently, integrating the sulfur vacancies, the II-type CZIS/CS heterojunction, and the Schottky-type MS/CS heterojunction increases the •OH production of ZIS. Notably, the enhanced intensities of the •O2− and •OH radical signals further support the above analysis. Finally, compared with pristine ZIS, the upshift of the Zn2+ 2p and Cd2+ 3d XPS peaks toward higher binding energy and the downshift of the Mo4+ 3d XPS peak toward lower binding energy in CMO-3/ZIS also confirm the electron transfer path from CZIS to CS and finally to MS.
Conclusion
In summary, MS/CZIS/CS composites were delicately constructed via an in situ structure-tailored technique and applied as efficient visible-light-driven water-splitting photocatalysts. Experimental results confirmed that the multiobjective-optimization structure integrates sulfur vacancies, a II-type CZIS/CS heterojunction, and a Schottky-type MS/CS heterojunction. The CMO-3/ZIS composite demonstrated an optimal hydrogen evolution rate of 11.49 mmol g−1 h−1, about 4.79 times that of pristine ZIS (2.40 mmol g−1 h−1). Photoelectrochemical characterization revealed that the multiobjective-optimization composites provide less photocurrent deterioration, higher carrier separation, faster charge transfer, slower carrier recombination, a lower overpotential for hydrogen evolution, and a higher surface charge concentration, ultimately promoting the hydrogen evolution reaction. This work explores the synergistic effect of multiplex modification and offers inspiration for the delicate design of high-efficiency, stable photocatalysts for water-splitting systems.
Experimental Section
Chemicals: All reagents were of analytical grade and used without further purification. Deionized (DI) water was used throughout the experiments. Zinc chloride (ZnCl2), sodium molybdate dihydrate (Na2MoO4·2H2O), and TEOA were purchased from Sinopharm Chemical Reagent Co., Ltd. Indium nitrate hydrate (In(NO3)3·6H2O), TAA, and absolute ethanol were purchased from Aladdin Chemical.

Synthesis of CMO Nanomaterials: CMO nanomaterials were synthesized by a hydrothermal method. [53] Briefly, 0.6169 g of Cd(NO3)2·4H2O was dissolved in 60 mL of DI water with magnetic stirring (solution A). Then, 0.4893 g of Na2MoO4·2H2O was dissolved in 20 mL of DI water with magnetic stirring (solution B). Subsequently, solution B was slowly dropped into solution A with stirring for 30 min. Afterward, the mixed solution was transferred into a Teflon-lined stainless-steel autoclave, heated to 160 °C, and maintained for 12 h. After natural cooling to room temperature, the white product was washed with DI water and absolute ethanol and then dried at 60 °C overnight to obtain tetragonal CMO.
Synthesis of MS/CZIS/CS Composites: In detail, a certain amount of as-prepared CMO was ultrasonically dispersed into 80 mL of DI water (solution A). Then, 0.4089 g of ZnCl2 and 0.4508 g of TAA were added to solution A with magnetic stirring. Thereafter, solution B was prepared by dissolving 0.9812 g of In(NO3)3·6H2O in 20 mL of DI water. Afterward, solution B was slowly added to solution A. Upon magnetic stirring for 30 min, the mixed solution was heated to 80 °C and maintained for 6 h under water-bath conditions. Finally, the precipitate was collected by centrifugation, washed with DI water and absolute ethanol, and then dried at 60 °C overnight in air. Pristine ZIS was prepared by a similar method without the addition of the CMO raw material. For comparison, cubic CS (JCPDS No. 80-0019) and hexagonal MS (JCPDS No. 75-1539) were synthesized according to previous reports (Figure S15 and S16, Supporting Information). [54,55]

Catalyst Characterization: The phase and crystal structure of the as-synthesized samples were verified by PXRD (D8, Bruker) at a scanning rate of 5° min−1 over the range 10°-90°, using Cu Kα radiation (λ = 1.5406 Å). The morphology and microstructure were observed by FESEM (Sigma 500, Zeiss) operated at 10 kV and HRTEM (Talos F200X, Thermo Fisher) operated at 200 kV. The elemental content and distribution were monitored by EDS. The surface composition and chemical states were investigated by XPS (JPS-9010 MC, JEOL) equipped with a monochromatic Al Kα X-ray source. All binding energies were referenced to the adventitious C 1s signal at 284.8 eV, and spectral fitting was performed using the XPS Peak 4.1 software. UV-vis DRS was recorded on a PerkinElmer Lambda-950 spectrophotometer to analyze the light absorption characteristics and bandgaps; spectra were recorded from 350 to 850 nm with a resolution of 1 nm, using a PTFE coating as the 100% reflectance standard. Steady-state PL emission spectra were obtained on a Hitachi F-7000 spectrofluorometer equipped with a 450 W xenon lamp as the excitation light source (λ = 455 nm); photoemission spectra were recorded from 460 to 800 nm in 1 nm steps. TRPL spectra were recorded on an Edinburgh FLS1000 spectrometer to analyze the charge lifetimes. SPV spectra were recorded on a PL-SPV1000 to analyze the photoinduced charge behavior; the testing current was fixed at 20 A, and the scanning wavelength ranged from 300 to 800 nm. EPR/ESR spectra were recorded on a Bruker A300 spectrometer to analyze crystal defects and active radicals. UPS was performed on a Thermo Fisher EscaLab 250 Xi spectrometer to determine the work functions and Fermi levels, using He Iα as the excitation source (hν = 21.22 eV).
Photocatalytic Hydrogen Evolution Activity: The photocatalytic water-splitting activity was evaluated in a quartz photoreactor with a 300 W Xe lamp (Perfect Light Co. Ltd., China) as the visible-light source (λ ≥ 420 nm). The illuminated area was approximately 5.28 cm2 and the irradiation distance was maintained at 10 cm. In a typical process, 2 mg of photocatalyst was dispersed in a mixed solution containing 8 mL of DI water and 2 mL of sacrificial reagent (TEOA), and the dispersion was transferred into a 35 mL photoreactor. Prior to irradiation, the suspension was degassed with high-purity argon to completely remove the air. The photoreactor was kept at 10 °C by a cooling-water system. Subsequently, the suspension was irradiated with the visible-light source. At intervals of 0.5 h, 0.1 mL of the evolved gas was extracted from the chamber and the gas components were analyzed by gas chromatography (GC7900, TIANMEI, China). Cycling experiments were carried out under the same conditions to evaluate the stability of the photocatalytic activity. All photocatalytic experiments were conducted in quadruplicate. Additionally, the wavelength-dependent AQE was measured using different monochromatic light filters (420, 450, 475, 500 nm) and calculated according to the following formula:

AQE = 2 × (number of evolved hydrogen molecules) / (number of incident photons) × 100%

Photoelectrochemical Measurement: The photoelectrochemical measurements, including transient photocurrent response, EIS, LSV, and M-S measurements, were conducted on a CHI 660E electrochemical workstation using 0.1 mol L−1 Na2SO4 solution as the electrolyte. A standard three-electrode configuration was used, consisting of a photocatalyst-coated FTO working electrode (as-prepared samples), a Pt-plate counter electrode, and a calomel reference electrode. A 300 W Xe lamp with a UV cut-off filter (λ ≥ 420 nm) was applied as the visible-light source. To prepare the working electrode, 0.5 mg of photocatalyst was first dispersed in 0.5 mL of mixed solvent (250 μL ethanol, 250 μL water, and 25 μL Nafion) to form a homogeneous catalyst ink (10 mg mL−1). Then, 50 μL of catalyst ink was drop-coated onto a 1 cm2 FTO electrode, and the working electrodes were dried naturally. All potentials in this work are given versus the reversible hydrogen electrode (RHE) (ERHE = EHg/HgCl + 0.059 × pH + 0.24).
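The AQE formula above can be evaluated directly once the incident photon flux is known. In the sketch below, the 5.28 cm2 light area and the factor of two (two electrons consumed per H2 molecule) come from the text, whereas the light intensity and the amount of evolved H2 are assumed values for illustration, since the photon flux is not reported here.

```python
H = 6.626e-34    # Planck constant, J s
C = 2.998e8      # speed of light, m s^-1
N_A = 6.022e23   # Avogadro's number, mol^-1

def aqe_percent(n_h2_mol, intensity_w_per_cm2, area_cm2, t_s, wavelength_nm):
    """AQE = 2 x (evolved H2 molecules) / (incident photons) x 100%."""
    photon_energy = H * C / (wavelength_nm * 1e-9)                    # J
    n_photons = intensity_w_per_cm2 * area_cm2 * t_s / photon_energy
    return 2.0 * (n_h2_mol * N_A) / n_photons * 100.0

# Hypothetical usage at 420 nm over the reported 5.28 cm^2 light area;
# the intensity and H2 amount below are assumptions, not measured values.
print(aqe_percent(n_h2_mol=3e-5, intensity_w_per_cm2=0.1,
                  area_cm2=5.28, t_s=3600.0, wavelength_nm=420.0))
```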
Figure 3. Photocatalytic hydrogen production performance. A) Time-dependent photocatalytic hydrogen production curves and B) hydrogen evolution rates of ZIS and CMO-x/ZIS; C) wavelength-dependent AQE and absorption spectra of ZIS and CMO-3/ZIS; D) cycling curves of photocatalytic hydrogen production in the presence of CMO-3/ZIS.
Figure 5. Semiconductor type and energy band structure. M-S curves of A) ZIS, B) CS, and C) MS; UPS spectra of D) ZIS, E) CS, and F) MS; G) schematic energy band arrangement of MS/CZIS/CS catalysts.
Figure 6. A) Schematic illustration of the charge dynamics in the MS/CZIS/CS system; B,C) DMPO spin-trapping ESR spectra of the ZIS and CMO-3/ZIS catalysts in the dark and under light.
"year": 2024,
"sha1": "455334ee6f617ad76ef67ae16612a431beafc202",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1002/sstr.202300569",
"oa_status": "GOLD",
"pdf_src": "Wiley",
"pdf_hash": "dbdca7d7510e8b93ad88c56da7d170d992c045f1",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry",
"Engineering"
],
"extfieldsofstudy": []
} |
How should the completeness and quality of curated nanomaterial data be evaluated?
The design of novel nanomaterials with desirable properties and acceptable safety profiles, as well as the appropriate regulation of both new and existing nanomaterials, relies upon nanoscience researchers (both experimentalists and computational modellers), risk assessors, regulators and other relevant stakeholders having access to the necessary data and metadata.
These data should be sufficiently complete, including their associated metadata, and of acceptable quality to render them fit for their intended purpose e.g. risk assessment. However, defining what one means by data which are "sufficiently complete" and of "acceptable quality" is non-trivial in general and is arguably especially challenging for the nanoscience area.
The current paper is part of a series of articles 9,17 that address various aspects of nanomaterial data curation, arising from the Nanomaterial Data Curation Initiative (NDCI), where curation is defined as a "broad term encompassing all aspects involved with assimilating data into centralized repositories or sharable formats". 9 A variety of nanomaterial data resources, holding different kinds of data related to nanomaterials in a variety of formats, currently exist. Many of these were recently reviewed. 9,18,19 The number of nanomaterial data resources is expected to increase as a result of ongoing research projects. 4,19 An overview of the articles planned for the NDCI series was presented in Hendren et al. 9 At the time of writing, an article on curation workflows 17 was published and articles dedicated to curator responsibilities, data integration and metadata were at various stages of development. The current paper addresses the question of how to evaluate the degree to which curated nanomaterial data are "sufficiently complete" and of "acceptable quality". In order to address this central question, the current paper addresses a number of key issues: (1) what the terms data completeness and quality mean; (2) why these issues are important; (3) the specific requirements for nanomaterial data and metadata intended to support the needs of specific stakeholders; (4) how to most appropriately score the degree of completeness and quality for a given nanomaterial data collection. The abstract meaning of data completeness and quality in a range of relevant disciplines is reviewed and the importance of these concepts to the area of nanomaterial data curation is explained. An overview of existing approaches for characterising the degree of completeness and quality of (curated) nanomaterial data is presented, with a focus on those currently employed by curated nanomaterial data resources. Approaches to evaluating data completeness and quality in mature disciplines are also reviewed, with a view to considering how the relatively young discipline of nanoscience could learn from these disciplines. However, as is also discussed, there are specific challenges associated with nanomaterial data which affect assessment of their completeness and quality. Drawing upon the discussion of these issues, the current paper concludes with a set of recommendations aimed at promoting and, in some cases, establishing best practice regarding the manner in which the completeness and quality of curated nanomaterial data should be evaluated.
The snapshot of current practice, discussion of key challenges and recommendations were informed via a review of the published literature as well as responses to a survey distributed amongst a variety of stakeholders associated with a range of nanomaterial data resources. The survey and responses can be found in the ESI, † along with an overview of the nanomaterial data resources managed by these stakeholderswith a focus on how they address the issues related to data completeness and quality. The perspectives of individuals involved in a variety of nanomaterial data resources were captured via this survey. However, the resources for which respondents agreed to participate in this survey should not be seen as comprehensive. 9,18,19 For the purposes of the survey, the Nanomaterial Data Curation Initiative (NDCI) identified 24 data resources that addressed various nanomaterial data types: from cytotoxicity test results to consumer product information. Some of the identified resources were exclusively focussed on nanomaterial data, whereas others were broader databases holding some data for nanomaterials. Representatives of the 24 data resources were contacted by the NDCI and, in total, 12 liaisons, corresponding to nine (38%) of the 24 nanomaterial data resources, responded to the NDCI data completeness and quality survey. Some of the nine resources incorporated primary experimental data, whilst others were exclusively populated via literature curation. Some of these were in-house resources, whilst others were publicly available via the internet. The median experience of the survey respondents was 5 years in the nanomaterial data curation field, 10.5 years in the wider nanoscience field, and 5.5 years in the broader data curation field.
The rest of this paper is organised as follows. Section 2 reviews the meaning of data completeness and quality, in abstract terms, and then explains the importance of these issues in the context of nanomaterial data curation. Section 3 reviews existing proposals for characterising the completeness and quality of (curated) nanomaterial data. Section 4 reviews approaches for evaluating (curated) data completeness and quality which are employed in mature fields. Section 5 then discusses the key challenges associated with nanomaterial data which need to be taken into account when evaluating their completeness and quality. Section 6 presents the recommendations for evaluating curated nanomaterial data completeness and quality.
The meaning and importance of data completeness and quality
The importance of data completeness and quality is made clear by explaining what these concepts mean and their implications for a range of important issues. (Data completeness and quality are hereafter referred to as Key concept 1 and Key concept 3, with full descriptions presented in Tables 1 and 3, respectively.) The precise meanings of these concepts and the issues with which they are related are defined somewhat differently in the varied fields which are relevant to nanomaterial data curation e.g. informatics, toxicology and risk assessment. Nonetheless, it is possible to provide broad and flexible definitions which encompass a variety of perspectives.
Broad and flexible definitions of data completeness and quality are presented in Tables 1 and 3 respectively. These reflect the different and sometimes inconsistent definitions presented, either implicitly or explicitly, in the literature, during discussions amongst the co-authors and by respondents to the NDCI data completeness and quality survey. (The perspectives of the survey respondents are presented in the ESI. † Literature definitions of data completeness 9,20-24 and quality 9,20-23,25,26 are provided in ESI Tables S3 and S5 † respectively.) Section 6.1.1 proposes that more precise definitions be adopted by the nanoscience community. These more precise definitions are generally consistent with the definitions presented in Tables 1 and 3, but some issues incorporated into those broad and flexible definitions are deemed out of scope. However, the definitions provided in Tables 1 and 3 encompass the range of different perspectives encountered when preparing this paper. Hence, these definitions serve as a reference point for the purpose of reviewing existing approaches to evaluating data completeness and quality in sections 3, 4 and ESI S2. † The following discussion expands upon the broad and flexible definitions presented in Tables 1 and 3. The importance of these concepts for nanomaterial data curation, and the issues with which they are commonly associated, is explained with reference to the nanoscience literature.
Data completeness may be considered a measure of the availability of the necessary, non-redundant data and associated metadata for a given entity (e.g. a nanomaterial). (Some scientists consider the availability of "metadata" to be a separate issue to data completeness.) 20,21 The term "metadata" is broadly defined as "data which describes data" 27 or "data about the data". 28 Defining exactly what is meant by "data" as opposed to "metadata" is challenging. For example, physicochemical characterisation data may be considered metadata associated with a biological datum obtained from testing a given nanomaterial in some assay. 3 However, precisely delineating "data" and "metadata" lies beyond the scope of the current article. In this article, data and metadata are collectively referred to as "(meta)data".
Generally, data completeness assesses the extent to which experimental details are described and associated experimental results are reported. One means of assessing the degree of completeness compliance is to employ a minimum information checklist. (This concept is referred to hereafter as Key Concept 2 and a broad and flexible definition is presented in Table 2. Literature definitions 28,29 are presented in ESI Table S4. †) However, one may also draw a distinction between data which are truly complete and data which are compliant with a minimum information checklist. The checklist may simply specify the most important, but not the only important, (meta)data. For example, in the case of nanomaterial physicochemical characterisation, measurement of a large number of properties might be considered necessary for complete characterisation but not truly essential to achieve all study goals. These properties might be distinguished from "priority" or "minimum" properties which are "essential" to determine. 3

The degree of data completeness, insofar as this refers to description of the necessary experimental details and availability of (raw) data, needs to be evaluated in a range of different nanoscience contexts. Firstly, it impacts the extent to which data are, and can be verified to be, reproducible. 30-33 Reproducibility 32-34 is contingent upon the degree to which the tested nanomaterial is identified and the experimental protocols, including the precise experimental conditions, are described. 35 Given the context dependence of many properties which may identify nanomaterials, these two issues are interrelated. This is because nanomaterial identification, if based on physicochemical measurements, is not meaningful unless the corresponding experimental protocols are adequately described. 3,36-40 Providing sufficient (meta)data to ensure the nanomaterial being considered is identified, to the degree required, is also inherently important to achieve the goals of "uniqueness" and "equivalency". 41 Establishing "uniqueness" means determining that nanomaterial A is different from B. 41 Establishing "equivalency" means determining that nanomaterial A is, essentially, the same as B. 41 Achieving "uniqueness" allows so-called "conflicting" results to be resolved. 3 Achieving "equivalency" allows for data integration (e.g. to interrogate relationships between different kinds of data) using data reported for the same, or functionally equivalent, nanomaterial in different studies.

Table 1 Key concept 1: data completeness. Broad and flexible definition employed for reviewing prior work

The completeness of data and associated metadata may be considered a measure of the availability of the necessary, non-redundant (meta)data for a given entity e.g. a nanomaterial or a set of nanomaterials in the context of nanoscience. However, there is no definitive consensus regarding exactly how data completeness should be defined in the nanoscience, or wider scientific, community. 9,20-24 Indeed, metadata availability may be considered an issue distinct from data completeness. 20,21 Data completeness may be considered to include, amongst other kinds of data and metadata, the extent of nanomaterial characterisation, both physicochemical and biological, under a specified set of experimental conditions and time points. It may also encompass the degree to which experimental details are described, as well as the availability of raw data, processed data, or derived data from the assays used for nanomaterial characterisation. Data completeness may be considered to be highly dependent upon both the questions posed of the data and the kinds of data, nanomaterials and applications being considered. Data completeness may be defined in terms of the degree of compliance with a minimum information checklist (Table 2). However, when estimating the degree of data completeness, it should be recognised that this will not necessarily be based upon consideration of all independent variables which determine, say, a given result obtained from a particular biological assay. This is especially the case when data completeness is assessed with respect to a predefined minimum information checklist (Table 2). Precise definitions of completeness may evolve in tandem with scientific understanding.
Physicochemical characterisation also assists with explaining observed differences in (biological) effects. 3 Indeed, it facilitates the development of computational models for (biological) activity, based on the physicochemical properties as explanatory variables. Modelling of nanomaterial effects may entail the development of nanomaterial quantitative structure-activity relationships (QSARs), termed "nano-QSARs", 42 nanoscale structure-activity relationships ("nanoSARs") 43 or quantitative nanostructure-activity relationships ("QNARs"), 44 or "grouping" and "read-across" predictions for nanomaterial biological activity. 44,45 Reporting of the experimental details associated with the generation of a given biological or physicochemical measurement facilitates assessment of whether data from different sources might be combined for modelling, given the potential trade-off between dataset size and heterogeneity. 46,47 Data quality may be considered a measure of the potential usefulness, clarity, correctness and trustworthiness of data. Some data quality assessment proposals 23,35,48 may talk interchangeably about the quality of data, datasets (or "data sets"), studies and publications. However, subsets of data from a given source (e.g. a dataset, study report or journal article) may be considered to be of different quality, depending upon exactly how data quality is defined and assessed. 49 For example, the cytotoxicity data reported in a publication might be considered of different quality compared to the genotoxicity data. As another example, the data obtained for a single nanomaterial using a single assay might be considered of higher quality than the data obtained for a different nanomaterial and/or assay.
Whilst the quality of individual data points is an important issue, data points whichviewed in isolationmay be considered of insufficient quality to be useful may possibly be useful when used in combination with other data. For example, toxicity data which are evaluated as less reliable might be combined via a "weight-of-evidence" approach. 35 As another example, in the context of statistical analysis, large sample sizes may partially offset random measurement errors. 50 However, the importance of the reliability of the original data which are to be combined cannot be overlooked in either context. 23,50 According to some definitions, data quality may be partly assessed based upon the relevance of the data for answering a specific question. 27,48 Similarly, data completeness may also be considered highly context dependent. Here, the specific context refers to the kinds of data, the kinds of nanomaterials, the kinds of applications and the kinds of questions that need to be answered by a particular end user of the data. In other words, the degree to which the data are complete may be contingent upon "the defined [business] information demand". 27 Table 2 Key concept 2: minimum information checklist. Broad and flexible definition employed for reviewing prior work Minimum information checklists might otherwise be referred to as minimum information standards, minimum information criteria, minimum information guidelines or data reporting guidelines etc. 28,29 These checklists define a set of data and metadata which "should" be reportedif availableby experimentalists and/or captured during data curation. Again, the precise set of data and metadata which "should" be reported may be considered to be highly dependent upon both the questions posed of the data and the kinds of data, nanomaterials and applications being considered. There are two possible interpretations of the purpose of these checklists: (1) they should be used to support assessment of data completeness (Table 1); (2) data should be considered unacceptable if they are not 100% compliant with the checklist. Data quality may be considered a measure of the potential usefulness, clarity, correctness and trustworthiness of data and datasets. However, there is no definitive consensus regarding exactly how data quality should be defined in the nanoscience, or wider scientific, community. 9,[20][21][22][23]25,26 Data quality may be considered dependent upon the degree to which the meaning of the data is "clear" and the extent to which the data are "plausible". 48 In turn, this may be considered to incorporate (aspects of) data completeness (Table 1). For example, data quality may be considered 23 to be (partly) dependent upon the "reproducibility" of data [31][32][33][34] and the extent to which data are reproducible and their reproducibility can be assessed will partly depend upon the degree of data completeness in terms of the, readily accessible, available metadata and raw data. 30,35 As well as "reproducibility", data quality may be considered to incorporate a variety of related issues. These issues include systematic and random "errors" in the data, 32,33 data "precision" (which may be considered 33 related to notions such as "repeatability" [32][33][34][35] or "within-laboratory reproducibility"), 33 "accuracy" and "uncertainty". 20,23,25,27,32,33,35,[51][52][53][54][55] (As indicated by the cited references, different scientists may provide somewhat different definitions for these concepts. 
These concepts may be considered in a qualitative or quantitative sense.) Data quality may also be considered to be dependent upon the "relevance" of the data for answering a specific question, although data "relevance" might be considered an entirely distinct issue from data quality. 23,48 In the context of data curation, not only the quality of the original experimental data but also quality issues associated with the curated data need to be considered. Quality considerations associated with curation include the probability of transcription errors 56 and possibly 57 whether a given dataset, structured according to some standardised format (e.g. XML based), 58 was compliant with the rules of the applicable standardised format (e.g. as documented via an XML schema). 59 Such compliance, amongst other possible aspects of data quality, could be determined using validation software.
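As flagged in the weight-of-evidence discussion above, the following toy simulation, with entirely invented numbers, illustrates why large sample sizes partially offset random, but not systematic, measurement error: the standard error of the mean shrinks as 1/sqrt(n).

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 50.0   # hypothetical "true" particle size, nm
noise_sd = 10.0     # standard deviation of random measurement error, nm

# Empirically, the standard error of the mean of n noisy measurements tracks
# the theoretical value noise_sd / sqrt(n): pooling measurements partially
# offsets random error, though it cannot correct systematic bias.
for n in (3, 30, 300):
    means = [rng.normal(true_value, noise_sd, n).mean() for _ in range(10_000)]
    print(f"n={n:3d}  empirical SE = {np.std(means):.2f}  "
          f"theoretical SE = {noise_sd / n**0.5:.2f}")
```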
None of the preceding discussion addresses the key question of how exactly to evaluate data completeness or quality for (curated) nanomaterial data. This question will be addressed in subsequent sections of the current paper.
Existing proposals for evaluating nanomaterial data completeness and quality
A plethora of proposals has been presented for assessing data completeness and quality in the nanoscience area. Because it would not be practical to comprehensively list and discuss all existing proposals in the current work, the following discussion (sections 3.1 and 3.2) aims to be illustrative of the different proposals which have been developed, with an emphasis on the most recent proposals and on those employed by the maintainers of specific curated nanomaterial data resources. Examples are taken from the published literature as well as the responses to the survey which informed the current article. A summary of the evaluation schemes, if any, employed by each of the data resources represented by the respondents to the survey is provided in the ESI. †
An overview of nanomaterial data completeness proposals
Considerable attention has been paid to identifying the minimum set of physicochemical parameters for which it is anticipated that nanomaterials with similar values for these parameters would exhibit similar effects in biological (e.g. toxicological) tests or clinical studies. 3 Here, "physicochemical parameters" refers to the characteristics/properties relevant for the description of a nanomaterial, such as chemical composition, shape, size and size distribution statistics. A number of lists exist, including the well-known MINChar Initiative Parameters List, proposed in 2008. 60 Earlier efforts to provide minimum characterisation criteria for nanomaterials included the work carried out by the prototype Nanoparticle Information Library (NIL). 61–63 The prototype NIL was developed in 2004 to illustrate how nanomaterial data could be organised and gave examples of what physicochemical parameters, along with corresponding information regarding synthesis and characterisation methodology, might be included for nanomaterial characterisation (see the ESI † for further details).

In 2012, Stefaniak et al. identified and carefully analysed 28 lists (published between 2004 and 2011) which proposed "properties of interest" (for risk assessment), from which 18 lists of "minimum" (or, in their terms, "priority") properties were discerned. 3 These authors summarised the properties found on these lists and the corresponding frequency of occurrence across all lists. Other lists 39,64–69 of important physicochemical parameters have been published subsequent to the analysis of Stefaniak et al. 3

Arguably, within nanoscience, less attention 70 has been paid to the question of which additional experimental details (e.g. the cell density, 71 number of particles per cell, 72 cell line used, passage number used or exposure medium constituents 73,74 in cell-based in vitro assays) need to be recorded. It is important to note that many of the physicochemical characteristics which define the identity of a nanomaterial are highly dependent upon experimental conditions such as the pH and biological macromolecules found in the suspension medium. 36,39,40 Nonetheless, some lists which specify key experimental details that should be reported (in addition to key physicochemical parameters) do exist. 3,60,64,66,75,76 Indeed, some lists focused on the minimum physicochemical parameters which should be reported also suggest that certain experimental conditions such as "particle concentration" 3 and "media" 60 should be reported. (Here, the potential ambiguity as to what is considered a physicochemical parameter for a nanomaterial sample and what is considered an experimental condition should be noted: "particle concentration" 3 and "pH" 77 may be considered either as physicochemical properties or as important experimental conditions.) 36 Other proposals, such as the caNanoLab data availability standard, 78 go further and stipulate that other (meta)data, such as characterisation with respect to specific biological endpoints, should be made available.
Key international standards bodies, the Organisation for Economic Co-operation and Development (OECD) and the International Standards Organisation (ISO), have also made recommendations regarding physicochemical parameters and other experimental variables which should be reported for various kinds of experimental studies of nanomaterials. 79–85 Notable reports include the "Guidance Manual for the Testing of Manufactured Nanomaterials: OECD Sponsorship Programme", 80 which stipulated the physicochemical parameters and biological endpoints to be assessed as part of the OECD's "Safety Testing of a Representative Set of Manufactured Nanomaterials" project, and a guidance document on sample preparation and dosimetry, 81 which highlights specific experimental conditions, associated with stepwise sample preparation for various kinds of studies, that should be reported.
Many of the proposals cited above are not associated with a specific curated nanomaterial data resource, although some which were intended as recommendations for experimentalists (e.g. the MINChar Initiative Parameters List) 60 have been used as the basis for curated data scoring schemes. 78 Examples of proposals which are specifically used as the basis of a scoring scheme, partly or wholly based upon data completeness, for curated nanomaterial data include those employed by the Nanomaterial Registry, 39,86,87 caNanoLab 78 as well as the MOD-ENP-TOX and ModNanoTox projects (see ESI †).
Some proposals draw a distinction between broader completeness criteria (see Table 1) and what may be considered "minimum information" criteria (see Table 2). For example, within the MOD-ENP-TOX project (see ESI †), a set of minimum physicochemical parameters were required to be reported within a publication in order for it to be curated: composition, shape, crystallinity and primary size. Additional physicochemical parameters (such as surface area) were deemed important for the data to be considered complete. This is in keeping with many proposals reviewed by Stefaniak et al., 3 which drew a distinction between "properties of interest" and "minimum" (or "priority") properties, as well as with publications proposing increasing characterisation requirements within a tiered approach to nanosafety assessment. 67,68

Some proposals have also stressed the context dependence of completeness definitions. For example, the ModNanoTox project proposed (see ESI †) that certain physicochemical parameters and experimental metadata were only relevant for certain kinds of nanomaterials: crystal phase was considered crucial for TiO2 nanoparticles but less important for CeO2 nanoparticles, in keeping with an independent review of the literature emphasising the importance of crystal phase data for TiO2 nanomaterials specifically. 68 Recent publications have also stressed that characterisation requirements depend upon the type of nanomaterials studied and should otherwise be relevant for the specific study. 68,88,89

Indeed, in contrast to the proposals discussed above, which define specific (meta)data requirements, the developers of the Center for the Environmental Implications of NanoTechnology (CEINT) NanoInformatics Knowledge Commons (CEINT NIKC) data resource 90–92 have proposed that data completeness be calculated on a use-case-specific basis, i.e. with respect to the (meta)data which a given database query aims to retrieve. For example, a researcher interested in the die-off rate of fish due to nanomaterial exposure would need mortality data at multiple time points, whereas a researcher interested in mortality after, say, one week would only need data at a single time point.
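The use-case-specific notion of completeness just described can be sketched in a few lines of code: a record's completeness score is the fraction of the fields required by a particular query which are actually populated. This is only an illustrative sketch; the field names, use cases and record below are hypothetical and are not drawn from the CEINT NIKC schema.

```python
# Use-case-specific completeness: score a curated record against only the
# (meta)data fields which a given query/use case actually requires.
# Field names and use cases are illustrative, not taken from any real schema.
REQUIRED_FIELDS = {
    "fish_mortality_timecourse": ["material_id", "exposure_medium",
                                  "mortality_t24h", "mortality_t48h",
                                  "mortality_t168h"],
    "fish_mortality_one_week":   ["material_id", "exposure_medium",
                                  "mortality_t168h"],
}

def completeness(record: dict, use_case: str) -> float:
    required = REQUIRED_FIELDS[use_case]
    present = sum(1 for field in required if record.get(field) is not None)
    return present / len(required)

record = {"material_id": "NM-101", "exposure_medium": "moderately hard water",
          "mortality_t168h": 0.35}                       # hypothetical record
print(completeness(record, "fish_mortality_timecourse"))  # 0.6
print(completeness(record, "fish_mortality_one_week"))    # 1.0
```

The same record thus scores as incomplete for one query but fully complete for another, which is exactly the context dependence argued for above.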
An overview of nanomaterial data quality assessment proposals
Various schemes for scoring/categorising nanomaterial data (in part) according to their quality have been proposed in recent years. Because data completeness (see Table 1) and quality (see Table 3) may be considered highly interrelated, a number of these schemes are strongly based upon consideration of (meta)data availability. One of the simplest schemes, presented by Hristozov et al., 93 assessed the reliability of toxicity data in nanomaterial databases based purely upon the availability of basic provenance metadata: data were considered "unusable", or "unreliable", where a result from a study was not accompanied by a "properly cited reference". Significantly more sophisticated schemes exist which take into account the availability of a variety of additional (meta)data, such as certain physicochemical data and experimental details concerning biological assay protocols. One such sophisticated scheme is the iteratively developed DaNa "Literature Criteria Checklist", 75,76 used to assess the quality of a given published study concerning a given nanomaterial for the purpose of preventing low quality scientific findings from being integrated within the DaNa knowledge base. 94–96

Indeed, some existing nanomaterial quality proposals go beyond merely considering data completeness and are also concerned with whether the experimental protocols were carried out appropriately. For example, Lubinski et al. 47 proposed an extension of the Klimisch framework 48 for evaluating the reliability of nanotoxicology, or nano-physicochemical, data, which was considered, in part, to depend upon compliance with Good Laboratory Practice (GLP) 97 and standardised test protocols. Other assessment schemes, such as the scheme employed by the DaNa 75,76,94–96 project (see ESI †), take account of whether biological results were affected by assay interference. 98–107 Indeed, application of the DaNa "Literature Criteria Checklist" 75,76 entails making a range of judgements regarding the quality of the nanomaterial data which go beyond mere consideration of data completeness (see ESI †). Likewise, Simkó et al. proposed a range of criteria for evaluating in vitro studies, including clearly specified criteria for the statistical "quality of study". 108

Some, but not all, proposals for quality assessment of nanomaterial data have sought to assign a categorical or numeric score to express the quality of the nanomaterial data. One such scheme, which assigns a qualitative score, was proposed by Lubinski et al. 47 Likewise, the "Data Readiness Levels" scheme proposed by the Nanotechnology Knowledge Infrastructure (NKI) Signature Initiative 51 assigns any kind of data (i.e. not necessarily data generated for nanomaterials) to one of seven ranked categories denoting their "quality and maturity". In contrast, the following schemes assign numeric quality scores and were specifically designed to evaluate nanomaterial data curated into a specific data resource. The Nanomaterial Registry 109,110 assigns normalised, numeric "compliance" scores to each nanomaterial record in the database based upon its associated measurements, corresponding to the physicochemical characteristics specified in the "minimal information about nanomaterials (MIAN)", which are designed to capture the "quality and quantity" of the physicochemical characterisation performed for that nanomaterial. 39,86,87 The MOD-ENP-TOX and ModNanoTox curated nanomaterial data resources also developed quality scoring schemes which assign numeric scores (see ESI †).
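A normalised compliance score of the kind just described can be illustrated as follows. In this sketch, each expected physicochemical characteristic carries a weight, and the score is the fraction of the total weight covered by available measurements. The field list and weights are invented for illustration and do not reproduce the MIAN scheme or any other specific scoring system.

```python
# Sketch of a normalised "compliance"-style score: each expected
# physicochemical characteristic contributes a weight, and the score is the
# fraction of total weight covered by available measurements.
# Fields and weights are illustrative only.
WEIGHTS = {"composition": 3, "primary_size": 3, "shape": 2,
           "surface_charge": 2, "surface_area": 1, "crystallinity": 1}

def compliance_score(measurements: dict) -> float:
    covered = sum(weight for field, weight in WEIGHTS.items()
                  if measurements.get(field) is not None)
    return covered / sum(WEIGHTS.values())

print(compliance_score({"composition": "TiO2", "primary_size": 21.0,
                        "shape": "spherical"}))  # 8/12, i.e. ~0.67
```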
One notion of data quality (see Table 3) might be based on validation of dataset files, according to their data content or compliance with format specifications, using specialist software tools. (This is further discussed in section 4, with examples from mature fields.) In the nanoscience area, the validation tools 111 developed within the MODERN E.U. FP7 project, 112 used to validate ISA-TAB-Nano datasets based on their compliance with the ISA-TAB-Nano specification, 113–115 were, to the best of the authors' knowledge, the only such tools available at the time of writing which were specifically developed for validating curated nanomaterial datasets.
Lessons which can be learned from mature fields
In order to improve the means via which the completeness and quality of (curated) nanomaterial data are currently evaluated, it is worth considering the lessons which may be learned from "mature" fields.
A variety of different minimum information checklists or reporting guidelines (see Table 2) have been proposed in different areas of the life sciences. These are increasingly being used by publishers to assess the suitability of submitted publications. 116–118 The seminal Minimum Information About a Microarray Experiment (MIAME) reporting guidelines were proposed over a decade ago to describe the minimum information required for microarray data to be readily interpreted and for results obtained from analysis of these data to be independently verified, 116,119 which may be achieved if the results are reproducible. In under a decade, this standard was widely accepted and most scientific journals adopted these guidelines as a requirement for publication of research in this area, with authors being obliged to deposit the corresponding MIAME-compliant microarray data in recognised public repositories. 116 A variety of similar guidelines 116 were subsequently developed for other life science technologies (e.g. proteomics) 120 or studies (e.g. toxicology 121 and molecular bioactivity studies). 122

The BioSharing project and online resource, 123–126 originally founded as the MIBBI Portal in 2007, 28 serves to summarise proposed "reporting guideline" standards and to promote their development and acceptance. Clearly, the BioSharing online resource might be used to link to the various minimum information checklists that have been (implicitly) developed within the nanoscience domain (see section 3.1), thereby raising awareness of them and facilitating their comparison and further development. It is also possible that some of the recommendations made regarding experimental (meta)data in the (non-nanoscience specific) reporting guidelines linked to via the BioSharing website may also be applicable to (specific sub-domains of) the nanoscience area.
The Standard Reference Data Program of the U.S. National Institute of Standards and Technology (NIST) 127 has supported the evaluation of data in many areas of science and technology. Typically, data are not only curated but also evaluated from three perspectives: documentation of the identification and control of the independent variables governing a measurement; the consistency of measurement results with the laws of nature; and comparison with similar measurements. Over the years it has become clear that, as new phenomena are identified and measured, it takes years, if not decades, to truly identify and understand how to control a measurement. Consequently, initial experiments produce data that primarily provide guidance for future experiments rather than being recognised as definitive properties. Feedback from the evaluation efforts to the experimental community is critical for improving the quality of data.
Chirico et al. 53 recently described how NIST data resources and computational tools can be and are being used to improve the quality of thermophysical and thermochemical data submitted for publication within the context of a collaborative effort between NIST and five key journals.
Because uncertainty may be considered a key aspect (Table 3), or even the key aspect, 25,52 of data quality evaluation, the approaches to characterising uncertainty proposed by ISO, 25,52 NIST 32 and SCENIHR 23 merit consideration.
The concept of data quality has received considerable attention within the toxicology and risk assessment communities, and a number of proposals for assessing the quality of data, studies or publications have been published. 23,48,128–132 A number of these were reviewed by Ågerstrand et al. 133 and Przybylak et al. 49 Arguably the most well-known is the framework proposed by Klimisch et al. 48 for categorising the reliability (see ESI Table S5 † literature definition 3.4) of toxicology data, or of a toxicology study test report or publication. The Klimisch categories are widely employed within regulatory toxicology. 24,49,132,134

Since the original work of Klimisch et al. 48 lacked detailed criteria for assigning their proposed reliability categories, the ToxRTool program 131,135 was proposed as a means of improving the transparency and consistency with which these categories were assigned. The program assigns a reliability category based upon the score obtained after answering a set of "yes/no" questions. However, it is interesting to note that neither GLP nor test guideline compliance is explicitly considered by the ToxRTool when assessing reliability (although these issues are considered when evaluating "relevance"), even though these were deemed key indicators of reliable data in the original work of Klimisch et al. 48

Recently, an extension to the ToxRTool program was developed by Yang and coworkers. 136 Their approach took the following issues into account: (1) an assessor might feel that a given ToxRTool criterion was only partially met, rather than it being possible to simply answer "yes/no" for that question; (2) an assessor might be unsure of the most appropriate answer to a given question. Hence, their approach, based on fuzzy arithmetic, allows toxicity data to be assigned to multiple reliability categories with different degrees of satisfaction.
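The flavour of such checklist-based reliability scoring, including partial credit in the spirit of the fuzzy extension just described, can be sketched as follows. The criteria and category thresholds here are invented for illustration and do not reproduce the actual ToxRTool questions or cut-offs; only the category names echo the Klimisch scheme.

```python
# Checklist-based reliability scoring with partial credit: each criterion is
# answered in [0, 1] rather than strictly yes (1) or no (0), echoing the
# fuzzy extension discussed above. Criteria and thresholds are illustrative.
CRITERIA = ["substance_identified", "purity_reported", "doses_reported",
            "controls_included", "statistics_appropriate"]

def reliability(answers: dict) -> str:
    score = sum(answers.get(criterion, 0.0) for criterion in CRITERIA) / len(CRITERIA)
    if score >= 0.8:
        return "reliable without restriction"
    if score >= 0.5:
        return "reliable with restrictions"
    return "not reliable"

answers = {"substance_identified": 1.0, "purity_reported": 0.5,
           "doses_reported": 1.0, "controls_included": 1.0,
           "statistics_appropriate": 0.5}
print(reliability(answers))  # score 0.8 -> "reliable without restriction"
```

A fully fuzzy treatment would go further and assign the data to several categories simultaneously, each with its own degree of membership; the sketch above keeps only the partial-credit aspect.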
Consideration of these different approaches to evaluating data quality raises some important questions which arguably need to be taken into account when designing a scheme for assessing the quality of nanosafety data or, where applicable, nanoscience data in general.
1. To what extent should quality be assessed on the basis of considering data completeness, as opposed to making judgements regarding the data such as the "soundness and appropriateness of the methodology used" 23 or, equivalently, whether or not a method was "acceptable"? 48

2. More specifically, should data be considered most reliable 48 when they were generated according to Good Laboratory Practice (GLP), 97 or some other "audited scheme", 23 and according to standardised test protocols, 133 such as those presented in OECD Test Guidelines or by ISO? The appropriateness of adherence to standardised test protocols is especially relevant for testing of nanomaterials (see section 5.11). It may also be argued that, even for conventional chemicals, data which were not generated according to standardised test protocols and/or GLP are not necessarily less reliable. 48,132,137

3. To what extent should a data quality assessment scheme be prescriptive, as opposed to allowing for flexibility based upon expert judgement? Whilst a scheme which is more prescriptive offers the advantage of promoting transparency and consistency 23,131 in the assigned quality scores (or categories), flexibility based upon allowing for expert judgement may still be necessary. 23

4. Should the outcome of the quality assessment be expressed numerically? Beronius et al. 132 have argued that this risks implying an undue level of scientific certainty in the final quality assessment. However, using a qualitative scheme based on certain criteria being met in order for data to be assigned to a particular category would fail to assign partial credit to data meeting a subset of those criteria. Furthermore, as illustrated by the ToxRTool approach, 131,135 a numeric score might be mapped onto a qualitative category for ease of interpretation.
5. How can the community best characterise uncertainty to provide a clearer understanding of data quality?
The preceding discussion concerns proposals which might be applied by a human expert for the purposes of assessing data completeness and quality in various domains. In principle, where these schemes are sufficiently prescriptive, they could be applied programmatically, i.e. via parsing a structured electronic dataset or database using specialist software, rather than relying on subjective expert judgement.
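As a hedged illustration of what such programmatic application might look like, the following sketch encodes a few prescriptive completeness and plausibility rules as machine checks over a curated record. The field names, rules and thresholds are hypothetical; a real validator would encode the rules of a specific scheme or format specification.

```python
# Programmatic application of prescriptive completeness/quality rules:
# each check inspects a curated record (here, a plain dict) and reports a
# human-readable problem. The specific rules shown are illustrative only.
def check_record(record: dict) -> list[str]:
    problems = []
    # Completeness-style rule: required fields must be present and non-null.
    for field in ("material_id", "assay", "endpoint_value"):
        if record.get(field) in (None, "", "null"):
            problems.append(f"required field '{field}' is missing or null")
    # Quality-style rule: values must fall within a plausible range.
    size = record.get("primary_size_nm")
    if size is not None and not (0 < size < 1000):
        problems.append(f"primary_size_nm = {size} is outside the plausible range")
    return problems

print(check_record({"material_id": "NM-x", "assay": "MTT",
                    "endpoint_value": None, "primary_size_nm": -5}))
```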
Indeed, various validation software programs have been developed to validate electronic datasets, based on standardised file formats, according to a range of criteria. For example, validation programs have been developed to validate different kinds of biological (meta)data reported in XML-based 58,59,138 or ISA-TAB 139,140 formats and, more specifically, raw sequence and sequence alignment data 141–144 reported in FastQ 142–144 or Binary Alignment/Map (BAM) format. 145 Validation software 146,147 was also developed for crystallographic data reported in the crystallographic information file (CIF) format. 148

As well as checking format compliance, some of these validation programs may also be used to enforce compliance with (implicit) minimum information checklists. 138,149 For example, The Cancer Genome Atlas (TCGA) 150 validation software checks certain fields to ensure they are not "null" (unknown) or missing, as well as carrying out various other data quality checks for errors and inconsistencies. 138 Software used to validate sequence data may carry out data quality assessment via calculating a variety of metrics, including those which are indicative of different kinds of possible errors/biases/artefacts generated during measurement/analysis or of possible contamination of the analysed samples. 142–144

All of these software programs are potentially relevant to automatically validating nanomaterial characterisation and/or biological data. The ISA-TAB format 151–153 was recently extended via the development of ISA-TAB-Nano 113–115 to better capture nanomaterial (meta)data, so the ISA-Tools 139,140 software might be extended to validate ISA-TAB-Nano datasets. (As is discussed in section 3.2, some software for validating ISA-TAB-Nano files already exists.) 111,115 Validation software for CIF files is arguably of particular relevance to building quantitative structure-activity relationships (QSARs), or quantitative structure-property relationships (QSPRs), for nanomaterials. Crystallographic data have been used to calculate descriptors for nano-QSAR (or nano-QSPR) models of inorganic oxide nanoparticle activities (or properties) in various recent studies. 42,154,155

Key challenges

Important challenges are associated with nanomaterial data which need to be taken into account when evaluating their completeness and quality. To some extent, a number of these issues are taken into account in a subset of the existing proposals for evaluating nanomaterial data (see section 3). Other challenges relate to limitations of (some of) these existing evaluation proposals. The key challenges are summarised in Table 4 and explained in the remainder of section 5.
Uncertainty regarding the most biologically significant variables
A key challenge associated with defining minimum information criteria for nanomaterials is that the current understanding of the independent variables, such as nanomaterial physicochemical properties and other experimental variables, which contribute most significantly to the variability in the outputs of biological assays is arguably insufficient. 3,41,68–70,89,105,156 Understanding which of the physicochemical properties are most correlated to biological effects is hampered by the dependence of many of these properties on experimental conditions (section 5.2), time (section 5.3), dosimetry uncertainty (section 5.4), possible redundancy in physicochemical data (section 5.5), the potential for artefacts in biological studies related to the presence of nanomaterials (section 5.9) and possible confounding factors (section 5.10).

Table 4 The key challenges which impact completeness and quality evaluations of (curated) nanomaterial data

Challenge no. Brief description
5.1 Uncertainty regarding the most biologically significant variables
5.2 Dependence of many physicochemical properties on experimental conditions
5.3 Potential time dependence of physicochemical properties
5.4 Problems expressing dosimetry in biological assays
5.5 Possible redundancy in physicochemical data
5.6 Batch-to-batch variability of nanomaterials
5.7 Context dependency of (meta)data requirements
5.8 Lack of clarity in some existing checklists
5.9 Artefacts in biological studies related to nanomaterials
5.10 Misinterpretations in biological studies
5.11 Uncertainty regarding standardised test guidelines
5.12 Reduced relevance of some standard assays
5.13 Problems with analysis of environmental samples
Dependence of many physicochemical properties on experimental conditions
Many, but not necessarily all, physicochemical parameters may change significantly depending upon the dispersion (suspension) medium and any additives (e.g. dispersant aids), 37 i.e. many physicochemical characterisation data obtained under pristine conditions (e.g. dispersed in water) may differ greatly from those determined for the nanomaterial dispersed in the medium, plus additives, used for biological testing. 36–40,157 This variability makes it difficult to find correlations between the physicochemical properties and the outcome of biological assays. No straightforward relationship can be expected to exist when these properties are measured under pristine conditions, or conditions which otherwise differ from biologically relevant conditions, even if a simple correlation exists when the physicochemical properties are measured under biologically relevant conditions. For example, a recent study found that the positive zeta potential values measured in physiological saline (pH 5.6) exhibited good linear correlation with acute lung inflammogenicity, but not the negative values measured in more basic (pH 7.4) media. 157 Other experimental conditions which may significantly affect physicochemical properties include sample processing details such as sonication steps. 37

As well as making it harder to discern which physicochemical parameters are most important to measure and document, this challenge has the following implications for data completeness. Firstly, a careful description of the various factors which could affect physicochemical properties is required 36,38,40,81 in order to establish "uniqueness" and "equivalency" 41 based upon physicochemical characterisation. Secondly, measurement of many physicochemical characteristics under biologically relevant conditions, as is considered best practice, 38 should assist with explaining biological results or developing structure-activity relationships.
Potential time dependence of physicochemical properties
Many nanomaterial characteristics may change over time, depending upon their environment and processing protocols, such as their state of agglomeration, 40,81 their "corona" 158–160 of adsorbed (biological) molecules 40,161 and even primary particle characteristics such as chemical composition (e.g. via dynamic speciation) 162,163 or morphology. 37 Some of these changes may be reversible, 159,164 whilst other processes may give rise to irreversible transformations or "aging" 165 ("ageing"). 166 These time dependent changes in physicochemical properties can give rise to changes in their biological effects. 166

The first implication for data completeness is that temporal metadata, 166 along with corresponding processing (e.g. sonication) 37 and storage history 166 details, are important to capture. Secondly, because "ageing" may have transformed the physicochemical characteristics responsible for biological activity, data for biological studies of nanomaterials might not be considered complete if key physicochemical characteristics were not measured at time points corresponding to biological testing. 166
Problems expressing dosimetry in biological assays
The most appropriate dose metric to use in biological studies of nanomaterials is unclear and may depend upon the kind of nanomaterial being considered. 167 Nonetheless, it is generally accepted 77,81,167,168 that mass based concentrations and doses are less appropriate and that dose metrics based on the total surface area or number of particles should be considered: the use of mass based concentration units may give misleading indications as to the rank order of toxicity for different nanomaterials. 77 Thus, the use of an inappropriate dose (or concentration) metric may be considered to adversely affect the clarity, hence the quality (see Table 3), of nanomaterial biological data. Since additional physicochemical data are required for conversion of the mass based concentration (or dose) units (e.g. surface area measurements or density measurements, depending upon the approach employed), 36,77,81,168 this issue also has implications for the minimum information criteria which might be proposed for nanomaterial data. N.B. Different approaches for estimating surface area based dose units, based upon different physicochemical measurements, have distinct advantages and disadvantages: geometric estimates of surface area may be based upon simplistic assumptions regarding particle geometry and fail to take account of porosity, whilst surface area measurements under dry conditions may not reflect the accessible surface area under biological conditions. 36,168

An additional problem is that the nominal, administered concentration (or dose) may not correspond to the concentration (or dose) delivered to the site of biological action. 101,168–170 Hence, additional data completeness considerations for aquatic toxicity tests include measurements of exposure levels over the course of the experiment, and data quality concerns arise regarding whether the experimental methods employed to quantify nanomaterials in complex media are appropriate (see section 5.13). 101,169
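To make the conversion issue concrete, the following sketch converts a mass concentration into particle-number and surface-area concentrations using a geometric estimate. It assumes spherical, monodisperse, non-porous particles of known density; as noted above, these are strong simplifications, and the numbers used are purely illustrative.

```python
import math

def mass_to_number_and_surface(mass_conc_ug_ml: float,
                               diameter_nm: float,
                               density_g_cm3: float):
    """Convert a mass concentration to particle-number and surface-area
    concentrations, assuming spherical, monodisperse, non-porous particles.
    These assumptions rarely hold exactly, which is one reason mass-based
    metrics alone are considered insufficient."""
    d_cm = diameter_nm * 1e-7                                  # nm -> cm
    particle_mass_ug = density_g_cm3 * (math.pi / 6) * d_cm**3 * 1e6  # g -> ug
    number_per_ml = mass_conc_ug_ml / particle_mass_ug
    surface_cm2_per_ml = number_per_ml * math.pi * d_cm**2
    return number_per_ml, surface_cm2_per_ml

# e.g. 10 ug/mL of hypothetical 25 nm particles with density ~4.2 g/cm3
# gives roughly 3e11 particles per mL and ~6 cm2 of surface per mL.
print(mass_to_number_and_surface(10.0, 25.0, 4.2))
```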
Possible redundancy in physicochemical data
As discussed in section 5.4, different kinds of physicochemical data may be required to estimate surface area based dose units, depending upon the approach employed, i.e. this is one source of potential redundancy in physicochemical characterisation requirements. However, as is also discussed in section 5.4, even ignoring other rationales for obtaining the same physicochemical data, the different strengths and weaknesses of alternative surface area based dosimetry approaches mean that these data cannot be said to be completely interchangeable. The interrelatedness of nanomaterial physicochemical properties 44,68,154 also means that, in principle, extensive lists of "essential" properties 3 may call for excessive characterisation that is a burden for both experimentalists and curators. However, the degree of interrelatedness between physicochemical properties may not mean that some properties are entirely interchangeable and, furthermore, the relationships between different properties, especially if measured under different conditions, are arguably hard to discern. 68 Indeed, investigating which properties correlate might be hampered by synthesis challenges 5 which may be associated with producing systematically varied nanomaterial libraries. 171

Given the lack of complete interchangeability and the problems associated with determining correlations in physicochemical properties, reducing the necessary physicochemical characterisation data based on potential redundancy remains a challenge. Furthermore, a challenge which arises as a consequence of these correlations is that it may be difficult to interpret the effect of changing a given property upon biological activity (and, hence, the importance of measuring that property) without this being confounded by variation in other physicochemical parameters.
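One simple way to probe the interrelatedness discussed above is to screen a curated physicochemical table for strongly correlated property pairs. The sketch below does this with pandas; the input file, column names and cut-off are hypothetical, and, as the text cautions, a high correlation observed under one set of conditions does not by itself justify dropping a measurement.

```python
import pandas as pd

# Screen a curated physicochemical table for strongly correlated properties.
# The input file and columns are hypothetical placeholders.
cols = ["primary_size_nm", "surface_area_m2_g",
        "zeta_potential_mV", "solubility_mg_l"]
props = pd.read_csv("physchem_table.csv")[cols]

# Spearman rank correlation is robust to monotonic non-linearity.
corr = props.corr(method="spearman")

# Flag property pairs whose absolute correlation exceeds a (hypothetical) cut-off.
strong_pairs = [(a, b, round(corr.loc[a, b], 2))
                for i, a in enumerate(cols) for b in cols[i + 1:]
                if abs(corr.loc[a, b]) > 0.9]
print(strong_pairs)
```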
Batch-to-batch variability of nanomaterials
The issue of batch-to-batch variability, i.e. variability in the properties of nominally identical nanomaterials obtained via repetitions of nominally the same synthesis, is a key challenge which is particularly significant for industrially produced nanomaterials. 5,38,172 The implications for data completeness are arguably that the batch identity of a given nanomaterial (as denoted via its "batch identifier", 38,173 "lot number" 38,173 or "manufacturer lot identifier") 174 should be documented, to establish nanomaterial "equivalency", 41 even for nanomaterials which are nominally the same, e.g. which have the same trade name. However, since not all nanomaterial synthesis procedures may exhibit the same degree of batch-to-batch variability, 5,38,172 the importance of these metadata may depend upon the kind of synthesis procedure. Indeed, the kind of synthesis route may be considered important metadata to curate 174 for this reason and because it may implicitly convey (biologically relevant) information regarding chemical composition. 175
Context dependency of (meta)data requirements
Data and metadata requirements may depend upon the experimental scenario and intended use of the data, i.e. the specific context. Not all (meta)data are relevant for all experimental scenarios. For example, not all physicochemical parameters are applicable to all kinds of nanomaterials, and those physicochemical parameters which contribute most significantly to nanomaterial effects may vary according to the kind of nanomaterial, their intended application and the specific effect of interest. 3,68,69,83,88 Likewise, not all of the key experimental variables which (most) affect the outcome of biological testing will necessarily be common to all kinds of biological assays. 105 For example, whether cytochalasin-B is employed during a micronucleus assay, which may be used to evaluate the genotoxicity of nanomaterials, 6,176 can significantly affect the results. 176,177 However, this experimental variable is not relevant for other genotoxicity tests. 6,176

Moreover, in practice, different stakeholders will have different objectives, i.e. the properties and experimental metadata which are important may vary between disciplines and user communities; even within the same disciplines and communities, the information requirements may vary according to the specific questions posed of the data. 41 Hence, enforcing a single set of "minimum information" criteria could lead to some existing data being unnecessarily deprecated due to a lack of completeness, even though the existing (meta)data are sufficient for specific purposes. 89

For example, consider toxicological assessment of a commercially available nanomaterial with limited batch-to-batch variability, 5,38,172 assessed during different studies at essentially the same point in its life-cycle, or which is not significantly affected by "aging". 165 For such a nanomaterial, its trade name ("X") might be considered a sufficiently unique identifier, i.e. one can suppose that essentially the same material is being referred to in different studies of "X" or that the samples being assessed do not cause significantly different biological effects for the endpoint(s) of interest. If these data were simply being used to determine whether material "X" could cause a given set of effects (as determined in different studies), enforcing a requirement for adherence to a "minimum information checklist" in terms of physicochemical characterisation 3 might be considered unnecessarily stringent, i.e. in this context, detailed physicochemical characterisation might not be required to establish "equivalency". 41 Conversely, if a nano-QSAR modeller wished to generalise from these data (e.g. to build a relationship between physicochemical characteristics and a given adverse effect), then batch-specific physicochemical characterisation might be considered much more important.
In light of the context dependence discussed here and the evolving state of nanoscience (e.g. challenge 5.1), those utilising stringent "minimum information" schemes should anticipate that their criteria are not necessarily applicable in all contexts and are likely to be superseded as the field develops, instruments improve, and current hypotheses are exhausted. However, the underlying informational value of current and past data may nevertheless remain intact.
Lack of clarity in some existing checklists
Many existing proposals regarding important physicochemical data specify characteristics which are very broadly defined, rather than a specific set of measurements, 3 making it unclear to researchers which measurements should be made. For example, many lists propose that the "agglomeration" or "aggregation" state be determined. 3 However, a variety of different measurements (such as the number of primary particles per aggregate or agglomerate, as might be quantified via the "average agglomeration number", 178 or assessment of particle size distributions under different conditions) might be considered to assess this. 36,179

A related issue is that two protocols which are nominally measuring the same parameter (such as "average size") may actually be providing different kinds of information that are not directly comparable. 3,36,38,180 Different measurement techniques, such as transmission electron microscopy (TEM) and dynamic light scattering (DLS), employ different principles and assumptions to estimate "size" and may be measuring different aspects of "size" (e.g. "height above a flat substrate" or "hydrodynamic diameter"). 3,180,181 Some techniques (e.g. TEM) may be used to estimate the "size" of agglomerates, aggregates or the primary particles, depending upon how the raw data are analysed, 37,182 and different kinds of "average size" may be obtained using the same technique. 36,180,181

The implications for data completeness are that (1) recommendations for specific kinds of physicochemical data, or clear guidance regarding acceptable alternatives, should be provided and (2) corresponding metadata regarding the measurement technique, the characterisation protocol and a precise description of the kind of statistical estimate produced (e.g. arithmetic mean of the number distribution vs. volume distribution) 36 are important to capture.
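The final point can be demonstrated with a toy calculation: from the same raw particle diameters, the arithmetic mean of the number distribution and the volume-weighted mean can differ severalfold, so a bare "average size" without this metadata is ambiguous. The diameters below are invented for illustration.

```python
import numpy as np

# Two "average sizes" from the same raw particle diameters: the arithmetic
# mean of the number distribution vs. the volume-weighted mean. Reporting
# which estimate was used is essential for comparability. Toy data only.
d = np.array([10.0, 12.0, 15.0, 20.0, 80.0])    # diameters, nm

number_mean = d.mean()
volume_weights = d**3 / (d**3).sum()             # volume scales as d^3 for spheres
volume_weighted_mean = (volume_weights * d).sum()

print(f"number mean: {number_mean:.1f} nm")                    # ~27 nm
print(f"volume-weighted mean: {volume_weighted_mean:.1f} nm")  # ~78 nm
```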
Artefacts in biological studies related to nanomaterials
A growing body of literature has raised concerns regarding various artefacts which may affect the reliability of biological assessment of nanomaterials. 70,98,99,101–107,183–186 These artefacts mean that the measurements obtained may not entirely correspond to the biological phenomena which the studies are trying to detect. For example, various kinds of nanomaterial "interference" with commonly used in vitro (cell-based) toxicity assays have been noted, which may lead to overestimation or underestimation of toxicity. 70,98,99,102–107,183–186 In in vivo aquatic toxicity studies, nanomaterials adhering to the surface of organisms may inhibit movement, leading to overestimation of mortality. 101,187

An immediate implication for evaluating the quality of (curated) nanomaterial data is the need to evaluate the possibility of artefacts (e.g. interference). This is complicated by the fact that assay interference may be dependent upon the specific combination of assay, nanomaterial and tested concentration. 98,106,185,186 Indeed, the possible dependency of assay interference on specific physicochemical characteristics 106,185 may be another factor to take into account when evaluating completeness and quality.
Various recommendations have been made in the experimental literature for detecting and, in some cases, correcting for possible assay interference. 104–106,184,185 In spite of this, analysis by Ong et al., 185 using a sample size of 200 papers for each year, suggested that around 95% of investigations using colorimetric or fluorescence-based assays published in 2010, and around 90% of those published in 2012, failed to experimentally assess the potential for nanomaterial interference or, at least, failed to explicitly state that such potential interferences had been ruled out experimentally.
Misinterpretations in biological studies
As well as artefacts which lead to erroneous estimations of toxicity, a variety of additional factors may lead to erroneous interpretation of the cause of the toxicity observed when testing nanomaterials. 104 For example, a failure to experimentally determine the presence of different kinds of impurities (e.g. endotoxin contamination, solvent contamination, metals) may lead to the observed toxicity being wrongly attributed to the nominal nanomaterial. 104

The implications for data completeness are that thorough characterisation of the nanomaterial, including with respect to these key impurities, needs to be carried out when studying the biological effects of nanomaterials in order to meet the following objectives: (1) unless the nanomaterial identity is otherwise clear (see section 5.7), to associate a specific nanomaterial identity with the observed biological activity; (2) if desired, to ensure that any mechanistic interpretation of the biological effect is correct. Lack of clarity in the meaning of the data, such as failure to correctly identify which specific nanomaterial was tested in an assay, can also be considered to affect data quality (see Table 3).
Uncertainty regarding standardised test guidelines
An initial review 188 of the applicability of the OECD test guidelines to nanomaterials (the guidelines were developed as standardised test protocols for conventional, small molecule chemicals) 101,188 concluded that many, but not all, of these were applicable to nanomaterials in principle, if coupled with additional guidance documents regarding nanospecific issues. 81,188 A related question is the requirement for OECD test guidelines for parameters which are specifically important for nanomaterials. 68,189 However, these issues were still not fully resolved as of the time of writing. 68,79,169,179,189–191 Also, at the time of writing, some standardised protocols for nanomaterial assessment with respect to a variety of endpoints were under development by ISO. 82,192 Nonetheless, some recent articles in the nanotoxicology literature have strongly advocated the use of OECD test guidelines, or other standardised protocols, to evaluate nanomaterials. 103,193,194

Clearly, if the use of established standardised protocols cannot be assured to address all of the concerns raised regarding the quality of nanomaterial data (e.g. the artefacts discussed in section 5.9), this has implications as to whether adherence to existing standardised protocols should be considered an indicator of high quality data, as supposed by some existing data quality evaluation schemes discussed in sections 3 and 4, 47,48 compared to a novel protocol which may have been specifically designed to address these concerns. Indeed, it is, in principle, possible that the use of some existing standardised tests might miss novel endpoints or be based upon assumptions regarding the mode of action that are not applicable to some nanomaterials. For example, the use of "omics" methods in nanotoxicology is advocated due to their ability to capture novel modes of action. 195 However, the extent, if any, to which nanomaterials can cause novel harm, act via genuinely novel modes of action, or even exhibit novelty in the underlying 45 mechanisms of action and/or structure-activity relationships, has recently been debated. 14,89,167,196–199

Reduced relevance of some standard assays

Another potential problem with some toxicity tests when applied to nanomaterials, as compared to testing of small molecules, is that they might be of reduced relevance for assessment of possible human health effects. For example, the Ames genotoxicity test and cytotoxicity tests based on bacterial cell cultures might be inappropriate for nanomaterials, as bacterial sensitivity to nanomaterials may be significantly reduced compared to human cells, 176,200 due to reduced uptake as a result of the cell wall and the lack of endocytosis for bacterial cells. 176 However, it should be noted that Holden et al. 201 have suggested that bacterial studies may still be relevant to assessing potential nanomaterial impacts on human health, at least in terms of indirect effects following environmental release.
Reduced relevance for human health effects assessment is sometimes considered to be a data quality issue (see ESI Table S5 † literature definition 3.4). 48
Problems with analysis of environmental samples
The analysis of engineered nanomaterials, along with their derivatives, in environmental samples provides important information for risk assessment. 202 The engineered nanomaterials first need to be detected, followed by quantification of their concentration and determination of their physicochemical properties. 202 In particular, quantification of their concentration provides a direct means of validating the predictions of fate and transport models. 203 However, obtaining reliable data on engineered nanomaterials in environmental samples remains challenging. 202,203 In part, this reflects the need to make measurements at or below the detection limits of many analytical techniques. For example, many analytical techniques (e.g. dynamic light scattering) have detection limits 101,203–206 which are too high to detect concentrations as low as those expected for engineered nanomaterials in environmental samples. 101,203,205 Recently, single particle inductively coupled plasma mass spectrometry (SP-ICP-MS) has been advocated as a possible solution which would allow detection of realistic environmental concentrations and, in combination with additional information or assumptions, simultaneous measurement of particle size distributions. 202,203,205 However, SP-ICP-MS is not without its limitations, 202,203 including composition dependent size detection limits. 205 (Indeed, detection of small particles is noted to be a problem with many analytical techniques due to their detection limits and/or low sensitivity for smaller particles.) 206,207 In addition to these challenges, it has been argued that the most serious remaining problem with the analysis of engineered nanomaterials in environmental samples is discriminating engineered from naturally occurring nanomaterials. 203

The key challenges highlighted in this section emphasise the difficulties associated with generating sufficiently complete and high quality nanomaterial data. Consideration of these challenges is critical when evaluating the completeness and quality of (curated) nanomaterial data.
Recommendations for promoting and improving upon established best practice
The following recommendations are designed to promote established best practice or improve the manner in which the completeness and quality of curated nanomaterial data are evaluated. Many of these recommendations are also applicable to evaluating the completeness and quality of nanomaterial data reported in, say, the published literature prior to curation. They were informed by the preceding discussions regarding the meaning and importance of data completeness and quality (section 2), existing proposals for evaluating the completeness and quality of (curated) nanomaterial data (section 3), lessons which can be learned from mature fields (section 4) and the key challenges associated with nanomaterial data (section 5). These recommendations were developed by the authors of the current publication and were informed by the responses to the Nanomaterial Data Curation Initiative (NDCI) survey on data completeness and quality. (Full details of the recommendations made by specific survey respondents may be found in the ESI. †) However, they should not be considered to provide a definitive road-map for progress in this area which is endorsed by all authors and survey respondents. Rather, they summarise options for promoting best practice or improving the evaluation of the completeness and quality of curated nanomaterial data.
These recommendations are divided into five categories: terminology recommendations (section 6.1), specific (meta)data requirements (section 6.2), computational tool focused recommendations (section 6.3), strategic recommendations (section 6.4), and recommendations regarding the role specific organisations and scientific communities could play in advancing the manner in which the completeness and quality of curated nanomaterial data are evaluated (section 6.5).
To allow the reader to get a quick overview, the recommendations are merely summarised in the main text of the article. An in-depth discussion of these recommendations, including caveats, is provided in section S4 of the ESI. †
Terminology recommendations
It is proposed that the following definitions of terms (Table 5) should be adopted across the nanoscience community. The particular context in which these terms are explained is nanomaterial data curation. However, the definitions and many of the accompanying notes are relevant to the wider nanoscience, or broader scientific, community. These definitions build upon the broad and flexible definitions of (curated) data completeness (Table 1) and quality (Table 3) presented in section 2.
The new definitions are generally consistent with the definitions presented in section 2. However, some issues incorporated into those broad and flexible definitions are deemed out of scope. For example, it is proposed that the relevance of the data for a particular purpose should be considered related to data completeness rather than quality. The broad and flexible definitions (section 2) were appropriate for reviewing prior work as they ensured that different perspectives were not deemed out of scope. However, for the sake of greater clarity, the following, specific definitions are recommended to the community. This greater clarity will aid consideration of the practical recommendations presented in the remainder of this article.
6.1.1. Specific definitions of completeness and quality are recommended to the nanoscience community. The terms data completeness and quality should be considered to be related but should not be used interchangeably. Guidance notes which further clarify the following definitions are presented in the detailed discussion of these terminology recommendations in ESI S4. †

Data completeness. This is a measure of the extent to which the data and metadata which serve to address a specific need are, in principle, available.
Data quality. This is a measure of the degree to which a single datum or finding is clear and the extent to which it, and its associated metadata, can be considered correct.
These abstract definitions are further clarified by Fig. 1, which illustrates the kinds of (meta)data requirements for data to be assessed as sufficiently complete and of acceptable quality. A more detailed discussion of specific (meta)data requirements is provided in section 6.2.
Specific (meta)data requirements

6.2.1. Specific (meta)data highlighted by the NDCI survey. The Nanomaterial Data Curation Initiative (NDCI) survey on data completeness and quality asked respondents to suggest the different kinds of (meta)data required in order for nanomaterial data to be considered sufficiently complete and of sufficient quality.
They were further asked to consider whether these (meta)data were only important in specific contexts and to identify those (meta)data they felt were most important to capture. The aim here was to capture recommendations even if they went beyond the (meta)data considered when curating the nanomaterial data resource for which they were acting as a liaison. (See the ESI † for further details.)

Some survey respondents emphasised that their responses were not intended to be a comprehensive summary of all (meta)data and considerations which would need to be taken into account in order to assess the completeness and quality of curated nanomaterial data. Rather, their responses to these questions highlighted issues (e.g. nanomaterial ageing) which they considered to be given insufficient attention. Some respondents kindly provided detailed lists of (meta)data and comments regarding additional considerations required for completeness and quality assessment. Some of these responses also gave some consideration to the relative importance and context/use-case dependence of certain kinds of (meta)data requirements.
The recommendations regarding physicochemical data which should be provided were generally in keeping with the kinds of physicochemical data recommended as being important in the lists analysed by Stefaniak et al. 3 As well as physicochemical data, many kinds of metadata were also highlighted as being important for data to be determined to be sufficiently complete and/or of sufficient quality. Metadata recommendations were concerned with various issues, including experimental conditions, protocols and techniques, as well as data provenance, nanomaterial synthesis and experimental error.
Based on the survey responses and the literature review which informed the current article, a definitive list of all necessary (meta)data cannot be made. Neither can a definitive set of lists presenting all (meta)data requirements for different scenarios be made. Nonetheless, some key recommendations may be made.
6.2.2. Key recommendations regarding specific (meta)data. Table 6 presents key recommendations concerning specific kinds of (meta)data which are important to capture in various curated nanomaterial data collections. ESI S4 † explains these recommendations in detail.
It should be noted that these recommendations are not a comprehensive list of all kinds of (meta)data which need to be captured in curated nanomaterial data collections. Rather, they are designed to emphasise key issues which are not always captured in existing minimum information checklists (section 3.1) or quality assessment schemes (section 3.2) for (curated) nanomaterial data. Additional (meta)data requirements might be determined via consulting existing proposals (see sections 3.1 and 3.2). Indeed, the need to consult existing recommendations is a key strategic recommendation (recommendation 6.4.1).
However, the possible dependence of (meta)data requirements upon the kinds of data and intended use of those data must be remembered (see section 5.7). This consideration is applicable, in principle, to the existing proposals (see sections 3.1 and 3.2) as well as the recommendations in Table 6. To some extent, the context dependence of the recommendations is indicated in Table 6, and the discussion of these recommendations in ESI S4† considers this context dependence in greater depth. (One note accompanying Table 6 reads: data regarding the surface composition and structure/morphology are important, in principle, when reporting data from any experimental study; N.B. the surface composition and structure/morphology may arise due to a ligand shell/layer.)

Fig. 1 The quality and completeness of (curated) nanomaterial data are viewed as overlapping, yet distinct, concepts. This figure illustrates various contexts, meaning the experimental scenario and intended use of the data, and the kinds of (meta)data which may be required to assess those data as being sufficiently complete and of acceptable quality. N.B. (1) PCCs is an abbreviation for physicochemical characteristics. (2) The concept of data completeness applies to a set of data and their associated metadata. Hence, the number of data points of a specific kind (e.g. the number of nanomaterials screened in a cytotoxicity assay) may be a completeness criterion in specific contexts if a given number of data points are required to achieve a specific aim. (3) In contrast, the concept of data quality applies to a single datum (i.e. a single data point) or a single "finding", taking into account its associated metadata. A "finding" might be a conclusion derived from analysis of a set of raw or processed data, and the "metadata" associated with that finding might include these data. (4) The dependence of both completeness and quality upon metadata is not entirely for the same reasons. For example, metadata (e.g. related to the nanomaterial identity and experimental conditions) are required to determine the relevance of the data for answering a specific question. The relevance of data for answering a specific question affects the completeness of the data, since only relevant data should be counted when evaluating completeness, but not the quality of a datum or finding. In addition, metadata are required to make the meaning of the datum or finding clear, reducing uncertainty in a qualitative sense and facilitating reproducibility, and to assess the level of trust, reproducibility, repeatability, uncertainty and error. All of these issues affect the quality of a datum or finding. However, the quality of a datum or finding does not directly affect the completeness of the data. (5) The context determines the (meta)data required for completeness. Whilst quality is not dependent upon the intended use of the data, the specific (meta)data required for quality assessment may be dependent upon the experimental scenario. For example, specific kinds of (meta)data will be required in specific in vitro studies to assess assay interference and, hence, assess the error in a given datum. (6) The examples in this figure are by no means exhaustive or, necessarily, minimum requirements. The example contexts and their requirements are not necessarily mutually exclusive. For example, a nano-QSAR might be developed via integrating data across multiple in vitro mechanistic studies. (7) Where examples are provided in this figure of specific metadata which might be required for data completeness in different contexts, it should be recalled that the availability of these metadata could also affect the quality of individual data points or findings.

Computational recommendations

Table 7 presents recommendations regarding how computational tools might be developed to support evaluation of the completeness and quality of curated nanomaterial data. Some of these recommendations concern existing nanoinformatics resources, whilst other computational tools may need to be developed de novo. Regarding the existing resources, careful consideration of the extent to which completeness and quality assessment could be automated using these tools is required and may be contingent upon progress towards recommendation 6.3.2; recommendation 6.3.3 is also pertinent here.

6.3.2 Standard templates for data exchange should be developed based upon the ISA-TAB-Nano specification. Some early work towards this objective has already been carried out. The required templates are likely to be scenario specific.

6.3.3 Nanomaterial data resources providing completeness and quality scores should allow end-users to customise these based upon their own requirements. The scoring systems should include the ability to customise and select the criteria upon which the degree of data completeness (in terms of fitness for purpose), or quality, is defined, and should provide the decision process and justification involved. The potential need to customise data completeness scoring primarily stems from the dependency of completeness on the use-case; the potential need to customise data quality scoring primarily stems from the lack of universal standards as to quality determination.
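To make recommendation 6.3.3 concrete, the following minimal sketch shows one way an end-user-customisable completeness score might be computed over a curated record. It is illustrative only, not drawn from this article or from any existing nanoinformatics resource; the field names, weights and example record are all hypothetical assumptions, and a real scoring system would also need to record the decision process and justification, as recommended above.

```python
# Hypothetical sketch of a user-customisable completeness score
# (in the spirit of recommendation 6.3.3). Field names and weights invented.

def completeness_score(record, required_fields):
    """Weighted fraction of required (meta)data fields present in a record.

    `required_fields` maps field name -> weight; the weights encode how
    important each field is for the end-user's intended use of the data,
    reflecting the use-case dependence of completeness discussed above.
    """
    total = sum(required_fields.values())
    present = sum(weight for field, weight in required_fields.items()
                  if record.get(field) not in (None, ""))
    return present / total if total else 0.0

# A hypothetical use-case profile: a nano-QSAR modeller might weight size
# and surface chemistry heavily; a different stakeholder would supply a
# different profile rather than rely on one universal standard.
nano_qsar_profile = {
    "core_composition": 3, "primary_particle_size": 3,
    "surface_coating": 2, "zeta_potential": 2,
    "assay_protocol": 1, "dispersion_medium": 1,
}

record = {"core_composition": "TiO2", "primary_particle_size": "21 nm",
          "surface_coating": None, "zeta_potential": "-30 mV",
          "assay_protocol": "MTT", "dispersion_medium": ""}

print(f"Completeness: {completeness_score(record, nano_qsar_profile):.2f}")  # 0.75
```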
Strategic recommendations
The proposals in Table 8 should be considered in order to develop scientific strategies for improving the manner in which the completeness and quality of nanomaterial data are evaluated in future work. Table 9 summarises recommendations regarding initiatives which could be undertaken by various organisations, in collaboration with the wider nanoscience community, to improve the manner in which the completeness and quality of nanomaterial data are evaluated.
Conclusions
The curation of nanomaterial data into electronic resources is crucial to realise the potential of nanotechnology to deliver benefits to society whilst having acceptable impacts upon human health and the environment. In order for these data to be fit for their intended purposes, they need to be sufficiently complete and of acceptable quality. Hence, appropriate evaluation of the quality and completeness of curated nanomaterial data is essential even if, in practice, analysis and conclusions may need to be drawn from imperfect data: such an evaluation can inform awareness of the limitations of any work based upon the available data. Any such evaluation needs to take account of the issues related to the completeness and quality of the underlying experimental data as well as additional issues related to their curation such as transcription errors. However, carrying out this evaluation in practice is non-trivial.
There are different perspectives as to exactly what these terms mean, as well as different proposals as to how exactly the degree of completeness and quality of (curated) nanomaterial data should be evaluated in practice. After reviewing various existing proposals in light of broad and flexible definitions of these concepts, which accommodate the varying range of perspectives, more precise definitions are recommended to the nanoscience community. None of the existing proposals reviewed herein is perfect.

A variety of challenges exist which impede appropriate evaluation of the completeness and quality of nanomaterial data. These challenges include the need to appropriately take account of the dependency of nanomaterial properties on their processing and storage history (i.e. time dependency), artefacts associated with biological testing of nanomaterials, and incomplete understanding of which physicochemical properties and other experimental variables most significantly impact the effects of nanomaterials. In addition, the data requirements are likely to be dependent upon the precise experimental scenario (e.g. type of nanomaterials) and stakeholder requirements (e.g. regulatory decisions regarding a single nanomaterial vs. computational modelling).

Some lessons might be learned from work in mature fields, such as the possibility of developing appropriate software tools to facilitate the efficient and transparent evaluation of (curated) experimental data. In the nanoscience domain, automated evaluation of data completeness and quality might best be supported via further development of nascent nanoinformatics resources. Common data collection templates based upon the ISA-TAB-Nano data exchange specification are envisaged. These will likely need to be adapted to the specific data requirements of different experimental scenarios and stakeholder objectives.

The development of these resources will require community-driven consensus regarding nanomaterial data requirements, which will best be supported by appropriate organisations and initiatives with an international reach. This article is one outcome of just such an initiative, the Nanomaterial Data Curation Initiative (NDCI), as reflected in the wide range of contributors and stakeholders who provided a variety of perspectives which informed the current work and resulted in a variety of recommendations to promote best practice and improve evaluation of the completeness and quality of (curated) nanomaterial data. An overview of the perspectives of these different stakeholders is presented in the ESI† of the current article. Ongoing effort to support adoption and implementation will also be required, including by data curators. | 2018-04-03T03:16:16.695Z | 2016-05-12T00:00:00.000 | {
"year": 2016,
"sha1": "7343539222f81936263c859d1a7076fd8eaf8405",
"oa_license": "CCBY",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2016/nr/c5nr08944a",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "ce74d48ae7bbfd0b25dfb8825f20996ec11cd089",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |
3779823 | pes2o/s2orc | v3-fos-license | Proteus vulgaris and Proteus mirabilis Decrease Candida albicans Biofilm Formation by Suppressing Morphological Transition to Its Hyphal Form
Purpose Candida albicans (C. albicans) and Proteus species are causative agents in a variety of opportunistic nosocomial infections, and their ability to form biofilms is known to be a virulence factor. In this study, the influence of co-cultivation with Proteus vulgaris (P. vulgaris) and Proteus mirabilis (P. mirabilis) on C. albicans biofilm formation and its underlying mechanisms were examined. Materials and Methods XTT reduction assays were adopted to measure biofilm formation, and viable colony counts were performed to quantify yeast growth. Real-time reverse transcriptase polymerase chain reaction was used to evaluate the expression of yeast-specific genes (rhd1 and rbe1), filament formation inhibiting genes (tup1 and nrg1), and hyphae-related genes (als3, ece1, hwp1, and sap5). Results Candida biofilm formation was markedly inhibited by treatment with either living or heat-killed P. vulgaris and P. mirabilis. Proteus-cultured supernatant also inhibited Candida biofilm formation. Likewise, treatment with live P. vulgaris or P. mirabilis or with Proteus-cultured supernatant decreased expression of hyphae-related C. albicans genes, while the expression of yeast-specific genes and the filament formation inhibiting genes of C. albicans were increased. Heat-killed P. vulgaris and P. mirabilis treatment, however, did not affect the expression of C. albicans morphology-related genes. Conclusion These results suggest that secretory products from P. vulgaris and P. mirabilis regulate the expression of genes related to morphologic changes in C. albicans such that transition from the yeast form to the hyphal form can be inhibited.
INTRODUCTION
Most microorganisms in natural ecosystems exist in the form of biofilms. The biofilms of microorganisms are formed over many stages, the first two of which are adherence to a biotic or abiotic surface and production of a structure to increase adherence. In the maturing biofilm, extracellular polymers are formed by microorganisms in the structure and are known to protect microorganisms from changes in the surrounding environment, to participate in supplying nutrients and discharging metabolic waste, and to gather cells in closer proximity in order to facilitate cell-to-cell interactions. Biofilms can consist of multiple species, including coexisting bacteria and fungi. [1][2][3] Candida albicans (C. albicans) is a resident species of healthy human mucous membranes that is also an opportunistic pathogen that induces superficial and systemic infection via the mucous epithelium when a patient suffers from severe disease or when the immune state is deficient. C. albicans has virulence factors that allow it to invade host tissue and to evade the host defense mechanism. 4,5 C. albicans grows in three different forms: budding yeast (or blastoconidia), pseudohyphae, and hyphae. The expression of C. albicans genes differs according to the C. albicans form. Candida hyphae are known to be essential to pathogenicity and disease dissemination, [6][7][8] and several hyphae-specific genes are known, including hwp1, als3, als8, ece1, and sap4-6. [9][10][11] The yeast-specific genes include rbe1, ywp1, and rhd1, and filament formation inhibiting genes include tup1 and nrg1. [12][13][14][15] Proteus vulgaris (P. vulgaris) and Proteus mirabilis (P. mirabilis) exist in both human and animal small intestines and in the natural environment. P. vulgaris and P. mirabilis also can form biofilms on surfaces of various objects, including insertion apparatuses in humans. Proteus infections, especially urinary tract infections, are common in immunosuppressed patients. [16][17][18] In recent years, C. albicans has been highlighted as one of the most common etiologic agents of hospital-acquired infections, and Candida biofilms play an important role in initiating infections. Biofilms that exist in the human body comprise hundreds of different bacterial species: fungi and bacteria can also co-exist in these biofilms. 19 Both the relationships of microorganisms with the host and the interactions between various microorganisms are important; the latter have not been actively researched. 20,21 We previously reported that bacteria had a negative effect on the formation of C. albicans biofilms, and a more distinct decrease in C. albicans biofilm formation was shown when cultivated with P. vulgaris. 22 In the present study, the influence of co-culture of C. albicans, a chief agent of hospital-acquired infection, with P. vulgaris and P. mirabilis on the formation of C. albicans biofilm and its underlying mechanisms were examined.
Organisms
Clinical isolates of C. albicans were obtained: one commensal strain was isolated from the blood of a patient, and P. vulgaris and P. mirabilis were isolated from the urine of another patient. The identity of each microorganism was confirmed with the commercially-available identification systems (BioMeriéux, Marcy I'Etoile, France): API 32C for C. albicans and API 20E for P. vulgaris and P. mirabilis.
Culture conditions and experimental conditions
Prior to each experiment, C. albicans isolates were cultured at 30ºC for 48 hours on Sabouraud's dextrose agar (SDA, Difco™, Becton Dickinson, Sparks, MD, USA), and one colony of yeast was inoculated into yeast nitrogen base (Difco™, Becton Dickinson) medium supplemented with 50 mM glucose. P. vulgaris and P. mirabilis were first subcultured at 37ºC for 18 hours on tryptic soy agar. One colony each of P. vulgaris and P. mirabilis was then inoculated into tryptic soy broth (Difco™, Becton Dickinson) and incubated at 37ºC for 18 hours. The experimental conditions were as follows: 1) the microorganism was cultured alone; 2) C. albicans was co-cultured with live P. vulgaris or P. mirabilis; 3) C. albicans was co-cultured with P. vulgaris or P. mirabilis killed at 100ºC for 30 minutes; or 4) C. albicans was treated with four-fold diluted bacteria-cultured supernatants of P. vulgaris or P. mirabilis, from which the bacteria had been removed.
XTT reduction assays
Biofilm formation was quantified using the method developed by Ramage, et al. 23 Biofilms were formed on commercially available pre-sterilized, polystyrene, flat-bottomed, 96-well microtiter plates (Costar, Cambridge, MA, USA). Microorganisms were prepared for each condition and transferred to selected wells of a microtiter plate. The plate was incubated for 90 minutes at 37ºC in an orbital shaker at 75 rpm. After the initial adhesion phase, the cell suspensions were aspirated, and each well was washed twice with phosphate-buffered saline (PBS) to remove loosely adherent cells. A volume of 200 µL of medium was added to each well, and the plate was then incubated for another 72 hours. After biofilm formation, the medium was aspirated, and non-adherent cells were removed by thoroughly washing the biofilm three times with PBS. A quantitative measure of biofilm formation was calculated using the XTT [2,3-bis(2-methoxy-4-nitro-5-sulfophenyl)-2H-tetrazolium-5-carboxanilide] reduction assay. A 200-µL aliquot of XTT (1 mg/mL, Sigma, St. Louis, MO, USA) and menadione (0.4 mM, Sigma) solution was added to each well containing the prewashed biofilm and to the control well. The plates were then incubated in the dark for up to 3 hours at 37ºC. The colorimetric change resulting from XTT reduction was measured using a microtiter plate reader (EMax, Molecular Devices, Sunnyvale, CA, USA) at 490 nm.
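As a worked illustration of how these colorimetric readings translate into the percentage reductions reported in the Results, the short sketch below computes percent inhibition of biofilm formation from absorbance values at 490 nm. The numbers are invented for illustration (and assumed blank-corrected); they are not the study's raw data.

```python
# Illustrative only: percent inhibition of biofilm formation from
# hypothetical XTT absorbance readings (A490), assumed blank-corrected.
import statistics

control = [1.40, 1.35, 1.47]       # C. albicans cultured alone
co_culture = [0.25, 0.28, 0.22]    # e.g. C. albicans + P. vulgaris

inhibition = 100 * (1 - statistics.mean(co_culture) / statistics.mean(control))
print(f"Biofilm inhibition: {inhibition:.0f}%")  # ~82%, i.e. ">80% reduction"
```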
C. albicans cell counts
After biofilm formation, the medium was aspirated, and nonadherent cells were removed by thoroughly washing the biofilm three times with PBS. Then, 1 mL of PBS was transferred to each well, and biomass was meticulously scraped off. The resultant solution containing the detached biofilm cells was gently vortexed for 1 minute to disrupt the aggregates and inoculated on an SDA plate. The colony forming units (CFUs) of C. albicans were quantified after 48 hours of incubation at 30ºC.
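For readers unfamiliar with the arithmetic behind the CFU figures reported in the Results, the sketch below shows the standard conversion from colony counts to CFU/mL; the dilution factor and plated volume are hypothetical, since the paper reports only the final CFU/mL values.

```python
# Hypothetical example of the colony-count-to-CFU/mL conversion.
def cfu_per_ml(colonies, dilution_factor, plated_volume_ml):
    # CFU/mL = colonies counted x dilution factor / volume plated (mL)
    return colonies * dilution_factor / plated_volume_ml

# e.g. 285 colonies from a 10^-5 dilution with 0.1 mL plated:
print(f"{cfu_per_ml(285, 10**5, 0.1):.2e} CFU/mL")  # 2.85e+08
```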
Scanning electron microscopy
We developed biofilms from single species, in addition to Candida biofilms that were co-cultured with bacteria on polystyrene coverslips as described. The coverslips were washed twice with PBS and placed in PBS with a fixative of 2.5% glutaraldehyde (Sigma) for 20 hours. Next, they were washed for 5 minutes in PBS and then placed in 1% osmium tetroxide for 30 minutes. After a series of alcohol washes, a final drying step was performed using the critical point drying method. Biofilms were then mounted and gold coated. Samples were imaged with a scanning electron microscope (TM-1000, Hitachi, Tokyo, Japan) in a high-vacuum mode at 15 kV.
Relative quantitation by real-time reverse transcriptase polymerase chain reaction
RNA was isolated from C. albicans cells using the MasterPure Yeast RNA Extraction kit (Epicentre Biotechnologies, Madison, WI, USA). RNA was treated with amplification grade DNase I (Epicentre Biotechnologies) and used for cDNA synthesis with random hexamer primers (Invitrogen Life Technologies, Carlsbad, CA, USA) using Superscript II reverse transcriptase reagents (Invitrogen Life Technologies). Each reaction contained 1 µg of total RNA, 1 µL of 50 µM hexamer, and 1 µL of 10 mM dNTP in a final volume of 10 µL. Reactions were incubated at 65ºC for 5 minutes and cooled on ice. To each reaction tube, 10 µL of the following mixture was added: 4 µL of 5x First-Strand Buffer, 2 µL of 10 mM MgCl2, 2 µL of 0.1 M DTT, 1.4 µL of RNase inhibitor, and 1 µL of Superscript II. Reactions were incubated at 42ºC for 50 minutes and then at 70ºC for 15 minutes. Each real-time polymerase chain reaction (PCR) mixture contained 10 µL of Power SYBR Green Master Mix (Applied Biosystems, Foster City, CA, USA), forward and reverse primers (1 µL of each) (Table 1),22 and sterile water, at a final volume of 20 µL. The PCR was run on MicroAmp® Optical 384-well reaction plates in an ABI 7900 Real-Time PCR system (Applied Biosystems). Real-time PCR reactions were performed at 95ºC for 5 minutes, followed by 40 cycles of 15 seconds at 95ºC and 1 minute at 60ºC. Dissociation curves were analyzed for all reactions to verify single peaks/products. Expression levels were analyzed using ABI 7900 System SDS software (Applied Biosystems). Real-time PCR data were normalized with the geometric mean of two reference genes; the ACT1 and PMA1 genes were used for this purpose.
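The normalisation step described above can be illustrated as follows. This sketch assumes a 2^-ΔΔCt-style calculation in which each target-gene Ct is referenced to the geometric mean of the two reference-gene Ct values; the paper does not spell out the exact formula, and all Ct values here are hypothetical.

```python
# Illustrative sketch (hypothetical Ct values): relative expression with
# normalisation to the geometric mean of two reference genes (ACT1, PMA1).
import math

def geo_mean(cts):
    # Geometric mean of the reference-gene Ct values in one condition.
    return math.exp(sum(math.log(c) for c in cts) / len(cts))

def relative_expression(ct_target, ct_refs, ct_target_ctrl, ct_refs_ctrl):
    d_ct = ct_target - geo_mean(ct_refs)                 # delta-Ct, treated
    d_ct_ctrl = ct_target_ctrl - geo_mean(ct_refs_ctrl)  # delta-Ct, control
    return 2 ** -(d_ct - d_ct_ctrl)                      # fold change vs control

# e.g. a hyphae-specific gene in co-culture vs C. albicans alone:
fold = relative_expression(ct_target=26.0, ct_refs=[20.1, 21.9],
                           ct_target_ctrl=23.5, ct_refs_ctrl=[20.0, 22.0])
print(f"Fold change: {fold:.2f}")  # ~0.18, i.e. ~80% reduced expression
```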
Statistical analysis
All experiments were performed in triplicate on three different occasions. All data are expressed as mean values with corresponding standard deviations (SDs). Student's t-tests and Mann-Whitney U-tests were used to compare the differences between Candida only and Candida co-cultured with P. vulgaris or P. mirabilis. All p-values <0.05 were considered statistically significant.
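A minimal sketch of these comparisons using SciPy is shown below; the triplicate values are hypothetical and serve only to show the two tests applied against the p < 0.05 threshold.

```python
# Illustrative only: the two tests named above, on hypothetical triplicates.
from scipy import stats

candida_alone = [1.40, 1.35, 1.47]
candida_with_proteus = [0.25, 0.28, 0.22]

t_stat, p_t = stats.ttest_ind(candida_alone, candida_with_proteus)
u_stat, p_u = stats.mannwhitneyu(candida_alone, candida_with_proteus)
print(f"t-test p = {p_t:.4f}; Mann-Whitney U p = {p_u:.4f}")
```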
The effect of P. vulgaris or P. mirabilis on C. albicans biofilm formation
The biofilm value generated when each type of microorganism was incubated separately was 0.07±0.007 for P. vulgaris, 0.120±0.004 for P. mirabilis, and 1.403±0.103 for C. albicans. When culturing C. albicans and P. vulgaris or P. mirabilis together, biofilm formation was reduced by more than 80%, compared to the C. albicans culture alone. To assess the mechanism by which the bacteria impeded Candida biofilm formation, bacteria were initially treated for 30 minutes at 100ºC to eliminate biological activity, followed by co-culture with C. albicans. Biofilm formation of C. albicans cultured together with heat-killed P. vulgaris or P. mirabilis elicited a 70% reduction, compared to the control group. These data indicate that dead bacteria interfere with the structural formation of C. albicans biofilms (Fig. 1A).
When C. albicans and P. vulgaris or P. mirabilis were co-cultured, biofilm formation of C. albicans was significantly reduced. In order to determine if this reduction was due to the effect of the bacteria or of secretory products when cultured, P. vulgaris and P. mirabilis were cultured for 72 hours, and remaining bacteria were removed by filtration. As a result of treatment with P. vulgaris and P. mirabilis culture supernatants, C. albicans biofilm formation was reduced by 60−70%, compared to C. albicans cultured alone (Fig. 1B). To determine whether this effect was due to a depletion of nutrients in the medium or to the secretory products of the bacteria, we diluted the cultured supernatants and tested the concentration effect on C. albicans biofilm formation. C. albicans biofilm formation decreased in proportion to the concentration of the bacterial-cultured supernatants (data not shown). To determine whether P. vulgaris and P. mirabilis inhibited the growth of C. albicans, C. albicans and P. vulgaris or P. mirabilis or bacterial-cultured supernatants were cultured together for 72 hours, and C. albicans CFUs were calculated. When C. albicans was cultured alone, the count was 2.85×10^8 CFU/mL; when cultured together with P. vulgaris, the count of C. albicans was reduced to 9×10^6 CFU/mL; and when C. albicans was cultured together with P. mirabilis, the count of C. albicans dropped to 2.4×10^7 CFU/mL (Fig. 1C). It seems that P. vulgaris and P. mirabilis inhibit C. albicans biofilm formation and also interfere with its growth. As a result, we investigated the C. albicans CFUs after treating C. albicans with bacterial-cultured supernatants and culturing for 72 hours. In the case of culturing C. albicans alone, the count was 2.85×10^8 CFU/mL. Meanwhile, the count from the treated culture supernatants of P. vulgaris was 3.6×10^7 CFU/mL, and the count from the treated culture supernatants of P. mirabilis was reduced to 2.0×10^7 CFU/mL (Fig. 1C).
The effect of P. vulgaris or P. mirabilis on C. albicans morphology-related gene expression

C. albicans morphology changes from the yeast form to the hyphal form as biofilm formation progresses, and the expression pattern of genes related to morphology at these stages was identified. Previous studies have shown that the expression of hyphae-specific genes increases significantly over the duration of biofilm formation, whereas the expression of yeast-specific genes and filament formation inhibiting genes decreases. 22 These results suggest that Candida present in the yeast form adhere to a surface, increase the expression of hyphae-related genes, and promote the formation of biofilms by decreasing the expression of yeast-specific and filament formation inhibiting genes. To clarify the effect of co-cultures on biofilm formation, we analyzed changes in C. albicans gene expression levels in biofilms co-cultured with P. vulgaris or P. mirabilis. In contrast to C. albicans cultured alone, als3 and hwp1 showed a reduction in expression by 80% and ece1 and sap5 by 90% (Fig. 2A). When C. albicans was cultured together with P. vulgaris or P. mirabilis, the expression of tup1 and nrg1, which are genes known to suppress filament formation, increased by about two-fold, compared to when C. albicans was cultured alone (Fig. 2B). Regarding the expression of the yeast-specific genes rhd1 and rbe1, the expression level of rhd1 increased by 2.5-fold and that of rbe1 by more than 3-fold (Fig. 2C).
When C. albicans was co-cultured with heat-killed P. vulgaris and P. mirabilis, biofilm formation was decreased. We examined whether these results were due to changes in the expression of biofilm-related genes in C. albicans. First, ece1, hwp1, and sap5 gene levels, which are associated with the formation of hyphae, did not increase. While the als3 level was slightly increased, the change was not significant (Fig. 3A). There were also no significant changes in tup1 and nrg1, which are filament formation inhibiting genes (Fig. 3B). Yeast-specific genes also showed no difference in expression levels (Fig. 3C). The reduction in C. albicans biofilm formation in co-cultures with killed bacteria is considered to be caused not by changes in gene expression, but by the interference of killed bacteria acting as small particles that fit between C. albicans cells and, thus, suppress the structural formation of biofilm.
The effect of cultured P. vulgaris or P. mirabilis supernatants on expression of C. albicans morphology-related genes
As described above, treatment with P. vulgaris and P. mirabilis cultured supernatants inhibited C. albicans biofilm formation. We examined how this treatment affected the expression of various genes involved in biofilm formation. The hyphae-specific genes als3, ece1, hwp1, and sap5 all showed a significant reduction in expression compared with C. albicans cultured alone (Fig. 4A). In contrast, the filament formation inhibiting genes tup1 and nrg1 slightly increased (Fig. 4B). The yeast-specific gene rhd1 increased by 2-fold, and rbe1 slightly increased (Fig. 4C). This suggests that secretory products that are formed and released with the growth of P. vulgaris and P. mirabilis inhibit the growth of C. albicans and regulate the expression of biofilm-related genes, thereby inhibiting biofilm formation in C. albicans.
Scanning electron microscopy of biofilms
It is known that biofilms are not formed with a simple structure: Candida exist in the yeast form in the basal layer and in the hyphal and pseudohyphal forms in the layer above, and these layers form a network with a three-dimensional structure. The structural difference between the biofilm when C. albicans is cultivated separately and when co-cultured with P. vulgaris or P. mirabilis was investigated via scanning electron microscopy (Fig. 5). The biofilm of C. albicans when cultured alone was high in density and was a multi-layer solid (Fig. 5A and B). The biofilm formed when C. albicans and P. vulgaris or P. mirabilis were cultured together showed that the P. vulgaris or P. mirabilis were attached to the hyphae of C. albicans and inserted between the C. albicans cells; the biofilm in these cases also appeared to be thin, with a low density (Fig. 5C and E). The structure of biofilms formed after culturing heat-killed bacteria and C. albicans together was thinner and showed lower density than biofilms formed after separate culture of C. albicans (Fig. 5D and F). However, the heat-killed bacteria biofilm was thicker than the biofilm formed after C. albicans had been cultured together with live bacteria, and more of the C. albicans was in the yeast form. These results agree with the outcome of the XTT reduction assays, which quantitatively confirmed biofilm formation (Fig. 1A).
DISCUSSION
Microorganisms that are fixed inside a biofilm show resistance to the immune system and have a strong tolerance to antibiotics relative to planktonic microorganisms. 24 Many human infections are the result of microorganisms in biofilms.
Most studies on biofilm formation and the interrelation between microorganisms in them have focused on bacteria. 25 However, biofilms affected by the relationship between bacteria and fungi are clinically crucial, because these biofilms increase the morbidity and mortality of infections. 19,26 The research presented here examined the influence of the coexistence of C. albicans with P. vulgaris and P. mirabilis on C. albicans biofilm formation and whether the relationship between C. albicans and P. vulgaris and P. mirabilis was competitive or symbiotic. The architecture and functioning of complex biofilms are very intricate and have not been clearly elucidated. In addition, the correlation between microorganisms inside complex cultivated biofilms has not been identified.
When C. albicans was cultivated with P. vulgaris and P. mirabilis, both biofilm formation and the number of C. albicans cells decreased, compared to when C. albicans was cultured alone (Fig. 1). Even the diluted supernatants of P. vulgaris and P. mirabilis cultivation were confirmed to hinder biofilm formation, which implies that it was not the depletion of nutrients due to mixed culture of P. vulgaris and P. mirabilis plus C. albicans that decreased biofilm formation, but that the bacteria directly hindered formation via secretory products (Fig. 1). The Proteus-specific products inhibiting the growth of C. albicans or biofilm formation have not yet been identified, and further research is needed.
Interestingly, even when C. albicans was cultivated with heat-treated P. vulgaris and P. mirabilis, biofilm formation decreased (Fig. 1A). It is considered that both the secretory products from P. vulgaris and P. mirabilis and the bacterial architecture itself induced structural changes and hindered the ability of C. albicans to form biofilms. Further, we examined the biofilm structures of C. albicans alone or when cultivated with P. vulgaris and P. mirabilis via scanning electron microscopy. The biofilm formed normally only when C. albicans was cultured alone, showing high density and numerous layers (Fig. 5A). In contrast, biofilms that formed when P. vulgaris or P. mirabilis was co-cultured were low-density and thin, with bacteria among the C. albicans cells and a noticeably decreased number of mycelia (Fig. 5). It is clear that both living and dead bacterial particles influenced structural maturation of the biofilm. Thus, P. vulgaris and P. mirabilis suppress the growth of C. albicans and can function as structural obstructive factors to the maturation of Candida biofilms.
The formation of hyphae is essential to C. albicans biofilm formation, so it is also important to understand the genetic basis of the morphological changes in C. albicans. 10,11 The mature biofilm enables the Candida yeast to fix the biofilm onto the extracellular surface, and the hyphae form a cross-sectional structure with structural frames. 27 Inhibition of the hyphae transgenes of C. albicans led to biofilm formation with the basal layer only, whereas inhibition of the yeast transgene of C. albicans led to biofilm formation with only the outer layer among the existing biofilm structures. 11,28 In this study, the expression of hyphae-related genes of C. albicans was significantly inhibited in the presence of live Proteus or by Proteus-cultured supernatant. The expression of both yeast-related and filament formation inhibiting genes in C. albicans was up-regulated by treatment with live Proteus or Proteus-cultured supernatant (Figs. 2 and 4); however, the expression of morphology-related genes was not affected by heat-killed P. vulgaris and P. mirabilis (Fig. 3).
These results suggest that secretory products of P. vulgaris and P. mirabilis regulate the expression of genes that are related to morphologic changes, which could be the crucial factor in C. albicans biofilm formation, inhibiting the transition from the yeast form to the hyphal form. Due to an increase in only the yeast form and the lack of the hyphal form, the C. albicans biofilm would not form a solid 3D structure, but only a thick, basal-layered structure. | 2018-04-03T01:07:57.540Z | 2017-09-28T00:00:00.000 | {
"year": 2017,
"sha1": "946643534ac48c68313856a4dd3c159e38214141",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.3349/ymj.2017.58.6.1135",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "946643534ac48c68313856a4dd3c159e38214141",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
135218575 | pes2o/s2orc | v3-fos-license | Prospective Study About the Influence of Human Mobility in Dengue Transmission in the State of Rio de Janeiro
Dengue is a human arboviral disease transmitted by Aedes mosquitoes, and it is currently a major public health problem, with around 2.5 billion people at risk of infection each year. Climate change and human mobility contribute to increasing the number of cases and to spreading the disease all around the world. In this work, the influence of human mobility is evaluated by analyzing a sequence of correlations of dengue incidence between cities in southeastern Brazil. The methodology initially identifies the cities where the epidemic begins, considered as foci for that epidemic year. The strength of the linear association between all pairs of cities was calculated, identifying the cities which have high correlations with the focus cities. The correlations are also calculated between all pairs considering a time lag of 1, 2 or 3 weeks ahead for all cities except the focus ones. Centred differences of the notification numbers are used to detect the outbreaks. The tests were made with DATASUS-SINAN data of the state of Rio de Janeiro, from January 2008 to December 2013. Preliminary results indicate that the spread of dengue from one city to another can be characterized by the development of the sequence of shifted correlations. The proposal may be useful for designing control strategies against disease transmission.
Introduction
Dengue virus infects about 300 million people worldwide each year, and nearly 90 million of them develop the classic symptoms of the disease, such as fever, headache and nausea. Currently, dengue is endemic in more than 100 countries in Africa, America, Asia and Oceania [9]. In Brazil, the first documented occurrence was in Roraima, in 1981-1982, and the first huge epidemic was in Rio de Janeiro city, in 1986 [17]. The largest outbreak in Brazil occurred in 2013, accounting for around 1.5 million notified cases [13]. Dengue is transmitted primarily by Aedes mosquitoes, particularly Aedes aegypti. The disease manifests in tropical and subtropical areas, where climatic conditions favor the development of eggs into larvae and mosquitoes. Four strains of the virus circulate in Brazil, known as DEN-1, DEN-2, DEN-3 and DEN-4, of the family Flaviviridae, genus Flavivirus [3].
Factors such as population growth, global warming, rural-urban migration, environmental deterioration and poor basic sanitation are some of the causes of the increase in vector-borne infectious diseases [12,22]. Although there is no consensus about the mechanisms of disease persistence, recent studies have suggested that human mobility may be responsible for the emergence and reemergence of some diseases, both directly and indirectly transmitted [1,17,21]. The Chikungunya and Zika outbreaks in Brazil are examples of diseases that have emerged in the country lately; until recently, Chikungunya had only been detected in Africa, East Asia and India [10,15].
Adams and Kappan [1] indicate that the spread of influenza and SARS (Severe Acute Respiratory Syndrome) from national to continental scale has been supported by the growth of the airline transport network. At both global and local scales, there is daily traffic of people who travel for work, tourism, etc. In the case of dengue, many people are asymptomatic, so this scenario may be even more pronounced, because people may spread the disease to other places without even knowing they are infected [1,4,21]. The study developed in [1] highlights the role of human movement in the disease's persistence by establishing a dynamic model on a hypothetical network. The authors observed that an understanding of human mobility can be used to map risk areas and provide targets for intervention and prevention. Stoddard et al. [21] investigated the relevance of human movement associated with vector behavior and how these two factors can increase the risk of exposure to the disease.
In general, most people follow the same daily mobility habits. In this work, we analyze whether the spread of dengue from one city to another can be explained by human mobility. Correlations between all pairs of cities were calculated considering that the onset of the disease in each pair may or may not be synchronized. The methodology is applied to a region composed of the municipalities of Rio de Janeiro State and all the bordering cities of Sao Paulo, Minas Gerais and Espirito Santo.
Materials and Methods
The state of Rio de Janeiro is located in southeastern Brazil and has a total population of 16,627,880 inhabitants [16]. About 96% of its population lives in urban areas. The climate is tropical, with an average temperature of 25 degrees Celsius throughout the year. Rio de Janeiro is one of the most visited places in Brazil, receiving tourists from all over the world during all seasons. These conditions, along with climate change and increasing urbanization, favor mosquito proliferation and the maintenance of the disease [9].
Our analysis is based on data obtained from the database of the Notifiable Diseases Information System (Sistema de Informação de Agravos de Notificação - SINAN), an entity of the Federal Government. The data considered were the weekly dengue case counts from 2008 to 2013 for all cities of Rio de Janeiro and the surrounding cities, totaling 130 cities [19]. The raw data were normalized by the urban population of each city [16].
The incidence considered significant was based on epidemiological alert thresholds defined by the Ministry of Health; therefore, cities with incidence below 300 cases per 100,000 inhabitants were excluded from the study [13].
Defining a period of 52 weeks, the methodology initially identifies the cities that had an outbreak, i.e., the cities in which the number of notifications is equal to or greater than 300 cases per 100,000 inhabitants. Within this subset of cities, a second cut is made, excluding the cities whose total population is less than 50,000 inhabitants. After these two filters, we selected the cities that first reached the incidence of 300 cases and defined them as foci of the infection; a worked sketch of this step is given below. Centred differences of the notification numbers were used to detect the outbreaks.
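The sketch below illustrates the filtering step, under the assumption that the 300-cases-per-100,000 threshold is applied to the cumulative incidence over the 52-week window; the function names and data are hypothetical, not from the authors' code.

```python
# Hypothetical sketch: find the first week a city crosses the alert
# threshold, and compute centred differences of weekly notifications.
import numpy as np

def first_alert_week(weekly_cases, urban_population, threshold=300.0):
    # Cumulative incidence per 100,000 inhabitants, week by week.
    incidence = np.cumsum(weekly_cases) / urban_population * 100_000
    weeks = np.flatnonzero(incidence >= threshold)
    return int(weeks[0]) + 1 if weeks.size else None  # 1-based week, or None

def outbreak_signal(weekly_cases):
    # np.gradient uses centred differences, (y[t+1] - y[t-1]) / 2, in the
    # interior, highlighting the rising edge of an outbreak.
    return np.gradient(np.asarray(weekly_cases, dtype=float))

# Focus cities are those (population >= 50,000) with the smallest
# first_alert_week over the epidemiological year.
```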
The correlations between all pairs of cities were calculated for the whole region with the Pearson coefficient for two cases. Case 1: for all cities, the period of analysis is defined from week 1 to week 52. Case 2: except for the foci, the period for the other cities is defined with a delay of 1, 2 or 3 weeks. A high lagged correlation between two cities (C_j, C_k), j, k = 1, ..., m, where m is the total number of cities being analyzed, suggests that the outbreaks have a time lag of n weeks, which may indicate that the disease migrates from city C_j to city C_k. We define the correlation as significant if its value is greater than 0.8 and the p-value is less than 0.05. Our hypothesis is that dengue spreads from one city to another, and this can be verified by the evolution of the sequence of n-week-lagged correlations (see the sketch below).
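The Case 2 computation can be sketched as follows. The series are hypothetical, and the alignment convention, focus weeks [0, T-n) against the other city's weeks [n, T), is our reading of the shifted-window description above.

```python
# Illustrative sketch of the n-week lagged Pearson correlation (Case 2).
import numpy as np
from scipy.stats import pearsonr

def lagged_correlation(focus_series, other_series, lag):
    """Correlate the focus city's weeks [0, T-lag) with the other city's
    weeks [lag, T); lag = 0 reproduces Case 1 for this pair."""
    focus = np.asarray(focus_series, dtype=float)
    other = np.asarray(other_series, dtype=float)
    if lag:
        focus, other = focus[:-lag], other[lag:]
    return pearsonr(focus, other)  # returns (r, p-value)

# Keep pairs with r > 0.8 and p < 0.05, scanning lags 0 to 3:
# for lag in range(4):
#     r, p = lagged_correlation(focus, other, lag)
#     if r > 0.8 and p < 0.05:
#         ...  # record (focus, other, lag)
```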
Essentially, we aimed to test whether the disease starts at the same time all over the state. We were inspired by the work of Saba et al. [18], who used the correlation between occurrences of dengue cases in cities of the state of Bahia to build a mobility network [11,14]. The authors considered that the existence of a correlation between dengue cases in two cities corresponds to an edge of the graph. The incidence curves show a time lag between the epidemic curves, so we considered it reasonable to test the hypothesis of human mobility in the spread of the disease through the development of correlations.
Results
The year 2008 was chosen for presentation of the results due to the high incidence of reported cases and because it presents a well-defined qualitative behavior compared to other years; however, the whole period from 2008 to 2013 was analyzed. We defined the epidemiological year as the period from January to December, because we assume that the disease is the same in the whole state.
Considering the two filters described in the methodology, among the 130 municipalities analyzed, 18 are part of the metropolitan area of Rio de Janeiro (MARJ), 3 are part of Baixada Litorânea, 3 of Médio Parnaíba, 2 of the northwest region, 1 of the northern region and 1 of Região da Costa Verde. From the incidence data observed for this year, the cities Angra dos Reis, Campos dos Goytacazes, Niterói, Nova Iguaçu, Rio de Janeiro and Seropédica were chosen as foci of the disease because they were the first cities to reach 300 cases per 100,000 inhabitants.

Table 1 shows in the first column the pairs of cities with high correlation and no delay; the second and third columns give the correlations with n weeks of delay, n = 1, 2. Correlation nn', with n' >= n and n, n' = 0, 1, 2, 3, means that the correlation is evaluated between the time series of two cities shifted, respectively, by n and by n' weeks from the first week of the one-year period. If n = 0 and n' > n, the focus cities were fixed in the first position with no delay, and the other cities were shifted by n' weeks, giving Correlation 00, 01, 02 and 03. Intermediate correlations such as Correlation 12, 13 and 23 were calculated in order to try to explain some cases of high correlations between cities that are geographically distant.

Table 1. Pairs of cities in the Rio de Janeiro State with correlation above 0.8. Correlation 00 means the correlation between the cities C_j and C_k without delay; Correlation 01 is the correlation between the cities C_j and C_k with the city C_k shifted by one week; Correlation 12 means the city C_j shifted by one week correlated with the city C_k shifted by two weeks. [Table body not reproduced in this extraction; column headers: Correlation 00, Correlation 01, Correlation 12.]

From Table 1 it is possible to observe that dengue begins simultaneously in the pairs presented in the first column, because high correlation was found with no delay. On the other hand, high correlations with delays of 1 or 2 weeks were found between nearby and distant cities (second and third columns of Table 1) when one of the cities of the pair is a focus.
Cases in which high correlations appear between dengue time series of relatively distant cities could be indicative of the role of human mobility in spreading the disease. According to Farias [6], it is possible to highlight two types of commuting in the state of Rio de Janeiro: daily short-distance, high-frequency flows, mainly associated with trade and manufacturing industries (intra-regional level); and non-daily long-distance, low-frequency flows, associated with the mining and construction industries (inter-regional level).
In fact, such commuting patterns may explain some of the correlations. The state has significant flows primarily concentrated in the metropolitan area of Rio de Janeiro (MARJ) [7], explaining the high correlations, with or without delay, between the cities of the MARJ region (lines 1, 2, 3 and 6 of Table 1).
On the other hand, intercensal analyses from 2000 to 2010 indicate a decentralization of the commuting movement within MARJ. Significant growth of commuting outside MARJ was observed, concentrated mainly between the northern regions and the coast. During the first decade of this century, some urban centers in the state, especially Macaé, expanded their areas of influence to the northern region, in particular to Itaperuna and to Baixada Litorânea. This movement can be observed in the pairs presented in lines 4 and 7 of Table 1.
The correlation between Magé and São Pedro da Aldeia in line 5 may not necessarily be explained by human mobility, since there are few signs of mobility between these two cities. In particular, we see a large migration from São Pedro da Aldeia to Macaé and other cities that make up the OMPETRO region [5]. As is generally known, dengue is influenced by several factors, such as climate, temperature, basic sanitation and public health policy, and in these cases we cannot rule out the hypothesis that the epidemic curves for these two cities are correlated because the events occurred simultaneously but in an isolated manner. For such cases, we consider that mobility is not responsible for the high correlation.
In addition, for a more complete analysis, we also calculated the correlations including those cities with urban populations between 10,000 and 50,000 that also reached more than 300 cases per 100,000 inhabitants in 2008. About 66 cities were selected; the focus was Cantagalo, which reached more than 300 cases in the fifth epidemiological week.
Cantagalo is characterized as an independent pole and correlates with other cities of the mountainous region in the north and northwest parts of Rio de Janeiro State [2]. Correlations with Porciúncula and Santo Antônio de Padua were obtained with delays of one and two weeks, as shown in Table 1, line 8. Although the 2010 census data indicate considerable migration between metropolitan cities and the mountainous northwestern region of Rio de Janeiro, we did not find sufficient evidence to suggest that human mobility is responsible for this association.

In Fig. 1 are presented the correlations obtained in line 1 of Table 1. Nova Iguaçu was chosen as focus because this city was the first one to achieve the 300 cases (tenth epidemiological week). Nova Iguaçu and Niterói have high correlation with no shift (line 1, column 2); Itaboraí has higher correlation with Niterói with a shift of 1 week; and finally Cachoeiras de Macacu has higher correlation with Itaboraí with a shift of 2 weeks.
In Fig. 2 are presented the correlations obtained in line 5 of Table 1. Niterói was chosen as focus because this city was the first one to achieve the 300 cases (tenth epidemiological week). Niterói and Duque de Caxias have high correlation with no shift (line 5, column 2); Magé has higher correlation with Duque de Caxias with a shift of 1 week and finally São Pedro da Aldeia has higher correlation with Magé with a shift of 2 weeks.
In Fig. 3 are presented the correlations obtained in line 3 of Table 1. Rio de Janeiro was chosen as focus because this city was the first one to achieve the 300 cases (tenth epidemiological week). Rio de Janeiro and Duque de Caxias have high correlation with no shift (line 3, column 2); Magé has higher correlation with Duque de Caxias with a shift of 1 week.

In Fig. 4 are presented the correlations obtained in line 8 of Table 1. Cantagalo was chosen as focus because this city was the first one to achieve the 300 cases (fifth epidemiological week). Cantagalo and Cordeiro have high correlation with no shift (line 8, column 2); Santo Antônio de Padua has higher correlation with Cordeiro with a shift of 1 week; and finally Porciúncula has higher correlation with Santo Antônio de Padua with a shift of 2 weeks.
Conclusions
The hypothesis of an association between the occurrences of dengue cases in different cities of the state of Rio de Janeiro and surrounding areas was tested. The proposed methodology identified significant correlations between cities without delay, suggesting that the dengue epidemic occurred simultaneously in both cities, while correlations with delay may provide evidence that the mobility of people is responsible for the spread of the disease among the regions of the state.
Using the proposed methodology, we identified the cities Nova Iguaçu, Niterói, Rio de Janeiro, Seropédica, Campos dos Goytacazes and Cantagalo as foci of the disease in the year 2008. We then calculated the correlations with n-week delays, n = 0, 1, 2, 3, between the focus cities and the other selected cities. We were able to justify part of the significant correlations between various cities through the pendular (commuting) mobility among regions of the state. The correlations that we cannot explain could correspond to independent events or characterize a diffusive process.
This information could provide an efficient control framework to guide health authorities in decision-making. Having verified that dengue does not emerge at the same time in the whole state, and that there are cities with potential for further spread (due to the concentration of industrial activities, commerce, tourism, etc.), the control services could concentrate resources more efficiently in cities that are potential sources of spread.
Based on the identification of the propagation cascade of dengue from the focus into the other municipalities, the next step is the construction of a topological network, representing these spread dynamics coupled with human mobility data. | 2019-04-27T13:09:44.170Z | 2016-06-14T00:00:00.000 | {
"year": 2018,
"sha1": "2e0f495362dd6e06d01b7f5edfd097c2d10dfabd",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "344a1674715bed2a1a4eb19b643d54b0cbe5bb99",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Geography"
]
} |
14886112 | pes2o/s2orc | v3-fos-license | The costs of integrated community case management (iCCM) programs: A multi–country analysis
Background Integrated community case management (iCCM) can be an effective strategy for expanding the provision of diarrhea, pneumonia, and malaria services to children under 5 years old but there are concerns in some countries about the corresponding cost and impact. This paper presents and compares findings from a multi–country analysis of iCCM program costs. Methods Data on coverage, utilization, and costs were collected as part of two sets of studies conducted between 2011 and 2013 for iCCM programs in seven sub–Saharan African countries: Cameroon, the Democratic Republic of the Congo, Malawi, Senegal, Sierra Leone, South Sudan and Zambia. The data were used to compare some elements of program performance as well as costs per capita and costs per service (which are key indicators of resource allocation and efficiency). Results Among the seven countries, iCCM utilization ranged from a total of 0.26 to 3.05 contacts per capita (children 2–59 months) per year for the diseases treated, representing a range of 2.7% to 36.7% of the expected numbers of cases. The total recurrent cost per treatment ranged from US$ 2.44 to US$ 13.71 for diarrhea; from US$ 2.17 to US$ 17.54 for malaria (excluding rapid diagnostic testing); and from US$ 1.70 to US$ 12.94 for pneumonia. In some of the country programs, the utilization of iCCM services was quite low and this, together with significant fixed costs, particularly for management and supervision, resulted in services being quite costly. Given the differences across the countries and programs, however, these results should be treated as indicative and not definitive. Conclusion A comprehensive understanding of iCCM program costs and results can help countries obtain resources and use them efficiently. To be cost–effective and affordable, iCCM programs must be well–utilized while program management and supervision should be organized to minimize costs and ensure quality of care. iCCM programs will not always be low–cost, however, particularly in small, remote villages where supervision and supply challenges are greater. Further research is needed to determine the cost–effectiveness of iCCM programs and corresponding patient and service delivery costs.
Due to limited access to effective treatment, diarrhea, malaria and pneumonia remain the leading causes of child mortality in sub-Saharan Africa and result in nearly 41% of global deaths in children under five years old [1]. To improve access to treatment of these illnesses, several developing countries have adopted integrated community case management (iCCM) -the delivery of timely interventions at the community level by community health workers (CHWs). This is seen as a key strategy in meeting Millennium Development Goal 4 on reducing child mortality by 2015.
To be effective, iCCM services must be available from a single provider ("one-stop shopping") within 24 hours of the onset of symptoms. For example, if a child has a fever, the parent should be able to see a CHW in his or her community within 24 hours and the CHW should be able to provide diagnosis and treatment if the case is simple and refer the case if it is not. The integration of these services is important -there is growing evidence that this increases the utilization of malaria and pneumonia treatment [2][3][4] compared with separate community-based interventions, and also delivers more timely and appropriate treatment for fever, including malaria. Easy access is also crucial and the availability of iCCM services is especially important in hard-to-reach areas where people live far from health facilities.
Despite the reported success of iCCM in several low-and middle-income countries, it has yet to be implemented as a national strategy in some other countries. This is partly due to concerns about the costs and financing of iCCM programs and the justification of the extra investment in terms of the related health outcomes. A comprehensive understanding of the costs and results will help countries who are considering implementing or expanding iCCM programs to advocate for funding and to plan and budget appropriately. It will also allow for costs to be better monitored and controlled, thus contributing to the efficient use of scarce resources. This paper describes and compares the results of iCCM cost analyses conducted under two separate sets of studies in seven sub-Saharan African countries.
METHODS
The cost analyses were conducted between 2011 and 2013. The first two studies were conducted of national programs in Malawi and Senegal in 2011 and 2012 as part of the testing of an iCCM costing and financing tool under the United States Agency for International Development (USAID) Translating Research into Action Project, and these countries were selected because they have mature iCCM programs and sufficient data. A third study conducted in Rwanda was excluded because data were not comparable. The second set of five studies was conducted of sub-national programs in Cameroon, Democratic Republic of Congo (DRC), Sierra Leone, South Sudan, and Zambia in 2013 with funding from the Bill and Melinda Gates Foundation (BMGF). These five countries were selected by BMGF to estimate the costs of five iCCM projects funded by another international donor. The areas where these five studies were conducted were, reportedly, based on need and feasibility, and all included areas with hard-to-reach populations.
Data were obtained from records and through interviews with the CHWs who provided iCCM services, their supervisors, and program managers. A standard questionnaire was used for the interviews. The samples of districts, health centers, and communities were selected purposefully in terms of access and availability of health facility staff and CHWs. Time limitations and access constraints, such as poor road conditions, meant that some samples may not have included health facilities and CHWs from very remote areas. The samples were relatively small but were sufficient to validate service delivery protocols and to collect data on the work and supervision of the CHWs. An average of 12 health centers were visited in each country, and interviews were conducted with an average of three CHWs per health center, totaling approximately 36 CHWs in each country.
The costs were analyzed using the USAID iCCM Costing and Financing Tool (the tool is available at www.msh.org/ iccm and is described in detail in the individual country studies). At the service delivery level, this is a bottom-up, activity-based costing tool, in which standard costs are used to estimate total direct costs per service. Indirect costs, such as supervision and training, are then allocated based on CHW time estimates using a top-down methodology. The resulting figures are a mixture of standard and actual costs, obtained from accounting and budget records and through interviews, in what is sometimes known as an "ingredients" approach [5]. The costs shown were generally total costs incurred by both governments and NGOs and financed from government and donor sources.
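As a rough illustration of this mixed bottom-up/top-down logic (not the tool itself, and with entirely hypothetical figures), the recurrent cost per treatment can be thought of as direct per-case costs plus an allocated share of fixed costs such as supervision and training:

```python
# Hypothetical sketch of the "ingredients" costing logic described above.
def cost_per_treatment(drug_cost, supplies_cost, chw_time_cost,
                       shared_costs, chw_iccm_time_share, n_treatments):
    direct = drug_cost + supplies_cost + chw_time_cost      # per treatment
    allocated = shared_costs * chw_iccm_time_share / n_treatments
    return direct + allocated

# e.g. US$0.40 drugs + US$0.10 supplies + US$0.30 CHW time per case, with
# US$50,000 of supervision/training allocated at a 20% iCCM time share
# across 5,000 treatments per year:
print(f"US$ {cost_per_treatment(0.40, 0.10, 0.30, 50_000, 0.20, 5_000):.2f}")
# -> US$ 2.80; a smaller n_treatments inflates the allocated fixed cost.
```

This arithmetic also illustrates the observation, noted in the abstract, that low utilization combined with significant fixed management and supervision costs can make individual treatments quite costly.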
The data collection and initial analysis took an average of three weeks in-country involving a small team of experienced data collectors and one experienced health economist. The final analysis, report-writing and validation for each study took an additional two weeks.
Country studies
All seven iCCM programs varied in terms of study period, population density and coverage, incidence rates, hard-toreach populations, the nature of the implementing organization (government or NGO), supervision, supply chain, CHW remuneration, user fees, and other aspects. These many differences limit the usefulness of direct comparisons of the findings, but add richness to the analysis. More information on the country studies can be found in the individual country reports [6][7][8][9][10][11][12][13] -brief descriptions are below: Malawi' s national iCCM activities began in 10 districts in 2008 and, with support from donors, were scaled-up throughout the country by 2010. CHWs were remunerated by the government and were expected to spend two days per week on iCCM activities at village clinics and to participate in active case finding through household visits. iCCM services comprised the treatment of malaria, diarrhea, pneumonia, and red eye as well as the identification and referral of anemia and malnutrition. The costing study was based on a sample of iCCM services in the 2328 hardto-reach communities covered by iCCM.
Senegal's iCCM program started in 2003. Services were provided through USAID's Community Health Program which covered the whole country in collaboration with the Ministry of Public Health (MoPH). Services were provided at health huts in the communities and were meant to cover rural, remote areas that did not have health posts (the lowest level of facility). The iCCM service package included rapid diagnostic tests (RDTs) and malaria treatment, and diarrhea and pneumonia treatment. User fees were charged and patients were supposed to purchase the medicines; the prices included a mark-up of 5% to 25%. The funds were intended to be used to replenish stocks and to cover other costs.
In Cameroon, a local NGO, in collaboration with the MoPH, began implementing an iCCM project in 2009 in two remote districts -Nguelemendouka and Doumé. Through this project, volunteer CHWs provided free treatment to children between the ages of 2 to 59 months for cases of malaria and diarrhea. Treatment for pneumonia was added in Nguelemendouka District in 2013.
In the Democratic Republic of the Congo (DRC), implementation of a national iCCM program began in 2005 under the leadership of the MoPH. In 2010, iCCM services were expanded to include family planning services, and the MoPH mandated that malaria treatment be integrated at community sites, along with pneumonia and diarrhea treatment services. The focus of the costing study was the iCCM component of a project in 9 of the 16 health zones of the Sud-Ubangi District in Equateur Province. The project started in 2010 and was implemented by a local NGO in coordination with the MoPH.
In Sierra Leone, an international NGO led an iCCM project which began in Kono district in May 2006. Unpaid Community Health Volunteers provided free treatment to children ages 2 to 59 months for malaria, diarrhea, and pneumonia. Starting in September 2013, the plan was to expand the role of the CHW to include the delivery of community-based maternal and newborn health care interventions.
In South Sudan, an international NGO began implementation of an iCCM program in 2009 in hard-to-reach areas in five states and ten counties. These include Kapoeta North County which was selected for the costing study. Unpaid Community-Based Distributors provided free treatment to children ages 2 to 59 months for cases of malaria, diarrhea, and pneumonia.
In Zambia, volunteer CHWs began conducting iCCM activities in four districts of Luapula Province in late 2010, then scaled up to all seven districts in 2012, serving a population of 741 373 in remote communities. The iCCM package included RDTs and treatment of pneumonia, diarrhea and malaria, and was implemented in areas where access to health facilities and services was limited. The program had a demand generation element, including having CHWs conduct behavior change communication activities. The project was managed by an international NGO working closely with the Ministry of Health (MoH) as part of a national iCCM program being implemented by the MoH across the country, although this project may not have been completely representative of the overall MoH program.
A summary of key elements of the iCCM programs studied is shown in Table 1.
Coverage and utilization
The package of iCCM services varied across the programs, with only six of the seven covering the three illnesses in an integrated way (Table 2). In Cameroon pneumonia treatment was not part of the package at the time of the study.
In some cases, more services were included; for example, treatment of red eye and anemia in Malawi. Malaria treatment was provided symptomatically for fever in Malawi, Cameroon, DRC, Sierra Leone and South Sudan, whereas RDTs were used to detect malaria in Senegal and Zambia. Based on estimates of incidence, the expected number of total annual episodes of illness per child (2-59 months) in the programs where the three main diseases were covered ranged from 5.3 in Malawi to 9.6 in Senegal (where fever was included in the episode counts, the malaria episodes were excluded to avoid double-counting).
The catchment areas comprised hard-to-reach communities, and it was assumed that there was no access to health facilities or other qualified service providers and all cases should, therefore, have been seen by CHWs. This may have resulted in an overestimate in terms of expected numbers of diarrhea cases to be treated by CHWs, since home treatment using oral rehydration therapy has been taught and promoted in some communities for many years.
The average total numbers of services provided per child per year ranged from 0.26 in Senegal to 3.05 in Sierra Leone and, as percentages of the expected numbers of cases in the hard-to-reach areas, ranged from 2.7% to 36.7% (also in Senegal and Sierra Leone). These comparisons should be treated as indicative, as estimating the catchment populations in the hard-to-reach areas was difficult. A major difference was the treatment of malaria (diagnosed or presumptive) which accounted for higher numbers of treatments in Zambia, Sierra Leone and, to some degree, in the DRC. Numbers of referrals were only available in DRC, South Sudan and Zambia and amounted to 0.11, 0.01, and 0.38 per child per year, respectively. In South Sudan and Zambia these figures translate to about 1% and 14% of total cases, respectively. A rule of thumb used by some providers is that around 10% of cases need to be referred - a referral rate that is too low may indicate that the provider is treating too many severe cases, whereas one that is too high may indicate a lack of medicines or supplies or a lack of confidence.
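A small calculation illustrates how such utilization percentages are derived; the catchment size and service rate below are invented, and only the 5.3 expected episodes per child per year is taken from the Malawi estimate above.

```python
# Hypothetical utilization estimate for a hard-to-reach catchment area.
children = 10_000                # catchment population aged 2-59 months
episodes_per_child_year = 5.3    # expected annual episodes (Malawi estimate)
services_per_child_year = 1.2    # services actually provided (invented)

expected_cases = children * episodes_per_child_year
treated_cases = children * services_per_child_year

print(f"Utilization: {treated_cases / expected_cases:.1%} of expected cases")
# -> Utilization: 22.6% of expected cases
```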
Non-recurrent costs
The costs of starting an iCCM program can include the development of plans, policies, guidelines and training materials - most of which are generally financed by the national government and/or partners. For this study, we only took into account the training and equipping of CHWs (and in some cases, of their supervisors). All costs for these activities were included irrespective of who incurred or funded them. If the training included more health topics than iCCM, we only included the proportion related to iCCM. The start-up costs for training and equipment were mostly in the range from US$ 202 to US$ 352 per CHW, with Malawi and Zambia being outliers at US$ 1058 and US$ 897, respectively (these costs are in 2012 US$, representing the cost of training and equipment if it were provided in 2012). In Malawi, costs were higher because they included a portion of general CHW training and an incentive payment. In Zambia costs were higher because they included training-of-trainers and supervisors, the training was longer than in the other countries, and per diem rates were high relative to those in other countries.
[Table notes: DRC - Democratic Republic of the Congo; RDT - rapid diagnostic test; NA - not available. *We did not include the treatment of non-malaria fever as a separate service, although in some cases fever-reduction medication such as paracetamol is provided and there is, therefore, a cost. †Sub-Saharan Africa incidence rates were used for all three services in Malawi [14][15][16] and for malaria and pneumonia in Rwanda [14,15].]
The reported annual CHW attrition rates varied significantly across the study sites, ranging from 2% in Malawi (where they are remunerated) to an estimated 10% in South Sudan (the DRC figure of 40% was an unofficial estimate and may not be reliable). The costs of training and equipping replacement CHWs can be significant, as shown above, with most costs in the range of US$ 202 to US$ 352 per provider.
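The budget impact of attrition follows directly from these figures; the sketch below combines a hypothetical program size with an attrition rate and a replacement cost drawn from the ranges quoted above.

```python
# Hypothetical annual cost of replacing CHWs lost to attrition.
active_chws = 500              # CHWs in the program (invented)
attrition_rate = 0.10          # annual attrition (upper range above)
cost_per_replacement = 300.0   # US$ training + equipment (mid-range above)

annual_replacement_cost = active_chws * attrition_rate * cost_per_replacement
print(f"Annual replacement cost: US$ {annual_replacement_cost:,.0f}")
# -> US$ 15,000 per year, often not budgeted
```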
While it is possible to amortize non-recurrent costs over the expected period of use and include them with recurrent costs, this was not done here as it would have been difficult to estimate certain aspects, such as the length of the use of equipment or how long a CHW will work after the initial training.
Recurrent costs
A comparison of recurrent costs can provide meaningful perspectives on the resourcing, equity, and efficiency of service provision and support, and can provide input into cost-effectiveness analyses, involving comparisons of costs per output or outcome. Recurrent costs are those repeated on an ongoing basis and, in this study, include medicines and supplies, management, supervision, and refresher training. These costs are expressed in two ways: per capita and per service. The costs of the training and equipping of replacement CHWs were not included in recurrent costs in these studies although it would be reasonable to do so.
Per capita recurrent costs are calculated here by dividing the total recurrent cost by the number of children in the catchment population. With the exception of the costs of medicines and supplies (which represent estimates of the quantities consumed), these figures represent the iCCM resources made available to the catchment populations. Per service recurrent costs, on the other hand, are calculated by dividing the total recurrent cost by the number of services provided. These figures represent the iCCM resources that should have been used in providing a single service. The ratio between per-capita and per-service costs is the same as the rate at which services are used per capita.
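A minimal sketch with invented figures makes the relationship between the two unit-cost views explicit; the ratio of per-capita to per-service cost recovers the per-capita utilization rate, as stated above.

```python
# Two unit-cost views of the same recurrent cost total (invented figures).
total_recurrent_cost = 50_000.0   # US$ per year
children = 10_000                 # catchment population, 2-59 months
services = 15_000                 # services provided per year

per_capita = total_recurrent_cost / children    # resources made available
per_service = total_recurrent_cost / services   # resources per service given

# The ratio of the two equals the number of services used per capita.
assert abs(per_capita / per_service - services / children) < 1e-9
print(f"Per capita: US$ {per_capita:.2f}; per service: US$ {per_service:.2f}")
```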
Medicines and supplies are variable costs which change based on the numbers of services provided. Provider remuneration, management, supervision, refresher trainings and other similar costs are generally fixed and do not vary with the number of services provided. It is important to note that the average total costs of medicines and supplies vary with the mix of services provided as well as with the unit costs of the various medicines. So if a greater proportion of services with higher-cost medicines is used, the average costs across all services will be higher.
The average total recurrent costs per capita (children aged 2-59 months) were much lower for the two national programs (Malawi and Senegal), at US$ 2.16 and US$ 2.07, respectively, than for the four sub-national programs with the complete iCCM package (DRC, Sierra Leone, South Sudan and Zambia), which ranged from US$ 5.50 to US$ 10.20 (Table 3). This is largely due to economies of scale in the national programs, where the fixed costs, especially of management and supervision, are spread across much higher catchment populations. As noted previously, however, caution should be used in comparing costs across the countries since there were many contextual differences.
The average total recurrent cost per service ranged from US$ 2.15 in Malawi to US$ 8.99 in South Sudan. Cameroon was an outlier at US$ 16.11 due to the high level of management and supervision costs combined with low utilization rates, taking into account that pneumonia treatment was not part of the iCCM package at the time of the study. There was no major difference between the national and sub-national costs per service. In general, lower costs per service were related to higher utilization levels combined with lower management and supervision costs. Differences in case mix did not seem to be major factors since the unit cost per disease followed a similar pattern.
The average cost per service for medicines and supplies ranged from US$ 0.34 in the DRC to US$ 0.65 in Zambia.
Costs were higher in Zambia because RDTs were included. The mix of services and purchasing prices of medicines were different in each country so these figures are not directly comparable.
The iCCM portion of the salary payments to the CHWs in Malawi was significant at US$ 1.40 on average per service. We did not collect information on the user fees charged by the CHWs in Senegal, which is also a form of remuneration. CHWs were not formally remunerated in any of the 5 sub-national projects.
Management and supervision costs ranged from 72% to 85% of total recurrent costs among the sub-national programs compared with 3% for the national program in Malawi. The Malawi figure was proportionally lower because 65% of the total cost went to CHW remuneration. The Senegal figure of 79% was much higher than that of the Malawi program because it was managed through a donor-funded project. A key factor in the high cost of supervision in South Sudan was that the implementing NGO had to supervise the CHWs and that this was done from central levels, as it could not be done from the health facilities. It is important to note that the variations in the way these management and supervision costs were captured, calculated, and reported mean that these comparisons across the programs are indicative and not definitive.
The costs of CHW meetings ranged from US$ 0.15 per capita in Malawi to US$ 0.84 in Cameroon (no separate cost was recorded for Sierra Leone). These costs depended mainly on the frequency of meetings, per diem rates, and amounts reimbursed to CHWs for transport. Refresher training was sometimes provided as part of the routine supervision system or through meetings; in other programs it was provided as a separate dedicated training activity. In the programs where it was a separate activity, the average cost per capita ranged from US$ 0.09 to US$ 0.75.
Recurrent costs can be more meaningfully compared by type of service (eg, malaria) since the average total costs across all services are affected significantly by variations in service mix. Diarrhea treatment was the only service provided in all the studies and the recurrent cost ranged from US$ 2.44 per service in Malawi to US$ 7.80 in South Sudan (Table 4). Pneumonia diagnosis and treatment costs ranged from US$ 1.70 in Malawi to US$ 12.94 in South Sudan. And the cost of presumptive malaria treatment ranged from US$ 2.17 in the DRC to US$ 7.10 in South Sudan. Cameroon was an outlier in these measurements because of the high support costs and low utilization level described earlier. It is important to note that the costs of treating presumptive malaria in some countries cannot be compared with the costs of testing and treating malaria in others because of the contextual differences among the countries.
The cost per type of iCCM service depends on two important factors - the CHW's time and the cost of medicines and supplies. The time that CHWs spend on providing each type of service was used to allocate their remuneration (if they were paid) and the indirect costs (eg, supervision) across the service types. The estimates of time used in the studies were obtained through CHW interviews and ranged widely, for example, from 26 minutes to 91 minutes for pneumonia diagnosis and treatment in Malawi and Sierra Leone, respectively.
Efficiency
An important ratio for measuring the efficiency of iCCM is the average number of services provided per CHW. This is influenced primarily by the availability of the CHW and the demand for services. In terms of availability, most providers are volunteers who also have to perform income-generating activities (eg, farming or animal husbandry), and many also provide other voluntary health services. Factors influencing the demand for services include the size of the catchment population, the incidence of the illnesses, the distance from a person's home to the place where the CHW is based, perceptions of quality of care, and the availability of medicines.
The average catchment population of children (aged 2-59 months) per CHW differed considerably, with CHWs in Sierra Leone covering an average of 38 children and CHWs in Malawi covering an average of 454 children. The average number of cases seen by a CHW ranged from 0.5 per week in Senegal to 8.2 per week in Zambia. In two of the seven countries less than 1 case was seen per week, on average, which raises concerns about the ability of providers to maintain their skill and highlights the importance of hands-on supervision and refresher trainings. For CHWs to maintain their skills, they should probably see at least 10 cases per month in total, including 1 or 2 pneumonia cases, and they should have good supportive supervision where skills can be regularly assessed [17]. The estimated time spent providing iCCM services ranged from 2% to 85% of the total time the CHWs said they were available for iCCM services. In some cases, this probably reflects a high degree of over-estimation of available time reported by CHWs. Reported levels of attrition ranged from 2% to 10% (the rate of 40% reported in the DRC may not be reliable), and the higher rates are a concern since experienced, skilled providers may be lost and the cost of training and equipping replacements can be high.
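The workload and time-share figures above follow from a simple calculation; the sketch below reproduces that arithmetic with hypothetical inputs rather than data from any of the seven programs.

```python
# Back-of-the-envelope CHW workload estimate (all inputs hypothetical).
children_per_chw = 200          # catchment children per CHW
episodes_per_child_year = 6.0   # expected annual episodes of illness
utilization = 0.20              # share of episodes actually seen by the CHW

cases_per_week = children_per_chw * episodes_per_child_year * utilization / 52
print(f"Cases per CHW per week: {cases_per_week:.1f}")   # ~4.6

# Share of reported available time spent on iCCM consultations.
minutes_per_case = 30            # average consultation time (interview-based)
available_hours_per_week = 16    # time the CHW reports being available
time_share = cases_per_week * minutes_per_case / (available_hours_per_week * 60)
print(f"Time on iCCM: {time_share:.0%} of reported availability")  # ~14%
```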
The CHWs were remunerated in the two national programs but not in the five sub-national programs and it is notable that the attrition levels were lower in the national programs. There does not, however, seem to be a relationship between CHW remuneration and the numbers of iCCM cases seen, and a deeper analysis would be needed to explore this due to the contextual factors.
User fees were charged to patients in one of the national programs (Senegal) and the average numbers of iCCM cases seen were low. Again, a deeper analysis would be needed to try to determine if there was a relationship between user fees and utilization levels.
Additional studies
Other studies have been conducted which add value to the discussion of CHW costs. In particular, Perry et al [18] provide an overview of community health workers that examines different CHW models and the accompanying evidence of their effectiveness in achieving improved health for communities. Additional insights into the challenges of scaling up iCCM have been provided by Oliver et al [19]. And a study by Seidenberg et al looks at the impact of iCCM on health-seeking behavior in Zambia, in which one of the findings was that iCCM can reduce workload at primary health centers [20]. Further information is expected when the South African Medical Research Council publishes the results of a UNICEF-funded study of iCCM program costs in 6 African countries.
Studies have also examined how patient costs, such as transport and user fees, can restrict or delay access to health services and can negatively impact a poor family's financial situation, as well as indicating how iCCM can alleviate this economic burden by bringing services closer to the family. A study in Uganda showed, for example, that community treatment of malaria and pneumonia resulted in significant cost-saving for rural, poor communities, who would otherwise lose productive time travelling to health facilities [21]. And a study in Pakistan showed that community-based management of severe pneumonia can reduce both provider and patient costs while also improving case seeking and treatment compliance [22].
DISCUSSION
To have maximum impact on child health and mortality, iCCM services should be available within 24 hours of the onset of illness and from a single provider ("one-stop shopping"). This is especially important if there are co-morbidities. If the case is complex or severe, the CHW should be able to refer the case to the nearest health facility and help arrange transport, if necessary.
It is clear that effective iCCM can reduce morbidity and save lives but for the services to be widely accepted and implemented by governments, they must also be affordable and cost-effective. Based on this analysis, there are two main factors that affect this: utilization of services, and management and supervision costs.
The analysis shows that low utilization of iCCM services contributes significantly to high unit costs per service, as fixed supervision and management costs have to be absorbed by fewer cases. The results indicate that iCCM services may have been under-utilized in several of the programs, with less than 20% of the expected number of episodes of illness seen by CHWs and, in some cases, CHWs seeing less than one case per week. Low utilization can also be an issue in terms of quality of care because a provider should see sufficient cases per month to build and retain the necessary experience and skills.
Utilization depends partly on the number and types of service included in the iCCM package. In Cameroon, for example, where pneumonia was not treated as part of iCCM in the year of the study, overall utilization was low and this contributed to a higher average unit cost per service. On the other hand, including other services, such as treatment of red eye in Malawi, increased utilization and contributed to the lower average unit cost per service. The degree to which other services can or should be added is, however, an important topic that is beyond the scope of this paper.
Utilization is affected by several other factors such as the size of the catchment population, incidence of illness, CHW access and availability, perceptions of medicine supply and quality of care and perhaps, in some cases, user fees as well. In some of the programs, utilization was low because the catchment populations were small, for example in Sierra Leone where each CHW only covered an average of 38 children. Also, incidence rates were lower in some program areas, such as in Malawi, with 5.3 expected episodes per child per year, compared with 9.6 in Senegal.
The availability of the CHWs does not appear to have been a reason for low utilization, since less than 20% of the reported available time was used for iCCM in 6 of the 7 programs. However, it appears likely that medicine stock-outs have been a factor since this was reported as a problem in several of the studies. The studies did not seek to determine if user fees had an impact on utilization in Senegal, which was the only program that had them, and the results did not indicate any obvious relationship.
The lack of maturity of programs was a possible factor in the low utilization levels seen in three of the sub-national programs that had been running for three years or less. At the 2014 iCCM Symposium in Accra, Ghana, it was noted that it can take at least 3 years before an iCCM program reaches maturity in terms of utilization of services and it may take at least 12 months of implementation at scale (with greater than 80% of CHWs trained) to have higher utilization [3]. Building confidence in the availability and quality of iCCM services can take time, but it seems that active promotion and behavior change activities, including the close involvement of community leaders, can increase demand faster, as has reportedly been the case in the Zambia program, which achieved quite high utilization in less than three years.
As noted above, the other key factor in terms of iCCM program costs is management and supervision. In the five sub-national programs, this was over 70% of the total recurrent costs; and even though it was much lower in the Malawi national program, it was 79% in Senegal, where the national program was run by an international organization. It is understandable that the costs of setting up and running pilot projects can be relatively high, and even more so if they are run by local or international organizations. These costs should become much lower in relative terms if the programs are scaled up and taken over by the government. Nevertheless, all programs should aim to minimize these costs while maintaining good support for the CHWs so that the availability and quality of iCCM is optimal. Costs can be minimized, for example, by integrating supervision (eg, covering all community health services, not just iCCM), by combining supervision with outreach visits where additional curative services are provided by the supervisor during the visit, and by using local peer supervisors to supplement professional supervisors.
Another key program cost relates to replacing CHWs who stop working. The cost of training and equipping new CHWs can be significant and is often not budgeted. Moreover, the loss of experienced, knowledgeable CHWs can affect the performance of the program. Attrition rates were 5% or more in all of the programs where CHWs were not remunerated, compared with 2% in Malawi where they were remunerated.
In terms of the impact of remunerating CHWs, there was not enough information to assess whether the additional costs were outweighed by increased utilization and reduced attrition, but that is a possibility that is worth exploring in other studies.
The additional costs of iCCM may be offset to some degree by savings elsewhere in the health system. As mentioned previously, there is some evidence that iCCM can reduce workload at primary health centers, and cost savings should also be achieved by treating cases before they become severe. In addition, there is evidence that iCCM results in savings to families with sick children. Unfortunately, it was not possible to investigate these possibilities in the costing studies.
Finally, it is important to note that iCCM programs were sometimes established as a transition strategy to save lives because primary care facility services were weak. Where this is the case, it should be accepted that iCCM services will be costly and may be unsustainable until primary health facilities are fully functional, taking into account that they need to provide the supervision and support (eg, supplies of medicines) and serve as reliable referral units for severe cases. In small, hard-to-reach communities, however, iCCM will probably be the most cost-effective way to provide services in the long term, even if they are costly.
Limitations
There were a number of limitations to the studies that could have affected the results and which necessitate caution in interpreting and comparing them. The most significant overall limitation is that the studies were carried out at different time periods in seven very different countries which were selected purposively for reasons other than cross-country comparisons. Other limitations include the following. Some of the sub-national programs only started in 2010, and the use of data from 2011 and 2012 to measure costs and efficiency may be premature as it can sometimes take 3 years before programs reach maturity in terms of utilization of services. The samples of facilities and communities were relatively small and were limited in terms of remote communities. Recurrent costs may be underestimated because of lack of complete information on services provided, such as follow-up visits, numbers of referrals, and treatment of fever which is not diagnosed as malaria or pneumonia. Costs do not include the removal of bottlenecks and other health system strengthening activities or economic costs, such as the value of a voluntary CHW's time, family out-of-pocket costs or income losses due to treatment seeking. Finally, the measurement of costs and efficiency depends significantly on CHWs' estimates of time available and time needed for services, and some inaccuracy in these estimates is likely.
Funding: The original studies on which this paper is based were conducted with funding from the USAID TRAction Project and from the Bill and Melinda Gates Foundation. The preparation of this manuscript was supported through the USAID TRAction Project.
Ethics approval: Ethical clearance was obtained where needed for the studies on which this research is based.
Authorship declaration: DC wrote the manuscript; ZJ, CG and US critically reviewed the manuscript for intellectual content. ZJ led the individual country studies, conducted the analysis and led the writing of those studies. All authors read and approved the final manuscript. DC is the guarantor of the paper.
Competing interests: All authors have completed the Unified Competing Interest form at www.icmje. org/coi_disclosure.pdf (available on request from the corresponding author). The authors declare no support from any organization for the submitted work; no financial relationships with other organizations that might have an interest in the submitted work in the previous 3 years, and no other relationships or activities that could appear to have influenced the submitted work.
Recommendations
While there are a growing number of studies on iCCM costs, additional analyses are needed to assess the cost-effectiveness of iCCM. Such analyses are important in making a stronger case that iCCM is a worthwhile investment, while simultaneously helping to determine the most affordable ways to provide quality services. There is a need to look at the role of iCCM within the primary health care system, not as an alternative to facility-based or other community services, but as an effective way of providing treatment for key childhood illnesses in hard-to-reach communities. It is important to take into account patient financial and economic costs as well as service provision costs, and to include factors such as timeliness, quality and appropriateness of treatment. There is also a need to look at the costs of removing bottlenecks, including the costs of improving medicine supply and demand generation, as well as the impact of CHW remuneration and the impact of charging patients for services. Supervision and management can be a costly element of iCCM, and the cost-effectiveness of strategies to minimize these costs should be explored. Analysis of financing and financial sustainability is also needed, including the use of medicine sales to patients as a way of financing supplies. Finally, system improvements are generally needed to ensure the availability of routine iCCM and CHW service data, which is necessary for in-depth analysis and performance monitoring.
CONCLUSIONS
The results of this analysis show that in order to be cost-effective and affordable, iCCM programs must be well-utilized, and management and supervision must be organized in a way that minimizes cost while ensuring quality of care. This requires the removal of bottlenecks, such as medicine stock-outs, and of any barriers to access. It also requires activities that encourage the utilization of iCCM services, such as the promotion of those services and the involvement of community leaders. To minimize the costs of iCCM management and supervision it is important that they are an integral part of the routine systems under which, for example, a supervision visit to a community covers multiple health services, not just iCCM.
In some cases, however, it must be accepted that iCCM will not be low-cost even if the CHWs are volunteers. For example, a sub-national iCCM program that is established by an NGO to save lives because facility-based services are weak is likely to be relatively costly until the health system is strengthened. And in the case of small, remote villages, while iCCM is likely to be the most cost-effective way to provide services, it may never be low-cost because of the supervision and supply challenges. | 2018-05-08T17:45:36.068Z | 2014-12-01T00:00:00.000 | {
"year": 2014,
"sha1": "753391328364a660549a7d99832ecdac4aaee86c",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.7189/jogh.04.020407",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "753391328364a660549a7d99832ecdac4aaee86c",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": []
} |
203132223 | pes2o/s2orc | v3-fos-license | Insights in the anode chamber influences on cathodic bioelectromethanogenesis – systematic comparison of anode materials and anolytes
Abstract Cathode and catholyte are usually optimized to improve the microbial electrosynthesis process, whereas the anodic counter reaction has not yet been systematically investigated and optimized for these applications. Nevertheless, the anolyte and especially the anode material can limit the cathodic bioelectrochemical process. This paper compares for the first time the performance of different anode materials as counter electrodes for a cathodic bioelectrochemical process, the bioelectromethanogenesis. It was observed that, depending on the anode material, the cathodic methane production varies from 0.96 µmol/d with a carbon fabric anode to 25.44 µmol/d with a carbon felt anode of the same geometrical surface area. The used anolyte also affected the methane production rate at the cathode. Especially the pH of the anolyte showed an impact on the system; an anolyte with pH 5 produced up to 2.0 times more methane compared to one with pH 8.5. The proton availability is discussed as one reason for this effect. Although some of the measured effects cannot be explained completely so far, this study advises researchers to strongly consider the anode impact during process development and optimization of a cathodic bioelectrochemical synthesis process.
critical design parameters such as system resistance or the materials used; often, electrodes in MES are carbon based due to their lower price compared with the high costs of more efficient precious metal electrodes [1,10].
Naturally, the cathode as working electrode often lies within the focus of research and optimization in MES technology. As an example, different carbon based electrodes with various structures were tested, carbon electrodes were combined into assemblies such as carbon sticks wrapped with carbon paper, or assemblies with metal wires [11]. To further improve the performance, carbon materials were coated with metal particles or polymers [12]. It was reported that chitosan treatment of a graphite cathode improved the acetate production by a factor of 7.7 [12]. Not only carbon materials, but also non-precious metals were tested; however, the use of carbon materials seemed to allow higher production rates in many cases [13].
During working electrode optimization in another bioelectrochemical process, the microbial fuel cell (MFC), it turned out that the counter electrode (in the case of an MFC, the cathode) can be limiting for the desired process, although no biological reaction takes place at the counter electrode surface [14,15]. Comprehensive studies were carried out in MFCs showing that type and size of the counter electrode limit the current production. Oh et al. reported an improvement of current production in MFCs when coating the counter electrode with platinum [14]. It was found that an enlarged counter electrode surface area improved the current production, but not in a proportional way [14]. It was suggested that the counter electrode contributed a large portion of the system resistance, limiting the electrochemical performance [16]. Other conditions at the counter electrode, such as dissolved oxygen concentration or ferricyanide addition, also influence the MFC performance [14,17].
In MES, the anode is the counter electrode, and to our knowledge, it has not yet been systematically studied how the process can be optimized by altering the conditions at the anode. In this publication, we want to reveal whether and why different anode materials influence the desired process at the cathode. Mainly, carbon based anode materials commonly used in bioelectrochemistry were chosen and compared. As an example process for MES, the bioelectrochemical production of methane by the electroactive methanogen Methanococcus maripaludis was chosen, which was already described in literature [5,18,19]. Not only different anode materials, but also different anolytes are investigated and compared. Apart from that, we support the findings by electrochemical electrode characterization using linear sweep voltammetry (LSV), which shall give a better understanding of the influences the anode chamber has on the process of bioelectromethanogenesis. This kind of systematic comparison of different counter electrode conditions was not shown before for cathodic processes.
PRACTICAL APPLICATION
The optimization of bioelectrochemical systems is a crucial step towards industrial applicability. The investigations shown in this paper suggest a systematic optimization of the counter chamber. This part of the system has not been studied before in a comparative manner. Researchers can transfer the results shown here to other bioelectrochemical systems to improve the process performance. For a future industrial application, counter electrode optimization is crucial to achieve a sustainable and feasible process. Most probably it is easier to optimize the overall process by improving the abiotic electrode reaction than by improving the biotic reaction.
H-cell setup
The used H-cells (Fischer Labortechnik, Frankfurt am Main, Germany) consisted of two 100 mL glass bottles connected via a glass bridge. To create a two-chamber system, a membrane (Nafion117, DuPont, Wilmington, USA, 4.9 × 10⁻⁴ m²) was inserted in the bridge; Nafion is used as a standard in bioelectrochemistry due to the fact that it can be autoclaved [20]. Including the side ports, the absolute volume of each chamber added up to 142 mL. As cathode, a graphite rod was used (0.5 cm diameter, 7.5 cm long; Metallpulver24, Sankt Augustin, Germany). Different materials were used as anodes (see Section 2.2); if not stated otherwise, a graphite rod was also used as anode. The electrodes were placed into each chamber and contacted with a titanium wire (0.5 mm diameter, Goodfellow, Bad Nauheim, Germany; 2 mm diameter in case of dimensionally stable anodes (DSA, De Nora, Milan, Italy)). The wires were pierced through a butyl septum (Glasgerätebau Ochs, Bovenden, Germany, septum for GL45) closing the main opening of each H-cell chamber. The contacting titanium wire was not submerged in the electrolyte. The cathode chamber was equipped with a Luggin capillary (Fischer Labortechnik, Frankfurt am Main, Germany) filled with 0.5 M Na2SO4 holding an Ag/AgCl reference (Ag/AgCl electrode; +199 mV vs. SHE, SE 21, Sensortechnik Meinsberg, Xylem Analytics, Germany). Further septa (Glasgerätebau Ochs, Bovenden, Germany) closing the side arms of each chamber allowed gassing and sampling of headspace gas (gas inlet: 0.6*80 mm needle, gassing rate 0.5 mL/min N2/CO2 (80/20); gas outlet: 0.6*30 mm needle). A further cannula was inserted into the anode chamber for air exchange between anode chamber and environment and avoidance of overpressure caused by the production of oxygen at the anode. The cathode chamber was filled with MES medium (NH4Cl, 0.14 g/L CaCl2·2H2O, 0.14 g/L K2HPO4, 0.002 g/L Fe(NH4)2SO4, 18 g/L NaCl, 10 mL/L trace element solution DSMZ M141 and 10 mL/L vitamin solution DSMZ M141, 5 g/L NaHCO3; all chemicals used are of analytical grade). The MES medium used was an alteration of the standard methanogenium medium M141 given by the DSMZ. If not stated otherwise, the anode chamber was filled with 100 mL of 100 mM phosphate buffer (pH 6.9; 5.62 g/L KH2PO4, 9.28 g/L Na2HPO4) to increase the conductivity.
Anode materials
Five different anode materials were tested: graphite rod, activated carbon felt, carbon fabric, DSA, and carbon laying. All anodes were contacted with titanium wire (as the cathodes), whereas the connecting titanium was not submerged in the electrolyte. The electrical connection led to different contact resistances among the materials due to their material properties. Details are given in Table 1. For all materials except DSA, the basic material was carbon; scanning electron microscopy images are given in the Supporting Information. DSA is a titanium mesh with Ir-MMO (mixed metal oxides) coating. In contrast to the carbon based electrodes, DSA showed a grid-like structure. The carbon based materials offered a similar, but not exactly equal, geometrical surface area, so current densities and specific methane production rates were also calculated based on the geometrical surface area of the anode to still allow comparison of the materials. The projected surface area of DSA was similar to the geometrical surface area of the carbon based materials, but due to the grid-like structure, the geometrical surface area was much smaller.
To enhance the anolyte's conductivity, an experiment was done in which the phosphate buffer was replaced by 100 mL MES medium.
LSV
Abiotic characterization of the electrode materials was performed to demonstrate the different electrochemical behavior of the different materials and the influences of the pH on the electrochemical performance. LSV was chosen as the method for evaluation. The same anode materials as in Section 2.2 were used, but the geometrical surface areas were altered. Carbon fabric (geometrical surface of 0.0002 m²), carbon felt (geometrical surface of 0.0002 m²), and carbon laying (geometrical surface of 0.0008 m²) were connected to a platinum wire (diameter 0.5 mm) to decrease the contact resistance for this experiment. The graphite rod was wrapped with PTFE tape in order to achieve a geometrical surface area of 0.00019 m² and electrically connected with a titanium wire (diameter 0.5 mm). The DSA electrode was used as delivered by the manufacturer (geometrical surface area: 0.0005 m², welded to a titanium wire, diameter 2 mm). The surface area of the electrodes used for the abiotic characterization was smaller than for the biotic experiments in H-cells, since larger electrode areas would lead to current overloads when performing the LSV. Since the results are given as current densities based on the geometrical surface areas, conclusions may be transferred to larger electrodes. Images illustrating electrode materials and electrical connection are presented in the Supporting Information. The experiments were carried out in a 100 mL Schott flask (one-chamber system, in contrast to the biotic chronoamperometric measurements) equipped with a lid with GL14 ports. The potential of the anode was controlled with an Ag/AgCl (saturated KCl) reference electrode (+199 mV vs. SHE, SE 21, Sensortechnik Meinsberg, Xylem Analytics, Germany), inserted via a Luggin capillary filled with saturated KCl. A platinum mesh (geometrical surface of 0.0012 m²) served as cathode during LSV experiments. The electrodes were each inserted through a respective GL14 port. An image illustrating the electrode positioning can be found in the Supporting Information. A 100 mM phosphate buffer was used as electrolyte at three different pH values, set by the ratio of hydrogen phosphate to dihydrogen phosphate (see Section 2.3).
Linear sweep experiments were carried out with a Gamry Reference 600 potentiostat (Gamry Instruments, Warminster, USA). LSVs were started at 0 V vs. Ag/AgCl and driven to 2.5 V vs. Ag/AgCl with a scan rate of 100 mV/s and a step size of 2 mV. The resistance was uncompensated. The experiments were carried out at a controlled room temperature of 20 °C.
Biotic experiments
All experiments in H-cells were conducted in two independent biological duplicates and one abiotic control. All chronoamperometric H-cell experiments were operated at −900 mV vs. Ag/AgCl and 35 °C (close to the temperature optimum of the used organism [21]) for 80 h.
As electroactive organism, Methanococcus maripaludis S2 (DSM No.: 14266, DSMZ, Braunschweig, Germany) was used for the biotic experiments. The precultures for the inoculation were cultivated in 1 L septum flasks with 300 mL of M141 medium and 2 bar H2/CO2 (80/20 v/v) gas atmosphere to an optical density of approximately 1 (late exponential phase) at 180 rpm and 37 °C. The cathode chamber of the H-cell was inoculated to an OD of 0.1 after sparging with N2/CO2 for half an hour. During the experiments, the cathode chamber was continuously gassed with 5 mL/min N2/CO2 (80/20 v/v). This led to an equilibrium of bicarbonate and CO2 and a pH of 7.2.
Analytics
Gas samples were taken from the H-cells twice a day and analyzed via GC (Agilent Technologies 490 Micro GC, Agilent, Santa Clara, USA, with external 2-point calibration). For analysis of the off-gas samples, an injector temperature of 100 °C and a column temperature of 60 °C were set. Samples were injected into three columns: channel 1, PoraPLOT U pre-column and Molsieve 5A main column with argon as carrier gas; channel 2, PoraPLOT U pre-column and Molsieve 5A main column with helium as carrier gas; channel 3, PoraPLOT U as pre-column and main column with helium as carrier gas. A thermal conductivity detector was used. Hydrogen was detected on channel 1, oxygen and nitrogen on channel 2, and methane and carbon dioxide on channel 3. The sampling time was set to 30 s, the total runtime to 3 min. From the percentage of methane and hydrogen in the off-gas stream, the production rate was determined using the gas flux and the molar standard volume. The mean values given in the results section were calculated using the values from 24 h after inoculation to the end of the experiment, to exclude effects of initial electrode polarization and the microbial lag phase during the start-up phase and to avoid the measurement of residual gas from the pre-culture. To calculate the Coulombic efficiency, Equation (1) was used, with r_e,l as the electron transfer rate from the electrode given by the current and r_e,m as the electron transfer rate to the metabolite given by the methane production:

CE = r_e,m / r_e,l (1)
The Coulombic efficiency was calculated from the mean current and the mean methane production.
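As an illustration of this evaluation, the sketch below converts a GC off-gas reading into a methane production rate and applies Equation (1). The CH4 fraction and mean current are invented example values, not measurements from this study; the 8-electron stoichiometry of CO2 reduction to CH4 (CO2 + 8 H+ + 8 e− → CH4 + 2 H2O) is assumed.

```python
# Off-gas evaluation sketch (example values only, not measured data).
F = 96485.0            # C/mol, Faraday constant
V_M = 22414.0          # mL/mol, molar standard volume

gas_flow = 5.0         # mL/min, N2/CO2 sparging rate of the cathode chamber
ch4_fraction = 0.0001  # 0.01% CH4 in the off-gas (hypothetical GC reading)

# Methane production rate from gas flux and molar standard volume.
r_ch4 = ch4_fraction * gas_flow * 60 * 24 / V_M   # mol/d
print(f"CH4 production: {r_ch4 * 1e6:.1f} umol/d")  # ~32.1 umol/d

# Coulombic efficiency (Equation 1): electrons recovered in methane
# divided by electrons drawn from the electrode.
i_mean = 4.0e-4                          # A, mean current (hypothetical)
r_e_m = 8 * F * r_ch4 / (24 * 3600)      # A equivalent of electrons into CH4
print(f"Coulombic efficiency: {r_e_m / i_mean:.1%}")  # ~71.7%
```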
After the experiments with different anolytes, pH (Voltcraft PH100ATC; Voltcraft, Hirschau, Germany) and conductivity (HI99301 conductivity meter, Hanna Instruments, Vöhringen, Germany) were measured in the anode and cathode chamber. After each experiment, the optical density at the end of the chronoamperometric run was measured (WPA Biowave CO8000 Cell Density Meter, 600 nm, Biochrom, Cambridge, England).
Effect of anode material
Five different electrodes were tested as anodes for the bioelectromethanogenesis. As an example, the current consumption and the methane production rate for the graphite rod and the carbon felt anode are shown in Figure 1 and Table 2.
[Figure 1. (A) Methane production rate: black dots, carbon felt anode; grey dots, graphite rod anode. (B) Current consumption: solid black line, carbon felt anode; solid grey line, graphite rod anode.]
The methane production rate increased rapidly and remained relatively stable after 45 h, corresponding to a stable current. Previous studies showed that the process of bioelectromethanogenesis can be operated with stable methane production rates over longer periods of time [22]; therefore, the results obtained here within 80 h are considered representative. The current uptake was larger with the graphite rod anode than with the carbon felt (Figure 1B), although the methane production rates in both experiments were similar. Using the graphite rod as anode, a high current was observed in the beginning of the experiment, which decreased rapidly before the current increased again due to microbial current uptake. The initial current (first 10 h of the experiment) was excluded when calculating the mean current and mean efficiency, since it was assumed that the initial release of electrons was not connected to microbial methane production but to polarization of the electrode surfaces. The first measured value of the methane production and the hydrogen production was also excluded from the calculation of the mean production rate, since it might result from the gas phase of the preculture introduced to the H-cell during inoculation. Random samples were additionally measured with HPLC, but in no case were soluble organics such as acetate, formate or lactate detected.
[Table 2. Performance of biotic H-cells using different anodes]
The highest absolute methane production rate was observed for carbon felt anodes (25.44 ± 1.88 µmol/d, which equals 21.56 mmol/(d·m²) based on the geometrical cathode surface area and 15.63 mmol/(d·m²) based on the geometrical anode surface area), followed by the use of graphite rod anodes (25.31 ± 4.42 µmol/d, which equals 21.45 mmol/(d·m²) based on the geometrical cathode or anode surface area, respectively). Lower values of 15.54 ± 3.75 µmol/d (equals 11.43 mmol/(d·m²) based on the geometrical anode surface area) for carbon laying and 14.46 ± 2.89 µmol/d (equals 28.00 mmol/(d·m²) based on the geometrical anode surface area) for DSA were obtained. DSA therefore gave the highest production rate based on the geometrical anode surface area, leading to the conclusion that the anode surface might be limiting in this case. The lowest amount of methane was produced with a carbon fabric anode. The results clearly show that the changes in methane production do not depend solely on the geometrical or specific surface areas of the anodes.
When using the graphite rod as anode, the anolyte changed its color to yellow and further to brown during the process. Also, the surface of the graphite rod roughened during the experiment. It is thus likely that the graphite rod corrodes/oxidizes when used as anode in combination with the phosphate buffer and thereby serves as a kind of sacrificial anode (see pictures in the Supporting Information; cf. Table 2). Although the graphite rod anode shows a very good methane production during bioelectromethanogenesis, it is not a suitable anode material because of the corrosion during the process, which limits the lifetime of the system. Consequently, activated carbon felt turned out to be the most suitable material, since the absolute methane production was the highest and no oxidation of the electrode material was observed; DSA, which offers a higher specific methane production rate based on the anode surface, is limited for usage in a process in its current geometrical configuration, since the material is very space-consuming at low geometrical surface areas. However, oxidation of activated carbon felt cannot be excluded; it is assumed that oxidation took place with all carbon based anode materials, since the anode potentials were always similar to those during the experiments with graphite rod anodes. To use DSA, the grid structure could be altered to allow larger geometrical surface areas within the reaction volume.
Interestingly, no direct correlation was observed between the abiotic hydrogen production in the abiotic control experiments and the methane production in the experiments with M. maripaludis (Figure 2A). For graphite rod, DSA and carbon laying, it seemed that a high abiotic hydrogen production was responsible for a high biotic methane production, and the majority of the methane is explainable by an indirect electron transfer via H2 (65% in case of the graphite rod, 90% in case of the carbon laying and 142% in case of the DSA; hydrogen observed in biotic set-ups not taken into account, therefore percentages above 100% are possible). The carbon fabric anode led to a smaller amount of abiotic hydrogen production, whereas the methane production in the biotic experiment was low, but the hydrogen production in the biological system was increased. It was already reported that M. maripaludis might secrete hydrogenases which catalyze the hydrogen production [18]; a lack of abiotically produced hydrogen might favor the secretion of hydrogenases in this case, resulting in a high biotic hydrogen production (Table 2). The methanogens might have lost the ability to produce methane due to a metabolic shift towards hydrogenase production and release. However, this effect was not confirmed when looking at the carbon felt anode and remains speculative. Although little hydrogen was produced abiotically with a carbon felt anode, the methane production was higher than for the other electrodes, with a higher Coulombic efficiency of 56.1%. Only 18% of the methane produced can be explained by indirect electron transfer via abiotically produced hydrogen.
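These percentages follow from the 4:1 stoichiometry of hydrogenotrophic methanogenesis (4 H2 + CO2 → CH4 + 2 H2O); the sketch below illustrates the calculation with invented rates rather than the measured values from Table 2.

```python
# Share of biotic methane explainable by abiotically produced H2
# (invented example rates; 4 mol H2 yield at most 1 mol CH4).
h2_abiotic = 60.0   # umol/d H2 observed in the abiotic control
ch4_biotic = 25.0   # umol/d CH4 observed in the biotic experiment

ch4_from_h2 = h2_abiotic / 4        # umol/d CH4 supportable by that H2
share = ch4_from_h2 / ch4_biotic
print(f"Share explainable via H2: {share:.0%}")  # 60%; can exceed 100%
```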
For the two materials with high methane production rates (carbon felt and graphite rod), high Coulombic efficiencies were calculated (Table 2), leading to the conclusion that the electrical current was not the main limitation given by the anode reaction. For the two materials with medium methane production rates (carbon laying and DSA), the Coulombic efficiencies were around 25%, whereas for the carbon fabric anode, only 2% of the transferred electrons could be found in the desired product, while more than 25% were found in hydrogen. The Coulombic efficiency did not reach 100%, suggesting an alternative electron sink. Since no further organic products were detected, alterations of surface charges on the electrodes or shifting ion charges in the medium could result in a lower electron flux towards the desired product. Calculating the Coulombic efficiencies for the abiotic production of hydrogen in the abiotic controls in H-cells, it was observed that only when using DSA (100.9%) and the graphite rod anode (71.5%) were the electrodes efficiently used for hydrogen production at the applied potential. For carbon laying (55.7%) and especially carbon felt (24.2%) and carbon fabric (24.1%), other electron acceptors or side reactions seemed to play a major role in the current flux. Interestingly, carbon based materials, which only differ in their structure but not in their basic material, already show a high impact on the process. Our results show that the changes in the process performance cannot be explained by the geometrical or specific surface areas of the anodes. Obviously, the surface properties of the chosen anode material have a great impact on the methane production at the cathode.
LSV of different electrodes
In the LSV experiments, the different materials showed different current densities at applied potentials between 0 and 2.5 V vs. Ag/AgCl (Figure 2B). The current densities were calculated based on the geometrical surface area of the respective anodes. The highest current density was observed for the carbon fabric electrode, but due to the steep ground slope it seemed that a large proportion of the current resulted from internal electrode resistance and a capacitive behavior. Excluding the carbon fabric, the LSV revealed that at the anode potential of +1 V vs. Ag/AgCl, carbon felt and DSA led to the highest current density. With increasing potential up to 2.5 V vs. Ag/AgCl, carbon felt and DSA show similarly increasing current density curves, whereas the increase of the current density of the graphite rod curve was smaller. These results suggested that especially carbon felt and DSA led to the highest specific water splitting reaction rate, the highest specific oxygen production rate and the highest specific production of protons in the anode chamber. Since the geometrical electrode surface area of the DSA was smaller than that of the felt, it seems obvious that a larger absolute current, oxygen evolution rate and proton release rate occur when using carbon felt anodes. In general, the higher proton release at the anode is likely to also increase the proton availability at the cathode side due to the use of a proton exchange membrane, which explains higher methane production rates. To a certain extent, this is also valid for the graphite rod electrode. The increased electron flux to the cathode when using anode materials with high current densities during LSV also allowed an increase of direct and indirect electron transfer to the microorganisms in bioelectromethanogenesis. Carbon laying shows lower current densities; a proportional link to the methane production rates could not be observed. The production of abiotic hydrogen was even less predictable from the LSV. Actually, high current densities should allow higher abiotic hydrogen production rates, but the carbon fabric, which showed a high current density in LSV, led to low abiotic hydrogen production rates compared to the other materials.
Based on the total surface area instead of the geometrical surface area, the current densities obtained would look different; the highest current density would then be observed for the carbon laying, followed by the carbon felt. The lowest current density based on the total surface area resulted from the use of the carbon fabric due to its high specific surface area, raising the question of why this material often works well as anode in MFC set-ups; it seems that only minor parts of the total surface area actively participate in the reaction. However, apart from water splitting and oxygen evolution, electrode oxidation should also be considered as a possible anodic reaction during LSVs, which is valid for all carbon based electrode materials.
All in all, only the internal anode resistances and partially the current density observed in the LSV seemed to correlate with the biotic methane production. A lower internal anode resistance would lead to a lower overall system resistance, resulting in lower energy losses. This could allow higher current densities in the biotic experiments at constant working potentials, and therefore higher methane production rates. Total surface area, anode mass and abiotic hydrogen production were not predictive of the biotic performance. A table comprising all electrode material properties and performance is given in the Supporting Information.
In general, the performance might be further increased by the use of precious metal anodes, but this would lead to increasing costs and is therefore usually not considered in bioelectrochemistry.
Effect of anolyte
Apart from different anode materials, different anolytes were tested, using an acidic, a neutral and a basic phosphate buffer. In LSV experiments, the currents observed at the potential of +1 V vs. Ag/AgCl are relatively similar, with the highest current observed at the basic pH and the lowest at the acidic pH (Figure 3B). Contrary to this finding, the acidification of the anolyte improved the methane production by a factor of 1.6 (Table 3). The use of a more basic phosphate buffer did not significantly alter the performance, but the pH of the anolyte measured after the chronoamperometric experiment was 6.8 in biotic and abiotic experiments when starting at pH 8.5. Protons produced at the anode seem to decrease the anode pH because the proton transport through the membrane is either slower than the proton release or limited by the proton gradient in the other direction, resulting in a neutral pH. Therefore, the methane production rate in experiments with a basic anolyte is similar to the one observed using the neutral phosphate buffer from the beginning.
A possible explanation for the improved performance using acidic buffer at the anode is the higher proton availability, leading to an increased proton transfer to the cathode chamber, which allows an increased methane production. The larger amount of protons at the cathode could also lead to a better hydrogen production, thus providing hydrogen for an increased indirect electron transfer for the methane production. However, neither the biotic nor the abiotic hydrogen production rates differ significantly enough to finally confirm this hypothesis (Figure 3A); it might be that the increased proton flux only occurred in the biotic experiments, since the uptake of protons by the microorganisms increased the concentration gradient between anode and cathode chamber, improving the proton flux through the proton exchange membrane. Apart from protons, K+ and Na+ are likely to cross the membrane [23], especially if the proton availability due to basic pH is limited; acidification increases the selectivity of the membrane towards proton transport.
To further examine the influence of anodic pH on the process, experiments were conducted using 0.1 M NaOH and 0.1 M HCl as anolyte, respectively (Table 4). With HCl solution as the anolyte, high hydrogen production was measured in both the abiotic (2.25 mmol/d) and the biotic (0.6 mmol/d) experiments, while no methane could be detected. After the chronoamperometric measurement, the pH was 1.87 in the anode chamber and 2.41 in the cathode chamber, instead of the initially neutral pH. This shows that protons from the anode migrated through the proton exchange membrane, acidifying the catholyte and consequently inhibiting the cells; the pH optimum for M. maripaludis lies between 6.8 and 7.2 [21]. Furthermore, the use of the HCl solution as anolyte led to complete dissolution of the graphite rod anode (see picture in the Supporting Information). With NaOH solution at the anode, a higher mean methane production rate was observed than with phosphate buffer pH 6.8 as anolyte, but the deviation of this experiment was larger than in the others, making the increase non-significant. An increase in methane production was observed using MES medium as anolyte, probably due to its higher conductivity (41.5 mS/cm instead of 32.7 mS/cm for the phosphate buffer pH 6.8, see Table 3). However, the use of the acidic phosphate buffer resulted in a methane production similar to that observed with the MES medium, so the use of costly media with high salt contents is not required.
CONCLUDING REMARKS
For the first time, different anode materials and anolytes were compared regarding their impact on cathodic MES. It was shown that the anode chamber strongly influences the overall process of bioelectromethanogenesis, although the anodic reaction is always assumed to be water splitting. All in all, the anode material had a greater impact on the process than the electrolyte; the methane production rate differed by a factor of 26.5 between the worst and the best anode material tested. Using LSV, a rough estimate can be made of whether a material might be suitable, especially in comparison with other electrode materials: a suitable material shows a high current density at the desired working potential in the LSV, combined with a low internal and contact resistance and material stability under the respective conditions.
The influences observed are explainable, but the performance of different anode materials remains relatively unpredictable from observations in abiotic experiments; for an optimization, it is not sufficient to investigate only the abiotic electrode behaviour of the anode, which should always be tested in the biotic experiment as well. This work showed that optimization of the anode material and the anolyte significantly influences the cathodic MES process of bioelectromethanogenesis. However, it is not yet possible to make general statements about the effects and the reasons for these improvements; elucidating the underlying mechanisms will require further investigations (e.g., more electrode materials, different potentials, different electrode surfaces). Apart from that, designs with decreased system resistances (e.g., using larger membrane areas) might help to decrease the anodic overpotentials and avoid anode oxidation. In summary, our investigations show that optimization of the anode reaction has great potential for optimizing the overall process of MES. Together with current research on the scalability and stability of the process [22,24], this optimization route can be a further step on the road to industrial application.
The Milne spacetime and the hadronic Rindler horizon
A direct relation between the time-dependent Milne geometry and the Rindler spacetime is shown. Milne's metric corresponds to the one beyond Rindler's event horizon (in the region t > |x|). It was found that the shear tensor from the dissipative term of the RHIC expanding fireball has the same structure as that corresponding to the anisotropic fluid from the black hole interior, even though the latter geometry is curved.
Introduction
It is well known that standard perturbation techniques fail to describe the strongly interacting system of quarks and gluons (the QGP plasma, similar to a liquid with a small shear viscosity) [1][2]. The AdS/CFT correspondence [3] predicts a universal bound on the ratio η/s (η being the shear viscosity of the fluid and s the entropy density), namely η/s ≥ 1/4π, close to the value obtained from a hydrodynamic model of relativistic heavy-ion collisions (RHIC) [4][5]. Castorina, Grumiller and Iorio [2] showed that at high energies the universal hadronic freeze-out temperature T_f ≈ 170 MeV is an Unruh temperature T_U = a/2π = (σ/2π)^{1/2} ≈ 170 MeV, where a is the deceleration of quarks and antiquarks and σ ≈ 0.18 GeV² is the QCD string tension. In addition, they consider the hadronic Rindler spacetime formed at RHIC as the near-horizon approximation of some black hole (BH) geometry.
Nastase [5] stated that the fireball observed at RHIC is a dual BH with a temperature proportional to the pion mass and with a value close to the experimental "freeze-out" temperature of 176 MeV. The core of the fireball is the pion-field soliton. Moreover, the BH created in the collision has no singularity at its centre, and the decay products are all thermally distributed.
Luzum and Romatschke [4] consider that dissipative hydrodynamics offers a sensible description of the experimental data on the properties of RHIC. The system expands in a boost-invariant fashion (the hydrodynamical variables being independent of rapidity) along the longitudinal direction (Bjorken expansion). The fluid is thus comoving in the Milne coordinates [6], which are the most appropriate ones, since the outcome of the collision takes place in the Rindler wedge t > |x|. In addition, the hydrodynamic variables are independent of the rapidity y and of the directions transverse to the expansion. Kajantie et al. conjectured that the 4-dimensional QCD matter undergoing scale-free Bjorken expansion contains a Casimir-type contribution (a vacuum energy term) to the holographic stress tensor.
Nakamura and Sang-Jin Sin [7] have remarked that, since the RHIC fireball is expanding along the collisional axis, we need to understand AdS/CFT in the time-dependent regime. In Milne's frame all the fluid points are at rest and therefore share the same proper time, since the real fireball produced in RHIC experiments is localized (the central rapidity region playing the basic role).
We mention that the Hawking-Unruh radiation has never been observed in astrophysics so far [2]. The thermal hadron spectra at RHIC may thus be the first experimental opportunity to detect such radiation. Throughout the paper the conventions G = c = 1 will be used.
Milne geometry from Rindler
Let us take the Minkowski line element

$$ds^2 = -dt^2 + dx^2 + dx_{\perp}^2, \qquad (2.1)$$

where $dx_{\perp}^2$ stands for the two spatial directions orthogonal to the $x$ direction. By means of the coordinate transformation

$$t = \left(X - \frac{1}{g}\right)\sinh gT, \qquad x = \left(X - \frac{1}{g}\right)\cosh gT, \qquad (2.2)$$

one obtains

$$ds^2 = -(1 - gX)^2\, dT^2 + dX^2 + dx_{\perp}^2, \qquad (2.3)$$

which represents the well-known Rindler metric viewed by a uniformly accelerated observer having a constant rest-system acceleration $g$. The $X = const.$ observers move along the trajectory $x^2 - t^2 = (X - 1/g)^2$ in Minkowski space. The event horizon $X = 1/g$ corresponds to the two light cones $x = \pm t$. The transformation

$$1 - 2g\bar{x} = (1 - gX)^2, \qquad \bar{t} = T \qquad (2.4)$$

brings (2.3) into the form

$$ds^2 = -(1 - 2g\bar{x})\, d\bar{t}^2 + \frac{d\bar{x}^2}{1 - 2g\bar{x}} + dx_{\perp}^2. \qquad (2.5)$$

Let us consider the region where $\bar{x} > 1/2g$. In that case $1 - 2g\bar{x}$ becomes negative, so that $\bar{x}$ is timelike and $\bar{t}$ spacelike; this means we are beyond the horizon $\bar{x} = 1/2g$. Therefore we replace $\bar{x}$ with $\bar{T}$ and $\bar{t}$ with $\bar{X}$ (the conversion is similar to the one encountered in the BH spacetime when the horizon $r = 2m$ is crossed [8]). One obtains

$$ds^2 = -\frac{d\bar{T}^2}{2g\bar{T} - 1} + (2g\bar{T} - 1)\, d\bar{X}^2 + dx_{\perp}^2. \qquad (2.6)$$

The above procedure is equivalent to the analytical continuation across the Rindler horizon. We now replace the $(\bar{T}, \bar{X})$ coordinates with $(\tau, y)$, according to

$$\tau = \frac{\sqrt{2g\bar{T} - 1}}{g}, \qquad y = \bar{X}, \qquad (2.7)$$

where $\bar{T} > 1/2g$. The spacetime (2.6) now becomes

$$ds^2 = -d\tau^2 + g^2\tau^2\, dy^2 + dx_{\perp}^2, \qquad (2.8)$$

which is the Milne metric, well known from cosmology and, more importantly in our case, from RHIC (Milne's coordinates are adapted to the Bjorken flow, since the velocity vector of the flow is $\partial_{\tau}$). They are nothing else but the Rindler coordinates in the quadrant $t > |x|$, where the stationary observer is located. That observer (beyond the Rindler horizon) detects thermal radiation of temperature $T_U = g/2\pi$ as a consequence of the fact that some particle accelerates in the region $|x| > t$.
While the accelerated particle moves in the Rindler wedge, for the stationary observer from the "hidden" region the geometry is time dependent (Milne's universe). We encounter a similar situation when an electric charge is uniformly accelerated: the retarded radiation emitted by the charge is measured by an observer located in a region inaccessible to the charge, yet the total energy passing through the surface comoving with the particle is zero [10] (no electromagnetic energy flux). As Boulware has noticed, the flow of energy comes from the past horizon, as if there were another, opposite charge in the opposite Rindler wedge. However, the Unruh radiation is nonvanishing both for the accelerated observer and for the stationary one from the Milne region. As Castorina et al. [1] have observed, the stationary observer in the "hidden" region measures thermal radiation at the Unruh temperature as a consequence of the passage of the accelerated particle.
In terms of the Minkowski coordinates, we have

$$t = \tau \cosh gy, \qquad x = \tau \sinh gy. \qquad (2.9)$$

In other words, $\tau$ corresponds to the Minkowski interval (proper time) and $gy$ is the (adimensional) rapidity, with $\tau > 0$ and $-\infty < y < \infty$ ($|y| \approx \infty$ corresponds to the fronts of the expanding fluid [7] and $y \approx 0$ is the central rapidity region). It is a known fact that the Schwarzschild geometry is almost flat near the event horizon $r = 2m$, where the line element appears as

$$ds^2 \approx -\left(1 - \frac{2m}{r}\right) dt_S^2 + \left(1 - \frac{2m}{r}\right)^{-1} dr^2 + 4m^2\, d\Omega^2, \qquad (2.10)$$

where $t_S$ is the Schwarzschild time, $m$ is the central mass and $d\Omega^2 = d\theta^2 + \sin^2\theta\, d\varphi^2$ is the metric on the unit two-sphere. Using the expression of the surface gravity on the horizon, $\kappa = 1/4m$, eq. (2.10) gives us

$$ds^2 = -\left(1 - \frac{1}{2\kappa r}\right) dt_S^2 + \left(1 - \frac{1}{2\kappa r}\right)^{-1} dr^2 + \frac{1}{4\kappa^2}\, d\Omega^2, \qquad (2.11)$$

with $r > 2m = 1/2\kappa$. The spacetime (2.11) resembles the Rindler metric (2.5) when we take $\theta, \varphi = const.$ and replace $\kappa$ with the acceleration $g$. This is a well-known result; we only stress the direct connection between the mass and the proper acceleration $g$ [9]. If we push the analogy further, we may conclude that for $r < 2m$ and near the BH horizon the geometry must be Milne's, which is flat. A time-dependent metric inside a BH has been proposed in [8] and reads

$$ds^2 = -d\hat{t}^2 + dz^2 + \hat{t}^2\, d\Omega^2, \qquad (2.12)$$

where $z$ plays the role of the radial coordinate ($-\infty < z < \infty$) and $\hat{t}$ is the temporal coordinate. The above geometry is curved (the scalar curvature $R^{\mu}{}_{\mu} = 4/\hat{t}^2$), with a singularity on the hypersurface $\hat{t} = 0$. Nevertheless, when $\hat{t} \to \infty$, the spacetime (2.12) becomes flat. That can be seen from the expression of the Kretschmann scalar

$$R_{\alpha\beta\mu\nu} R^{\alpha\beta\mu\nu} = \frac{16}{\hat{t}^4}, \qquad (2.13)$$

computed from the only nonzero component of the Riemann tensor,

$$R_{\theta\varphi\theta\varphi} = 2\hat{t}^2 \sin^2\theta. \qquad (2.14)$$

In addition, all the components of the stress tensor inside the BH have the same behaviour: they vanish when $\hat{t} \to \infty$. As was shown in [8], $\hat{t} \to \infty$ is equivalent to "near the horizon" as viewed from the interior of the BH. It is not surprising to get flat spacetime at temporal infinity. We only remind the reader that the "near horizon" approximation (2.10) for the Schwarzschild spacetime (see also [11]) leads to a curved metric in spherical coordinates (one has a nonzero component of the Riemann tensor, $R^{\theta\varphi}{}_{\theta\varphi} = 1/4m^2$). The metric becomes flat when $m \to \infty$; that has the same effect as $\hat{t} \to \infty$ inside the horizon.
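As a quick consistency check (ours, not part of the original paper), the following sympy sketch verifies that the substitution (2.9) maps the two-dimensional Minkowski line element onto the Milne form (2.8).

```python
# Check that the substitution (2.9), t = tau*cosh(g*y), x = tau*sinh(g*y),
# maps ds^2 = -dt^2 + dx^2 onto the Milne form -dtau^2 + g^2*tau^2*dy^2.
import sympy as sp

tau, y, g, dtau, dy = sp.symbols('tau y g dtau dy', positive=True)
t = tau * sp.cosh(g * y)
x = tau * sp.sinh(g * y)

dt = sp.diff(t, tau) * dtau + sp.diff(t, y) * dy
dx = sp.diff(x, tau) * dtau + sp.diff(x, y) * dy

ds2 = sp.simplify(sp.expand(-dt**2 + dx**2))
print(ds2)  # -> -dtau**2 + dy**2*g**2*tau**2
```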
The metric (2.3) from [8] is curved, but when the time tends to infinity the components of the Ricci tensor vanish and the geometry becomes Minkowskian. The difference with respect to Milne's line element comes from the spherical coordinates used. The fact that the interior of the BH from [8] has no singularity has recently been remarked by Nastase [5] in his dual BH model. He stressed that the core of the fireball carries information from inside the BH, with no singularity at its centre.
The shear viscosity tensor
Our next task is to compute the components of the shear tensor corresponding to the RHIC fireball in the Milne spacetime (2.8). The non-null Christoffel symbols we need are

$$\Gamma^{\tau}_{yy} = g^{2}\tau, \qquad \Gamma^{y}_{\tau y} = \frac{1}{\tau}.$$

The covariant expression of the shear tensor is given by

$$\sigma_{\alpha\beta} = \frac{1}{2}\left(u_{\alpha;\mu}\, h^{\mu}{}_{\beta} + u_{\beta;\mu}\, h^{\mu}{}_{\alpha}\right) - \frac{1}{3}\,\Theta\, h_{\alpha\beta}, \qquad h_{\alpha\beta} = g_{\alpha\beta} + u_{\alpha} u_{\beta},$$

where $u_{\alpha} = (-1, 0, 0, 0)$ is the proper velocity of the fluid. We have taken a "comoving" frame in which all the fluid points are at rest and share the same proper time $\tau$ [7]; in other words, the local rest frame of the fluid is given by $\tau$ and the rapidity $gy$.

The scalar expansion $\Theta$ can be found from

$$\Theta = u^{\mu}{}_{;\mu} = \frac{1}{\sqrt{-g}}\,\partial_{\mu}\!\left(\sqrt{-g}\, u^{\mu}\right) = \frac{1}{\tau},$$

which yields the nonvanishing shear components

$$\sigma_{yy} = \frac{2}{3}\, g^{2}\tau, \qquad \sigma_{x_{\perp}x_{\perp}} = -\frac{1}{3\tau}. \qquad (3.5)$$
The components (3.5) of σ_αβ have the same form as those from [8] (except for a factor of 2, due to the fact that in [8] the spacetime contains two time-dependent metric coefficients), even though the spacetime is not flat in the latter case. The two angular components θ and φ play the same role as x⊥ does in the present case. The Milne geometry is, of course, more appropriate, since the Bjorken expansion is more or less one-dimensional (along the collisional axis of the heavy ions).
Structural conversion of neurotoxic amyloid-β(1–42) oligomers to fibrils
The Aβ42 peptide rapidly aggregates to form oligomers, protofibrils and fibrils en route to the deposition of the amyloid plaques associated with Alzheimer's disease. We show that low temperature and low salt can stabilize disc-shaped oligomers (pentamers) that are significantly more toxic to murine cortical neurons than protofibrils and fibrils. We find that these neurotoxic oligomers do not have the β-sheet structure characteristic of fibrils. Rather, the oligomers are composed of loosely aggregated strands whose C-terminus is protected from solvent exchange and which have a turn conformation placing Phe19 in contact with Leu34. On the basis of NMR spectroscopy, we show that the structural conversion of Aβ42 oligomers to fibrils involves the association of these loosely aggregated strands into β-sheets whose individual β-strands polymerize in a parallel, in-register orientation and are staggered at an inter-monomer contact between Gln15 and Gly37.
the mica surface. Our previous single-touch AFM images of soluble Aβ42 oligomers (see Fig. 6 in Mastrangelo et al. 5 ) reveal that disc-shaped particles with the dimensions similar to those in Supplementary Figure 1c are readily obtained at physiological temperature and salt. We now interpret these "high molecular weight" oligomers as decamers/dodecamers. In fact, we found that peptide inhibitors are able to cap the oligomers at a 2.8 nm average height without changing their observed width 5 . The amphipathic peptide inhibitors may prevent decamer assembly by binding to the hydrophobic face of the pentamer.
Recent results using analytical mass spectrometry to investigate oligomer size distributions found that only hexamers and dodecamers were formed by Aβ42 6 . Additional hexamer units did not add to the dodecamer to form 18-mers, and the formation of dodecamers from hexamers did not involve the addition of monomer or dimer units. These observations are consistent with the dimerization or stacking of hexamers (or pentamers) as described above. In our model, the interaction of the two hydrophobic surfaces of the hexamer caps the growth of the dodecamers.
The model of the pentamer with a large hydrophobic surface is consistent with results from urea denaturation and Bis-ANS binding studies 7. One of the most striking differences between Aβ40 and Aβ42 oligomers is the ability of the oligomer to bind Bis-ANS: there is a 10-fold increase in binding of Bis-ANS to Aβ42 in the oligomeric state over denatured Aβ42, indicating that the folded structure has a distinct hydrophobic surface. Also, urea has been shown to denature the oligomers but does not prevent fibril formation. This observation suggests that the oligomers are not an obligate intermediate in fibril formation; rather, the oligomers appear to adopt a stable, folded conformation and must unfold prior to refolding into the parallel, in-register geometry characteristic of fibrils.
Molecular Dynamics Simulations of the Aβ42 Monomer in the Oligomer Conformation
The structural constraints obtained by solid-state and solution NMR spectroscopy, along with information on the size and molecular composition of the Aβ42 oligomers from SEC, native gels and light scattering, can be used to restrain and evaluate molecular models of the Aβ42 oligomer (e.g. see Urbanc et al. 8, Yun et al. 9, and Nguyen et al. 10). Supplementary Figure 1a presents a molecular model of the monomer unit in the Aβ42 oligomer obtained by restrained molecular dynamics simulations and energy minimization using Discovery Studio 2.5 (Accelrys, San Diego, CA). A standard dynamics cascade was implemented using the CHARMM force field, with a two-stage steepest-descent and conjugate-gradient minimization followed by heating and equilibration.
Restraints based on solid-state NMR distance measurements and solution-state NMR amide H/D exchange measurements were placed on a starting extended structure of Aβ42. A restraint (≤ 5 Å) was placed between the side-chain positions of Phe19 (Cζ) and Leu34 (Cδ), corresponding to the restraint obtained from our solid-state NMR measurement. Two additional restraints were included, between the backbone nitrogens of Val12 and Leu17 (10 Å) and between Val36 and Val39 (6 Å), on the basis of amide H-D exchange (Supplementary Fig. 6). We found that two stretches of amino acids (His13-Gln15 and Gly37-Gly38) were accessible to bulk solvent but were flanked by regions that were inaccessible. The added restraints generated turns that allowed the adjacent sequences to collapse onto one another. A similar turn region at Gly37-Gly38 was observed in molecular dynamics simulations of Aβ42 oligomer formation by Urbanc et al. 8. The resulting structure contained one surface that was largely hydrophobic, while the opposite surface was more polar and dominated by charged groups at the N-terminus.
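To make the restraint idea concrete, the following sketch (ours, not the Discovery Studio protocol used in the study) implements the flat-bottom harmonic distance penalty that a ≤ 5 Å restraint corresponds to; the coordinates and the force constant are invented for illustration.

```python
# Flat-bottom harmonic distance restraint of the kind used to enforce the
# Phe19 (C-zeta) - Leu34 (C-delta) contact (<= 5 A): no penalty inside the
# bound, harmonic penalty beyond it. Coordinates and k are illustrative.
import math

def restraint_energy(pos_i, pos_j, r_max=5.0, k=10.0):
    """Penalty (arbitrary units) if the i-j distance exceeds r_max (A)."""
    r = math.dist(pos_i, pos_j)
    overshoot = max(0.0, r - r_max)
    return 0.5 * k * overshoot ** 2

phe19_cz = (0.0, 0.0, 0.0)   # hypothetical coordinates (Angstrom)
leu34_cd = (4.2, 1.0, 0.5)   # within the 5 A bound -> zero penalty
print(restraint_energy(phe19_cz, leu34_cd))          # 0.0

leu34_cd_far = (6.5, 1.0, 0.5)                       # violates the bound
print(restraint_energy(phe19_cz, leu34_cd_far))      # > 0, pulls atoms in
```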
Aβ42 Fibrils
The solid-state NMR measurements presented in this study show that the hydrophobic core of Aβ42 fibrils has a β-strand-turn-β-strand (β-turn-β) conformation with Gln15 and Phe19 of the N-terminal β-strand in contact with Gly37 and Leu34 of the C-terminal β-strand, respectively. The β-strands are staggered at the Gln15-Gly37 contact and polymerize in a parallel, in-register orientation. Supplementary Figures 1d-f present models for the monomer unit within the fibril, the protofilament, and a cross-section of a mature fibril.
There are three defining structural features of the protofilament: cross-β structure, parallel and in-register orientation of the β-strands, and staggered packing (domain swapping) of the β-strands. Experimental data on the Aβ42 protofilament generally agree on these features. Fiber diffraction studies of Aβ fibrils reveal cross-β structure in which the individual β-strands polymerize perpendicular to the fibril axis 11, and both solid-state NMR and EPR measurements have shown that the β-strands within Aβ42 fibrils have a parallel and in-register orientation [12][13][14]. The domain-swapping feature identified in this study by the intermolecular Gln15-Gly37 contact is consistent with the pairwise mutational studies of Riek and coworkers 15. There is disagreement about the structure of the protofilament at the N-terminus (before Gln15) and at the C-terminus (after Gly37). On the basis of solvent exchange measurements, Olofsson et al. 16 found that the first 10 residues of Aβ42 are solvent accessible (and possibly unstructured), whereas similar experiments by Riek and coworkers 15 found that, for the first 17 residues, the amide N-H protons in approximately half of the population exchanged quickly (≥ 10 h-1), whereas the other half exchanged slowly (≤ 10-2 h-1). The interaction of the N-terminus (see below) with the β-turn-β core of the protein may explain the slowly exchanging population.
There are also differences among Aβ42 fibril models after Gly37. Irie and co-workers 17,18 concluded that the C-terminal β-strand breaks at Gly37/38 to allow Ala42 to contact the side-chain of Met35. They observed a cross-peak in DARR NMR experiments between Met35 and Ala42 in fibrils formed after two days of incubation at 37 °C 17. These studies were based on their earlier proline-scanning experiments, in which it was found that substitution of proline at positions 40-42 reduced fibril formation and cytotoxicity, whereas threonine substitutions at residues 41 and 42 aggregated strongly and exhibited potent cytotoxicity. Supplementary Figure 1f presents a model of the mature Aβ42 fibril. We have previously proposed that the cross section of the Aβ42 fibril has two protofilaments that form contacts between Met35 and Gly37 19. Our model of the mature Aβ42 fibril developed from the observation of this Met35-Gly37 contact also predicts an intermolecular Ile31-Val39 contact between protofilaments. We were unable to observe this contact in our current studies (Supplementary Results 7), suggesting that the protofilament interface may be a source of polymorphism in Aβ42.
In general, polymorphisms in fibril structure might arise from differences in the conditions under which the fibrils are grown. For example, oxidation of Met35 may serve to disrupt hydrophobic interactions between protofilaments. Cryo-EM images of fibrils formed from Aβ42 with Met35 oxidized show bundles of individual protofilaments, each separated by a 15 Å gap 15. In this case, the unstructured N-terminus may mediate the association of protofilaments to form fibrils. A more recent cryo-EM study of Aβ42 generated a similar model of the mature fibril 20. The protofilaments have a β-turn-β conformation and a parallel and in-register orientation as in Supplementary Figure 1e, and the two protofilaments that form the mature fibril wind around a hollow core as in Supplementary Figure 1f. In this reconstruction, the hydrophobic C-terminal sequence of the peptide lines the hollow core and protofilament interactions are mediated by the N-terminal sequence.
The similarities and differences between fibril images observed by TEM and cryo-EM suggest that differences in the experimental conditions (incubation time, buffer conditions, and temperature) may influence fibril morphology. For Aβ40, there are a number of studies that suggest that the conditions under which the fibrils are grown can produce polymorphisms [21][22][23] . Tycko and co-workers have provided strong evidence for two well-defined polymorphs 24 , one containing two protofilaments in the fibril cross section and one containing three protofilaments.
For our studies, we typically grow fibrils for at least 10-14 days to obtain dense networks of fibrils by TEM. We can monitor the oxidation state of Met35 through the chemical shift of the Met 13Cε resonance. Under all of the conditions of our experiments, Met35 remained in the reduced state. While we use low salt concentrations to trap the oligomers, fibrils are grown under physiological conditions (10 mM phosphate, 150 mM NaCl, pH 7.4). Supplementary Figure 1. Molecular models of the Aβ42 oligomers (a-c) and fibrils (d-f). The model of the Aβ42 monomer in the oligomer conformation (a) was developed on the basis of structural constraints obtained by solid-state and solution NMR spectroscopy. The constraints were used in restrained molecular dynamics simulations on the Aβ42 monomer to generate a folded structure. (b) Aβ42 pentamer. The assembly of five monomers has the dimensions of the most abundant particles observed by AFM. The orientation of the monomer in the model of the pentamer was chosen to place the C-terminus in the center of the oligomer. Hydrogen-deuterium exchange data indicate that the C-terminal three amino acids are among the most protected from solvent exchange. (c) Aβ42 decamers. The association of disc-shaped pentamers can form decamers with heights of ~4 nm. (d) Monomer within the Aβ42 fibril. The monomer unit has a U-shaped geometry with Phe19 in contact with Leu34. (e) Aβ42 protofilament. At least two protofilaments associate to form mature fibrils 19. Our NMR measurements indicate that Gln15 and Gly37 (blue spheres) from adjacent peptides are close in space in the protofilament structure, while pairwise mutational experiments have shown that Asp23 and Lys28 may interact in adjacent peptides 15. The Asp23-Lys28 salt bridge is thought to be a key element of the turn structure in Aβ40 25. The observation that the Phe19-Leu34 contact in Aβ42 is the same as in Aβ40 26 argues that the β-turn-β structure is the same in the fibrils formed from these two peptides. Aβ40 and Aβ42 can homogeneously co-mix in amyloid fibrils, suggesting they have the same structural architecture 14. (f) Cross section of Aβ42 fibrils. Two protofilament units can associate to form mature fibrils (shown here in an expanded view, since the molecular contacts mediating protofilament association may be a source of polymorphism in Aβ42 fibrils). We have previously observed that Aβ42 protofilaments form contacts between Met35 and Gly37 19. However, a recent cryo-EM study of Aβ42 indicates that the protofilaments wind around a hollow core 20.
Supplementary Results 2: Atomic Force Microscopy and Fluorescence Spectroscopy.
Atomic force microscopy (AFM) provides a means to image Aβ42 oligomers in a hydrated state without the need for negative staining. We had previously introduced a new approach for AFM measurements of Aβ oligomers that images the particles using a low-force single touch of the AFM probe per pixel 5. The single-touch AFM methodology has two advantages over conventional tapping-mode AFM: the measurements are done in aqueous buffer, and only low-force contacts with the samples are made, minimizing disruption due to impact of the AFM tip. The height measurements by AFM are extremely accurate, whereas the width measurements require a correction based on the width of the AFM tip 5. Supplementary Figures 2a and 2b present representative fields of Aβ42 oligomers obtained by single-touch AFM.
AFM was carried out using a LifeScan controller developed by LifeAFM (Port Jefferson, NY) interfaced with a Digital Instruments (Santa Barbara, CA) MultiMode microscope fitted with an E scanner. With this instrument configuration, only a single contact of the AFM probe is made with the sample per pixel, with minimal compressive forces (30-100 picoNewtons nm-1) applied to the sample. AFM samples were prepared by adsorbing 20 µL of sample mixture to freshly cleaved ruby mica (S & J Trading, Glen Oaks, NY). Samples were imaged under hydrated conditions using super-sharp silicon probes (SSS-Cont, Nanosensors, Neuchatel, Switzerland) that were modified for magnetic retraction by attaching samarium cobalt particles. We estimate the effective diameter of the super-sharp silicon probes to be 4 ± 1 nm at a height of 2 nm. For volumetric estimates, non-overlapping particles in several fields were analyzed at different dilutions. Data analysis and graphics were performed using Interactive Display Language 5.0 (Research Systems Inc., Boulder, CO). In the Z-scale bars, numbers in each color square indicate the Z-value at the middle of the range for that color.
To monitor the stability of Aβ42 oligomers prepared under low-salt and low-temperature conditions, thioflavin T binding was monitored by fluorescence spectroscopy; thioflavin T fluorescence has been widely used to characterize the kinetics of fibril formation 27. We have previously observed that the particles with heights of ~4 nm can be capped by the addition of peptide inhibitors without changing their width 19. If the particles with heights of ~2 nm represent the pentamers and hexamers that have been identified by Teplow and co-workers through cross-linking studies 28, then particles with heights of ~4 nm would be consistent with the dodecamers observed by mass spectrometry 6 and other methods 29.
Volumetric analysis of Aβ42 oligomers
In Supplementary Figure 2d, the volumes of non-overlapping particles observed in the AFM fields of view for Supplementary Figures 2a-b are calculated from their heights and corrected widths. The histogram of particle volumes suggests three separate distributions of particles. The first distribution corresponds to monomers/dimers with volumes of less than 100 nm³. We have previously arrayed the soluble oligomers formed from Aβ42 by size and height (see Figure 2 in Mastrangelo et al. 5); the smallest particles observed had heights of 1-1.5 nm and widths of 5-7 nm, which we categorized as monomers or dimers. The second distribution corresponds to pentamers with a mean volume of ~350 nm³; a volume of 350 nm³ is equivalent to a cylinder with a radius of 7.1 nm and a height of 2.2 nm. Visually, this distribution corresponds to the most numerous oligomers observed in Supplementary Figure 2b, with blue/green height coloring. The third distribution corresponds to decamers or dodecamers with a volume of ~700 nm³, twice the size of the 350 nm³ particles; this distribution corresponds to the oligomers with orange/red height coloring. The oligomers are disc-shaped rather than spherical micelles, i.e. the widths of the Aβ42 oligomers are appreciably greater than their heights.
We can estimate the number of individual monomers/dimers in the oligomer sample as ~3% of the total number of monomers by analyzing the height or volume distributions and assuming that monomers/dimers have heights of 0.75-1.5 nm and volumes of < 200 nm³ (see Mastrangelo et al. 5). Similarly, we can estimate the number of large oligomers with volumes greater than ~1000 nm³ to be less than 1% on the basis of their relative numbers.
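As an illustration of this volumetric classification (our sketch, not part of the study; the bin boundaries follow the text, while the example heights and widths are invented), each disc-shaped particle can be modeled as a cylinder and binned by volume:

```python
# Classify AFM particles as monomer/dimer, pentamer/hexamer or
# decamer/dodecamer from their measured height and tip-corrected width,
# treating each disc-shaped particle as a cylinder. Bin boundaries follow
# the text; the sample measurements below are invented for illustration.
import math

def cylinder_volume_nm3(height_nm, width_nm):
    return math.pi * (width_nm / 2.0) ** 2 * height_nm

def classify(volume_nm3):
    if volume_nm3 < 200:
        return "monomer/dimer"
    if volume_nm3 < 500:
        return "pentamer/hexamer"   # distribution centered at ~350 nm^3
    if volume_nm3 < 1000:
        return "decamer/dodecamer"  # distribution centered at ~700 nm^3
    return "large aggregate"

# (height, corrected width) in nm - hypothetical example particles
for h, w in [(1.2, 6.0), (2.2, 14.2), (3.8, 15.0)]:
    v = cylinder_volume_nm3(h, w)
    print(f"h={h} nm, w={w} nm -> V={v:.0f} nm^3 ({classify(v)})")
# The 2.2 nm x 14.2 nm disc gives ~350 nm^3, matching the text's pentamer
# example (a cylinder of radius 7.1 nm and height 2.2 nm).
```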
Time-resolved binding of thioflavin T to Aβ42 oligomers incubated at 4 °C and 37 °C
Oligomer samples are stable at 4 °C for several days. The same samples incubated at 37 °C rapidly form fibrils with a very short lag phase due to the relatively high concentration (200 µM) of Aβ42 peptide used (see sample preparation methods in main text). Thioflavin T binding assays were performed for Aβ42 oligomer samples incubated at 4 °C and 37 °C. Thioflavin T dye was added to Aβ42 samples at a 20:1 ratio, and fluorescence was measured at 490 nm with an excitation wavelength of 446 nm in a SpectraMax spectrofluorometer (Molecular Devices, Sunnyvale, CA) using SoftMax Pro control software.
Supplementary Results 3: Size Exclusion Chromatography and Dynamic Light Scattering.
Size exclusion chromatography (SEC) has been used extensively for characterizing the size distributions of oligomers of Aβ peptides under nondenaturing, nondisaggregating conditions 30,31 . The method is useful for distinguishing protofibrils from small oligomers, and in some cases has the ability to distinguish smaller populations of oligomers. Supplementary Figure 3 presents an analysis by SEC of the Aβ42 oligomers stabilized at 4 °C under low salt conditions in order to estimate their composition and homogeneity. Additionally, dynamic light scattering measurements were taken of the Aβ42 oligomer samples and are described below.
The two methods, SEC and AFM, for estimating the composition and purity of the sample are not directly comparable; nevertheless, some general conclusions can be drawn. First, the level of monomers/dimers in the sample estimated by both methods is small, on the order of 2-3% of the total. Second, the level of very large oligomers is small, again less than ~2% of the total. Taken together, the AFM and SEC results agree that over 90% of the sample is comprised of small oligomers, from pentamers to dodecamers. The SEC and AFM results are not completely consistent with respect to the distribution of small oligomers (pentamers to dodecamers). The SEC analysis shows a relatively narrow distribution for the predominant oligomer, which migrates with a molecular weight corresponding to a pentamer (and possibly hexamer). In contrast, our analysis of AFM heights suggests the presence of two distributions of particles. The major fraction of the particles observed by AFM has heights of ~2 nm and molecular volumes corresponding to pentamers/hexamers; a minor fraction has heights of 3-4 nm and molecular volumes corresponding to decamers/dodecamers. Both populations of particles have similar widths (10-15 nm), suggesting that the 2 nm high particles (i.e. pentamers) can associate in solution. We propose that the absence of decamers (or dodecamers) in the SEC (and native gel) measurements is due to weak association of the pentamers under low-temperature and low-salt conditions. Aβ42 oligomer samples were analyzed by size exclusion chromatography using an ÄKTA Purifier 10 FPLC (GE Healthcare, Piscataway, NJ) placed in a deli-case refrigerator at 4 °C. Samples were injected into a Superdex 200 column (3,000-600,000 MW range) at an elution rate of 0.4 ml min-1 and detected by absorbance at 220 nm. A standard calibration curve (Supplementary Figure 3b) was constructed by measuring the elution times of ferritin (440 kDa), alcohol dehydrogenase (150 kDa), bovine serum albumin (66 kDa), ovalbumin (44 kDa), carbonic anhydrase (29 kDa), soybean trypsin inhibitor (20 kDa), lysozyme (14 kDa), and aprotinin (6.5 kDa).
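SEC calibration of this kind is a linear fit of log(MW) against elution volume. The sketch below (ours; the standards' molecular weights are from the text, whereas the elution volumes are invented placeholders) shows the standard procedure and how an unknown peak would be sized.

```python
# Linear SEC calibration: log10(MW) vs. elution volume, then estimate the
# molecular weight of an unknown peak. The standards' molecular weights
# are from the text; the elution volumes are invented placeholders.
import math

standards = [  # (MW in kDa, elution volume in ml - volumes hypothetical)
    (440, 10.2), (150, 12.1), (66, 13.6), (44, 14.4),
    (29, 15.2), (20, 15.9), (14, 16.5), (6.5, 17.8),
]

xs = [v for _, v in standards]
ys = [math.log10(mw) for mw, _ in standards]
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

def mw_kda(elution_ml):
    return 10 ** (intercept + slope * elution_ml)

# An Abeta42 pentamer (5 x ~4.5 kDa, i.e. ~23 kDa) would elute near the
# carbonic anhydrase / trypsin inhibitor standards on this column.
print(f"peak at 15.6 ml -> ~{mw_kda(15.6):.0f} kDa")
```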
Dynamic Light Scattering
Dynamic light scattering has been used to study the initial aggregation kinetics of Aβ peptides, since the amount of scattering increases exponentially with the size of the particle 30,32. The drawback of dynamic light scattering is that the exponential dependence of the scattering intensity on particle size means that a few large aggregates in the sample can overwhelm the scattering from smaller, more abundant species. Nevertheless, dynamic light scattering can provide an estimate of the hydrated radius of the particle if the larger aggregates are removed. Measurements were taken of Aβ42 oligomers stabilized by low temperature and low salt before and after filtering through a 0.02 µm filter. Unfiltered samples had average diameters of 98.6 nm (0.327 polydispersity), whereas filtered samples had average diameters of 14.8 nm (0.375 polydispersity). The SEC analysis of the Aβ42 oligomers showed that, in addition to the dominant pentamer/hexamer complex, a very weak larger complex of ~100 monomers was observed. This small contribution of a large aggregate dominates the light scattering from the sample and results in a very large apparent particle size. However, when the sample is filtered, the resulting oligomer diameter matches that obtained from AFM.
Dynamic light scattering measurements were made using a Brookhaven Instruments 90Plus Particle Size Analyzer (Brookhaven Instruments, Holtsville, NY). Aβ42 oligomer samples were prepared as described in the main text or with an additional filtering step using a 0.02 µm nylon filter (PALL Microelectronics, East Hills, NY). Supplementary Figure 3. Size exclusion chromatography of Aβ42 oligomers. The SEC chromatogram in (a) shows a single intense peak, indicating that the majority of particles (94%) have molecular weights corresponding to a pentamer/hexamer complex. Small percentages of trimers and dimers, as well as higher-order complexes, are also present. No monomers are present. The use of low salt and low temperature in the SEC may contribute to improving the resolution. In (b), a standard calibration curve is presented with an r² value of 0.98. The observation of smaller oligomeric species supports the use of a standard calibration curve in analyzing particle sizes and supports the conclusion that the particles observed by single-touch AFM are relatively homogeneous.
Supplementary Results 4: Toxicity and Cell-Viability Assays.
Several cell culture models have been used to assay β-amyloid toxicity, including PC12 33-35, neuroblastoma [36][37][38][39], and primary murine neuronal cultures [40][41][42]. Primary cultures of murine cortical neurons were chosen for this study because they closely resemble the human neuronal environment. Cell viability was determined by assaying mitochondrial reduction of the tetrazolium redox dye 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT). MTT reduction has been shown to be a reliable and reproducible indicator of the cytotoxic potential of Aβ peptides 43. In Figure 2f (main text), we showed the toxicity to primary neuronal cells of soluble oligomers formed under low-temperature and low-salt conditions and of protofibrils (and oligomers) obtained after incubation at 37 °C for 6 h. In Supplementary Figure 4, we extend these studies to show that the toxicity of Aβ42 decreases further as the oligomers/protofibrils are incubated for longer periods (1-3 days) under physiological salt conditions before being added to the neuronal cell cultures. Under these conditions, mature fibrils are formed, and the population of neurotoxic oligomers diminishes.
Neonatal murine cortical neuronal cultures were prepared using 6-8 E16 pups. Prior to culture, 48-well plates were incubated overnight at 4 °C with 4 µg ml-1 laminin (Sigma, St. Louis, MO) and 100 µg ml-1 poly-D-lysine (Sigma, St. Louis, MO). The plates were rinsed with sterile water and allowed to dry under UV light. Under sterile conditions, the pups were extracted from the uterus of a decapitated female, and their brains were removed and placed into chilled Hank's BSS without Ca2+ and Mg2+ (Invitrogen, Carlsbad, CA). The brain stem, olfactory bulbs, and leptomeninges were removed from each brain. The HBSS was replaced with Neurobasal medium (Invitrogen, Carlsbad, CA) containing 0.25% trypsin, and the tissue was incubated at 37 °C for 10-15 min. The tissue was washed 3x with plain Neurobasal medium, followed by a 10 min room-temperature incubation in Neurobasal medium containing 2 mg ml-1 soybean trypsin inhibitor (Sigma, St. Louis, MO). The tissue was washed 3x with plain Neurobasal medium. The tissue was dispersed using three fire-polished Pasteur pipettes of decreasing borehole size in G3 Neurobasal medium containing 10 µg ml-1 gentamicin (Sigma, St. Louis, MO), 25 µM L-glutamate, 500 µM L-glutamine (Invitrogen, Carlsbad, CA) and B27 (Invitrogen, Carlsbad, CA), followed by filtration through a 40 µm cell strainer (Fisher Scientific, Waltham, MA). The volume of the filtered cell suspension was increased to 25 ml with G3 medium. The filtered suspension was left undisturbed for 5-10 min, after which 10 ml of filtered suspension was removed from the top and placed into a new tube. 10 ml of G3 medium was added back into the filtered cell suspension, and this procedure was repeated two more times. The 30 ml of collected cell suspension was centrifuged at 200 x g for 2 min and the supernatant removed. The cell pellet was re-suspended in 1 ml G3 medium. The cells were counted and plated at 6.7 x 10^5 cells/ml in two 24-well plates. On day two, one half of the G3 medium was removed from the cell cultures and replaced with the same volume of fresh G3 medium. On day three, one half of the medium was removed and replaced with the same volume of G2 Neurobasal medium containing 10 µg ml-1 gentamicin, 500 µM L-glutamine, B27, and 10 µM araC (cytosine-β-D-arabinofuranoside) (Sigma, St. Louis, MO). All experiments were performed between days 4-6 using G2 medium without araC.

Supplementary Results 5: Fourier Transform Infrared (FTIR) Spectroscopy.

FTIR 44,45, Raman 46 and CD 47,48 spectroscopy have been used extensively to assess the secondary structure of Aβ fibrils and oligomers. Raman and IR methods monitor the amide vibrational frequencies, which are sensitive to secondary structure. Supplementary Figure 5 presents the FTIR spectra of Aβ42 fibrils and oligomers. In Supplementary Figure 5a, the IR spectrum of the fibrils is dominated by an amide I vibration at 1630 cm-1, characteristic of β-sheet secondary structure 49. The integrated intensity of this band suggests that most (85%) of the fibril structure is β-sheet. The amide I intensities are consistent with the model proposed in Supplementary Figure 1, where only the N-terminal 10 amino acids are solvent exposed. The N-terminal residues are thought to be unstructured. In the IR spectrum, random coil is observed as a broad featureless band at ~1650 cm-1. The region from 1660-1695 cm-1 has several distinct (i.e. narrow) bands that may be attributed to the turn structure in the fibrils and to side-chain vibrations of arginine, asparagine and glutamine.
The IR spectrum of the stable oligomers (blue) is distinctly different from that of the fibrils. There is a relatively well-defined band at 1645 cm-1. This band is broader than the corresponding band for the fibrils, but still too low in frequency to correspond to α-helix. We assign the 1645 cm-1 vibrational band to unordered secondary structure and/or to "aggregated strand" that has a less defined preference for φ and ψ torsion angles. There is increased intensity in the amide I region from 1660-1695 cm-1 relative to the fibrils. We assign the vibrational modes in this region to turns in the Aβ42 sequence and to side-chain vibrations of arginine, asparagine and glutamine. The solvent accessibility studies described in Supplementary Results 6 suggest there are more turns in the oligomers than in the fibrils. We do not assign the high-frequency vibration at 1685 cm-1 to anti-parallel β-sheet structure 45, on the basis of the H/D exchange and the volumetric analysis of the AFM images. Supplementary Figure 5b presents FTIR spectra of hydrated (red dashed line) and lyophilized (black line) oligomers. The major component of the spectrum at 1645 cm-1 has the same frequency and intensity, indicating that the overall conformation is not influenced by lyophilization. There is a small increase in intensity between 1660 and 1685 cm-1 and a small decrease in intensity at 1620 cm-1 upon lyophilization, which we attribute to small changes in the interaction of the N-terminal sequence of the Aβ42 peptide upon lyophilization.
The FTIR spectra were obtained from 400-4000 cm-1 on a Bruker IFS 66V/S spectrometer. The spectral region from 1425 to 1900 cm-1 was curve-fitted, and the integrated areas of the fitted bands were used to determine the contributions of β-sheet, random coil, α-helix, β-turn and anti-parallel β-sheet to the peptide structure; the different secondary structures have characteristic amide frequencies 49. [Supplementary Figure 5 legend labels: anti-parallel β-sheet; aggregated strands; Arg, Asn, Gln side-chains.]
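Amide I band decomposition of this kind is typically performed by fitting a sum of Gaussian components at the characteristic frequencies and integrating their areas. The sketch below (our illustration on a synthetic spectrum; the band centers follow the text, everything else is invented) shows the procedure.

```python
# Decompose a synthetic amide I spectrum into Gaussian bands at the
# characteristic frequencies named in the text and report fractional
# areas. Band centers follow the text; the "spectrum" itself is simulated.
import numpy as np
from scipy.optimize import curve_fit

CENTERS = [(1630, "beta-sheet"), (1650, "random coil"),
           (1675, "turns / side-chains")]

def bands(x, *p):
    # p packs (amplitude, width) for each fixed band center
    y = np.zeros_like(x)
    for (c, _), a, w in zip(CENTERS, p[0::2], p[1::2]):
        y = y + a * np.exp(-((x - c) ** 2) / (2 * w ** 2))
    return y

x = np.linspace(1580, 1720, 400)
true_y = bands(x, 1.4, 12.0, 0.15, 18.0, 0.12, 10.0)  # fibril-like input
y = true_y + np.random.default_rng(0).normal(0.0, 0.005, x.size)

p0 = [0.5, 10.0] * len(CENTERS)
popt, _ = curve_fit(bands, x, y, p0=p0)

areas = [abs(a * w) * np.sqrt(2 * np.pi)
         for a, w in zip(popt[0::2], popt[1::2])]
for (c, name), area in zip(CENTERS, areas):
    print(f"{name} ({c} cm^-1): {100 * area / sum(areas):.0f}% of amide I")
# For this fibril-like input the 1630 cm^-1 band dominates (~80% of the
# area), mirroring the ~85% beta-sheet content reported above.
```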
Supplementary Results 6: Amide Hydrogen-Deuterium Exchange by Solution NMR Spectroscopy.
The soluble nature, stable character and relatively low molecular weight of the neurotoxic oligomers make them amenable to solution NMR methods. Previous studies on monomers of Aβ42 have yielded assignments for the resonances observed in the 1H-15N heteronuclear single quantum coherence (HSQC) spectra 47,50,51. Supplementary Figure 6 takes advantage of these assignments to probe the solvent accessibility as a function of the Aβ42 sequence. Peaks with overlapping assignments are marked with (*) and their exchange ratios are colored in light gray. No peak assignments were made for Asp1, Ala2, Glu3, His6, Phe20, or Lys28. (c) Schematic of the Aβ42 monomer in the oligomer conformation based on solid-state NMR and amide H-D exchange. In addition to the turn conformation defined by the Phe19-Leu34 contact observed by solid-state DARR NMR measurements, amide exchange suggests solvent-accessible turn regions at His13-Gln15, Gly25-Gly29, and Gly37-Gly38, as well as a loosely defined solvent-accessible N-terminal segment up to Gly9.
Oligomer samples were prepared using U-15N-Aβ42 peptides (rPeptide, Bogart, GA) and analyzed on a Bruker AVANCE 700 MHz spectrometer using a TXI probe. 1H-15N HSQC spectra were obtained at 4 °C on Aβ42 oligomer samples containing either 10% D2O (for signal locking) or 70% D2O (for exchange). Peak assignments were based on previous reports 47,50,51. Peak exchange occurred within the experimental acquisition time (< 1.5 h), and exchange ratios were calculated as the ratio of the deconvoluted peak volumes of the exchanged samples (containing 70% D2O) to the deconvoluted peak volumes of the non-exchanged samples (containing 10% D2O).
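The per-residue exchange ratio described above is a simple volume ratio. A minimal sketch (ours, with invented peak volumes) is:

```python
# Per-residue amide H/D exchange ratios: deconvoluted HSQC peak volume in
# 70% D2O divided by the volume in 10% D2O. Amides that exchange lose 1H
# signal, so low ratios mark solvent-accessible residues and ratios near 1
# mark protected ones. All peak volumes below are invented placeholders.
peak_volumes = {            # residue: (V_70%_D2O, V_10%_D2O)
    "His14": (0.35, 1.00),  # fast exchange -> solvent accessible
    "Val40": (0.90, 1.00),  # slow exchange -> protected C-terminus
    "Ala42": (0.95, 1.00),
}

for residue, (v_ex, v_ctrl) in peak_volumes.items():
    ratio = v_ex / v_ctrl
    label = "protected" if ratio > 0.6 else "accessible"
    print(f"{residue}: exchange ratio {ratio:.2f} ({label})")
# Full exchange in 70% D2O would reduce the peak volume toward ~0.3 of
# the control, while the protected C-terminal residues of the oligomers
# retain ratios near 1.
```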
Supplementary Figure 7.
Parallel and in-register orientation of β-strands in Aβ42 fibrils, but not in neurotoxic oligomers. (a) Labeling scheme to test for a parallel and in-register orientation of the C-terminal β-strand in Aβ42 fibrils and oligomers using an equimolar mixture of Aβ42-GMG and Aβ42-G33 peptides. The red dashed line corresponds to the 4.7 Å distance expected between adjacent Gly33 residues along the fibril axis. (b) Rows from the two-dimensional DARR spectra of Aβ42 fibrils formed from an equimolar mixture of Aβ42-GMG:Aβ42-G33 (red trace) or Aβ42-GMG alone (black trace). A large cross-peak (red line) is observed in the Aβ42-GMG:Aβ42-G33 mixture, indicating inter-strand molecular contacts between Gly33-CO and Gly33-Cα in a parallel and in-register orientation. A smaller natural-abundance (*) cross-peak is observed in the spectrum of Aβ42-GMG alone. Spinning side bands (ssb) are observed due to magic angle spinning, as indicated. (c) Rows from the two-dimensional DARR spectra of Aβ42 oligomers formed from an equimolar mixture of Aβ42-GMG:Aβ42-G33 peptides (red trace) or from Aβ42-GMG peptide alone (black trace). No change in cross-peak intensity is observed, indicating that the C-terminal strands in Aβ42 oligomers do not have a parallel and in-register orientation. 13C NMR chemical shifts (ppm relative to neat tetramethylsilane) of Aβ42 oligomers and fibrils: for split assignments, the major chemical shifts (>70%) are shown in bold.
Alarming Levels of Drug-Resistant Tuberculosis in HIV-Infected Patients in Metropolitan Mumbai, India
Background: Drug-resistant tuberculosis (DR-TB) is a looming threat to tuberculosis control in India. However, no countrywide prevalence data are available, and the burden of DR-TB in HIV-co-infected patients is likewise unknown. Undiagnosed and untreated DR-TB among HIV-infected patients is a major cause of mortality and morbidity. We aimed to assess the prevalence of DR-TB (defined as resistance to any anti-TB drug) in patients attending public antiretroviral treatment (ART) centers in greater metropolitan Mumbai, India. Methods: A cross-sectional survey was conducted among adult and paediatric ART-center attendees. Smear microscopy, culture and drug-susceptibility testing (DST) against all first- and second-line TB drugs using phenotypic liquid culture (MGIT) were conducted on all presumptive tuberculosis patients. Analyses were performed to determine DR-TB prevalence and resistance patterns separately for new and previously treated, culture-positive TB cases. Results: Between March 2013 and January 2014, ART-center attendees were screened during 14,135 visits, of whom 1724 had presumptive TB. Of the 1724 attendees, 72 (4%) were smear-positive and 202 (12%) had a positive culture for Mycobacterium tuberculosis. Overall, DR-TB was diagnosed in 68 (34%, 95% CI: 27%-40%) TB patients. The proportions of DR-TB were 25% (29/114) and 44% (39/88) among new and previously treated cases, respectively. The patterns of DR-TB were: 21% mono-resistant, 12% poly-resistant, 38% multidrug-resistant (MDR-TB), 21% pre-extensively drug-resistant (MDR-TB plus resistance to either a fluoroquinolone or a second-line injectable), 6% extensively drug-resistant (XDR-TB) and 2% extremely drug-resistant TB (XDR-TB plus resistance to any group-IV/V drug). Only a previous history of TB was significantly associated with the diagnosis of DR-TB in multivariate models. Conclusion: The burden of DR-TB among HIV-infected patients attending public ART centers in Mumbai was alarmingly high, likely representing ongoing transmission in the community and in health facilities. These data highlight the need to promptly diagnose drug resistance among all HIV-infected patients by systematically offering access to first- and second-line DST to all patients with "presumptive TB" rather than "presumptive DR-TB", and to tailor the treatment regimen to the resistance patterns.
Introduction
India is a high-burden country for tuberculosis (TB) and multidrug-resistant TB (MDR-TB). The World Health Organization has estimated that India accounted for 26% of the total number of TB cases worldwide in 2012, with 2.2% of new and 15% of retreatment cases being caused by multidrug-resistant strains [1]. Further, India is home to approximately 2.4 million people living with HIV [2] and is considered to have a high burden on account of the large absolute number of people living with HIV in the country.
The dual burden of HIV and TB/DR-TB in India is significant, with a combined rate of 5.2%, ranging from 0.4% to 28.8% across studies, and increasing trends noted in states with a higher burden of HIV infection [3][4][5][6][7]. However, nation-wide studies do not exist, and previous studies were conducted mainly in hospitals and tertiary care centres [2, 6-11]. A crude estimate from these studies suggests that 2500-3000 HIV-infected persons develop MDR-TB annually in India.
Country-wide or state-wide drug resistance surveys (DRS) aim to estimate the DR-TB burden at the country or state level. While this approach is scientifically and operationally acceptable, it may mask significant variation in the magnitude of the epidemic across localities, communities and specific populations. For India, a vast country with an enormous burden of TB and a relatively large absolute burden of HIV, this seems to hold true: from an overcrowded, impoverished slum in Mumbai to a small isolated village in the north-eastern regions of the country, one can assume that several different epidemics may coexist. A description of such local epidemics is necessary to complement the country-wide prevalence estimate. While there is an urgent need for a nationally representative, country-wide DRS in India, specific studies to identify pockets of extremely high DR-TB prevalence or extensive drug-resistance patterns are equally needed in order to advocate for and implement effective control strategies.
The overall aim of this study was to assess the burden of drug-susceptible and drug-resistant tuberculosis among HIV-infected patients attending antiretroviral treatment (ART) centers in the metropolitan area of Mumbai. The specific objectives were: a) to determine the proportion of HIV-infected patients with DR-TB among those attending public ART centers; b) to describe drug susceptibility patterns among Mycobacterium tuberculosis isolates from this population; and c) to identify factors associated with TB and drug-resistant TB among HIV patients. We aimed to contribute to the evidence base that informs policies and practices and to help estimate the resources needed to control the epidemic in this specific group, as well as in the community.
Ethics
The study was approved by the Institutional Ethics Committee of Grant Medical College and Sir J.J. Group of Hospitals (Mumbai, India), the Ethics Review Board of Médecins Sans Frontières (Geneva, Switzerland) and the Ethics Advisory Group of the International Union Against Tuberculosis and Lung Disease (Paris, France). The study protocol was approved by the Indian
Study design
This was a cross-sectional survey among HIV-infected adult and paediatric patients attending public and public-private ART clinics in the greater metropolitan Mumbai area. All patients with presumptive pulmonary or extra-pulmonary TB were assessed with smear microscopy and conventional liquid culture. All M. tuberculosis isolates underwent drug susceptibility testing (DST) for first- and second-line anti-TB drugs.
Sample size
The desired sample size was determined separately for new and previously treated culture-positive TB cases. Previous tuberculosis treatment was defined as any anti-tuberculosis treatment reported by the patient. Assuming an MDR-TB prevalence of 3% among new cases and 17% among retreated cases, based on a DST survey conducted in Gujarat [12], a sample size of 123 confirmed new cases and 110 confirmed retreatment cases was sought in order to estimate the prevalence of MDR-TB with 95% confidence intervals having a margin of error of 3% for new cases and 7% for retreated cases, respectively. All HIV-infected adult and paediatric patients enrolled in the ART centres were potentially eligible for the study if they had presumptive pulmonary or extra-pulmonary TB based on symptom screening, regardless of when they had enrolled in the centres or whether they were on ART at the time of the study. Patients on TB treatment at the time of the study were excluded.
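The quoted sample sizes follow from the standard normal-approximation formula for estimating a proportion, n = z²p(1−p)/d². The check below (ours) reproduces the stated targets from the stated assumptions.

```python
# Sample size for estimating a proportion with a given margin of error:
# n = z^2 * p * (1 - p) / d^2 (normal approximation; 95% CI -> z = 1.96).
import math

def sample_size(p, margin, z=1.96):
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

# New cases: assumed MDR-TB prevalence 3%, margin of error 3%.
print(sample_size(0.03, 0.03))  # -> 125 (the study targeted 123)
# Retreatment cases: assumed prevalence 17%, margin of error 7%.
print(sample_size(0.17, 0.07))  # -> 111 (the study targeted 110)
```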
Recruitment and sampling procedure
All HIV-infected ART center attendees were screened by an MSF-employed nurse during the study period. Patients with presumptive TB were investigated using a standard diagnostic algorithm recommended by the World Health Organization [13] that included TB culture and DST. The nurse explained in detail the objectives of the study to the patient and/or caregiver and obtained the signature or thumbprint of the patient if consent was given to participate. When pulmonary TB was presumed, two sputum specimens were collected on the same day, one hour apart, at each study site/hospital laboratory. When extra-pulmonary TB (EPTB) was presumed, biological specimens (fine needle aspirates, pleural fluid, cerebrospinal fluid, etc) were obtained from extrapulmonary sites. All specimens were transferred to Hinduja Hospital Microbiology Laboratory in Mumbai for culture and first-and second-line DST.
Conventional microscopy with Ziehl-Neelsen (ZN) staining for acid-fast bacilli was performed, and sputum decontamination was carried out using the N-acetyl-L-cysteine and sodium hydroxide method. Concentrated sediment was inoculated into one liquid culture tube for testing using the Mycobacterial Growth Indicator Tube (MGIT 960) method. Positive cultures underwent microscopy with ZN staining to confirm cord formation, and speciation with MPT64 antigen detection by immunochromatography was carried out to confirm M. tuberculosis complex. Specimens fulfilling the above criteria underwent further testing with phenotypic DST using the MGIT system for the following drugs: isoniazid, rifampicin, ethambutol, ofloxacin, moxifloxacin, kanamycin, capreomycin, PAS, ethionamide, clofazimine and linezolid. Non-tuberculous mycobacteria (NTM) speciation was done by molecular methods using reverse line blot hybridisation. The Hinduja laboratory is quality-controlled and has been accredited for first-line DST by the WHO Supranational Reference Laboratory in Bangalore and by the College of American Pathologists. The laboratory was also accredited by the TB programme for second-line DST in December 2013; prior to that date, if a strain was suspected of having resistance to one or more second-line anti-TB drugs, it was sent to the National Tuberculosis Institute Laboratory in Bangalore for confirmation.
Multidrug-resistant tuberculosis (MDR-TB) was defined as resistance to both isoniazid and rifampicin; pre-XDR-TB was defined as MDR-TB with additional resistance to either a fluoroquinolone or a second-line injectable agent; and extensively drug-resistant tuberculosis (XDR-TB) was defined as MDR-TB with additional resistance to both a fluoroquinolone and an injectable agent. Extremely drug-resistant tuberculosis (XXDR-TB) was defined as XDR-TB with additional resistance to any group IV and/or group V TB drugs (PAS, ethionamide, clofazimine, linezolid) [13].
Management of those diagnosed with DR-TB
All patients diagnosed with MDR- or XDR-TB were managed in accordance with the national DR-TB treatment guidelines [14], while those with pre-XDR-TB were offered individualized treatment with four drugs likely to be effective.
Data collection and analysis
Demographic, clinical and laboratory data, antiretroviral treatment status (yes/no) and duration on ART, as well as data on previous TB treatment, were double-entered into an EpiData database (Version 3.1, EpiData Association, Odense, Denmark), validated and analyzed.
To identify factors associated with TB and DR-TB, univariate and multivariate analyses were performed using Poisson and binary logistic regression models. Factors significant at the p ≤ 0.05 level on univariate analysis were entered into the multivariate logistic regression models. Factors were coded as categorical variables, and missing values for CD4 cell counts were imputed using a multiple imputation method. Transgender individuals (all were male to female) were grouped with biological males in the models. All factors were entered as a block into the multivariate logistic regression models. Data analysis, including the multivariate logistic regression models, was conducted with SPSS Version 20.0 (IBM Corp., Armonk, NY; released 2011).
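As a hedged illustration of this modeling approach, the following Python sketch fits a Poisson model (with robust standard errors, yielding prevalence ratios for a binary outcome) and a binary logistic model (yielding odds ratios) on synthetic data. The variable names, coding, and effect sizes are invented for the example and do not reproduce the study's dataset, which was analyzed in SPSS.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
# Synthetic data; column names and coding are illustrative only.
df = pd.DataFrame({
    "age_over_35":   rng.integers(0, 2, n),
    "on_art":        rng.integers(0, 2, n),
    "cd4_under_200": rng.integers(0, 2, n),
    "previous_tb":   rng.integers(0, 2, n),
})
logit = 0.5 * df["previous_tb"] + 0.4 * df["cd4_under_200"] - 2.0
df["culture_positive"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(df[["age_over_35", "on_art", "cd4_under_200", "previous_tb"]])
y = df["culture_positive"]

# Poisson regression with robust errors gives prevalence ratios for a
# binary outcome; binary logistic regression gives odds ratios.
poisson = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0")
logistic = sm.Logit(y, X).fit(disp=False)
print(np.exp(poisson.params))   # prevalence ratios
print(np.exp(logistic.params))  # odds ratios
```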
Results
Screening for presumptive TB was carried out during 14,135 patient visits at seven ART centers in metropolitan Mumbai between March 2013 and January 2014 (Figure 1). Individual patients might have been screened more than once during the study period. A total of 1741 HIV-infected patients with presumptive tuberculosis (TB) were identified. All of them consented to participate in the study and were investigated for drug-resistant TB. The sputum specimens of 17 patients were found insufficient for laboratory investigations and had to be excluded. Thus, 1724 (99%) of the eligible patients were included in the study.
Patient characteristics
The median age of the 1724 patients was 35 years (interquartile range, IQR: 24-44) (Table 1) and the majority (60%) were male. A large proportion (53%) of patients had an average family income between 3500 and 7000 Indian National Rupees (equivalent to 60-120 USD) per month. Most of the patients (98%) had pulmonary TB. Among the entire study cohort, 80% were on ART during the study period and the majority (52%) had CD4 cell counts lower than 500 cells/mL at their last visit to an ART center. The median duration of exposure to ART prior to enrollment in the study was 26 months (IQR: 10.7-47.5). More than half (933/1724) of the presumptive TB patients had had at least one episode of active TB disease in the past. Some patients were smear-positive but culture-negative, while 141 patients were culture-positive but smear-negative. Patients with a history of TB had a higher rate of smear-positivity (4.4% versus 3.9%), but a lower culture-positivity rate (9.4% versus 14.4%), compared with patients without a TB history (Figure 2).
Factors associated with culture-confirmed TB, DR-TB and MDR-TB
The demographic and clinical factors were assessed for association with culture-confirmed TB, DR-TB and MDR-TB. The univariate and bivariate analyses found age, ART status, CD4 count at last visit and previous episode of TB significantly related to culture-positive TB (Table 3). A multivariate Poisson regression model showed that older age, pre-ART status (i.e. not yet on ART), CD4 count less than 200 cells/mL at the last visit and a previous episode of TB were associated with culture-positive TB. None of the factors other than previous history of TB were associated with drug-resistant TB (Table 4) and multi-drug resistant TB (Table 5) in bivariate and multivariate binary logistic regression models.
Discussion
To our knowledge this is the first DR-TB survey carried out among HIV clinic attendees in India. This study shows that, among HIV-infected children and adults in Mumbai, the burden of drug-resistant tuberculosis is extremely high: almost one in four new TB cases and one in two of those previously treated for TB have a drug-resistant strain. Of just as great concern, a large proportion of these strains was resistant to one or more second-line tuberculosis drugs, especially fluoroquinolones.
The overall rate of culture positivity amongst presumptive TB cases was surprisingly low (11.7%). We hypothesize that this was due neither to limitations in laboratory techniques nor to the presence of NTM disease, but rather to the broad inclusion criteria, which required a person attending a study site to have just one of four possible TB symptoms, as recommended by WHO [13]; a person with 'current cough', for example, who was otherwise stable was eligible for enrolment. Another possible contributor to the low rate of TB culture positivity was the relatively large number of poor-quality specimens (e.g. consisting of saliva), despite active instruction being given by a dedicated study nurse at each site. In any case, this finding warrants further investigation.
Even though the overall yield of TB was small in the pediatric cohort as well, it remains significant that almost half of the children with TB were infected with drug-resistant strains, most commonly pre-XDR-TB. Since bacteriological confirmation of DR-TB is more challenging in young children than in adults, as they cannot expectorate sputum and are more likely to have paucibacillary and extra-pulmonary TB, we hypothesize that the burden of TB and DR-TB is likely to be underestimated among children in this study, similar to what has been found in a recent meta-analysis [15]. With less than 2% of all study participants having specimens taken from extrapulmonary sites, it is almost certain that EPTB is being underdiagnosed as well in this cohort. A separate analysis found no significant association between EPTB and DR-TB in children or adults.
Our statistical models revealed no significant associations between most demographic and clinical factors and the risk of DR-TB and MDR-TB. We believe that these findings are important precisely for their lack of associations; it seems that most TB/HIV co-infected patients attending ART centers in Mumbai are at risk for DR-TB. Although the relatively small sample size limits the power of our analyses and calls for cautious interpretation, the lack of associations suggests that all those infected with HIV and presumed to have active TB should be tested for drug-resistant strains. Given the high population density in Mumbai, where a large proportion of the population lives in slums under extreme poverty, and given the very high TB prevalence and relatively high HIV burden reported in greater metropolitan Mumbai, these data are unlikely to be representative of a country as vast and diverse as India. Nevertheless, the living conditions in Mumbai and common practices in the public and private health sectors (such as the prescribing of inappropriate regimens and the over-the-counter availability of fluoroquinolones and other drugs with anti-TB properties) are similar to those of other large metropolitan centres in the country, so these data could very well represent the DR-TB situation in cities such as New Delhi, Kolkata and Chennai.
While it may not be possible to generalise our estimates to the entire country or even to HIV-uninfected populations, they serve to highlight the overall magnitude of the DR-TB epidemic in Mumbai, which is not unknown [16,17]. A high prevalence of MDR-TB strains (11-68%) was reported in tertiary health facilities as early as 1991, followed by further documentation in 2006 [18][19][20], including information on the magnitude of the epidemic in children [21]. A study by D'Souza et al in 2009 [18] documented high levels of multiple drug resistance (both MDR and poly-drug resistance) amongst previously untreated cases in urban parts of Mumbai. In 2011, Udwadia et al reported a case series of totally drug-resistant TB (a term that has not officially been endorsed by WHO) in Mumbai, which captured the attention of local and international media [22,23]. However, to date such findings are often overlooked and their importance minimized as representing only selected populations, laboratory or tertiary care settings, and small case series. Our study confirms that there is more than one epidemic ongoing in Mumbai and reinforces the urgent need to accurately measure the overall prevalence and incidence of DR-TB around the country in order to define appropriate interventions. Studies in selected populations such as this one complement the overall estimates and can help in directing resources and prioritizing interventions targeted at the most vulnerable groups.
This survey is subject to the usual limitations in survey design and data collection. There is likely to be a tendency for patients to not report previous treatment either because they do not remember (recall limitation) or, on purpose, to avoid going through a long course of treatment that includes daily injectable medication and is known among patients for debilitating side effects [24]. Such bias could have led to an overestimate of DR-TB among new cases and an underestimate among retreatment cases. However, most HIV-infected patients attending ART clinics are usually aware of tuberculosis and have been counseled and screened for TB on several occasions, so recall limitation is rather unlikely.
The majority of HIV-infected patients attending public and public-private ART centers in the city are likely to access the public national TB programme for TB diagnosis and treatment. However many still seek care from private practitioners or may switch between the public and private sectors. The contribution to DR-TB levels from suboptimal treatment regimens prescribed in the unregulated Indian private health sector has been well documented [25][26][27]. Cox et al in 2007 have shown that even under well-established DOTS programmes in areas with high levels of drug resistance, high levels of amplification of drug resistance are to be expected [28].
The high level of resistance to three or more first-line anti-TB drugs and to fluoroquinolones has been previously described by others [29]. The proportion of previously untreated cases in our study that were resistant to more than three drugs, especially isoniazid, rifampicin and a fluoroquinolone, was particularly alarming and highlights two major issues in the management of TB in the setting of HIV/ART clinics. Firstly, it points to the scenario of nosocomial transmission of TB and DR-TB. Those attending an ART clinic at least once a month are more likely to be exposed to susceptible and resistant strains of M. tuberculosis than the general population. Given that the ART centers in Mumbai are usually extremely busy, constantly crowded, and often lack adequate TB infection control interventions, this scenario is not unlikely. Instead of hypothesizing that most cases of DR-TB are due to non-adherence among patients on treatment, exogenous infection or re-infection should first be considered [30,31]. Secondly, considering the high levels of resistance to second-line TB drugs, and especially fluoroquinolones, in this population, it is reasonable to assume that patients with presumptive TB may actually have pre-XDR-TB or even XDR-TB. This implies a huge investment in laboratory capacity in an already constrained public sector in Mumbai in order to screen all TB patients at the outset for strains that are resistant to fluoroquinolones and anti-TB injectables. Nevertheless, we believe that it is a reasonable investment to make if the epidemic of DR-TB is to be controlled in the city in the future. Conversely, if DST is only offered later, to those failing their TB treatment regimen, a large proportion of DR-TB cases will be missed, because many HIV-positive patients with untreated DR-TB will die before treatment failure is recognized [32].
There is an ongoing plan to systematically offer molecular TB diagnosis (mainly using Xpert MTB/RIF, also known as GeneXpert) to all HIV-infected patients in Mumbai and elsewhere in the country. While this is a giant leap forward, since GeneXpert can rapidly detect MTB and rifampicin resistance within 2 hours, we are concerned that 'scale-up' of DR-TB diagnosis using this particular test may lead to suboptimal practices: a diagnosis of rifampicin resistance alone, and/or the assumption that it represents a diagnosis of MDR-TB, may mask a diagnosis of pre-XDR- or XDR-TB (or worse). The risks then associated with giving a suboptimal treatment regimen are significant, both in terms of morbidity and mortality for the patient and in terms of amplification of resistance and subsequent community transmission of resistant strains. While GeneXpert is an excellent and efficient diagnostic tool for MTB and a screening test for DR-TB, in settings like Mumbai it is essential that it be complemented by culture and DST for first- and second-line anti-TB drugs. The national programme has recently changed its policy to account for this risk, starting with HIV-infected patients in Mumbai and Maharashtra.
Our initial study protocol included fingerprinting studies using spoligotyping, which we had to abandon due to the high cost. Cox et al have in the past found a strong association between the Beijing genotype and amplification in situations of preexisting resistance in a central Asian setting [33]. Similarly, the proportion of the Beijing genotype was reported to be 35% in the urban Mumbai population studied by Almeida et al [34]. We need fingerprinting studies to establish how often nosocomial transmission occurs and to guide TB infection control interventions. Another area of research that is urgently needed relates to chemoprophylaxis for child contacts of DR-TB cases in Mumbai; preventative regimens that have shown to be effective in other settings are unlikely to prevent development of active disease in many children in Mumbai due to the high baseline rate of fluoroquinolone resistance [35].
Conclusion
Our findings strongly suggest that there is an ongoing DR-TB epidemic among people living with HIV and attending ART centers in Mumbai, which requires urgent, innovative and feasible models of care that allow for rapid and accurate detection and treatment of as many DR-TB patients as possible. Ideally, all patients with presumptive TB attending any ART center in Mumbai, or settings with similar drug resistance patterns, should be screened with a rapid molecular diagnostic followed by DST to first- and second-line anti-TB drugs, including fluoroquinolones, so that the correct diagnosis is made as early as possible and followed by prompt treatment initiation with an appropriate individualized regimen. The high rate of DR-TB amongst new TB patients also highlights the need for better TB infection control measures in order to prevent ongoing transmission of DR-TB in the community and in health facilities, especially those attended by vulnerable populations such as people living with HIV.
"year": 2014,
"sha1": "df7b278b19cd697a95c38f7c0c326adf57afc0db",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0110461&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d014ca473a0813b55f2b1cee1d333aea31afcce5",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Does numerical similarity alter age-related distractibility in working memory?
Similarity between targets and distracters is a key factor in generating distractibility, and exerts a large detrimental effect on aging. The present EEG study tested the role of a new stimulus dimension in generating distractibility in visual Working Memory (vWM), namely numerical similarity. In a change detection paradigm a varying number of relevant and irrelevant stimuli were presented simultaneously in opposite hemifields. Behavioral results indicated that young participants outperformed older individuals; however, in both groups numerical similarity per se did not modulate performance. At the electrophysiological level, in young participants the Contralateral Delay Activity (CDA, a proxy for item maintenance in vWM) was modulated by the numerosity of the relevant items regardless of numerical similarity. In older participants, the CDA was modulated by target numerosity only in the same numerical condition, where the total number of (relevant and irrelevant) items increased with increasing target numerosities. No effect was present in the dissimilar numerical condition, where the total number of items did not vary substantially across target numerosity. This pattern was suggestive of an age-related effect of the total number of (relevant and irrelevant) items on vWM. The additional analyses on alpha-band lateralization measures support this interpretation by revealing that older adults lacked selective deployment of attentional and vWM resources towards the relevant hemifield. Overall, the results indicate that, while numerical similarity does not modulate distractibility, there is an age-related redistribution of vWM resources across the two visual fields, ultimately leading to a general decrease in task performance of older adults.
Introduction
Feature similarity between targets and distracters is a key factor in generating distractibility during the execution of several tasks (e.g. [1,2,3]). For instance, it has been shown that when targets and distracters are similar in terms of primary physical properties (such as size, orientation, shape, color), they compete to enter the memory buffer [4]. Highly similar distracters therefore exert large distractibility, worsening performance on the target items [5,6].
The effect exerted by target-distracter similarity among physical features should have a large detrimental impact in aging. Aging is characterized by several physiological and functional modifications, among which deterioration of working memory (WM) is the most representative one [7]. According to several findings (e.g. [8,9]), the age-related deterioration in visual WM (vWM) is due to an increase in distractibility, namely the inability to discard irrelevant information and focus only on the relevant objects [10], which in turn reduces the storage resources available in vWM. Recent EEG studies [11,12,13] addressing the neurophysiological substrates of the effect of aging on vWM indicated that the Contralateral Delay Activity (CDA; [14]), an electrophysiological index of vWM capacity, is indeed modulated by aging. This modulation has been interpreted as evidence of age-related differences in the efficiency of filtering irrelevant information out of the vWM buffer, due to increased distractibility in the elderly.
Results from studies on target-distracter similarity in aging [15,16,17,18] are in line with this interpretation. For instance, older individuals are slower and less accurate than young participants when detecting targets embedded in conjunction-search displays with distracters highly similar for orientation and size [19]. Given these results, one should expect that similarity between targets and distracters exerts a detrimental effect in the healthy elderly population for all primary stimulus attributes.
Research in the past two decades has indicated object numerosity as a new stimulus attribute that is independent from other physical attributes, but can nonetheless be considered a primary visual property (e.g., see [20,21,22]; but see [23]). Thus, a straightforward prediction is that, as for the other primary attributes, numerical similarity between targets and distracters would impair performance during the execution of various tasks, and that the impairment would be larger in aging. To investigate this issue, the present study probed the contribution of target-distracter numerical similarity to distractibility in young and older adults performing a vWM task.
In a change detection task we presented a varying number of targets and distracters in the visual field. Crucially, their number was manipulated independently, in order to create conditions where targets and distracters shared the same numerosity (e.g. 2 targets and 2 distracters) and conditions of disparity between the two sets (e.g. 2 targets and 4 distracters; see also [24,25]). From an ecological perspective, the manipulation of the similarity in the number of targets and distracters offers a good approximation to everyday scenarios. Indeed, in order to accomplish the majority of tasks (e.g. shopping at the supermarket), individuals typically deal with multiple relevant and irrelevant items that are presented simultaneously and with varying numerosities, rather than one isolated element against a constant number of distracters.
We predicted that in the same numerical condition, the redundant information due to target and distracter numerical similarity (e.g., the fact that there are two targets and two distracters) should induce inadvertent processing of the distracter elements. The additional processing of distracters should result in a reduction of the number of the consolidated target items with respect to the dissimilar numerical condition (where no numerical redundancy is present).
In terms of behavioral measures, we thus predicted a lower performance for the same distracter numerosity condition, compared to the dissimilar condition. Moreover, we expected the detrimental effect induced by numerical similarity (if present) to be larger in the older group, due to age-related increased distractibility [10].
In terms of EEG measures, our main focus was on the CDA and its modulation as a function of target numerosity for the same versus dissimilar distracter numerical conditions. In young adults, we expected a reduced modulation of the CDA amplitude as a function of target numerosity for the same numerical condition. As previously mentioned, distractibility is more evident in aging, as evidenced by a lack of suppression of the neural activity related to the processing of irrelevant material [8,9] and its subsequent memorization [11,12]. Given this greater age-related distractibility, the effect of numerical similarity (if present) should be larger in older than young participants. Thus, we expected a larger reduction of the CDA modulation as a function of target numerosity in the same versus dissimilar distracter numerical condition for older compared to young adults.
Finally, lateralization in alpha power is also measured during the retention interval in WM tasks [26], and it has been interpreted as evidence of suppression of irrelevant items. As contrasting evidence of aging effects on alpha lateralization has also been found in this time window (preserved: [27]; reduced: [28]), we additionally investigated the impact of distracter numerical similarity and aging on modulations in the alpha band activity after the memory array presentation.
Participants
Thirty-three healthy young adults and 33 healthy older adults participated in the study. All reported normal or corrected-to-normal vision and a negative history of neurological or psychiatric disorders. Data from 2 young and 1 older participant were not included in the analyses due to excessive noise during EEG recording, resulting in a final sample of 31 younger adults (16 women; age range: 19-31; mean age ± standard deviation = 23.5 ± 3.3; mean education ± standard deviation = 15.7 years ± 1.8) and 32 older adults (16 women; age range = 63-79; mean age ± standard deviation = 69.8 ± 4.6; mean education ± standard deviation = 13 years ± 2.4). Written informed consent to participate in the study was obtained prior to testing. The study was approved by the Ethics Committee of the University of Trento and conducted in accordance with the 2013 Declaration of Helsinki.
Neuropsychological testing
Older adults were administered a battery of neuropsychological tests in order to assess their cognitive fitness. The exclusion criterion was set to more than one test score below the cut-off values. None of the older participants was excluded on the basis of this criterion. The results for each cognitive test are shown in Table 1.
Stimuli and procedure
Stimuli were colored and light grey dots (30 cd/m2, with a diameter of 1˚), presented on a dark grey background (20 cd/m2). Participants had to memorize the colored dots in the cued hemifield (targets) and ignore those in the opposite hemifield (distracters). To (at least partially) exclude the effect of spatial proximity between targets and distracters, which may have a role in modulating vWM (e.g., [11,12,13]), we chose to present targets and distracters in separate hemifields (see also [24] in young adults only). In each trial either 1, 2 or 4 colored dots were independently presented on each side of the screen together with grey dots, resulting in the same or a different number of colored dots across the two hemifields. In order to equate the sensory information presented on both sides, the total number of stimuli presented on the screen was kept constant throughout the experiment (18 items in total: 9 items per hemifield, comprising colored + grey dots). The items were positioned using an invisible 8 (rows) by 10 (columns) grid (13.8˚ x 16.4˚) centered on the screen, where a white fixation cross was present for the entire trial procedure. Colored dots never appeared in the extreme rows and columns or in the columns closest to the fixation cross.
Participants sat in front of a 19-inch LCD monitor (resolution 1280 x 1024, refresh rate of 75 Hz, viewing distance of 85 cm) and performed a change detection task on lateralized stimuli (Fig 1). In each trial, after a 1500 ms inter-stimulus interval, a black arrow (3.3˚) appeared for 500 ms above the central fixation cross. The arrow pointed randomly and with equal probability leftward or rightward, signaling the to-be-attended hemifield ('relevant hemifield'). The arrow cue was always valid. After 1 second, the memory array appeared for 300 ms, followed by a 1200 ms retention interval. Participants had to memorize the colors of the stimuli in the cued relevant hemifield (targets). On 50% of the trials, the test array was identical to the memory array (i.e. no change condition), while in the remaining 50% of the cases one target in the relevant hemifield changed color (i.e. change condition). Participants were informed that the colors of the distracters in the irrelevant hemifield never changed. The test array remained on the screen until response, or for a maximum of 3 seconds. Participants reported whether or not the probe differed from the memory array by pressing a key (letter M or C) on the keyboard. Response assignment to each key ('same', 'different') was counterbalanced between subjects. Participants completed a total of 720 trials divided into 15 blocks of 48 trials each, after performing a practice block of 10 trials. Each block comprised 24 trials where targets (relevant hemifield) and distracters (irrelevant hemifield) shared the same numerosity (8 trials for each shared numerosity: 1, 2, 4), and 24 trials where there was a numerical disparity between the two sides (4 trials for each possible numerosity combination of targets and distracters).
EEG recordings and analysis
EEG was continuously recorded using 29 active electrodes placed according to the 10-20 International System (Fp1, Fp2, F7, F3, Fz, F4, F8, FC5, FCz, FC6, T7, T8, C3, Cz, C4, CP5, CP6, P7, P3, Pz, P4, P8, PO7, PO9, PO8, PO10, O1, Oz, O2), with a digitization rate of 1000 Hz, a time constant of 10 s as low cut-off and a high cut-off of 250 Hz. AFz served as ground and the right mastoid as the on-line reference. Horizontal ocular movements were recorded using two electrodes placed on the outer canthi of both eyes. Electrode impedance was kept below 20 kΩ. The continuous EEG signal was processed off-line using EEGLAB [39] and ERPLab [40]. Data were down-sampled to 250 Hz and filtered with a low-frequency cutoff of 0.1 Hz and a high-frequency cutoff of 40 Hz. In order to remove the 50 Hz line noise, a notch (band-stop) filter (width: 2 Hz) was also applied. All channels were re-referenced to the average of the left and right mastoids. Independent component analysis (ICA) was applied to the whole dataset (Infomax ICA algorithm, [41]) to correct for eye blinks, muscle and cardiac activity. Epochs with correct responses were segmented from -200 ms to 1 second relative to the onset of the memory array, with a baseline correction of 200 ms pre-stimulus onset. Epochs were visually inspected and those contaminated by large eye movements or residual noise were removed. Finally, epochs were collapsed across change condition (change, no change) and target side (left, right), to obtain contralateral and ipsilateral activity regardless of the actual cue direction. A total of six different conditions were extracted (target load x target-distracter numerical similarity): Load1 - Same Numerosity (SN), Load1 - Dissimilar Numerosity (DN), Load2 - SN, Load2 - DN, Load4 - SN and Load4 - DN. After pre-processing, the mean number of epochs retained for the average in the Young group was 95.16 for Load1 - SN and 93.68 for Load1 - DN.

To investigate alpha-band lateralization changes, a time-frequency (TF) analysis was performed with a zero-padded complex Morlet wavelet decomposition of 5 cycles per frequency, as implemented in the Fieldtrip toolbox [42]. Power was calculated for frequencies from 1 to 40 Hz (frequency resolution: 1 Hz) by sliding a time window over each trial in steps of 20 ms (from -2.5 to 2.5 s, relative to the memory array onset). The resulting TF data were averaged across correct trials, collapsed for target side (see above for the six load x similarity conditions and the mean number of trials used), and then baseline corrected (-1.8 to -1.6 s with respect to memory array onset) in order to investigate relative changes in power (i.e. post-target power / baseline power).
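The decomposition itself was done in FieldTrip (MATLAB). As a rough illustration of the underlying computation, the Python sketch below builds complex Morlet wavelets with a fixed five cycles per frequency, convolves them with a toy single-channel signal, and expresses post-stimulus power relative to the same pre-stimulus baseline window used in the study. It is a simplified single-trial sketch, not the FieldTrip implementation.

```python
import numpy as np

def morlet_power(signal, fs, freqs, n_cycles=5):
    """Time-frequency power via complex Morlet wavelets with a fixed
    number of cycles per frequency (simplified sketch)."""
    power = np.empty((len(freqs), len(signal)))
    for i, f in enumerate(freqs):
        sd = n_cycles / (2 * np.pi * f)              # temporal SD of the Gaussian
        t = np.arange(-4 * sd, 4 * sd, 1 / fs)
        wavelet = np.exp(2j * np.pi * f * t - t**2 / (2 * sd**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))  # unit energy
        power[i] = np.abs(np.convolve(signal, wavelet, mode="same")) ** 2
    return power

fs = 250.0                                           # post-down-sampling rate
t = np.arange(-2.5, 2.5, 1 / fs)                     # epoch relative to array onset
rng = np.random.default_rng(0)
sig = np.sin(2 * np.pi * 10 * t) * (t > 0) + 0.3 * rng.standard_normal(t.size)

# Frequencies start at 2 Hz here: a 1 Hz, 5-cycle wavelet would outlast this toy epoch.
tf = morlet_power(sig, fs, freqs=np.arange(2, 41))
baseline = tf[:, (t >= -1.8) & (t < -1.6)].mean(axis=1, keepdims=True)
relative_change = tf / baseline                      # post-target / baseline power
```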
Statistical analysis
Behavioral data. For each subject and condition, the memory capacity index [43] was computed as follows: k = (hit rate − false alarm rate) × load. Load refers to the number of colored target dots that participants had to remember. Hit rates were defined as 'different' responses in change conditions, while false alarms were 'different' responses in no change trials. An analysis of variance (ANOVA) was conducted with Age (2 levels: young, old) as a between-subjects factor, and Load (3 levels: 1, 2, 4) and Numerical Similarity (2 levels: same, dissimilar) as within-subjects factors. When significant, any interaction involving Load as a factor was further analyzed by considering only the two extreme values (i.e. 1 and 4 targets), in order to reduce the complexity of the analyses.
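For concreteness, a minimal implementation of the capacity index described above (the numbers in the usage line are invented, not taken from the study):

```python
def cowan_k(hit_rate, false_alarm_rate, load):
    """Memory capacity index: k = (hit rate - false alarm rate) * load."""
    return (hit_rate - false_alarm_rate) * load

# e.g. 80% hits and 15% false alarms at Load4 -> k = 2.6
print(cowan_k(0.80, 0.15, 4))
```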
For both behavioral and electrophysiological data (see description below), in case of violation of sphericity, Greenhouse-Geisser (when G-G epsilon < .75) or Huynh-Feldt (when G-G epsilon >.75) correction was used, and adjusted p values are reported. All follow-up pairwise comparisons were conducted through t-tests. Correction for multiple comparisons was performed using the False Discovery Rate (FDR) procedure [44].
ERP data. To assess the temporal evolution of the electrophysiological correlates of active maintenance in vWM after the memory array onset, and following previous studies [11,13], the ERP analysis was performed in two consecutive steps. First, a main temporal window of interest was analyzed by computing the lateralized activity (contralateral-ipsilateral activity with respect to the cued hemifield) for each condition in a region of interest (ROI) comprising electrodes O1/2, P7/8 and PO7/8 (see [45]) over an interval from 300 to 900 ms after the memory array onset (the typical time range used for the analysis on CDA, see [14]). An ANOVA was carried out on mean amplitude values, with Age as between-subjects factor, and Load and Numerical Similarity as within-subjects variables.
Second, significant main or interaction effects resulting from the main ANOVA were separately investigated (via paired-samples t-tests, and comparing 1 and 4 target-trials only for Load, see [12]) over consecutive time windows of 20 ms (see [11,13] for a similar approach). A significant difference for at least 2 consecutive time windows (i.e. 40 ms) was considered reliable.
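A minimal numpy sketch of the first step (the ROI-averaged lateralized mean amplitude in the CDA window) is given below; the array shapes, ROI indices and toy data are placeholders, assuming condition-averaged ERPs of shape (n_channels, n_times).

```python
import numpy as np

def cda_mean_amplitude(erp_contra, erp_ipsi, times, roi_idx, t_win=(0.3, 0.9)):
    """Lateralized (contralateral - ipsilateral) mean amplitude over an
    ROI and time window; a more negative value indicates a larger CDA."""
    mask = (times >= t_win[0]) & (times <= t_win[1])
    contra = erp_contra[roi_idx][:, mask].mean()
    ipsi = erp_ipsi[roi_idx][:, mask].mean()
    return contra - ipsi

times = np.arange(-0.2, 1.0, 0.004)          # 250 Hz, relative to array onset
erp_c = np.random.randn(29, times.size)      # placeholder condition averages
erp_i = np.random.randn(29, times.size)
roi = [0, 1, 2]                              # stand-ins for O1/2, P7/8, PO7/8
print(cda_mean_amplitude(erp_c, erp_i, times, roi))
```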
Alpha lateralization. To characterize the time course of alpha-band lateralization, relative power changes were averaged over alpha frequencies (8-14 Hz) in the whole retention interval window (from 300 to 900 ms after memory array onset, hereafter referred to as "post-target" onset). Mean relative power change values were computed for the two posterior contralateral- and ipsilateral-to-target ROIs (always including electrodes O1/2, P7/8, PO7/8). An ANOVA with Age as a between-subjects variable and Hemisphere (2 levels: contralateral, ipsilateral), Load and Numerical similarity as within-subjects variables was performed to investigate relative power changes occurring during the retention interval.

Behavioral results

vWM capacity of both groups increased with target load in both similarity conditions (Fig 2A, red line). The difference between Load4 and Load1 (i.e. vWM increase) was then computed for each group and compared between young and older participants. This comparison revealed a significantly larger vWM capacity increase in young than in older participants (Fig 2A).
Control analyses. In the dissimilar numerical condition, trials with 1 or 4 targets were always associated with either more (1 target) or fewer (4 targets) distracters. To evaluate the effect of numerical similarity in a more balanced condition (i.e., in trials where the number of distracters could be both smaller and larger than the target numerosity), we conducted two further analyses for two-target and two-distracter trials, respectively.
On the basis of the results of the main analysis on k values, two subsequent repeated-measures ANOVAs were performed to further explore the significant interaction between Load and Numerical similarity.
The first ANOVA was conducted on trials with 2 targets, with Distracter as a within-subjects variable (3 levels: 1, 2 and 4). The factor was not significant (p > .05), suggesting that the performance when 2 targets were presented was not modulated by the number of distracters in the irrelevant hemifield.
The second ANOVA was conducted on trials with 2 distracters, with Load as a within-subjects variable. The results of this analysis indicate that the limit of vWM capacity is between two and four targets. Taken together, the behavioral analyses do not indicate a genuine effect of numerical similarity per se on participants' performance (the apparent interaction is likely driven by distracter numerosity at Load4; see Fig 2B).
Event-related potentials (ERPs)

The main ANOVA on CDA mean amplitudes revealed a significant interaction between Load, Numerical similarity and Group (F(2, 122) = 3.163, p = .046, ηp2 = .049). To further explore this three-way interaction, we conducted a series of t-tests over 20 ms time windows comparing Load1 and Load4 in each numerical similarity condition and for each age group separately (see Methods for a detailed explanation).
In young adults, in the same numerical condition a reliable difference between the two loads was evident from 300 to 740 ms and from 840 to 900 ms post memory array onset (all ps < .019; Fig 3A); similarly, in the dissimilar numerical condition significant differences emerged from 300 to 840 ms and from 860 to 900 ms (all ps < .034; Fig 3B).
In older adults, in the same numerical condition the difference between Load1 and Load4 was significant from 300 to 900 ms (all ps < .016; Fig 3C). Conversely, no significant difference was found between Load1 and Load4 in the dissimilar numerical condition (Fig 3D). Taken together, the results in older adults revealed that a modulation of the CDA as a function of target load was present only when the same number of targets and distracters was presented in the visual field.

Control analyses. The same control analyses as for the behavioral data were performed on the CDA for trials with either 2 targets or 2 distracters. As the main analysis on mean amplitude values found a significant interaction between Load, Numerical similarity and Group, two subsequent mixed ANOVAs were conducted.
The ANOVA on 2-target trials, with Distracter as a within- and Group as a between-subjects variable, did not reveal any significant effect (all ps > .05), meaning that the CDA amplitude was not modulated by the number of distracters at Load2.
From the ANOVA on 2-distracter trials, with Load as a within- and Group as a between-subjects variable, a significant effect of Load emerged (F(2, 122)). Overall, the results indicated a CDA modulation as a function of target load for young participants, regardless of numerical similarity. In older participants, there was an effect of target load on the CDA only in the same numerosity condition; however, there was no CDA modulation by numerical similarity per se, as revealed by the control analyses.

Alpha event-related synchronization/desynchronization (ERS/ERD)

The Load x Hemisphere interaction was not further investigated, as we were mainly interested in age and numerical similarity effects. Given that alpha lateralization is measured as a power reduction for contralateral relative to ipsilateral sites [46], comparisons were conducted by means of one-tailed t-tests, separately for young and older adults. The pairwise comparisons revealed a lateralization effect in the young age group (t(30) = -2.26, p = .016, 95% CI = [-.04, -.002]), with the contralateral sites exhibiting greater alpha reduction than the ipsilateral ones (Fig 4C). In older adults, the trend of the lateralization went in the direction opposite to what was expected (the ipsilateral hemisphere was more negative than the contralateral one), hence the null hypothesis must be accepted (i.e., no significant difference between the two hemispheres; t(31) = 2.08, p > .05, 95% CI = [.0002, .02]) (Fig 4D).
To investigate the Load x Group interaction, pairwise post-hoc comparisons between Load1 and Load4 in the young age group revealed a greater alpha power decrease at Load4 than at Load1 (t(30) = 4.59, p < .001, 95% CI = [.04, .11]) (Fig 4E, left histogram). In the elderly, no significant difference emerged (p > .05) (Fig 4E, right histogram). Overall, a reduction in alpha power with target load was evident in the young but not in the older group.
In sum, in young participants the results showed a global alpha power suppression (i.e. irrespective of hemisphere) that covaried with memory load, thus confirming its role as an index of spatially global vWM representations [25,47]. No such effect was visible for older participants. Moreover, alpha power lateralization favoring the target hemisphere was absent in the older group.
Additional analyses
ERPs: 0-300 ms (lateralized activity). Another temporal window of interest was analyzed by computing the lateralized activity in the ROI comprising electrodes O1/2, P7/8 and PO7/8, over a 0-300 ms interval after the memory array onset. This time window was included to control for possible differences between the two groups in the early stages of stimulus processing. Although we were mainly interested in the late time range (which is the typical latency range of the CDA), and despite the presence of significant main and interaction effects in our 0-300 ms analysis, we acknowledge that using such a large window for the earlier analysis could in principle have reduced the chance of finding significant effects. To further explore the Load effect, we conducted t-tests over 20 ms time windows comparing Load1 and Load4. Significant differences emerged from 320 to 900 ms (all ps < .046). To our knowledge, only [12] and [48] investigated lateralized ERPs in vWM by looking also at more anterior regions. Specifically, Sander and colleagues [12] found a significant effect in a similar region only in children and older participants, suggesting that it might reflect a greater engagement of prefrontal control processes. In a paradigm where distracters appeared together with targets in the relevant hemifield, Liesefeld et al. [48] instead revealed greater prefrontal activation in distracter-present conditions. In our experimental design, target elements were additionally embedded among non-salient items (grey dots) in the relevant hemifield, thus (partly) requiring more effort to perform the task. This might be the reason why the frontal effect was evident also in young participants (note also that single-neuron activity recordings in the primate have identified sustained activity in the prefrontal cortex as one of the physiological correlates of WM, see for example [49]). Moreover, the greater frontal activation observed in the older group is in line with the notion of a posterior-to-anterior shift in aging (PASA; [50]), with frontal regions compensating for the reduced activation of posterior areas. Overall, since the majority of ERP studies on vWM and concurrent age-related decline have not investigated anterior regions, the functional significance of this effect deserves further investigation.

ERS/ERD: Pre-target interval. Following the results found for alpha lateralization during the post-target interval, we investigated ERS/ERD during the pre-target (i.e. post-cue) time window. TF data were averaged across all correct trials, collapsed for cue direction, and then baseline corrected (-1.8 to -1.6 s with respect to memory array onset) to measure relative changes in power. The mean number of trials used was 520.96 (72.36% of the total number of trials).
Relative power changes were averaged over alpha frequencies (8-14 Hz) in the last 200 ms preceding the memory array onset (see [51]), when the spatial bias induced by the cue (namely, a reduction in power for the contralateral relative to the ipsilateral sites) is supposed to be stronger [52]. Mean relative power change values were computed for the two posterior contralateral- and ipsilateral-to-cue-direction ROIs (O1/2, P7/8, PO7/8). An ANOVA with Age as a between-subjects and Hemisphere as a within-subjects factor was conducted. A significant interaction between Hemisphere and Group (F(1, 61) = 9.12, p = .004, ηp2 = .130) was evident. The pairwise comparisons performed separately in each group through one-tailed t-tests revealed a lateralization effect in the young age group (i.e. greater alpha reduction in the contralateral than in the ipsilateral-to-cue-direction hemisphere; t(30) = -1.91, p = .033, 95% CI = [-.048, .002]) (Fig 4A). In the elderly, no difference between the two hemispheres was evident (t(31) = 2.43, p > .05, 95% CI = [-.001, .04]), as again the results went against predictions (the ipsilateral alpha power was more negative than the contralateral alpha power) (Fig 4B).
Overall, in line with the results on alpha lateralization in the post-memory array onset, young but not older participants exhibited greater cortical facilitation for the cued hemisphere.
Discussion
In many everyday scenarios, individuals experience the need to act on multiple relevant objects that are presented amidst other irrelevant items sharing the same attributes, such as shape, color or numerosity. This type of similarity between targets and distracters can be a potential source of distraction, especially in senescence. The present study provides new information on 1) the effect exerted by numerical similarity on vWM in young and older adults and 2) how age-related distractibility modulates vWM capacity.
As expected [7], the behavioral results highlighted a reduction in performance for the group of older participants. Whereas the estimated number of elements retained (provided by k values) increased with target load in both groups, the increasing rate was larger for young adults (who could efficiently retain up to approximately three elements, while older participants reached their WM capacity limit at around two targets).
Numerical similarity seemed to slightly influence the performance of both young and older participants: k values were higher when targets and distracters had different numerosities, although the effect was not magnified by aging. Crucially, the similarity effect was not confirmed by the additional analysis investigating the influence of the number of distracters when subjects had to retain two target elements: following these comparisons, no behavioral advantage for the two dissimilar conditions (one and four distracters, respectively) emerged. By looking at the graph (Fig 2), it seems plausible that the interaction found in the main analysis is driven primarily by the difference between the same and dissimilar numerical conditions at the highest memory load, i.e. four targets. However, the presence of the effect only at Load4 could be explained by the disproportion between the numerosity of targets (four elements) and of distracters (always fewer than four) in this condition. Thus, the effect is likely driven by distracter numerosity rather than numerical similarity per se.
At the electrophysiological level, the CDA pattern associated with the distracter numerical similarity was crucial in unravelling two novel findings.
First, numerical similarity did not influence the load-related modulation of the CDA amplitude in young adults: the same modulation as a function of memory load was observed in both conditions (in line with [25]), and no significant effects of numerical similarity could be inferred from the control analyses. The effect of memory load was not persistent for the whole CDA interval, as the modulation ceased and then reappeared shortly before the probe onset. This result suggests that before the presentation of the probe array (always occurring at a fixed time interval after the target display onset) young participants refreshed the items in their WM buffer.
Second, in older participants the results of the main analysis showed an effect of numerical similarity, with a modulation of the CDA as a function of target load in the same but not in the dissimilar condition. Does this pattern imply that numerical similarity facilitated older adults in the memorization of targets when they have the same numerosity of distracters? On the basis of previous literature [14], a larger CDA modulation as a function of target numerosity indicates a better ability to maintain the relevant elements in vWM. However, on the basis of previous research [1,2,3,4,5,6], in the present study the larger modulation should have been expected for the dissimilar (not the same) numerical condition. Therefore, the opposite pattern found for the modulation of the CDA observed here recommends caution with this interpretation.
Alternatively, we could reconsider the entire profile of the EEG responses for older adults in terms of a substantial overlap in the analysis of the relevant and irrelevant hemifields, due to an age-related broadening of the processing field for the relevant side (Fig 5A).
According to previous research, in tasks engaging different cognitive abilities, including working memory, activity in several brain areas appears less lateralized in the elderly [53]. This reduced lateralization is thought to reflect either a compensatory function or a de-differentiation process. Moreover, models of deployment of spatial attention [54,55,56] predicted and proved that the focus of visuospatial attention becomes broader and less concentrated in healthy aging.
In line with these findings, we propose that older adults exhibit a weaker ability to focus processing resources on a spatially delimited portion of the visual field, where relevant elements are expected or presented. As a consequence, they also tend to encompass a variable portion of the irrelevant visual field at various stages of analysis, ultimately achieving a less efficient behavioral performance with respect to young individuals. Therefore, we propose that in the memory retention phase (CDA), the different pattern related to target load for the same and dissimilar conditions reflects the covariance between target and distracter numerosity in the current experimental design. In fact, in the same numerical condition, the number of targets and distracters was equal in each trial, so that the overall amount of elements presented in the visual field increased across target load (Fig 5B, left panel). Given the hypothesis of an age-related broadening of the processing field beyond the relevant side, one should predict that the memorization field encompassed (part of) the irrelevant side. However, the positive correlation in numerosity between targets and distracters in the same numerical condition ensures an overall increase in the number of (target and distracter) items retained (up to the limit of the WM capacity of the elderly, i.e. approximately 2 elements), as visible from the modulation of the CDA as a function of load in this specific condition.
Conversely, in the dissimilar numerical condition target and distracter numerosities were negatively correlated (i.e. when targets increased, distracters on average decreased, and vice versa), so that the global amount of elements presented on the screen was on average the same (i.e. around four) across all target loads (Fig 5B, right panel). Hence, due to the broadening of the "memorization field", the number of items retained does not change across loads. Indeed, here the target load effect on the CDA disappears, given that the sum of all the elements always exceeds the WM limit of the elderly (i.e. the minimum amount of overall elements presented in the dissimilar numerosity conditions is three). The control analyses conducted on trials with two distracters seem to support this hypothesis, given that the CDA was modulated by target load in young but not older participants. Here again the minimum amount of overall elements is three (two distracters plus at least one target), which in turn exceeds the WM limit of the elderly.
The pattern of oscillatory data found in the present study supports the hypothesis of an age-related broadening of the processing field in the elderly. First, in line with previous results [57], the attention-related cortical facilitation induced by the cue was present in young participants but absent in the elderly, as revealed by the data on alpha lateralization after cue presentation. This pattern indicates that older adults tend to lose cortical facilitation for the relevant side, and therefore deploy attentional resources to both hemifields. Moreover, the same pattern of alpha lateralization persisted during the retention interval: lateralized alpha favoring the contralateral hemisphere was still present for young participants, while it was absent in the elderly (as in [28]). In fact, there was a trend towards an inversion of the alpha lateralization for the older group (with more negative values for ipsilateral than contralateral sites in both pre- and post-array intervals). While future studies replicating this observation are needed, we speculate that, together with the overall pattern of alpha activity along the entire time window, this inversion supports our interpretation of the broadening of the memorization field. In addition, such an interpretation entails that distracters should produce more interference when presented in a more medial/nasal than lateral/temporal position, a testable prediction for further research. According to the proposal that interprets alpha lateralization as an index of suppression of irrelevant items [26], older adults did not show an enhancement of the relevant hemifield (i.e. lack of alpha lateralization) and also processed the distracting material presented in the irrelevant side (i.e. no distracter suppression). However, we prefer to remain agnostic as to the specific functional role of alpha lateralization, and to report the absence of lateralization as revealing an age-related broader focusing of processing resources.
Finally, the overall increase of the amplitude of the early lateralized ERP activity (0-300 ms window post-target onset) in older with respect to young participants seems to indicate a delayed attempt made by older participants to tune their processing resources exclusively towards the relevant hemifield (see [58]), although this was not sufficient to completely prevent distracters from being memorized (as reflected by the CDA load-related pattern).
Two aspects about this study should be considered. First, the majority of older participants performed at ceiling (i.e. obtaining an equivalent score of four) in the neuropsychological tests administered, thus showing a high level of cognitive functioning. It would be interesting in future research to investigate a sample of older individuals with higher variability in cognitive functioning. One could speculate that distractibility would increase in healthy elderly with a lower cognitive profile. Second, since the task was performed on a computer, we cannot totally rule out the impact of expertise with technological devices on the difference in performance between young and older adults. However, since participants were only required to provide responses by pressing one of two keys over a relatively long time period, computer expertise should have only minimally contributed to the present results.
To conclude, the behavioral and EEG patterns indicate that young adults do not suffer from distraction due to numerical similarity. In older participants, the effect of numerical similarity on the CDA was instrumental in gaining insight into the nature of distractibility in the elderly. We propose that age-related fluctuations in endogenous attention, when coupled with the simultaneous presentation of targets and distracters in opposite hemifields, may result in a redistribution of vWM resources across the two visual fields. This resource-consuming enlargement of the "memorization" field in turn affects the vWM capacity of older adults, and their performance compared to younger individuals.
"year": 2019,
"sha1": "f61abc8884e82f4e5e96d51e1b91884398c879ea",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0222027&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "abdefc3852a963321ec26dba936395a04e0dc049",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
Anxiety in the Medically Ill: A Systematic Review of the Literature
Background Although anxiety is highly represented in the medically ill and its occurrence has relevant clinical implications, it often remains undetected and not properly treated. This systematic review aimed to report on anxiety, either as a symptom or as a disorder, in patients who suffer from a medical illness. Methods English-language papers reporting on anxiety in medically ill adults were evaluated. PubMed, PsycINFO, Web of Science, and Cochrane databases were systematically searched from inception to June 2021. The search term was "anxiety" combined using the Boolean "AND" operator with "medically ill/chronic illness/illness/disorder/disease." Risk of bias was assessed via the Joanna Briggs Institute (JBI) Critical Appraisal Tools - Checklist for Prevalence Studies. The PRISMA guidelines were followed. Results Of 100,848 citations reviewed, 329 studies met inclusion criteria. Moderate or severe anxious symptoms were common among patients with cardiovascular, respiratory, central nervous system, gastrointestinal, genitourinary, endocrine, musculoskeletal or connective tissue, and dermatological diseases, as well as cancer, AIDS and COVID-19 infection. The most common anxiety disorder was generalized anxiety disorder, observed among patients with cardiovascular, respiratory, central nervous system and dermatologic diseases, cancer, primary aldosteronism, amenorrhea, and COVID-19 infection. Panic disorder was described for cardiovascular, respiratory, and dermatologic diseases. Social anxiety was found for cardiovascular, respiratory, and rheumatoid diseases. Specific phobias were relatively common in irritable bowel syndrome, gastroesophageal reflux, and end-stage renal disease. Conclusion Anxiety is a major challenge in medical settings. Recognition and proper assessment of anxiety in patients who suffer from a medical illness is necessary for appropriate management. Future reviews are warranted in order also to clarify the causal and temporal relationship between anxiety and organic illness.
INTRODUCTION
Anxiety is a feeling characterized by anguish, a sense of threat, and fear. When it is explained by a real and objective trigger, it is considered physiological. When there is no objective reason for being in such a state, anxiety becomes pathological (1). For instance, when an organic disease occurs, anxiety can be a normal psychological reaction (2), but it can also flourish and evolve into a symptom with a pathological meaning or into a mental disorder. Anxiety, indeed, is highly represented in the medically ill (3), with generalized anxiety disorder being the most prevalent disorder (10.3%) in primary care settings (4). The pathways underlying the co-occurrence of anxiety and physical illness are not fully understood. Several possible mechanisms and synergies exist. Among them, environmental and genetic factors (5) have been proposed as being able to favor such co-occurrence, as well as individual vulnerability (6) and socio-economic status (7).
In addition, anxiety may influence how patients experience the pathological process of their own medical illness and their interactions with others (8,9), including medical and nursing staff (10). In particular, anxiety in association with a chronic medical illness worsens the quality of life (11), affects social functioning (12), and increases medical burden (13). Anxiety has a negative impact on compliance (14), resulting in exacerbation of illness (15) and high health care utilization and costs (16). Anxiety increases the susceptibility to illness, leading to illness progression, rehospitalization, and mortality (17)(18)(19)(20). Among patients with chronic illness, anxiety negatively affects emotional stability, resulting in depressive symptoms, suicidal ideation, and social isolation (21,22). Several studies reported a strong association between anxiety and somatization (23)(24)(25). In medically ill patients, anxiety amplifies physical symptoms, leading to useless (if not dangerous from a physical or psychological point of view) and inappropriate invasive tests/procedures to investigate hypothetical, but never confirmed, organic explanations (26). The treatment of anxiety, whether pharmacological or psychological, was found to favorably affect the outcome of a number of organic diseases (27,28).
Unfortunately, anxiety often remains unacknowledged, unrecognized, and untreated in medically ill patients (29,30). Such a phenomenon is the result of several converging factors. First, the differentiation of anxiety worthy of clinical attention is hindered by the widespread occurrence of non-clinically relevant anxious symptoms in medical settings (31). Second, when an anxiety disorder is associated with a medical illness, there is a tendency to regard it as a physiological psychological reaction, secondary to the distress of the medical illness or to the patient's awareness of its consequences (32). However, in the clinical realm it is evident that not all medically ill patients have anxiety or an anxiety disorder (33). In addition, the expression of emotional distress is often disregarded or even discouraged by clinicians, and patients' needs are more readily met when they refer to the body rather than to the psychological sphere (34). Finally, the use of anxiolytics, particularly benzodiazepines, is widespread, especially during hospitalizations (35), often without a real indication; rather, the prescription often seems justified by the organizational limits of the hospital, as the staff can handle only a limited number of requests.
Although anxiety is highly represented in the medically ill and its occurrence has relevant clinical implications, no systematic reviews on its prevalence and rate seem currently available. To fill this gap, the aim of the present systematic review was to report on anxiety, either symptom or disorder, in patients who suffer from a medical illness.
Registration
This review protocol was registered in the "International Prospective Register of Systematic Reviews" (PROSPERO) in 2021, under the registration number CRD42021296741, and is available at https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42021296741; the protocol was not published elsewhere.
Eligibility Criteria
Eligible articles included English-language papers published in peer-reviewed journals reporting data on anxiety in medically ill adults. Anxiety disorders had to be diagnosed according to the Diagnostic and Statistical Manual of Mental Disorders (DSM-III, -III-R, -IV, -IV-TR, or -5) or the International Classification of Diseases (ICD-9, -10, -11). Anxious symptoms had to be assessed via standardized rating scales.
Additional inclusion criteria were: age of at least 18 years and a sample of at least 10 subjects (as already used in (36), in order to guarantee a minimum representativeness of results). Studies with different designs were included (i.e., cross-sectional, longitudinal, observational, and case-control studies).
Exclusion criteria were: (a) patients with multiple organic diagnoses, (b) no original data, (c) non-clinical samples, (d) results on anxiety aggregated with results on depression or other psychological features. Treatment outcome studies were not included, being off topic for the present review.
Information Sources and Search Strategy
The following electronic databases were systematically searched from inception to June 2021: PubMed, PsycINFO, Web Of Science, and Cochrane. In addition, a manual search of reference lists from relevant reviews was done. Search term was "anxiety" combined using the Boolean "AND" operator with "medically ill/chronic illness/illness/disorder/disease."
Selection and Data Collection Process, Data Items
Titles and abstracts were screened by two authors (S.R. and G.M.). Articles potentially relevant were retrieved and the authors independently assessed each in full. Any disagreement was resolved by consensus. Risk of bias (quality of the studies) was assessed via the Joanna Briggs Institute (JBI) Critical Appraisal Tools-Checklist for Prevalence Studies (36). It consists of 9 questions and the scoring system is: "yes" scores 1, "no" or "not clear" or "not applicable" score 0. The JBI Checklist-Total score is the sum of the items.
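A minimal sketch of this scoring rule, written here in Python for illustration (the nine checklist items themselves belong to the JBI tool; the function below only reproduces the stated arithmetic):

    # JBI Checklist for Prevalence Studies: 9 items, "yes" scores 1,
    # "no"/"not clear"/"not applicable" score 0; total = sum of items.
    def jbi_total(answers):
        assert len(answers) == 9, "the checklist has 9 questions"
        return sum(1 for a in answers if a.lower() == "yes")

    # Example: a study fulfilling 7 of the 9 criteria.
    print(jbi_total(["yes", "yes", "no", "yes", "yes",
                     "not clear", "yes", "yes", "yes"]))  # -> 7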
Effect Measures
The search strings, the list of relevant reviews, the data coding, and the quality criteria are available on request from the corresponding author. No missing data were found. The methods described fulfilled the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines (37).
RESULTS
The search provided a total of 100,848 citations. After reviewing the abstracts to exclude those which clearly did not meet the criteria, 1,289 remained. Of these, 960 were excluded for not meeting the inclusion criteria (Figure 1 shows the flow diagram of the search). A total of 329 studies were identified for inclusion, and the following parameters were extracted: country/city where data were collected, study design, organic disease diagnosis, sample size, instrument used to assess anxiety, and any psychopathological manifestations co-occurring with anxiety (for details see Supplementary Table S1). Among them, 7 studies reached the maximum JBI score, 68 had a score of 8, 244 studies had a score of 7, and 10 studies had a score of 6, which means that at least 6 out of 9 criteria were fulfilled. Results are presented here qualitatively, based on the apparatus involved in the organic illness (see also Tables 1, 2). Symptom severity levels were taken directly from the respective studies and may reflect the use of different cut-offs for the same assessment scale. No meta-analysis was conducted due to the methodological heterogeneity of the included studies.

About 35% of atrial fibrillation patients showed high levels of anxiety, in particular women (49%). Patients having "bad" relations with nursing staff had a 6.58 times higher possibility of experiencing anxiety than patients having "very good" relations. Among patients with angiographically normal or near-normal coronary arteries, 34% met criteria for current PD and 11% presented specific phobia (Beitman et al., 1989 * ).
Clinically significant anxiety was also found in 31.8% of patients with incidental pulmonary nodules (Li et al., 2020b * ) and in 12% of those with interstitial lung diseases (Holland et al., 2014 * ).
Central Nervous System Diseases
The rate of anxiety among stroke patients was found to increase significantly between 6 months (17%) and 5 years after the event (29%) (Lincoln et al., 2013 * ). Point prevalence of DSM-III-R anxiety disorders was higher in stroke survivors than in healthy controls. The most common diagnoses were PD (24 vs. 8%) and GAD (27 vs. 8%). The prevalence of any anxiety disorder was higher in the stroke group (42 vs. …%).
Gastrointestinal Diseases
Anxiety was present in 28% of gastroenterological patients (Alosaimi et al., 2014 * ). Severe anxiety was observed in 27% of patients with chronic digestive system diseases; patients with digestive system tumors had the highest rate of anxiety (55…%). The incidence of severe anxiety was significantly higher in the non-erosive reflux disease group (16.5%) than in the reflux esophagitis one (10.4%) (Yang et al., 2015 * ). Among gastroesophageal reflux disease patients, current clinically significant anxiety symptomatology was found in 20.7% of cases (On et al., 2017 * ). Among gastroesophageal reflux disease patients, anxiety disorders were diagnosed in 30% of cases: 11.6% met the diagnostic criteria for specific phobias, 5.6% for lifetime social phobia, 5.6% for other phobic anxiety disorders, 3.8% for GAD, and 3.8% for agoraphobia with panic attacks. In patients with 79 different rare diseases, the rate of anxious symptoms was 23%, and females presented significantly more severe symptomatology than males (Uhlenbusch et al., 2019 * ).
Patients with systemic hypertension and type 2 diabetes had mild anxiety in 32% of cases, moderate anxiety in 29%, and severe anxiety in 26% of cases (Teixeira et al. * ). Among inpatients with amenorrhea, the prevalence of GAD was 23.5% (Fava et al., 1984 * ).
DISCUSSION
Moderate or severe anxiety occurs particularly among patients with chronic kidney disease, end-stage renal disease, hip pathology, systemic lupus erythematosus, hereditary angioedema and chronic urticaria, metastatic breast cancer, and bladder cancer. Severe anxiety had the highest rates among patients with chronic illnesses, atrial fibrillation, coronary artery bypass graft, chronic thromboembolic pulmonary hypertension, pulmonary hypertension, chronic rhinosinusitis, asthma, migraine, multiple sclerosis, epilepsy, digestive system tumors, liver cirrhosis, irritable bowel syndrome, obesity, type 2 diabetes, hyperprolactinemia, and COVID-19 infection.
The most common anxiety disorder was GAD, which was observed among patients with cardiovascular, respiratory, CNS, and dermatologic diseases, and also in cancer, primary aldosteronism, amenorrhea, and COVID-19 infection. Patients with cardiovascular, respiratory, or dermatologic diseases also presented PD. Social anxiety was described for cardiovascular, respiratory, and rheumatoid diseases. Specific phobias were relatively common in irritable bowel syndrome, gastroesophageal reflux, and end-stage renal disease.
The present results should be considered an overview of a clinical phenomenon that needs to be further explored, clarifying, among other things, the temporal or causal relationships between anxiety and organic illnesses. In addition, we found studies referring to different levels of severity (i.e., mild/moderate/severe) of anxious symptoms, which should be taken into account since the various levels of severity may affect the discomfort and functioning of patients differently. Such heterogeneity of severity, which was measured via different tools and in some cases also using different cut-offs for the same tool, suggests caution in interpreting the results and in using them to draw comparative conclusions. Future studies increasing the body of evidence for each level of severity of anxiety in each medical disease are warranted to overcome this limitation. Research aimed at disentangling a physiological anxious reaction to the physical illness from a pathological, for instance maladaptive, response to the status of being medically ill is also needed.
Anxiety represents a major challenge in medical settings, being highly represented either as a symptom or as a disorder (39). Nowadays, anxiety can be properly assessed in the medically ill via clinician- or self-reported measures (40,41). This may provide information on the overall health condition, also according to a longitudinal view of the development of disorders (42,43), thus demarcating major prognostic and therapeutic differences among patients who otherwise might seem deceptively similar because they share the same diagnoses. It also makes it possible to capture the possible interplay between mental and organic disease, for instance clarifying whether there is a primary/secondary relationship (44,45). It can help verify whether the patient is at risk of developing depression, which often coexists with anxiety (46,47) and worsens its prognosis. It allows the investigation of other areas associated with anxiety. Among them is illness behavior, the ways in which individuals experience, perceive, evaluate, and respond to their health status (48,49). This is a transdiagnostic core characterization (50), with multiple expressions (51,52), providing an explanatory model for clinical phenomena (8). Relevant information can also be obtained by assessing mental pain (53,54), which captures a feeling state characterized by emotional pain, emptiness, and internal perturbation (55), sometimes at the core of the suicidal process (56). Finally, a comprehensive assessment may also include the evaluation of specific positive features, i.e., psychological well-being (57,58), which can eventually be strengthened to cope with anxiety (59) and the organic disease (60,61).
Recognition and proper assessment of anxiety represent the necessary steps for its appropriate management. Clinicians boast a large and effective armamentarium to treat anxiety, which includes both pharmacological (e.g., benzodiazepines) (62,63) and non-pharmacological (e.g., well-being therapy) interventions (59); they need to use it.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author/s. | 2022-06-05T15:19:47.015Z | 2022-06-03T00:00:00.000 | {
"year": 2022,
"sha1": "1cb2a3511f5909c7ddcca6a917bddd46eb835ea3",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyt.2022.873126/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e976a873c0aca956389ea7050aa4b8334897b042",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": []
} |
246280361 | pes2o/s2orc | v3-fos-license | A Comprehensive Overview of the Parathyroid Tumor From the Past Two Decades: Machine Learning-Based Bibliometric Analysis
Introduction Parathyroid tumor, in particular carcinoma, is fairly rare among neoplasms of the endocrine system, unlike its benign counterpart. However, there is no bibliometric analysis in the field of parathyroid tumors comprehensively summarizing and discussing a large number of publications by a machine learning-based method. Materials and Methods Parathyroid tumor-related publications in PubMed from January 2001 to December 2020 were searched using the MeSH term "parathyroid neoplasms". Latent Dirichlet allocation was adopted to identify the research topics from the abstract of each publication using Python. Results A total of 3,301 parathyroid tumor-associated publications were identified from the past 20 years and included in further analyses. Research articles and case reports accounted for the largest proportion of publications, while the number of clinical studies and clinical trials decreased, especially in recent years. Technetium Tc 99m sestamibi was the most studied among the diagnosis-related MeSH terms, while parathyroidectomy was the most studied among the treatment-related MeSH terms. The Latent Dirichlet allocation analyses showed that the top topics were 99mTc-MIBI imaging, parathyroidectomy, and gene expression in the clusters of diagnosis research, treatment research, and basic research, respectively. Notably, scarce connections were shown between the basic research cluster and the other two clusters, indicating the need for translational studies turning basic biological knowledge into clinical practice. Conclusion The annual scientific publications on parathyroid tumors have scarcely changed during the last two decades. 99mTc-MIBI imaging, parathyroidectomy, and gene expression are the most studied topics in parathyroid tumor research.
INTRODUCTION
Parathyroid tumor (PT), in particular carcinoma, is fairly rare among neoplasms of the endocrine system, unlike its benign counterpart (1). Unlike thyroid cancer, which shows a predominance in women, parathyroid carcinoma has a similar incidence in both sexes (2). While most PTs are sporadic, various genetic diseases may also cause PT, including hyperparathyroidism-jaw tumor syndrome and multiple endocrine neoplasias. The diagnosis of PT is often made by serum biomarkers and radiological examinations; however, it can be hard to preoperatively distinguish the malignancy from an adenoma. Surgery remains the only curative therapy for PT; however, recurrence occurs in 23%-50% of patients who previously received surgery (3).
A growing number of articles have reported significant progress and developments in the field of PT, and research hotspots, as well as future research directions, are reflected in these publications. Bibliometric analysis is often used to summarize the contributions of publications. To our knowledge, there is no bibliometric analysis in the field of PT comprehensively summarizing and discussing a large number of publications.
Besides the regular bibliometric analysis methods, machine learning methods have also been developed to analyze human language, in the form of natural language processing. Among these methods, latent Dirichlet allocation (LDA) is the one most frequently applied to scientific publication analysis, identifying specific themes and assigning documents to them (4).
The objective of this study is to map the research landscape of PT through analyses of scientific publications in the past two decades. Furthermore, by applying a machine learning method, this study may also contribute to recognizing features of specific research topics in the field of PT.
MATERIALS AND METHODS
The MeSH term "parathyroid neoplasms" was used to identify PT-related publications in PubMed from January 2001 to December 2020. An R package "Bibliometrix" was used for extracting associated data (5), including the publication year, the publication type, MeSH terms, and abstract. To simplify the MeSH terms analysis, MeSH terms appearing less than 10 times were excluded. Additionally, ethical approval was waived because of the nature of the bibliometric analysis.
To recognize the research topics of each publication in detail, the abstracts were analyzed by LDA using the Python platform. A feature glossary of terms was established from the co-occurrence frequency of vocabulary words in the publication series, and the two most probable research topics were calculated for each publication, depending on the frequency of these glossary words. Subsequently, the Louvain algorithm was applied for cluster analysis to clarify the associations between topics.
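This step can be sketched in a few lines of Python; the paper does not name the specific libraries, so scikit-learn for the LDA fit and NetworkX for the Louvain clustering are assumptions, and the parameters are illustrative:

    import networkx as nx
    from networkx.algorithms.community import louvain_communities
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.feature_extraction.text import CountVectorizer

    abstracts = ["..."]  # the 2,649 publication abstracts

    # Feature glossary built from word occurrence frequencies.
    X = CountVectorizer(stop_words="english", min_df=5).fit_transform(abstracts)

    # Extract 30 topics, as in the study, and keep the two most
    # probable topics for each publication.
    lda = LatentDirichletAllocation(n_components=30, random_state=0)
    doc_topics = lda.fit_transform(X)            # shape: (n_docs, 30)
    top2 = doc_topics.argsort(axis=1)[:, -2:]    # top-2 topic ids per doc

    # Topic network: link the two top topics of each document, weighting
    # edges by how often the pair co-occurs across documents.
    G = nx.Graph()
    for t1, t2 in top2:
        w = G[t1][t2]["weight"] + 1 if G.has_edge(t1, t2) else 1
        G.add_edge(int(t1), int(t2), weight=w)

    # Louvain clustering of the topic network into research clusters
    # (diagnosis / treatment / basic research in the paper).
    print(louvain_communities(G, weight="weight", seed=0))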
RESULTS
A total of 3,301 PT-associated publications were identified from the past 20 years and included in further analyses. Even with the massive growth of overall scientific publications, Figure 1 showed that the annual publication number changed very little during the past two decades, with a maximum of 194 in 2006 and a minimum of 135 in 2018. Figure 2 showed the most productive countries, among them Turkey (317 publications) and India (262 publications). Additionally, Table 1 listed the top 10 affiliations contributing most to PT-associated publications.
MeSH Term Analyses
Further analyses were performed based on the 438 MeSH terms found in the obtained publications, with a total of 31,588 occurrences. Table 2 showed some general features of PT-associated studies, including study subject, age group, and study design. Notably, compared with human-based studies, the number of studies on animals and cells was very limited, suggesting potential weaknesses in comprehensive mechanistic investigations. We subsequently investigated the most studied diagnosis-related MeSH terms (Figure 4A) and treatment-related MeSH terms (Figure 4B). Among the diagnosis-related terms, technetium Tc 99m sestamibi was the most studied. Multiple imaging examinations showed significance in diagnosing PT, including ultrasonography, computed tomography (CT), single-photon emission computerized tomography (SPECT), and magnetic resonance imaging (MRI). Blood biomarkers, including parathyroid hormone (PTH) and calcium, were also frequently involved. In addition, differential diagnosis was another significant issue when diagnosing PT. In terms of treatments, parathyroidectomy was the most studied therapy, while treatment outcome was the most frequently addressed issue. Besides surgery, targeted therapy and radiotherapy have also been developed to suppress PT, while combined modality therapy is under development. Moreover, cinacalcet also received considerable attention in PT-associated publications as an essential method of controlling blood calcium in advanced PT.
LDA Analyses
Further analyses were performed by a machine learning method (LDA) using the abstracts of the publications. After excluding 652 publications without an abstract, the 30 most prominent research topics were extracted by LDA analyses using the abstracts of the remaining 2,649 publications, and a topic network was built to illustrate these topics and the associations between them (Figure 5). These topics were allocated into three clusters according to the Louvain algorithm: diagnosis research (in green), treatment research (in purple), and basic research (in red). The prominence of a topic and the weight of the connection between topics are shown as the size of the circle and the thickness of the line, respectively.
In the cluster of diagnosis research, 99mTc-MIBI imaging and parathyroid adenoma were the top two topics, emphasizing the value of differential diagnosis between parathyroid cancer and parathyroid adenoma. Serum calcium level and ultrasonography were also hot topics, which was consistent with the results of the MeSH term analyses. In particular, 99mTc-MIBI imaging showed wide connections with the other topics in the cluster of diagnosis research. In terms of treatment research, parathyroidectomy was the top topic, followed by hyperparathyroidism, endoscopic parathyroidectomy, mediastinal parathyroid adenomas, and recurrence. A strong connection was shown between parathyroidectomy and hyperparathyroidism. Furthermore, gene expression, parafibromin, and calcimimetics accounted for the majority of the basic research cluster. Notably, scarce connections were shown between the basic research cluster and the other two clusters, indicating the need for translational studies turning basic biological knowledge into clinical practice.
DISCUSSION
For the very first time, this machine learning-based bibliometric study summarized 3,301 publications on PT from the past two decades. Despite the rapid expansion of scientific publications, there was very limited change in the number of PT-related publications, suggesting that more attention needs to be paid to the research field of PT. Meanwhile, the number of clinical studies and trials decreased in recent years, suggesting that more attention is needed on the clinical management of PT. The most prominent research topics were 99mTc-MIBI in the diagnosis section, parathyroidectomy in the treatment section, and gene expression in the basic research section. Meanwhile, more effort should be devoted to translating preclinical research into clinical practice. Overall, this study demonstrated patterns and trends of past and prevailing topics in PT research, which may provide new in-depth directions for both researchers and practitioners.
The most frequently used imaging examinations for diagnosing PT are ultrasonography and 99mTc-MIBI imaging. With a low economic cost and high practicability, ultrasonography plays an important role in preoperatively detecting PT. However, ultrasonography shows poor capability in distinguishing the malignant from the benign (7). 99mTc-MIBI imaging, including scintigraphy and SPECT, is also commonly used to identify PT based on different retention levels. Although 99mTc-MIBI imaging was long regarded as a method without differential diagnostic capability, recent research reported that the peak of the retention index may contribute to the preoperative differential diagnosis of parathyroid malignancy (8). Moreover, choline PET is a novel diagnostic method for PT, which this study failed to highlight due to its novelty. It was reported to have the potential to replace traditional imaging as a significantly more sensitive method (9). However, even with the development of novel diagnostic methods, differential diagnosis still remains the priority issue in the area of PT diagnosis. So far, surgery remains the best chance of cure for PT. Simple parathyroidectomy is considered suitable for most benign PT, while it is far from enough for parathyroid cancer. The gold-standard surgical procedure comprises an en-bloc resection, an ipsilateral thyroidectomy, and resection of involved surrounding tissues (10). However, because of the lack of valid diagnostic methods to preoperatively distinguish malignant from benign PT, it is still hard to perform the proper procedure in the first operation, which may be responsible for most tumor recurrences. Although the overall patient prognosis is favorable, recurrences are frequent in PT, worsening the prognosis (3). Future progress in preoperative differential diagnosis is expected to guide the choice of surgical procedures, thus contributing to a better prognosis of patients with PT. Among most PT patients, the cause of death is often not tumor burden, but uncontrollable hypercalcemia caused by hyperparathyroidism. Many therapies are being developed to manage hypercalcemia in PT, especially in inoperable PT, including bisphosphonates, RANK ligand antibody, calcitonin, and calcimimetics (11). As the newest generation of calcimimetic, cinacalcet was proven to be effective in patients with inoperable parathyroid cancer by increasing the affinity of calcium-sensing receptors and reducing the secretion of parathyroid hormone (12).
CONCLUSION
The annual scientific publications on PT scarcely changed during the last two decades. 99mTc-MIBI imaging, parathyroidectomy, and gene expression are the most studied topics in PT research. More effort should be devoted to translating gene expression findings from preclinical research into clinical diagnosis and treatment. | 2022-01-26T14:15:49.380Z | 2022-01-26T00:00:00.000 | {
"year": 2021,
"sha1": "2fcf7f853b78a37ea415c2d21384ec4e95ce2fb5",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fendo.2021.811555/pdf",
"oa_status": "GOLD",
"pdf_src": "Frontier",
"pdf_hash": "2fcf7f853b78a37ea415c2d21384ec4e95ce2fb5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
253383978 | pes2o/s2orc | v3-fos-license | Laboratory observation of ion acceleration via reflection off laser-produced magnetized collisionless shocks
Fermi acceleration by collisionless shocks is believed to be the primary mechanism producing high-energy charged particles in the Universe, where charged particles gain energy successively from multiple reflections off the shock front. Here, we present the first direct experimental evidence of ion energization from reflection off a supercritical quasi-perpendicular collisionless shock, an essential component of Fermi acceleration, in a laser-produced magnetized plasma. We observed a quasi-monoenergetic ion beam with 2-4 times the shock velocity in the upstream flow using the time-of-flight method. Our related kinetic simulations reproduced the energy gain and showed that these ions were first reflected and then accelerated mainly by the motional electric field associated with the shock. This mechanism can also explain the quasi-monoenergetic fast ion component observed in the Earth's bow shock.
Here, we report experimental results on ion acceleration in a supercritical quasi-perpendicular collisionless shock formed when a laser-produced supersonic plasma flow impacts a magnetized ambient plasma. Quasi-monoenergetic ions with 2-4 times the shock velocity are observed upstream of the shock; they are produced mainly by motional electric field acceleration during specular reflection from the shock. This is the first direct laboratory experimental evidence of ion acceleration from a single reflection off a collisionless shock. The experimental feature of a quasi-monoenergetic ion distribution is in good agreement with the fast ion component observed in the Earth's bow shock.
The experiments were conducted at the Shenguang II (SG II) laser facility. A sketch of the experimental setup is shown in Fig. 1a. A weaker precursor laser beam (~1×10^13 W/cm^2) ablated a plastic (CH2) planar target to create the ambient plasma, which was magnetized by a 4-6 T external background magnetic field [40] via an anomalously fast magnetic diffusion process [37,41-43]. An intense drive laser beam (~8×10^13 W/cm^2) irradiated another plastic (CH2) target with a focal spot of 0.5 × 0.5 mm^2 to produce a supersonic plasma flow as a piston. The piston plasma flow drove a quasi-perpendicular collisionless shock in the magnetized ambient plasma. The profile of the shock and the ambient plasma density were characterized with optical diagnostics. The ion velocity spectrum was measured by the time-of-flight method using a Faraday cup (see Methods for further details).
The electron density of the ambient plasma varies from ~1×10^18/cm^3 to 5×10^18/cm^3 with a gradient scale length of ~1 mm (Fig. 1c), and the electron temperature is estimated to be ~40±10 eV [37,44]. The piston plasma, with a higher electron temperature of ~200 eV [37,44], drives a quasi-hemispherical magnetized collisionless shock (Fig. 1b), which is asymmetric due to the inhomogeneity of the ambient plasma (Fig. 1c). The angle between the shock normal and the upstream magnetic field, θBn, in our experiments is approximately 90°; therefore, it is a nearly perpendicular shock (see Methods for further details). The shock velocity is vshock ~ 400 km/s over the span of measurement, which is slightly slower than that without an external magnetic field (Supplementary Fig. S4), yet still within the measurement error. A strongly compressed zone is formed within the plasma, which exhibits the typical structures of a "foot" and a "ramp" (Fig. 1d, Supplementary Fig. S3) [36,38,39,45].
Under our experimental parameters, the magnetized shock is approximately collisionless. The ion-ion collisional mean free path is approximately 4 mm, which is much larger than the ion Larmor radius of ~800 μm and the shock thickness of ~500 μm.
The >3× density compression factor approximately satisfies the hydrodynamic Rankine-Hugoniot (RH) jump condition of a shock [45]. The shock Alfvenic and sonic Mach numbers are MA ~ 7-11 and Ms ~ 7-10, respectively, and the ambient plasma beta value is β ~ 0.3-1.2. Therefore, the shock conditions probed in our experiments are relevant to the Earth's bow shock, where the typical shock Alfvenic Mach number is MA ~ 3-10 [46-52], as illustrated in Table 1.
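As a rough consistency check on the quoted Alfvenic Mach number, the upstream Alfven speed can be estimated from the stated parameters. The sketch below assumes a fully ionized CH2 ambient plasma with C5+ carbon and mid-range values of the field and density; these are illustrative assumptions, not additional measurements:

    import math

    mu0 = 4e-7 * math.pi   # vacuum permeability [H/m]
    m_p = 1.6726e-27       # proton mass [kg]

    B = 5.0                # assumed mid-range external field [T] (4-6 T quoted)
    n_e = 2.5e24           # assumed mid-range electron density [m^-3]
    v_shock = 400e3        # measured shock speed [m/s]

    # Fully ionized CH2 with C5+: each CH2 unit gives 5 + 2 = 7 electrons
    # and carries 12 + 2 = 14 proton masses, so rho ~= 2 * m_p * n_e.
    rho = 2.0 * m_p * n_e
    v_A = B / math.sqrt(mu0 * rho)

    print(f"v_A ~ {v_A / 1e3:.0f} km/s, M_A ~ {v_shock / v_A:.1f}")
    # -> v_A ~ 49 km/s, M_A ~ 8.2, within the quoted M_A ~ 7-11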
One-dimensional (1-D) and two-dimensional (2-D) particle-in-cell (PIC) simulations were conducted to study the shock formation in piston-driven magnetized ambient plasma under conditions similar to our experimental parameters (see details in Methods), as illustrated in Fig. 2. At the beginning of the interaction, the piston acts like a snowplow with a speed of ~400 km/s and sweeps up the ambient ions and magnetic field (Fig. 2a), producing density and magnetic field compression around the piston-ambient plasma interface. The particle trajectories indicate that the ions from the ambient and piston plasmas penetrate each other, since the ions are effectively collisionless. Within t0+1.71 ns (ωci-H^-1 ~ 1.71 ns, the upstream H+ ion gyroperiod), the compressed, steepened magnetic structure is strong enough to reflect the ambient H+ ions, at which time the shock begins to form (onset of shock, Fig. 2b) [53]. After distinct separation from the piston, at approximately t0+4.79 ns, a shock on ion scales is formed with a speed of 415 km/s and MA ~ 8.3 (Fig. 2c). Consistent with our experimental results, the shock in the simulation reproduces the characteristic features of a "foot" and a "ramp", and the compression ratio is >3. In the following several gyroperiods, shock reformation is observed in the shock "foot" region, and the C5+ ions form another shock behind the H+ ion shock (Fig. 2d&e, and Supplementary Figs. S8 & S14).
Ion acceleration is observed in our experiments, accompanying the formation of the magnetized collisionless shock. The time-of-flight signal of ion flux (Fig. 3a), collected along the symmetric axis of the piston flow by the Faraday cup, presents two peaks in the ion velocity spectra (Fig. 3b). The first peak corresponds to the particles coming from the piston plasma, with velocity vpiston ~ 300-800 km/s, which is close to the shock speed (vshock ~ 400 km/s). The second peak, with velocity Vfast_ions ~ 1100-1800 km/s, generated by the accelerated fast ions, is found to have a quasi-monoenergetic spectrum at approximately 2-4 times the shock speed, similar to the fast ion component observed in the Earth's bow shock by satellites [46,51,52,54]. We also varied the strength of the external magnetic field in the experiments and found that the fast ion peak becomes more pronounced with increasing external magnetic field (Fig. 3b). Even in the absence of an external magnetic field, we can still observe the fast ion peak, probably due to the self-generated magnetic field of approximately 1 T [37] (see Supplementary Fig. S5 for further details).
The PIC simulations of the experimental piston-ambient interaction also exhibit two peaks in the ion velocity spectra (Fig. 3c), confirming the ion acceleration capability of the shock. The first peak of slow ions is provided by the piston plasma downstream of the shock. The second peak is the fast reflected ions upstream, with approximately 2-3 times the shock speed. H+ ions picked up from the ambient plasma dominate the fast ions and are accelerated during reflection by the shock (see Supplementary Section IV). Shock formation and ion acceleration are not observed in simulations with approximately zero external magnetic field. Notably, the detailed characteristics of the ion velocity spectra in our simulations cannot be straightforwardly compared with experiments, for the following reasons. Firstly, the experimental results are temporally and spatially integrated, with ions escaping from the 2-D hemispherical shock with an inhomogeneous background profile, while the simulation is just 1-D or 2-D with a homogeneous background and a reduced proton-to-electron mass ratio to lessen the computational burden. Secondly, the magnetized ambient plasma has a finite size of ≤10 mm in the experiments (Fig. 1, and Supplementary Fig. S1). Thus, the reflected ambient ions can escape into vacuum before gyrating back downstream when the shock reaches the boundary of the magnetized ambient plasma, and move ballistically into the detector (see Methods and Supplementary Fig. S6 for further details).

Fig. 4| …, scatter plots of the H+ ions at t0+5.13 ns (normalized, including ambient and piston plasma), along with the profile of the magnetic field (blue line). c, The Ex (blue) and Ez (red) electric fields at t0+5.05 ns. d, The trajectory (black) of a typical reflected H+ ion originating from the ambient plasma, overlaid on the profile of the magnetic field strength (color bar). e, The time history of the potential gain of the reflected H+ ion: φx (olive), φz (pink), and φt (black, the total potential). f, The H+ ion trajectory in vz-vx space. The external magnetic field By is 6 T. The interface between shock and piston is labeled approximately with a dashed line in a-c. In d-f, the reflection and acceleration stage is indicated by the orange shaded region, while the moments of ion reflection and of the ion gyrating back into the downstream region are labeled with lines/circles I and II, respectively.

As indicated in our simulations, there exist two components of the electric field, Ex and Ez, associated with the shock (Fig. 4c). The electric field Ex is an electrostatic field caused by the motional electric field and charge separation, while the electric field Ez is purely a motional electric field [53,55] (~vshockBd, where Bd is the downstream magnetic field).
Our simulations indicate that 99.9% of the accelerated ions experience a single reflection, and more than 73% of them undergo shock drift acceleration (Supplementary Figs. S10 & S11). Most of the accelerated ions are H+, and the C5+ ion fraction is less than 1%. The reflection efficiency of the ambient ions is about 20%-26% in the 1-D and 2-D simulations.
By following the trajectory of a randomly chosen, typical, singly reflected, drift-accelerated H+ ion, described in Fig. 4d-e, we can identify the particle energization around the shock, which is dominated by the motional electric field (Supplementary Fig. S9) and can be approximately separated into two stages. In the first stage of "reflection and acceleration" (the orange shaded region in Fig. 4d), the H+ ion slides into the shock "foot" (~6.0 ns) and gets accelerated by the Ex and Ez fields. At ~7.2 ns, the H+ ion is reflected toward the upstream, followed by further acceleration until it escapes from the shock transition layer into the upstream region. The reflected H+ ion then starts the second stage of "gyromotion" at ~8.7 ns in the upstream region, with little energization. Subsequent to energization, part of the reflected H+ ions gyrate into the downstream region and dissipate their energy there, while the remaining H+ ions stay in the upstream region, from which they can escape into vacuum when the shock moves to the boundary of the finite-size magnetized ambient plasma (Supplementary Fig. S6), producing the quasi-monoenergetic fast ion peak collected by the Faraday cup in our experiments. Assuming that the acceleration timescale in the motional electric field is approximately one gyroperiod, m/(qBave) (where Bave is the average magnetic field experienced by the reflected ions around the shock), the velocity gain of the reflected ions in the z direction can be estimated as Δvz ~ vshockBd/Bave ~ (1-3)vshock.
Therefore, the reflected ions have a speed of approximately v ~ vshock + Δvz ~ (2-4)vshock, consistent with our experiments. This mechanism is in good agreement with the satellite observations of a fast ion component in the Earth's bow shock with approximately 2 times the shock velocity. We found that a small fraction (<0.1%) of the earlier reflected ions can undergo multiple reflections and accelerations between the upstream region and the shock front, producing higher-energy ions with a continuous spectrum that ends up in the downstream region (Supplementary Fig. S12), similar to what has been observed recently [39], potentially starting the Fermi energization cycle. The higher-energy ions are 3-4 orders of magnitude weaker than the quasi-monoenergetic ion peak in our experiments, so they are hidden below our experimental noise baseline.
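The estimate can be made concrete with a short numeric sweep; the downstream-to-average field ratio Bd/Bave is not reported directly, so the values below are assumptions bracketing the measured compression ratio of >3:

    v_shock = 400.0                      # km/s, measured shock speed
    r = 3.0                              # assumed compression ratio Bd/Bu

    for bave in (1.0, (1 + r) / 2, r):   # assumed Bave/Bu, from Bu up to Bd
        dvz = v_shock * r / bave         # Delta v_z ~ v_shock * Bd / Bave
        v = v_shock + dvz                # reflected-ion speed estimate
        print(f"Bave/Bu = {bave:.1f}: v ~ {v / v_shock:.1f} x v_shock")
    # -> 4.0x, 2.5x, 2.0x, spanning the observed 2-4 x v_shock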
In conclusion, our results provide the first direct laboratory experimental evidence of ion energization from a single reflection off a supercritical quasi-perpendicular collisionless shock, consistent with the satellite observations of the quasi-monoenergetic fast ion component in the Earth's bow shock [46,51,52,54]. Repeated reflections from a collisionless shock, accompanied by successive small energy increments, have the potential to push charged-particle energies up to very high values, initiating the Fermi acceleration cycle and producing the high-energy charged particles in the Universe. This opens the path for controlled laboratory experiments that can greatly complement remote sensing and spacecraft observations and help validate particle acceleration models.
Fig. 1| Laser-driven magnetized collisionless shock experiments. a, Sketch of the experimental setup: a 4-6 T external magnetic field (along the y direction) was applied by a pulsing current through a set of magnetic field coils. Ambient plasma was generated after the plastic CH2 target (left) was ablated by a weaker precursor beam (100 J/1 ns/351 nm). After 12 ns (at time t0), once the ambient plasma was magnetized, an intense drive beam (260 J/1 ns/351 nm) irradiated another plastic CH2 target (top) to produce a supersonic piston plasma flow, which drove the collisionless shock in the magnetized ambient plasma. The density profiles of the shock and the ambient plasma were characterized with optical diagnostics. The acceleration of ions was measured by the time-of-flight method using a Faraday cup (directed along the x-axis). b, Imaging of the shock measured by optical interferometry (blue) and the dark-field schlieren method (red) (line-integrated along the y direction), taken at time t0+4 ns, formed in the ambient plasma with a 5 T external magnetic field. The bright refractive fringes in the optical dark-field schlieren imaging (red), which are the first derivative of the line-integrated plasma density, indicate the discontinuity surfaces around the shock. The inhomogeneous ambient plasma results in an asymmetric quasi-hemispherical shock. c, Electron density profile of the ambient plasma, taken at time t0+4 ns along the yellow line in (b) at x = 3 mm, under the experimental condition without a piston plasma flow, which varies from ne0 ~ 1×10^18/cm^3 to 5×10^18/cm^3 with a gradient scale length of ~1 mm in the shock traveling zone. d, Line-integrated electron density profile of the shock, taken along the green line in (b) at z = 2 mm. L is the plasma size in the y direction. The electron densities in the upstream and downstream are approximately 1-5×10^18/cm^3 and 0.5-1.5×10^19/cm^3, respectively (see details in Supplementary Fig. S3), indicating a compression ratio of >3.
Fig. 2| Formation of a shock structure and the associated ion dynamics in the 1-D PIC simulation. The vpx-x phase space scatter plots of the ambient (blue, 1st row) and piston (red, 2nd row) H+ ions present the ion dynamics associated with shock formation. (3rd row) The magnetic field (blue) and the electron number density (red) profiles are displayed to show the formation of the piston-driven shock. The time steps of t0+0.68 ns (a), t0+1.71 ns (b), and t0+4.79 ns (c) correspond to the early time before shock formation, the onset of shock formation (~ωci-H^-1 = 1.71 ns, which is the upstream H+ ion gyroperiod), and shock formation on ion scales, separated from the piston (t = t0+4.79 ns > 2ωci-H^-1), respectively. d, e indicate the shock reformation after distinctly separating from the piston (see details in Supplementary Fig. S8). The proton-to-electron mass ratio is set as mp/me = 100.
Fig. 3| Ion velocity spectra in experiments and 1-D PIC simulations. a, Time-of-flight trace of ion flux in the experiments, recorded by the Faraday cup along the symmetric axis of the piston plasma flow. After the precursor negative peak of the noise baseline (0-0.1 μs), the fast ions arrive at the Faraday cup first at ~0.16 μs, followed by the slow ions (piston) at ~0.4 μs. b, Ion velocity spectra in the experiments, obtained by transforming the time-of-flight trace of ion flux (shown in (a)) into the collected ion density profile in the Faraday cup (see Methods and Supplementary Fig. S5). The slow ions with velocity v ~ 300-700 km/s come from the piston plasma. The fast ions with velocity v ~ 1100-1800 km/s, approximately 2-4 times the shock speed, are the population of ambient ions accelerated by the shock. c, Ion velocity spectrum collected in the foot region of the shock (x > 8 mm region at t0+11 ns, Supplementary Fig. S8) from the simulation with an external magnetic field of 6 T, which also exhibits two peaks. The velocity of the slow ions is ~400 km/s, while that of the fast ions is ~900-… | 2022-11-08T06:42:53.814Z | 2022-11-06T00:00:00.000 | {
"year": 2022,
"sha1": "78e34a84e7188357131783e71d39301785dd62f7",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "78e34a84e7188357131783e71d39301785dd62f7",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
219709122 | pes2o/s2orc | v3-fos-license | Using High-Level Synthesis to Implement the Matrix-Vector Multiplication on FPGA
This work presents how to implement the Matrix-Vector Multiplication (MVM) on FPGA through the QuickPlay High-Level Synthesis flow. The motivations arise from the Adaptive Optics field, where the MVM is the core of the real-time control algorithm which controls the mirrors of a telescope to compensate for the effects of the atmospheric turbulence. The proposed implementation of the MVM exploits four different levels of parallelism: spatial and pipeline parallelism are used both at the fine (scalar instructions) and at the coarse (vector instructions) levels. To characterize the architecture being developed, a performance model has been developed and validated through the actual results obtained from runs on a prototype board based on the Intel ARRIA10 FPGA. Some details are given to describe how the algorithm has been implemented using the QuickPlay HLS flow. Performance results are presented, in terms of sustained computational speed and resources used in the hardware implementation.
Introduction
In the framework of the research project Green Flash [1], we developed the work presented in this paper, aimed at efficiently implementing the Matrix-Vector Multiplication (MVM) on the FPGA technology. As discussed in [2][3][4][5], in Adaptive Optics (AO) the effect of the atmospheric turbulence is compensated using the mobile mirrors in the telescope, which are moved according to a given real-time control algorithm. The dominating part of such an algorithm (see the technical annex of the project [1]) is the execution of two MVMs, namely s*_k = M·v_k and w_k = R·s^pol_k. In this paper we illustrate how we used QuickPlay [6], a High-Level Synthesis (HLS) design flow, to efficiently implement the MVM on FPGA. Representing one of the Level-2 BLAS functions [7], MVM is the basis for many algebraic computations and is fundamental in many application domains. We underline that we see the presented work as a template of the methodology to be adopted when using HLS.
We start by describing the problem to be solved, together with the constraints imposed on the architecture to be implemented. Next, we present the formulation of the solution, explaining how parallelism should be exploited to obtain an efficient implementation. The implementation we propose in this paper uses four levels of parallelism: as the MVM is a collection of many independent scalar products, we introduce pipeline and spatial parallelism both at the coarse level (parallelization among different scalar products) and at the fine level (parallelization within the computation of one scalar product). A performance model is derived to quantify the performance achievable through the proposed implementation: this phase is crucial to validate the performance of the HLS. When using HLS, it is crucial to determine in advance what can be achieved, checking after the synthesis that the results produced by the automated synthesis process comply with expectations: without this modeling phase, we could rely only on comparisons with other implementations to (indirectly) evaluate the implementation produced by HLS. In this paper, the emphasis is put mainly on the evaluation of the quality of the implementation derived from the HLS flow, as we are not trying to assess the superiority of a given technology against another: discussing FPGA vs GPU is not the aim of this paper. For this reason, we put much effort into modeling the performance which can be theoretically achieved, to have an absolute criterion to evaluate the quality of the FPGA implementation: the closer the performance is to the theoretical forecast, the better.
The document is concluded with the presentation of the results, in terms of performance achieved in actual runs (GFlop/s) and resources used in the hardware implementation (LUT, memory blocks, DSP).
Related Work
Due to its relevance in many domains, the implementation of the MVM has been widely investigated; in particular, how to efficiently implement the operation on the FPGA technology has been studied. In [8] the authors present a comparison of the implementation of the MVM, the gaxpy operation, on FPGA, GPU and CPU. They describe the FPGA implementation, organizing the internal dual-ported memories as V row banks which store the rows of the matrix; each of these banks is composed of B banks which store in an interleaved way the rows mapped into the row bank; thanks to this organization, at each clock cycle V × B elements can be read and written from and to the memory. These elements can feed Q ≤ V pipelined modules, each one computing a B-size scalar product. The work is further improved in [9], where the management of large external memory is added. In [10,11] the FPGA implementation of the BLAS operations is discussed, with a special focus on the implementation of the reduction circuit needed in the accumulation involved in each BLAS operation. The authors in [12] report the FPGA implementation of the MVM and matrix-matrix product with a detailed analysis of the error propagation in the accumulation phase. Considering that the MVM problem is I/O bound and there is no benefit in increasing the parallelism beyond the I/O saturation, the authors propose to use some logic to implement the group-alignment based floating-point summation [13], which increases the numerical accuracy of the computation. The FPGA implementation of the BLAS is reported in [14]. In this work, while relying on the OpenCL framework [15] for the actual FPGA implementation, the authors give a detailed performance model to drive the selection of the parameters determining the tradeoff between speed and resource usage. Using the selected parameters, code generators are activated to generate the OpenCL description of the optimized BLAS routine. The reader interested in the implementation of the MVM on GPU technology can refer to [16], which presents an analysis of the MVM implementation on GPU, together with a detailed performance model.
Problem Definition
The MVM is the basic operation to perform the Wavefront Reconstruction control algorithm; its usage is well known in the Adaptive Optics community, dates back to the late '80s [17], and has been successively improved many times [2]. In our implementation, using single-precision floating-point arithmetic, we have to multiply the two matrices M[Nmeans, Nrec] and R[Nrec, Nmeans] with the vectors v_k[Nrec] and s_k[Nmeans], being Nmeans = 9232 and Nrec = 6316.
Due to their size, M and R are stored in external memory. M and R do not change for quite a long time and must be multiplied many times by the vectors v_k and s_k; processing step (k + 1) can start only when the k-th step has finished.
Once the bandwidth BW to access the external memory is fixed, an upper bound for the speed of the computation is determined. To perform the MVM, the matrix must be read from the memory; when we refer to a generic matrix M[n, m] and we indicate with D the floating-point data size expressed in bytes (in single-precision D = 4, in double-precision D = 8), the matrix size is Ms = n·m·D [Bytes] and the time to read the matrix from external memory is

t_R = n·m·D / BW.    (1)

As the number of operations performed in the MVM is n_ops = 2·n·m and the overall computing time cannot be smaller than t_R, the computing speed S_C cannot be larger than n_ops/t_R, i.e.,

S_C ≤ 2·BW / D.    (2)

Using single-precision floating-point (D = 4), the speed can never be greater than half of the available memory BW.
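The bound can be checked directly; the 68 GB/s figure below anticipates the µXComp board numbers introduced in the next section, so this is only a worked instance of Eq. (2):

    # Memory-bound ceiling on the MVM speed, Eq. (2): S_C <= 2*BW/D.
    BW = 68e9   # B/s: aggregate bandwidth of four HMC banks (4 x 17 GB/s)
    D = 4       # bytes per single-precision float

    print(2 * BW / D / 1e9, "GFlop/s")   # -> 34.0, i.e. half of BW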
In the following sections, we will analyze how the MVM should be implemented to be as close as possible to the previous limit.
Guidelines for Implementation: Exploiting Coarse-Grained Parallelism
Let's implement, in an optimized way, a kernel SP which performs a certain number of scalar products between one vector a and several vectors read from the external memory. If we have p external memory banks, we can partition M into p equal parts M_q (q = 0, 1, …, p − 1), each containing n/p different matrix rows m_{q,i}, with i = 0, 1, …, n/p − 1 (each row is an m-sized vector), storing each M_q into a different memory bank. We instantiate p replicas of the SP scalar product kernel and we distribute a copy of the vector a, read only once, to all the SP kernels. Each SP kernel computes a portion b_q of the result vector b. The final vector is obtained by properly merging (i.e., concatenating) all the b_q sub-vectors.
The degree of parallelism p is selected to make the BW requirement (nearly) equal to the BW available toward the external memory (BW_ExtMem); let's indicate with BW_req the memory bandwidth requested by one SP kernel (BW_req will be quantified in the following).
The memory bandwidth required by the p SP kernels is p × BW_req and must be large enough to saturate BW_ExtMem, i.e., BW_ExtMem ≈ p × BW_req (3), which gives

p ≈ BW_ExtMem / BW_req.    (4)

In the following, when giving numerical examples, we use the parameters characterizing the µXComp board, developed by MicroGate and equipped with an Intel ARRIA 10 GX1150 FPGA [18]. Referring to the previous example and to the four Hyper Memory Cube (HMC) banks present in the µXComp board (each HMC bank has a peak BW of 17 GB/s), BW_ExtMem = 68 GB/s. In our implementation of the SP kernel, BW_req = 19.2 GB/s, so the degree of parallelism that can be efficiently supported is given by Eq. (4), which yields p ≈ 4. Therefore, four SP kernels can be instantiated, each one accessing a different HMC bank.
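A minimal worked instance of Eq. (4) with the figures just quoted:

    # Degree of coarse-grained parallelism that saturates the external
    # memory bandwidth, Eq. (4): p ~ BW_ExtMem / BW_req.
    BW_ExtMem = 68.0   # GB/s: four HMC banks at 17 GB/s each
    BW_req = 19.2      # GB/s: bandwidth requested by one SP kernel

    print(round(BW_ExtMem / BW_req))   # -> 4 SP kernels, one per bank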
The Scalar Product: Basic Pipelined Implementation
As a consequence of the discussion in the previous section, we recognize the scalar product as our coarse-grained unit of parallelism. The scalar product can be implemented with one pipelined MADD (one multiplier and one adder) which iteratively computes the recurrence s ← s + a_i·b_i, i = 0, 1, …, m − 1. The computation of the next MADD operation depends on the completion of the previous one, so a new MADD cannot start until the previous has finished, thus waiting for the latency L of the MADD.
To avoid paying this penalty, we can exploit the commutativity and associativity of the ADD operation (let us neglect the effects of the limited precision). Under the commutative and associative hypothesis for the ADD, and assuming m to be an integer multiple of L, we can rewrite the scalar product as

s = a·b = Σ_{i=0}^{L−1} s_i, with s_i = Σ_{j=0}^{m/L−1} a_i[j]·b_i[j],    (5)

where
- vectors a and b have been partitioned into L sub-vectors a_i and b_i,
- L partial scalar products are computed (expression in brackets) and, finally,
- the result is derived by summing the L partial scalar products (external sum).
In the previous formulation, each partial scalar product has to be updated every L clock cycles; during its processing (requiring L cycles), the other L-1 partial scalar products will be processed, each one being at a different stage of the pipeline. Only the final (i.e., the external) sum requires the accumulation of values where the dependence cannot be completely hidden, thus imposing the payment of some pipeline penalty.
Following the previous approach, we can compute the scalar product in N_clk clock cycles, where

N_clk = (m − 1) + L + O(L_A·log2(L)),    (6)

where (m − 1) + L are the cycles needed to compute the m MADD operations and O(L_A·log2(L)) are the cycles needed to perform the final sum of the L partial scalar products (L_A is the latency of the pipelined add operator) using L/2 adders; if m ≫ L, N_clk ≈ m. In our case m ≫ L, so we compute the 2m operations required by the scalar product in N_clk ≈ m clock cycles, thus sustaining 2 FP operations per cycle. The sustained speed of the computation is S_C = 2·f_ck = 300 MFlop/s for f_ck = 150 MHz.
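The restructuring of Eq. (6) can be illustrated in software; the sketch below reproduces the L interleaved partial accumulators of the hardware (it mimics the summation order, not the timing):

    import numpy as np

    def pipelined_dot(a, b, L=8):
        """Scalar product with L interleaved partial accumulators,
        mimicking a pipelined MADD of latency L; assumes len(a) is a
        multiple of L."""
        partial = np.zeros(L, dtype=np.float32)
        for j in range(len(a) // L):     # one pipeline round per iteration
            for i in range(L):           # L independent in-flight sums
                partial[i] += a[j * L + i] * b[j * L + i]
        return float(partial.sum())      # final sum of the L partials

    rng = np.random.default_rng(0)
    a = rng.random(9232, dtype=np.float32)
    b = rng.random(9232, dtype=np.float32)
    print(pipelined_dot(a, b), float(np.dot(a, b)))  # agree up to rounding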
As seen in the previous section, to sustain the computation speed S_C we must have a BW toward the memory which is at least twice the numerical value of S_C (Eq. (2)). In this case, the memory BW required by the kernel would be BW_req = 2 × S_C = 2 × 300 = 600 MB/s. Referring to the BW of the HMC memory we are using (68 GB/s overall), to saturate the memory BW we should put p = 68/0.6 ≈ 113 kernels in parallel, which would require 113 ports to access the external memory module: this huge number of ports is not realistic, so we have to find a way to increase the computational speed of the kernel which performs the basic scalar product, in order to use, with the BW_req of a single kernel, a significant portion of the available memory BW.
The Scalar Product: Exploiting Spatial Parallelism
To increase the computational speed and the BW_req of the kernel which computes the scalar product, we can further partition each of the L sub-vectors into P sub-vectors so that, at each cycle, we can start computing P independent partial scalar products.
Let's rewrite Eq. (5) as

s = a·b = Σ_{i=0}^{L−1} Σ_{j=0}^{P−1} s_ij, with s_ij = a_ij·b_ij,    (7)

where vectors a and b have been partitioned into LP sub-vectors a_ij and b_ij, each with m/(LP) elements. Once a and b are partitioned into the LP sub-vectors, we compute the LP partial scalar products s_ij (expression in brackets in (7)), then we sum all the LP partial values to obtain the final result.
Using P MADDs, if we can read 2P floating-point values per cycle, the number of cycles to determine the LP partial scalar products is given by

    N_comp(P) = L + (m − P)/P = m/P + L − 1.    (8)

In fact, after L clock cycles, the first P MADD results are produced; the remaining (m − P) MADD results are produced in the following (m − P)/P cycles, as P new results are produced at every cycle.
Once the N = LP s_ij values have been generated, they must be summed together to obtain the final scalar product.
As already discussed, we can use N/2 adders to perform the sum of N numbers in log₂(N) · L_A clock cycles. If we use P_A < N adders, in each layer we can parallelize the sums among all the P_A adders. It is easy to verify that the number of cycles to compute the sum of N = LP numbers using P_A pipelined adders is approximately

    N_Sum(P_A) ≈ N/P_A + L_A · log₂(N).    (9)

The number of cycles NCycles_SP necessary to compute the scalar product of two vectors of size m, using P pipelined MADD modules with latency L and P_A pipelined adders with latency L_A, is given by

    NCycles_SP = N_comp(P) + N_Sum(P_A).    (10)

From (8), (9) and (10) we get

    NCycles_SP ≈ m/P + L + LP/P_A + L_A · log₂(LP).

From the previous expression, we can compute the sustained speed of the computation (expressed in operations/cycle) as

    S = 2m / NCycles_SP.

In the previous equation, L and L_A are fixed by the technology (for instance, with the current version of QuickCompiler and for the ARRIA10 FPGA, L = 8 and L_A = 3), m is fixed by the problem, and P and P_A are the parameters of the architecture that must be determined to maximize the sustained speed. P must satisfy the following requirements:
- it must be a power of 2, i.e., P = 2^k, because it determines the width of the internal memory used by the SP kernel (the width of the memory must be a power of 2);
- it must be large enough to nearly saturate the memory BW.
In our example, f_ck = 150 MHz and the BW to one bank of the HMC memory is 17 GB/s. Thus, the width W needed to saturate that BW is given by

    W = BW / f_ck = (17 × 10⁹) / (150 × 10⁶) ≈ 113 bytes.

As W has to be a power of 2, we can set W = 128 [Byte] (the power of 2 closest to 113), thus fixing the MADD parallelism to 32 (the 32 MADDs must read 64 floats/cycle: 32 floats come from the buffer memory connected to the HMC and storing a row of the matrix M, and 32 floats come from the buffer memory connected to the input stream and storing the vector a, read only once at the very beginning).
If we set P_A = 4 (adder parallelism), the number of cycles needed to sum all the partial results is (ref. to Eq. (9))

    N_Sum(4) ≈ LP/P_A + L_A · log₂(LP) = 256/4 + 3 × 8 = 88 cycles.
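For concreteness, the layered reduction modeled by Eq. (9) can be sketched in C as follows (our illustration, not kernel code; in hardware each layer is issued on the P_A pipelined adders, paying L_A cycles of latency per layer):

    /* Layer-by-layer tree reduction of n partial sums (n a power of 2).
       The serial loop below mirrors the adder tree: layer k performs
       n/2^k additions, which the hardware distributes over P_A adders. */
    float tree_sum(float *s, int n)
    {
        for (int width = n / 2; width >= 1; width /= 2)
            for (int i = 0; i < width; i++)
                s[i] = s[2 * i] + s[2 * i + 1];   /* one reduction layer */
        return s[0];
    }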
MVM: Coarse-Grained Pipelining
In the operation b = M × a, the result vector b can be computed through the following loop:

    for (l = 0; l < n; l++)
        b[l] = m[l] · a;        // m[l] is the l-th row of M

whose body can be decomposed into three basic operations:

    for (l = 0; l < n; l++) {
        load m[l] from the external memory;
        compute the LP partial scalar products s_ij;
        compute the final result b[l] = Σ_ij (s_ij);
    }
The loop can be repeated in different kernels when the matrix M is partitioned into p submatrices, as depicted in Fig. 1.
Regarding the time complexity (expressed in number of clock cycles), we can write the following relations:
- moving the 4m bytes of one row from the external memory, accessible through a port with W = 4P bytes, to the internal multi-ported memory requires

    N_mem = m/P + L_m

  cycles, as the internal memory can accept 4P bytes/cycle; L_m is the latency to access the external memory; if W·f_ck = BW_req > BW_ExtMem, the actual number of cycles will be larger than N_mem because the required bandwidth W·f_ck is larger than the available memory bandwidth;
- the number of cycles N_comp(P) required to compute the LP partial scalar products is given by Eq. (8);
- the sum of the LP values using P_A floating-point adders (with latency L_A) requires N_Sum(P_A) clock cycles, given by Eq. (9).
As the iterations of the loop are independent, the loop can be pipelined, at a coarse grain, with three pipeline stages: -load vector m i , -compute the LP partial scalar products s ij , -sum the LP s ij .
The duration of each stage of this "macro-pipeline" is given by

    T_stage = max(N_mem, N_comp(P), N_Sum(P_A)).

Since the loop is fully pipelined, n + 2 "macro-pipeline" stages are required to process n matrix lines and to compute n scalar products. The number of cycles necessary to compute the whole MVM, using p equal SP kernels, is given by

    N_MVM = (n/p + 2) · T_stage.

The sustained speed (operations/cycle) is given by the ratio

    S = 2nm / N_MVM.

Let us consider the case characterized by the following parameters:
- m = n = 8192 (m: size of the vector, n: number of scalar products to be computed);
- L_A = 3, L = 8 and L_m = 200 cycles (latencies of FP adder, MADD and HMC);
- P = 32 (spatial parallelism, i.e., number of MADD operations performed in parallel);
- P_A = 1 (one adder is used to sum the LP partial scalar products);
- p = 2 (kernel parallelism, i.e., number of equal kernels, each one performing the scalar product).

The previous values, when inserted in the expressions derived above, give N_mem = 456, N_comp = 264 and N_Sum = 280 cycles, so T_stage = 456, N_MVM ≈ 1.87 × 10⁶ cycles and S ≈ 72 operations/cycle, i.e., ≈10.8 GFlop/s at f_ck = 150 MHz. It is worth underlining that, when we ran on the µXComp board the test developed using the previous values, we measured an overall speed of 10.6 GFlop/s, in very good agreement with the performance foreseen by the model (see Table 1, reported in the section related to performance).
FPGA Implementation of the MVM Through the QuickPlay HLS
In this section, we analyze the actual FPGA implementation of the MVM algorithm, based on the considerations illustrated in the previous sections. To achieve the FPGA implementation, we use the Accelize HLS framework (QuickPlay with its embedded QuickCompiler HLS engine [6], formerly produced by Accelize and soon to be released as open-source software).
We refer to the architecture depicted in Fig. 1 and, in the following Fig. 2, we report the QuickPlay schematic representing that architecture, in the case of p = 4 SP kernels.
In the previous design, we can recognize the four VectorMatrixProduct kernels, each performing n/4 scalar products: they are connected to four different HMC memory banks. The first mySplit kernel is used to divide the input data coming from the input port into a) the command part (carrying the Id of the destination computing kernels) and b) the data part (the values of the matrix M to be stored in the memory and the vector a to be multiplied with the matrix), which is sent to the 4 computing kernels through a streamCopy kernel.
The last BuildResultVector kernel is used to concatenate the results produced by the four VectorMatrixProduct kernels, generating the result vector.
The Scalar Product
As seen in Sect. 6, the basic step to compute the scalar product between the l-th row of the matrix (m_l) and the input vector a is

    b_l = m_l · a = Σ_{i,j} s_ij

which requires the computation of the LP partial scalar products. The basic operation to implement these scalar products is the vector multiply-and-add pipelined function, which takes as input P pairs of single-precision floating-point variables and produces P floating-point values (in our implementation P = 32), performing the computation c_k += a_k · b_k for k = 1, …, P. The sketch of the QuickPlay C code to implement the vector pipelined MADD is the following:

    /*#qp pipeline */
    void MADD(float a1, …, float a32,
              float b1, …, float b32,
              float &c1, …, float &c32)
    {
        c1 += a1 * b1;
        …
        c32 += a32 * b32;
    }

Thanks to the /*#qp pipeline*/ directive, the previous function is synthesized as a pipelined function which performs 2P = 64 floating-point operations per cycle (P add and P mul).
From the synthesis reports of QuickCompiler we know that the previous function requires 7 cycles to produce the output results, so L_MADD = 7 cycles; we use L = 8 to include the cycle needed to read the data from the memory. The MADD is implemented through the instantiation of 32 fp adders and 32 fp multipliers.
FastMemory
The FastMemory modules are the memories used by QuickCompiler to map internal arrays. They are implemented on embedded RAM and are described by the tuple FastMemory = <W, G, N, DType, Size>:
- W is the width of the wide "external" port;
- G is the number of independent groups, each group being formed by N ports; usually G = 2 (as the embedded RAM modules are dual-ported);
- N is the number of typed ports in each of the G groups; each port presents a datum of size DType;
- DType is the size (in bytes) of the data type stored in the FastMemory (in QuickCompiler, each array is stored in a different FastMemory);
- Size is the size of the memory, expressed in bytes.
The large external port, whose width is W = N * DType, is used to transfer data to/from streams or to/from external memories through the qpReadStream(), qpWriteStream() and memcpy() functions. The bandwidth of read/write through this port is given by BW = W * f ck [Byte/s]; typical value is W = 128 [B], f ck = 150 MHz and BW = 19.2 GB/s. The latency to access external memories depends on the available memory controller; the HMC controller in the µXComp board is characterized by a latency L m = 200 cycles.
The G × N "internal" ports, whose size is DType, are accessed by the kernel. The internal BW, between the FastMemory and the computing kernel, is G times the BW of the external port. The latency to read a data from the FastMemory to the kernel is one cycle while writing a data from the kernel to the fast memory is accomplished in the same cycle.
Since W ≥ Dtype, each group of ports allows accessing N = W/DType elements of an array at the same clock cycle. As the memory is organized in word of W bytes, when the first port of a group is used it selects the memory word being accessed and it allows the other ports of its group to access the other array elements of the word.
The FastMemories a[m] and b[m] have been declared with the directive /*#qp ports 2 32*/, which specifies that the array, composed of m = 8K float elements, is stored in a memory which has G = 2 groups of N = 32 ports accessible in parallel, every port being four bytes wide (as they hold float data). Both the a and b FastMemories are characterized by the tuple <W = 128, G = 2, N = 32, DType = 4, Size = 32768>.
This means that up to 64 floats can be read/written in parallel in one clock cycle.
In one iteration of the loop, the LP s ij values are updated; values s ij are mapped onto the variables si_j (i = 0, .., 7 and j = 0, .., 31).
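The original listing is not reproduced here; a minimal C sketch of such a loop (our reconstruction, with illustrative names; s[i][j] stands for the variables si_j) is the following:

    /* Compute loop updating the LP = 8 x 32 partial scalar products.
       Illustrative reconstruction, not the original kernel listing. */
    void compute_partials(const float a[8192], const float b[8192],
                          float s[8][32])
    {
        for (int k = 0; k < 8192; k += 32) {   /* one group per cycle      */
            int i = (k / 32) % 8;              /* rotate over L = 8 groups */
            for (int j = 0; j < 32; j++)       /* the P = 32 MADDs         */
                s[i][j] += a[k + j] * b[k + j];
        }
    }

In hardware the inner loop is the 32-wide pipelined MADD function above, so one outer iteration is issued per clock cycle and each accumulator group is reused every L = 8 cycles.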
The previous loop-code is scheduled by the QuickCompiler HLS engine as described in Sect. 6, with the performance given by Eq. (8).
Looking at the QuickCompiler timing report, we see that the execution of the module implementing the previous code requires 264 clock cycles, in agreement with the value N_comp ≈ m/P + L = 8192/32 + 8 = 264 derived from the analysis.
After having computed the LP s_ij values, we must sum them together to obtain the result, i.e., we must implement the expression

    b_l = Σ_{i=0}^{7} Σ_{j=0}^{31} s_ij.

The previous formula is very simply computed through the following (not pipelined) function:

    float Sum(float s0_0, ..., float s7_31)
    {
        float result;
        result = s0_0 + s0_1 + ... + s0_31 + s1_0 + ... + s7_31;  // 256 operands
        return result;
    }

which is scheduled by QuickCompiler on one fp adder and requires 263 clock cycles to be executed, slightly better than the simplified model presented in Sect. 6, Eq. (9), which foresaw 280 clock cycles (the simplified model neglects the possibility of starting the computation of a new layer of sums in the adder-tree scheme before the previous layer has terminated).
Putting things together, the number of cycles required to compute the scalar product of two vectors of m = 8192 elements is therefore 264 + 263 = 527 clock cycles.
MVM with a Coarse-Grained Pipeline
We use the just-described scalar product module as the basic block to perform the MVM; the pseudo-code is the loop already reported in the section "MVM: Coarse-Grained Pipelining" (load one row of M, compute the LP partial scalar products s_ij, sum them into b_l). As the number of floating-point operations to compute the MVM is N_flop = 2nm, the speed expressed in number of operations per cycle is given by

    S = 2nm / N_cycles.

Considering that each iteration of the computing loop is independent of the others, it is natural to adopt a pipelined scheme to overlap the three operations (Fig. 3). The speed-up of the pipelined implementation with respect to the non-pipelined one, where N_seq = n(N_mem + N_comp + N_Sum) ≈ 8.1 × 10⁶ and N_pipe = (n + 2) · N_mem ≈ 3.7 × 10⁶, is given by

    S = N_seq / N_pipe = (8.1 × 10⁶) / (3.7 × 10⁶) = 2.2

from which we can derive the expected speed of the pipelined implementation, in ops/cycle, through the expression

    S_pipe = 2nm / N_pipe ≈ 36 operations/cycle.

The scheduling described in Fig. 3 is enforced by the QuickPlay HLS when compiling code organized in three sections (a sketch is given below):
- the preamble, to fill the pipeline modules; in this section we find the read (once and for all) of the a vector, the read of the first three rows of M, two computations of the partial scalar results, and one sum operation;
- the loop, which implements the steady state of the pipelined behavior; in this section we find three reads of rows of M, three computations of the partial scalar results, three sum operations, and the write to the output of three results, i.e., the manual unroll of the complete processing of three rows of M;
- the postamble, which empties the pipeline (no more matrix rows are read); in this phase the processing of the last three rows is completed. It is the dual of the preamble: we have no read, one computation of the partial scalar products, two sum operations, and three writes of the results.
To ensure the parallel execution of the different functions accessing the same array, we used three different buffers to store the rows of matrix M.
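A minimal sketch of this preamble/steady-state/postamble structure (our illustration: load(), compute() and sum_out() are placeholder stage functions, not QuickPlay API calls, and the output write is folded into sum_out(), whereas in the original design the writes form a further lagged stage):

    /* Hypothetical stage functions used for illustration only. */
    void load(float *buf, int row);     /* read one row of M from HMC    */
    void compute(const float *buf);     /* LP partial scalar products    */
    void sum_out(int row);              /* final sum and write of b[row] */

    void mvm_pipelined(int n, float *buf0, float *buf1, float *buf2)
    {
        /* preamble: fill the pipeline (n assumed a multiple of 3) */
        load(buf0, 0);
        load(buf1, 1); compute(buf0);
        load(buf2, 2); compute(buf1); sum_out(0);

        /* steady state: three rows per (manually unrolled) iteration */
        for (int l = 3; l < n; l += 3) {
            load(buf0, l);     compute(buf2); sum_out(l - 2);
            load(buf1, l + 1); compute(buf0); sum_out(l - 1);
            load(buf2, l + 2); compute(buf1); sum_out(l);
        }

        /* postamble: drain the pipeline, no more rows are read */
        compute(buf2); sum_out(n - 2);
        sum_out(n - 1);
    }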
The QuickPlay project which instantiates all the available 4 HMC memory modules, each connected to one Compute MatrixVectorProduct, is reported in Fig. 2.
Both the input and output ports have been mapped onto a PCIe interface.
The PCIe IP, HMC controller IP, clock and reset generator IP, as well as the copy IP and the FIFO IP are all part of the QuickPlay distribution and are instantiated by the tool in a transparent way (clock & reset generator, FIFO) or based on the configuration derived from the Visual Editor. The computing kernels are generated by QuickCompiler, the HLS engine of QuickPlay.
Performance Results
To show the performance achieved, in terms of both speed and resource usage, we report for the different designs developed (with 1, 2, 3 and 4 kernels, each performing the MVM on a portion of the matrix M):
- the sustained speed [GFlop/s] measured on actual runs on the MicroGate board (equipped with one ARRIA 10 FPGA and 4 HMC memory banks);
- the resources used (ALM, Adaptive Logic Modules; M20K embedded memory modules).
The design presents nearly linear scaling of the computational performance with the number of kernels.
To understand how resources are used, we report, for the largest design using four equal MatrixVectorProduct kernels, the percentage of the resources (ALM, M20K) used to implement:
- the PCIe interfacing IP: ALM 2.7%, M20K 0.7%;
- the MatrixVectorProduct kernels: ALM 5.3%, M20K 8.1% each kernel;
- the HMC memory controllers: ALM 7.0%, M20K 5.7% each controller;
- the other auxiliary modules (reset and clock generators, FIFOs, mySplit and BuildResultVector modules, …): ALM 18.6%, M20K 22.9%.
When the FPGA board was configured with the design using four SP kernels, the power consumption of the board was 40 W, resulting in an energy efficiency of 0.53 GFlop/s/W.
Although we consider comparisons with other implementations a weak way to evaluate a design, we report an alternative realization of the MVM to verify that the proposed solution is aligned with what the current technology allows.
The work presented in [14] reports the implementation of several BLAS routines, including the MVM. The performance of this routine is reported for a 1024 × 1024 matrix stored within the internal RAM, thus not requiring any communication with the DDR banks; with the vectorization width set to 64 (i.e., performing 64 multiply operations in parallel), a computing speed greater than 20 GFlop/s is reported (both in single and in double precision). While giving an idea of the performance achievable by the hardware in the FPGA, such a figure would require a significantly large I/O BW to be sustained for larger matrices (such as the 8K × 8K matrices used in our case): the proper buffering and macro-pipelining of the computation needed to sustain the traffic with the DDR memory is not addressed in [14], since this is not the core of the FBLAS implementation.
Future Developments
Looking at the Gantt chart reported in Fig. 3, we see that the transfer of one line of matrix M from memory lasts longer (456 cycles) than the computation of the partial scalar products (264 cycles) and the final sum (263 cycles). This happens because QuickPlay HLS does not support outstanding read operations, which would allow overlapping different memory transfers. If we could use outstanding memory reads, the latency of the next transfer could be overlapped with the actual data transfer of the current one, as in Fig. 4, where the time to transfer data from the HMC to the kernel is decomposed into the latency L(m_i) and the actual data transfer. It is easy to verify that the number of cycles needed to perform the computation shown in Fig. 4 is given by

    N ≈ (n + 2) · max(m/P, N_comp(P), N_Sum(P_A)) = (n + 2) · N_comp(P)

which corresponds to 2nm/N ≈ 62 operations/cycle, i.e., 9.3 GFlop/s when f_ck = 150 MHz, very close to the limit imposed by the memory BW.
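A sketch of the double buffering that outstanding reads would enable (our illustration; prefetch(), wait() and row_scalar_product() are hypothetical primitives, not QuickPlay API calls):

    /* With outstanding reads, the read request for row l+1 is issued
       before row l has been consumed, so the HMC latency L_m is paid
       only once instead of once per row.  Hypothetical primitives. */
    void  prefetch(float *buf, int row);        /* issue non-blocking read */
    void  wait(const float *buf);               /* block until data arrive */
    float row_scalar_product(const float *row); /* partials + final sum    */

    void mvm_outstanding(int n, float *buf[2], float *b)
    {
        prefetch(buf[0], 0);                    /* row 0 in flight         */
        for (int l = 0; l < n; l++) {
            int cur = l & 1, nxt = (l + 1) & 1;
            if (l + 1 < n)
                prefetch(buf[nxt], l + 1);      /* hide next row's latency */
            wait(buf[cur]);                     /* row l now available     */
            b[l] = row_scalar_product(buf[cur]);
        }
    }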
Conclusions
The activities performed to implement the MVM on FPGA through the QuickPlay HLS flow have been described. We started by formalizing the problem, describing how parallelism is a key factor in achieving the expected performance, and we showed how parallelism can be introduced at four different levels:
- (spatial) parallelization over the different rows of the matrix, computing in parallel the scalar products between the input vector and p different rows of the matrix M;
- parallelization (pipelining) of the basic scalar product, achieved thanks to the introduction of L independent partial scalar products to break the data dependence characterizing the classical accumulation scheme (L is the latency of the basic multiply-and-add pipelined operation);
- parallelization derived from iterating the previous decomposition, dividing each of the L sub-vectors into P smaller sub-vectors, thus performing in parallel P pipelined partial scalar products;
- coarse-grained pipelining, overlapping different phases of successive scalar products when multiplying the input vector by different rows of the matrix M (read from external memory, computation of the partial scalar products, sum of the partial results).
Models to compute the expected performance of the implemented algorithm have been presented and discussed. We found good agreement between the forecast and the actual performance, which demonstrates the good quality of the hardware generated by the HLS engine.
"year": 2020,
"sha1": "deee10966b69d74abeaa912e5375426678041ae9",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1007/978-3-030-50743-5_13.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "deee10966b69d74abeaa912e5375426678041ae9",
"s2fieldsofstudy": [
"Engineering",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
An epigenetic basis of inbreeding depression in maize
Inbreeding depression is linked to persistent epigenetic and gene expression changes, which are reversible by random mating.
INTRODUCTION
Charles R. Darwin documented inbreeding depression as growth disadvantages from self-fertilization compared to outcrossing in many plants (1). Prevailing hypotheses suggest that inbreeding depression results from the exposure of deleterious recessive alleles and/or loss of overdominant alleles due to increased homozygosity (2,3) or reduced recombination frequency in some regions (4). However, yield in maize inbred lines continues to decline even after 10 generations of self-pollination (5). Moreover, although autotetraploids achieve homozygosity at a slower pace than diploids, inbreeding depression levels appear to be similar between diploid and tetraploid plants (6). Thus, the genetic models cannot completely explain inbreeding depression.
In the absence of genetic diversity, epigenetic variation may contribute to inbreeding depression (7). In pincushion flower (Scabiosa columbaria), inbreeding increases DNA methylation, and treatments with a DNA methylation inhibitor in the inbreds can reverse some depressive growth phenotypes, including reduced photosynthetic efficiency and leaf number (8). Inbreeding depression in Chinook salmon correlates with methylation changes in some specific genes (9). In Arabidopsis thaliana, growth and developmental abnormalities gradually accumulate in the selfing progeny of DNA methylation mutants, including ddm1 (decrease in DNA methylation1) (10), met1 (methyltransferase1) (11), and ros1 (repressor of silencing1) (12). These data suggest association of DNA methylation with inbreeding depression, but the underlying mechanism remains unknown.
In plants, DNA methylation occurs in CG, CHG (H = A, T, or C), and CHH context (13) and can affect binding affinity of transcription factors (14). Teosinte branched1/cycloidea/proliferating cell factor (TCP) proteins control plant growth and development (15,16), and their consensus binding sites are called TCP-binding sites (TBS), which are predicted to affect chromatin accessibility (17)(18)(19). In this study, we found that inbreeding in maize is accompanied by CHH hypermethylation of genomic regions predominately across TBS that correlated with increased H3K9me2, H3K27me2, and H3K27me3 levels and reduced chromatin accessibility of the TCP-target genes involved in photosynthesis (chloroplast), energy metabolism (mitochondrion), and ribosome biosynthesis. Increased methylation levels near the TBS motifs in promoters tended to reduce the binding affinity of ZmTCP proteins and down-regulate the expression of TCP-target genes, leading to reduced growth vigor. Random mating could restore these processes. These results collectively provide molecular evidence for epigenetic regulation of inbreeding depression in maize.
Increased CHH methylation near genes during maize inbreeding depression
Maize F 1 hybrids between two inbreds 6958/LX0001 and 1160/3999 were self-pollinated to the sixth generation (S6), from which one single plant was self-pollinated to produce an S7 population (Fig. 1A). Thirty plants in the S7 generation were randomly selected and continuously self-pollinated using the single-seed descent without selection bias to generate an S11 population with 28 lines, excluding two lost during propagation (Fig. 1A). In the S10 generation, 25 lines were randomly mated to generate a random mating (M11) population (n = 75) ( Fig. 1A and fig. S1A). The seedlings showed growth reduction from S7, S9, to S11 populations in the third leaf area (Fig. 1B), seedling dry weight, and 100-grain weight ( fig. S1, B and C). However, growth phenotypes in the M11 were higher than those in the S11 population and fell between S7 and S9 populations ( Fig. 1B and fig. S1, B and C), ranging from low (L) to high (H) variation (fig. S1A). These data were consistent with maize yield reduction during inbreeding (5) and yield regain after mating with the paired inbred lines (20).
Using the MaizeSNP6K array analysis (21), we found that the percentage of genomic homozygosity among different S7 individuals was 99.6 to 99.7%, which was higher than the expected 99.2% (P < 0.05, χ2 test). Furthermore, the genetic uniformity from pairwise comparisons ranged from 99.6 to 99.9% between S7, S11, and M11 H and L lines (table S1). This suggests that phenotypic variation among these lines may result from changes beyond the primary DNA sequence, which may involve DNA methylation and chromatin modifications (22). Using MethylC sequencing (MethylC-seq), we determined DNA methylomes in the third leaf of S7 (three biological replicates), S11 (line 6), and M11 H (H6) and L (L6) lines, each with two biological replicates (Fig. 1A and table S2). The bisulfite conversion rates were >99.4% with a mean cytosine coverage of 5.2× or higher, while ~70% of cytosines were covered by at least one uniquely mapped read, and the cytosines present in both replicates were used for further analysis. We found that CG and CHG methylation levels were similar among S7, S11, and M11 H and L lines (fig. S1, D and E), whereas CHH methylation levels were higher in S11 than in S7 (similar to M11 L) lines but decreased in the M11 H line [P < 0.05, Fisher's least significant difference (LSD) test] (Fig. 1C and fig. S1, D and E). Similar trends of CHH methylation changes were also observed in another set of S11 (line 20) and M11 H20 and L20 lines during inbreeding and random mating (fig. S1F). We also confirmed high levels of genetic uniformity (>99.93%) and homozygosity (>99.96%) among all the lines using a model-based caller analysis of bisulfite single-nucleotide polymorphisms (SNPs) (23), which was in agreement with the SNP array results. Thus, CHH methylation is increased during inbreeding and decreased after random mating.
These CHH hypermethylation (S11 versus S7) and hypomethylation (M11 H versus S11) sites were distributed primarily across gene-rich regions on each chromosome (fig. S2). Moreover, the increased CHH methylation in S11 lines and the reduced CHH methylation in M11 H lines occurred mainly in the 5′ and 3′ regions flanking the coding sequence (Fig. 1D), whereas CG and CHG methylation levels across the genes were similar among S7, S11, and M11 H and L lines (fig. S3, A and B). CHH methylation changes were also observed in transposable elements (TEs) close to the genes (within 2-kb regions) (fig. S3, C to E), suggesting that CHH methylation changes around these genes and their adjacent TEs during inbreeding.
We identified 8879 CHH hyper-DMRs (differentially methylated regions) between S11 and S7 (table S3) using a threshold value of 5% (0.05) CHH methylation difference. This cutoff value for CHH methylation in plants can vary from 0.05 (24), 0.08 (25), 0.10 to twofold change (26) among different studies because of genomic backgrounds and methylation level differences. In this study, genomewide CHH methylation levels were ~2%; when the cutoff value was set to 0.05, it included the top 0.4% of all hypermethylated sites (fig. S4A), and most changes were greater than twofold ( fig. S4B). As expected, those DMRs at 0.05 included all the DMRs at 0.1 ( fig. S4C), while a subset of DMRs could be validated by another method (see below). Thus, we used the cutoff value of 5% for further analysis, unless noted otherwise.
Compared to CG and CHG methylation, the overall CHH methylation levels were relatively low and may exhibit variation between biological replicates and/or inbred lines. To test this, we compared Pearson's correlation coefficients and found clear separation between S7 and S11 DMRs, despite some variation among replicates within the S7 or S11 populations (fig. S4D). Specifically, the difference between the inbred lines (S7 and S11) was greater than the difference between biological replicates (P < 1 × 10⁻²⁰⁰, Wilcoxon rank sum test) (fig. S4E). We also compared S11 hyper-DMRs with variable DMRs between biological replicates in S7 and S11 and found little or no overlap between them (fig. S4F). Using methylation-sensitive quantitative polymerase chain reaction (qPCR) (27), we further tested 15 randomly selected S11 hyper-DMRs and confirmed 14 of the 15 S11 hyper-DMRs examined among three biological replicates, which were higher in S11 than in S7 lines (fig. S4G). During inbreeding, CHH methylation levels were consistently higher in another S11 line (S11_20) than in S7 lines (fig. S5A), and moreover, they gradually increased from S7, S9, S10, to S11 lines (P < 1 × 10⁻²⁰⁰, Wilcoxon rank sum test) (fig. S5B).

Fig. 1. (A) Genetic materials for studying inbreeding depression in maize. Progenies from a single S6 plant were propagated by single-seed descent to generate S7 to S11 populations. Mating between different S10 lines from the common S6 parent generated a random mating (M11) population, with H and L lines indicating high and low/similar levels of growth vigor relative to their S11 lines.
Fig. 2. Chromatin accessibility and histone modifications of CHH hyper-DMRs with TBS. (A) The MEME-motif 1 identified from CHH hyper-DMRs in S11 using MEME suites matched TCP and BMAL1 (Arntl) binding sites in TOMTOM analysis (34). (B) Distributions of CHH hyper-DMRs, nucleosome occupancy, MNase-hypersensitive (MNase HS) regions, and DNase I sensitivity across hyper-DMR-associated genes. (C) Distributions of nucleosome occupancy, chromatin accessibility, and histone modifications across regions from S11 hyper-DMRs to transcriptional start sites (TSS), including 1-kb upstream of the DMR and 1-kb downstream from TSS. The distances between DMRs and TSSs were normalized to the same length. The gray box and black dashed lines indicate DMR and TSS, respectively. (D) An example of the locus Zm00001d015377 shows TBS (red arrowhead), exons (black), introns (white), and 5′ and 3′ untranslated regions (UTRs; gray). In the locus, CHH hypermethylation of the DMR (gray box) correlated with increased levels of H3K27me2 and H3K27me3 and decreased levels of chromatin accessibility and gene expression in S11.

Consistently, these hyper-DMRs were located in regions that are MNase hypersensitive, DNase I sensitive, and void of nucleosomes (Fig. 2B). Coincidentally, these regions in maize are enriched with cis-regulatory elements and explain ~40% of heritable phenotypic variation in diverse complex traits (19,37). We predicted that these elements in the hyper-DMRs may affect chromatin accessibility during inbreeding.
To test the above prediction, we investigated chromatin accessibility using the assay for transposase-accessible chromatin sequencing (ATAC-seq) and ChIP-seq with antibodies against H3K27me2 and H3K27me3 (table S2), two repressive histone marks negatively associated with the chromatin accessibility. ATAC-seq analysis showed an obvious 200-bp periodicity of sequenced fragments, reproducible between biological replicates ( fig. S6B). Notably, CHH hyper-DMRs were localized in low-nucleosome density regions, where nucleosome occupancy and H3K27me2 and H3K27me3 levels were increased, but chromatin accessibility levels were decreased from S7 to S11 lines (Fig. 2C). For example, Zm00001d015377 encodes a pentatricopeptide repeat-containing protein and is one of three candidate loci for kernel row number (KRN) (39); TBS within the hyper-DMR had higher levels of CHH methylation and H3K27me2 and H3K27me3 modifications but lower levels of chromatin accessibility, correlating with lower Zm00001d015377 expression levels in S11 than in S7 lines (Fig. 2D).
CHH hypermethylation across TBS affects gene expression
At the genome-wide level, histone modifications and chromatin accessibility from S7 to S11 lines changed more in the genes associated with CHH hyper-DMRs than in those without DMRs (fig. S6C), suggesting that these changes may affect the expression of DMR-associated genes. Using RNA-seq data (table S2), we found that 763 genes were down-regulated in the S11 relative to S7 lines and enriched in CHH hyper-DMR-associated genes (P < 1 × 10⁻²⁷, Fisher's exact test) (Fig. 3A and table S4). These hyper-DMRs were associated with TBS motifs within 0- to 500-bp upstream of the TSS, which were overrepresented among down-regulated genes relative to up-regulated and all genes (P < 5 × 10⁻²⁹, Wilcoxon rank sum test) (Fig. 3). These CHH hyper-DMRs across TBS may reduce the expression of TCP-target genes. Using published RNA-seq data of the CMT-defective mutants zmet2 and zmet5 (table S2), we found that most genes with hyper-DMRs were expressed at higher levels with greater fold changes in all methylation-defective mutants tested than in WT (fig. S6D). The data suggest that CHH hypermethylation correlates with down-regulation of TCP-target genes. These 763 down-regulated genes were overrepresented in 15 gene ontology (GO) terms (P < 0.01, a modified Fisher's exact test) using DAVID analysis (40). These GO terms were classified into six groups based on semantic and functional descriptions, nine of which belonged to three groups related to mitochondrial, chloroplast, and ribosome functions, respectively (Fig. 3E). Similarly, a higher threshold value of 10% (relative to 5%) for identifying DMRs also retained these three groups of genes among DMR-associated and down-regulated genes (fig. S6E). Mitochondria and chloroplasts are critical for energy production and carbohydrate metabolism, while ribosomes are essential for protein biosynthesis, and expression of these genes is coregulated by TBS (41). TBS motifs were enriched in these organelle-related genes (fig. S6F) and adjacent to CHH hyper-DMRs (fig. S6G). Thus, CHH hyper-DMRs, accompanied by changes in chromatin accessibility and histone modifications at the TBS, may regulate expression of these genes. Notably, CHH hypermethylation and its effect on gene expression were unlikely to be related to residual heterozygosity. Using Bis-SNP software (23) to identify SNPs from MethylC-seq data, we found only 8 (of 8879) S11 CHH hyper-DMRs and 33 genes in residual heterozygous regions, but they did not overlap with down-regulated and DMR-associated genes (tables S3 and S4). These 33 genes were not likely involved in DNA methylation or leaf development (table S5).
Inbreeding-induced hypermethylation is reversed by random mating
If CHH hyper-DMRs and expression of their associated genes are induced by inbreeding, then they should be reversible by random mating. Compared to S11 lines, DMR methylation levels of randomly mated (M11) lines decreased more in S11 hyper-DMRs of the H (P < 1 × 10⁻²⁰⁰, Wilcoxon rank sum test) or L (P < 1 × 10⁻¹⁵⁰) lines than in non-DMRs (fig. S7A). More hyper-DMRs were demethylated (reversed) in M11 H (65%) than in M11 L (57%) lines, with a 40% overlap of reversed DMRs between H and L lines (Fig. 4A). Among them, 25% of hyper-DMRs were reversed only in the H line (reversed DMRs) (Fig. 4A) and were associated more with genes than other DMRs (fig. S7B). These reversed DMR-associated genes correlated with increased chromatin accessibility and gene expression levels in M11 H but not in M11 L lines (fig. S7C).
Consistently, expression levels for 352 (~49%) DMR-associated genes were also reversed in M11 H but not in M11 L lines (Fig. 4B). These expression reversal genes were also overrepresented in biological processes of mitochondria, chloroplasts, and ribosome biosynthesis (P < 0.05, a modified Fisher's exact test) (Fig. 4C), consistent with GO enrichment terms of DMR-associated genes after inbreeding (S11 lines) (Fig. 3E). In contrast, the genes with reversed methylation but not reversed expression in the M11 H line were related to four GO groups with no clear biological inference to growth depression ( Fig. 4B and fig. S7D). The data suggest that random mating can reverse DNA methylation and gene expression levels for a subset of CHH hyper-DMR-associated genes in inbred lines. In addition, we identified 1723 DMRs with hypomethylation (hypo-DMRs) in M11 H but not in M11 L lines, and 42% of these DMRs were not hypermethylated during inbreeding ( fig. S7E), suggesting that random mating may also induce hypomethylation in some genomic regions.
Biological functions of CHH hypermethylation during inbreeding
DNA methylation can quantitatively alter the binding affinity of transcription factors to their binding sites (14). Using electrophoretic mobility shift assay (EMSA), we determined whether CHH methylation affects the binding affinity of ZmTCP proteins to TBS motifs. Maize has 49 ZmTCP genes that belong to class I (proliferating cell factor, PCF, clade) and class II (42), while the class II genes are further divided into two subclasses, CIN and TB1/CYC ( fig. S8). On the basis of RNA-seq data from this study (table S2) and published data of B73 (43), TB1/CYC-related genes were not expressed in leaves. We found that six ZmTCP genes including three PCF-like (ZmPCF1, ZmTCP44, and ZmTCP21) and three CIN-like (Zm00001d018806, ZmTCP10, and ZmTCP22) were highly expressed in leaves ( fig. S8). We cloned each of them into a vector to express glutathione S-transferase (GST)-ZmTCP fusion proteins that were subsequently purified (Materials and Methods). EMSA showed that four GST-ZmTCP proteins (ZmPCF1, ZmTCP21, Zm00001d018806, and ZmTCP10) tested preferred to bind unmethylated TBS fragments ( Fig. 4D and fig. S9), whereas the other two (ZmTCP44 and ZmTCP22) preferred to bind methylated TBS fragments. These results indicate that ZmTCP proteins bind to TBS motifs and exhibit sensitivity to methylated TBS in vitro.
We further determined whether hypermethylation in S11 lines alters binding affinity of TBS using DNA affinity purification (DAP)-qPCR assays (44). GST-ZmTCP proteins were used to pull down genomic DNA by affinity in S7 and S11 (gS7 and gS11) lines, a fraction of which was amplified by PCR, resulting in amplified DNA (ampS7 and ampS11) with no methylation. We tested promoter fragments containing TBS motifs and hyper-DMRs in five TCP-target genes ( Fig. 4E and fig. S10). ZmPCF1 and ZmTCP21 bound to promoters of Zm00001d015449 and Zm00001d044353 (GST-ZmTCP/GST ≥ 5), and the other TCP factor Zm00001d018806 bound to promoters of Zm00001d044353 and Zm00001d036698. In agreement with the EMSA results, these six ZmTCP-TBS interactions showed that methylation on native genomic DNA can reduce the binding of ZmTCP proteins to TBS. For example, the binding affinity of GST-ZmPCF1 proteins to the native genomic DNA of Zm00001d015449 was lower than that to the PCR-amplified DNA (gS7 < ampS7, P < 0.05 and gS11 < ampS11, P < 0.0005, Tukey's test) (Fig. 4E). Three of six ZmTCP-TBS interactions showed lower binding affinity of the GST-ZmTCP proteins to native genomic targets in S11 than in S7 (gS11 < gS7, P < 0.05, Tukey's test) (Fig. 4E and fig. S10). Thus, methylation changes in TBS during inbreeding may alter binding affinity of ZmTCP transcription factors, depending on activators or repressors, to alter gene expression. At the genome-wide level, CHH hyper-DMRs near the TBS motifs correlated negatively with expression levels of their TCP-target genes involved in mitochondrial, chloroplast, and ribosome functions (P < 0.05, Wilcoxon rank sum test) (Fig. 5A). For example, Arabidopsis ADENOSINE DIMETHYL TRANSFERASE 1A (DIM1A) is predicted to mediate nuclear ribosome biogenesis, and in the dim1A mutant, leaf size and root length are reduced (45). Zm00001d015449 (ZmDIM1A) is a maize homolog of DIM1A. During inbreeding from S7 to S11, CHH methylation levels were increased near the two TBS motifs (Fig. 5B), while the binding affinity of ZmPCF1 was decreased (Fig. 4E), and ZmDIM1A expression was down-regulated (Fig. 5B). Moreover, 24-nt siRNAs accumulated near the TBS of ZmDIM1A, which prompted us to use the inverted repeat (IR) sequence (Fig. 5B) for virus-induced gene silencing (VIGS) (46), to modify DNA methylation by RdDM in its promoter region and analyze its biological function (47). Among nine independent VIGS-IR plants tested, we found that DNA methylation levels in TBS were increased ( Fig. 5C and fig. S11A), and expression levels of ZmDIM1A were reduced (Fig. 5D). As a result, these VIGS-IR seedlings grew smaller ( Fig. 5E and fig. S11B) with reduced height, leaf area, and root length (fig. S11C) compared to VIGS-GFP control plants. As a positive control for VIGS, the seedlings that expressed VIGS of ISPH/Zebra7 (Zb7) displayed yellowing leaf symptoms (Fig. 5E), as observed in the zb7 mutant (48). Furthermore, relative CHH methylation levels in the ZmDIM1A promoter ( fig. S11D) were negatively correlated with gene expression levels ( fig. S11E) and third-leaf areas (fig. S11F) from S7 (n = 30), S9 (n = 25), to S11 (n = 25) populations. Together, these results suggest that inbreeding-induced CHH hypermethylation of TBS can down-regulate expression of DIM1A and possibly other TCP-target genes that are associated with inbreeding depression in maize.
DISCUSSION
Fig. 5. (F) From S7 (left) to S11 (right) generations, increased DNA (CHH) methylation levels across TBS and other cis-regulatory elements during selfing (S11 lines) reduce the binding affinity of ZmTCP transcription factors, which is accompanied by reduced chromatin accessibility, increased nucleosome occupancy and H3K27me3 and H3K27me2 levels, and down-regulation of TCP-target genes involved in mitochondrial, chloroplast, and ribosome functions, thereby reducing growth vigor. The CHH methylation increase during inbreeding depends on H3K9me2 levels. Arrows (+++ and +) represent high and low expression levels, respectively. Random mating can reverse these effects (green arrow). Mt, mitochondrion; Cp, chloroplast; Rb, ribosome biosynthesis.

Results from our study can explain DNA methylation-mediated inbreeding depression in maize (Fig. 5F). Inbreeding increases CHH methylation across TBS and other motifs to reduce the binding affinity of ZmTCP transcription factors, which is accompanied by reduced chromatin accessibility and increased nucleosome occupancy and H3K27me3 and H3K27me2 modifications. As a result, expression of TCP-target genes involved in mitochondrial, chloroplast, and ribosome functions is down-regulated, leading to reduced growth vigor. Conversely, random mating within an inbreeding population may restore the growth vigor by reversing these epigenetic processes in subsets of these TCP-target genes. Notably, other transcription factors may also regulate expression of TCP-target genes involved in mitochondrion, chloroplast, and ribosome biosynthesis functions (41). This is probably because complex TBS motifs consist of binding sites for other transcription regulators, such as BMAL1 in mice, to remodel chromatin (35). In Arabidopsis, TCP proteins also target circadian clock regulators (22,49,50). For example, CIRCADIAN-CLOCK ASSOCIATED1 (CCA1) is a TCP-target gene, and its expression is regulated by CHH methylation across TBS, which correlates with the parent-of-origin effect on biomass heterosis (51). In addition, CCA1 and H3K4me3 modifications comprise a feedback loop for transcriptional regulation (52). In maize, overexpression of ZmCCA1b
(a homolog of Arabidopsis CCA1) reduces plant height (53). These results imply that CHH methylation of TBS may also regulate inbreeding depression through modulating the circadian clock and its output genes, many of which are involved in photosynthesis and starch metabolism (49). The link between TCP regulators and circadian-mediated growth reduction remains to be investigated. As circadian clock regulation is conserved across plant and animal kingdoms (22,54,55), we speculate that DNA methylation and chromatin changes in BMAL/RELA-target genes may also occur during inbreeding in mammals.
In maize, DNA methylation is found to correlate with paramutation of the r1 locus, encoding a helix-loop-helix transcription factor in the anthocyanin biosynthesis pathway (56), and with silencing of the Mutator (Mu) transposon in a metastable allele of the bronze 2 (bz2) locus, encoding a GST for anthocyanin biosynthesis (57). During inbreeding, metastable r1 expression (58) and Mu activity (59) are decreased, suggesting a possible connection between increased DNA methylation and reduced expression of r1 and Mu loci during inbreeding. We found that inbreeding induces hypermethylation across TEs, which may be silenced to counteract their expansion. However, methylated TEs may, in turn, silence proximal genes and disrupt expression of neighboring genes, as in Arabidopsis (60) and rice (61). Thus, TEs may affect changes in DNA methylation and expression of TE-associated genes in the context of inbreeding depression. Last, although sequence variation between high- and low-vigor random mating lines is very limited, inbred lines with high-quality resequenced genomes or a high-quality reference genome such as B73 should be used to precisely discern the relative roles of sequence variation and epigenetic modification in inbreeding depression. In summary, our findings provide molecular evidence that inbreeding is accompanied by reversible epigenetic regulation of TCP-target genes that mediate growth vigor in maize and possibly other plants.
Maize inbred populations
The F1 hybrids between two maize inbred lines, 6958/LX0001 and 1160/3999, were continuously self-pollinated for six generations (to S6) at the Shandong Academy of Agricultural Sciences, and the plants with stable growth traits were selected to produce an elite inbred line. A single individual from the S6 line was self-pollinated to produce an S7 population (Fig. 1A), 30 individuals of which were randomly selected for continuous selfing (single-seed descent) to obtain an S11 population with 28 inbred lines, while two lines were lost during propagation. In each generation, one single individual was randomly chosen from seven individuals planted to ensure that no selection bias was imposed.
In the S10 generation, mating between different lines from the common S6 parent was used to generate a random mating (M11) population (n = 75) (Fig. 1A and fig. S1A). For example, pollen from line 6 in the S10 generation was applied to the stigmas of line 6 itself and of other lines, respectively, to generate the S11_6 line (line 6) and its corresponding multiple M11 lines (fig. S1A). An M11 line was classified as a high-vigor (H) line due to significant increases (P < 0.05, Student's t test, n = 4 to 6) in leaf area, aerial dry weight, and 100-grain weight compared to its corresponding S11 line. In contrast, an M11 line with low or similar trait performance (lower means or P > 0.95) relative to the S11 line was classified as a low-vigor (L) line.
The populations including S7 (60 plants), S9 (n = 29, 4 to 6 plants per line), S11 (n = 28, 4 to 6 plants per line), and M11 (n = 75, 4 to 6 plants per line) were simultaneously grown in a growth chamber (day, 12 hours, 28°C; night, 12 hours, 22°C, 60% relative humidity) at the Nanjing Agricultural University. Plants were randomly grown in plug trays filled with pindstrup substrate (0 to 10 mm; Pindstrup, Denmark), and positions of plug trays were randomized. The third leaf area and aerial dry weight of each plant were examined at 2 weeks after planting. The third leaves were harvested and immediately frozen in liquid nitrogen for RNA and DNA preparations. Another set of these populations was simultaneously planted in the experimental field of the Nanjing Agricultural University to measure the 100-grain weight.
DNA and RNA extraction
Leaf tissue was flash-frozen in liquid nitrogen and disrupted at least three times using the Tissuelyser-48 at 30 Hz for 30 s (Qiagen, Germantown, MD). DNA and RNA were simultaneously isolated from aliquots of the same sample using a DNeasy plant mini kit (QIAGEN) and a plant DNA/RNA kit (Omega), respectively, following the manufacturer's recommendations. Unless noted otherwise, each genomic experiment was performed using two to three biological replicates as shown in table S2, and uniquely mapped or normalized data present in two or more replicates were used for further analyses.
MethylC-seq, mRNA-seq, and sRNA-seq libraries
For MethylC-seq library construction (62), total genomic DNA (~2 µg) was sonicated to ~300 bp using the Covaris M220 Focused-ultrasonicator (Woburn, MA). After end repair, an "A" base was added to the 3′ ends of the DNA fragments, which were then ligated with the methylated sequencing adapters. Ligation products were purified using VAHTS DNA Clean Beads (Vazyme, Nanjing, China), followed by bisulfite treatment using an EZ DNA Methylation-Gold kit (Zymo Research, Irvine, CA) according to the manufacturer's guidelines. Library enrichment was performed by PCR amplification, and products were purified and subjected to size selection. The mRNA-seq library was constructed as follows. mRNA was isolated from DNase I-treated total RNA using magnetic beads with oligo (dT). Then, complementary DNA (cDNA) was synthesized using the mRNA, fragmented into short fragments, as template. Double-stranded cDNA was purified for end preparation, single-nucleotide A addition, and ligation with adapters. After size selection, purified fragments were amplified for library enrichment. The sRNA-seq library was constructed following a published protocol (63). An aliquot of 20 µg of total RNA was subjected to 15% urea-polyacrylamide gel electrophoresis, and sRNAs in the size range of 18 to 30 nt were excised, ligated with adapters, and used for library construction.
Nuclei isolation and ATAC-seq library construction
We isolated nuclei as described previously (64). Briefly, 0.5 g of fresh leaves was ground into powder in liquid nitrogen, and the powder was transferred to 10 ml of ice-cold nuclei purification buffer (NPB). This suspension was filtered with a 70-µm cell strainer and then centrifuged at 1200g for 10 min at 4°C. The pellet after discarding the supernatant was resuspended with 1 ml of ice-cold NPB, and this suspension was centrifuged at 1200g for 10 min at 4°C. After removal of supernatant, the pellet was resuspended in 1 ml of ice-cold NEB2. The suspension was centrifuged at 12,000g for 10 min at 4°C. After the supernatant was carefully removed, the pellet was resuspended thoroughly in 300 µl of ice-cold NEB3. The suspension was layered on top of 300 µl of NEB3 in a new 1.5-ml tube, and the tube was centrifuged at 16,000g for 10 min at 4°C. The nuclei pellet after removing the supernatant was resuspended in 1 ml of ice-cold NPB and kept on ice. A hemocytometer (catalog no. 3100, Hausser Scientific, Horsham, PA) was used to count and determine the total yield of the nuclei. Approximately 50,000 purified nuclei were used in each 50-µl transposase integration reaction to construct the ATAC-seq library using the TransNGS Tn5 DNA Library Prep Kit for Illumina (TransGen Biotech, Beijing, China) according to the manufacturer's instructions. After amplification for 12 PCR cycles, the products were purified and size-selected using VAHTS DNA Clean Beads (Vazyme, Nanjing, China).
ChIP and ChIP-seq library construction
ChIP experiments were performed as previously described (65), with minor modifications. Briefly, the third leaves from 2-week-old seedlings were cross-linked in 1% formaldehyde under vacuum two times for 10 min, after which the cross-linked leaves were homogenized in liquid nitrogen. Chromatin was extracted and sheared to 200-to 500-bp fragments by sonication (Covaris M220 Focusedultrasonicator, Woburn, MA). Antibodies against histone H3K9me2 (D85B4, Cell Signaling Technology, Danvers, MA), H3K27me2 (Ab24684, Abcam, Cambridge, UK), and H3K27me3 (Ab6002, Abcam) were added to the sheared chromatin and incubated overnight at 4°C with gentle rotation. Immunoprecipitated DNA was extracted using Protein A/G Magnetic Beads (Thermo Fisher Scientific, Waltham, MA), reversed cross-links at 65°C overnight, and purified using the QIAquick PCR purification Kit (QIAGEN, Germantown, MD). For ChIP-seq, all libraries were constructed using the NEBNext Ultra II DNA Library Prep Kit for Illumina [New England Biolabs (NEB), Ipswich, MA] according to the manufacturer's instructions. After amplification for 16 PCR cycles, the products were purified and size-selected to obtain the appropriate size for sequencing.
MethylC-seq data analysis
After removing adapter sequences and low-quality reads from raw reads and quality controlling using FastQC (Babraham Institute, Cambridge, UK), clean reads were aligned to the maize B73 reference genome version 4 (B73_RefGen_v4) using Bismark with default parameters (-X 1000). In addition, the unmapped reads were trimmed according to the reports from FastQC using cutadapt (read_1: -u 10 -u -20; read_2: -u 10 -u -50) and then realigned to the genome. Only uniquely mapped reads were retained for further analysis. The deduplication was carried out using deduplicate_bismark, a script of the Bismark package. Methylated cytosines were extracted from Bismark bam files using bismark_methylation_extractor, and some bases were ignored according to the M-bias reports (--comprehensive --ignore 7 --ignore_r2 7 --ignore_3prime 1; and --comprehensive --ignore_3prime 3 --ignore_r2 2). Only the cytosines covered by at least three reads in all compared materials were used to determine mean methylation levels and considered for further analysis.
We used a strategy of fixed C count for creating windows to identify the DMRs for CHH methylation. In brief, we first created sliding windows every 25 CHHs along the maize genome. The mean and median of the sliding window length were 200 and 89 bp, respectively. Windows longer than 400 bp (5.38%) were further split through merging CHHs with distances less than 30 bp. Windows with coverage outliers were removed based on a BoxWhisker Outlier Filter (median + the interquartile range × 15). The mean methylation level was calculated for each window containing at least 12 overlapping CHHs in all compared materials. Analysis of variance (ANOVA) tests (P < 0.05) and differences of methylation levels (cutoff value of >0.05) were then performed for the windows to identify DMRs. Similarly, Kruskal-Wallis tests were performed to identify the DMRs between the biological replicates (26). DMRs between S11 and S7 lines were validated by methylation-sensitive qPCR (27). Briefly, genomic DNA (~1 µg) was digested with NlaIII (NEB, Ipswich, MA) or glycerol (mock treatment). NlaIII cuts CATG and is inhibited when the cytosine is methylated (11). The difference between the mock Ct and digest Ct was calculated to estimate relative CHH methylation levels.
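The windowing step lends itself to a compact sketch (ours; simplified: the coverage-outlier filter, the 12-CHH overlap requirement and the ANOVA test are omitted, and over-long windows are simply skipped instead of re-split):

    /* Fixed-C-count windows for CHH DMR calling: 25 consecutive CHH
       sites per window; windows longer than 400 bp would be re-split
       in the real pipeline by merging CHHs closer than 30 bp. */
    #include <stdio.h>

    #define WIN_C     25      /* CHH sites per window          */
    #define MAX_LEN  400      /* maximum window length (bp)    */
    #define MIN_DIFF 0.05     /* methylation-difference cutoff */

    int is_hyper_dmr(double mC_s7, double mC_s11)
    {
        /* candidate hyper-DMR: S11 exceeds S7 by more than 5% */
        return (mC_s11 - mC_s7) > MIN_DIFF;
    }

    void make_windows(const long *pos, int n_chh)
    {
        for (int i = 0; i + WIN_C <= n_chh; i += WIN_C) {
            long start = pos[i], end = pos[i + WIN_C - 1];
            if (end - start > MAX_LEN)
                continue;             /* re-split in the real pipeline */
            printf("%ld\t%ld\n", start, end);
        }
    }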
SNP calling from MethylC-seq data
The reads after Bismark deduplication were used to identify SNPs for all plants using the Bis-SNP software tool as described previously (23). Briefly, the EMIT_ALL_CONFIDENT_SITES mode of Bis-SNP (-T BisulfiteGenotyper -stand_call_conf 10 -mbq 0 -trim5 7 -trim3 3) was used to obtain all sites above the confidence threshold to calculate the genomic homozygosity and the genetic identity between plants compared in pairs. The numbers of visited sites and confident sites were 2,135,082,794 and 1,273,763,252, respectively. In addition, we used the DEFAULT_FOR_TCGA mode to identify all SNP sites above the threshold and then used the VCFpostprocessWalker command with the default threshold to filter out false positives. SNPs with genotype quality greater than 20 were classified as homozygous or heterozygous. Residual heterozygous regions were identified by merging continuously heterozygous SNPs within a region of 500 kb (66).
ATAC-seq data analysis
ATAC-seq reads were stripped of 3′-end adapters and low-quality reads were removed using Trim Galore (--stringency 3 --trim-n --max_n 7). Clean reads were then aligned to B73_RefGen_v4 using Bowtie2 with the parameters (-k 21 -X 1000 -N 1 --no-mixed --no-discordant). Mapped reads were filtered to obtain uniquely mapped reads and multimapped reads (1 < aligned number ≤ 20 and mapping quality ≥10). Multimapped reads were weighted and assigned to all locations according to the frequencies of uniquely mapped reads, following the previously described unique-weighting method (68). After assignment of multimapped reads, the reads mapped to either the chloroplast or the mitochondrion (<20.62%) were removed, and the remaining reads were used for further analysis.
To generate the nucleosome occupancy profiles, fragments shorter than 100 bp were considered nucleosome-free, and fragments of 146 to 247 bp, 315 to 473 bp, and 558 to 615 bp were considered to be mono-, di-, and trinucleosomes, respectively (fig. S6B) (69). Di-/trinucleosome reads were split into two/three reads. Reads were analyzed using DANPOS-2.2.2 with the parameters (-m 1 -a 1 -u 0 --mifrsz 0) (70). Nucleosome-free reads were negatively weighted and used as background. In addition, the 5′ ends of ATAC-seq reads were used to calculate reads per kilobase per million mapped reads as a measure of chromatin accessibility.
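For illustration, the fragment-length classification above can be written as a small C helper (ours; the size ranges are taken directly from the text):

    /* Classify ATAC-seq fragments by length into nucleosome classes. */
    typedef enum { NUC_FREE, MONO, DI, TRI, UNCLASSIFIED } NucClass;

    NucClass classify_fragment(int len)
    {
        if (len < 100)                return NUC_FREE; /* nucleosome-free  */
        if (len >= 146 && len <= 247) return MONO;     /* mononucleosome   */
        if (len >= 315 && len <= 473) return DI;       /* split into two   */
        if (len >= 558 && len <= 615) return TRI;      /* split into three */
        return UNCLASSIFIED;
    }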
ChIP-seq, MNase-seq, and DNase-seq data analysis

Raw sequencing reads were quality-controlled and adapter sequences were removed. Clean reads were then aligned to B73_RefGen_v4 using Bowtie2. Mapped reads were filtered to obtain reads with a mapping quality of ≥20. Histone modifications and nucleosome occupancy were calculated as fragments per kilobase per million mapped fragments. DNase sensitivity indicates reads per kilobase per million mapped reads, calculated from the 5′ ends of DNase-seq reads. MNase HS regions were identified by subtracting the value of the heavy digest from that of the light digest (cutoff value of ≥6), as previously described (37).
Phylogenetic analysis
The MEGA 6 software was used to construct the midpoint-rooted neighbor-joining tree using ZmTCP amino acid sequences aligned by ClustalX. The bootstrap values based on 1000 replicates were shown at the branch points. Using the Poisson correction method, the evolutionary distances were calculated on the basis of the number of amino acid substitutions per site.
Expression and purification of recombinant ZmTCP proteins
The native coding sequences of ZmTCP transcription factors were amplified and cloned into the SalI site of the pGEX-4T-3 vector. The recombinant vectors were transformed into Escherichia coli BL21 cells, and the expression of the recombinant GST-tagged proteins was induced by 1 mM isopropyl-β-D-thiogalactopyranoside (IPTG). The GST-ZmTCP fusion proteins were purified by using Pierce Glutathione Magnetic Agarose Beads (Thermo Fisher Scientific, Waltham, MA) according to the manufacturer's instructions. Last, the GST-ZmTCP proteins were quantified by NanoDrop 2000 (Thermo Fisher Scientific) and SDS-polyacrylamide gel electrophoresis analysis. All proteins were stored at −80°C before assays.
Electrophoretic mobility shift assay

EMSA was performed using the LightShift Chemiluminescent EMSA Kit (Thermo Fisher Scientific, Waltham, MA). For each reaction, the biotin-labeled unmethylated DNA probes (20 fmol) containing TBS motifs were mixed with the purified GST-tagged ZmTCP proteins (2 μg) in the reaction buffer [1× LightShift binding buffer, 2.5% glycerol, 5 mM MgCl2, poly-dIdC (50 ng/μl), and 0.05% NP-40]. Excess unlabeled competitor (unmethylated probes, methylated probes, unmethylated mutant probes, or methylated mutant probes) was added for the competition assays. The binding mixture was incubated at room temperature for 10 min before the biotin-labeled unmethylated probes were added, and then incubated for another 20 min. The binding-reaction products were resolved by electrophoresis on a 6.5% nondenaturing polyacrylamide gel and then transferred onto a nylon membrane presoaked with 0.5× tris-borate ethylenediaminetetraacetic acid (TBE) buffer. The membrane was photographed with the ChemiDoc Touch Imaging System (Bio-Rad Laboratories, Hercules, CA).
DAP-qPCR assay
DAP experiments were performed as described previously (44), with minor modifications. Briefly, we used the NEBNext Ultra II DNA Library Prep Kit for Illumina (NEB, Ipswich, MA) to generate genomic DNA and PCR-amplified genomic DNA samples for affinity purification with the GST-ZmTCP proteins. Genomic DNA (~5 μg) extracted from maize leaves was sonicated to 200- to 500-bp fragments, ligated with adapters, and purified with VAHTS DNA Clean Beads (Vazyme, Nanjing, China) to obtain the genomic DNA sample. To generate the PCR-amplified genomic DNA sample, 15 ng of purified DNA was amplified by PCR (11 cycles) to remove methylation. Next, the GST-ZmTCP proteins immobilized on Pierce Glutathione Magnetic Agarose Beads (Thermo Fisher Scientific, Waltham, MA) were incubated with the above DNA samples, respectively. The purified GST protein was used as a negative control. The eluted and recovered DNA was used for quantitative PCR assays using SYBR Green Realtime PCR Master Mix (Toyobo, Osaka, Japan).
Virus-induced gene silencing
We performed VIGS experiments as described previously (46). The IR (table S6) designed according to the promoter sequence of Zm00001d015449 was synthesized by GenScript company and cloned into the cucumber mosaic virus (CMV) VIGS (pCMV201) vector to construct the VIGS-IR plasmid. Plasmids pCMV101, pCMV301, VIGS-IR, VIGS-GFP, and VIGS-ISPH were transformed into the Agrobacterium tumefaciens strain EHA105 by electroporation. Equal amounts of different bacterial infection solutions (pCMV101 and pCMV301 and VIGS-IR or VIGS-GFP or VIGS-ISPH) were infiltrated into leaves of Nicotiana benthamiana. After 4 days, the infiltrated leaves were harvested and ground in 0.1 M phosphate buffer (pH 7.0), and the mixture was centrifuged at 3000g for 3 min at 4°C. Maize seeds were soaked in water at room temperature for 40 min and then inoculated with the supernatant containing virus crude extract using the vascular puncture inoculation method (46). The inoculated maize seeds were kept at 25°C in the dark for 2 days and then transferred into pots, and the seedlings were grown in the growth chamber under the light/dark cycle of 16/8 hours at 20°/18°C (day/night) with 60% humidity. VIGS-GFP plants and VIGS-ISPH plants were used as the control and the positive control for infection, respectively.
Reverse transcriptase qPCR
Reverse transcriptase qPCR (RT-qPCR) assays were carried out using the SYBR Green Realtime PCR Master Mix (Toyobo) in a Bio-Rad CFX96 machine (Bio-Rad Laboratories, Hercules, CA) as previously described (71). Gene expression levels were analyzed using the relative quantification (ΔΔCt) method, and maize 18S rRNA was used as the internal control. Primers used for RT-qPCR are provided in table S6.
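Assuming the standard 2^−ΔΔCt form of the relative quantification method, a minimal Python sketch (ours; sample Ct values are illustrative):

    def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
        """Fold change of a target gene in a treated sample relative to a
        control sample, each normalized to the 18S rRNA reference gene."""
        dct_sample = ct_target - ct_ref             # delta-Ct, treated
        dct_control = ct_target_ctrl - ct_ref_ctrl  # delta-Ct, control
        ddct = dct_sample - dct_control             # delta-delta-Ct
        return 2.0 ** (-ddct)

    # Example: target amplifies 2 cycles earlier in VIGS-IR than in VIGS-GFP
    # at equal reference Ct -> ~4-fold up-regulation.
    print(relative_expression(24.0, 10.0, 26.0, 10.0))  # 4.0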
Bisulfite sequencing for individual genes
Genomic DNA was extracted from 3-week-old maize seedlings using the DNeasy Plant Mini Kit (QIAGEN). About 500 ng of genomic DNA was used for bisulfite conversion using the EZ DNA Methylation-Gold kit (Zymo Research, Irvine, CA) according to the manufacturer's instructions. Bisulfite-treated DNA was then amplified by PCR using Zymo Taq DNA polymerase (Zymo Research) and primers (table S6) targeting the Zm00001d015449 promoter. The purified amplicons were cloned into pGEM-T vectors (Promega, Madison, WI). For VIGS-GFP and VIGS-IR plants, at least 14 clones were randomly chosen for sequencing. Bisulfite DNA sequences and the levels of DNA methylation at the Zm00001d015449 promoter were analyzed using the online Kismeth program (http://katahdin.mssm.edu/kismeth/revpage.pl) (72). | 2021-08-29T06:16:18.864Z | 2021-08-01T00:00:00.000 | {
"year": 2021,
"sha1": "afbecac62c0e72b5c215526abca4213eccde9b48",
"oa_license": "CCBYNC",
"oa_url": "https://www.science.org/doi/pdf/10.1126/sciadv.abg5442?download=true",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3335c89ad3d666c4dc3e1feb4d2a5d09d62e6de5",
"s2fieldsofstudy": [
"Biology",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
53405712 | pes2o/s2orc | v3-fos-license | The Planck Power – A Numerical Coincidence or a Fundamental Number in Cosmology?
Abstract

The Planck system of units has been recognized as the most fundamental such system in physics ever since Dr. Max Planck first derived it in 1899. The Planck system of units in general, and especially the Planck power in particular, suggest a simple and interesting cosmological model. Perhaps this model may at least to some degree represent the real Universe; even if it does not, it seems interesting conceptually. The Planck power equals the Planck energy divided by the Planck time, or equivalently the Planck mass times c^2 divided by the Planck time. We show that the nongravitational mass-energy of our local region (L-region) of the Universe is, at least approximately, to within a numerical factor on the order of 2, equal to the Planck power times the elapsed cosmic time since the Big Bang. This result is shown to be consistent, to within a numerical factor on the order of 2, with results obtained via alternative derivations. We justify employing primarily L-regions within an observer's cosmological event horizon, rather than O-regions (observable regions) within an observer's cosmological particle horizon. Perhaps this might imply that as nongravitational mass-energy leaves the cosmological event horizon of our L-region via the Hubble flow, it is replaced at the rate of the Planck power and at the expense of negative gravitational energy. Thus the total mass-energy of our L-region, and likewise of all L-regions, is conserved at the value zero. Some questions concerning the Second Law of Thermodynamics and possible thwarting of the heat death of the Universe predicted thereby, whether via Planck-power input or via some other agency, are discussed. We then give a brief review of the Multiverse, and of some alternative viewpoints.
Keywords: L-regions (local regions), O-regions (observable regions), comoving frame, Second Law of Thermodynamics, heat death, Planck power versus heat death, low-entropy boundary conditions versus heat death, kinetic versus thermodynamic control, kinetic control versus heat death, minimal Boltzmann brains, extraordinary observers.
Introduction
In Sect. 2 we define and distinguish between local regions (L-regions) within an observer's cosmological event horizon and observable regions (O-regions) within an observer's cosmological particle horizon, of the Universe, and justify primarily employing L-regions. In Sect. 3 we discuss the importance of the Planck system of units, which has been recognized as the most fundamental such system in physics ever since Dr. Max Planck first derived it in 1899. We then consider a possibly important role of the Planck system of units, especially of the Planck power, in cosmology. Perhaps the ensuing cosmological model may at least to some degree represent the real Universe; even if it does not, it seems interesting conceptually. The Planck power equals the Planck energy divided by the Planck time, or equivalently the Planck mass times c^2 divided by the Planck time. In Sect. 3 we show that the nongravitational mass-energy of our local region (L-region) of the Universe is, at least approximately, to within a numerical factor on the order of 2, equal to the Planck power times the elapsed cosmic time since the Big Bang. This result is shown to be consistent, to within a numerical factor on the order of 2, with results obtained via alternative derivations. We consider the possible inference that as nongravitational mass-energy leaves the cosmological event horizon of our L-region via the Hubble flow, it is replaced at the rate of the Planck power and at the expense of negative gravitational energy. The problem of consistency with astronomical and astrophysical observations is discussed in Sect. 4. In Sects. 3 and 4 we consider only nonoscillating cosmologies (except for brief parenthetical mentions of oscillating ones in the second-to-last paragraph of Sect. 4). In Sects. 5-8 we consider both nonoscillating and oscillating cosmologies. Some questions concerning the Second Law of Thermodynamics and possible thwarting of the heat death predicted thereby are discussed with respect to the Planck power in Sect. 4, with respect to cosmology in general and minimal Boltzmann brains in particular in Sect. 5, with respect to inflation in Sect. 6, and with respect to kinetic versus thermodynamic control in Sects. 4 and 7. (We discuss possible thwarting of the heat death with respect to kinetic versus thermodynamic control mainly as regards the Planck power in particular in Sect. 4 but more generally in Sect. 7.) A brief review concerning the Multiverse, and some alternative viewpoints, are given in Sect. 8.
L-regions and O-regions
In this chapter we will consider primarily local regions or L-regions of the Universe rather than observable regions or O-regions [1] thereof, although we will also consider O-regions as necessary [1].¹ We now define and distinguish between L-regions and O-regions, and justify primarily employing L-regions, as opposed to O-regions only occasionally, as necessary [1]. Let R be the radial ruler distance or proper distance [2] to the boundary of our L-region, that is to our cosmological event horizon [3], where the Hubble flow is at c, the speed of light in vacuum; beyond this horizon it exceeds c. Thus if the Hubble constant H(τ) does not vary with cosmic time [4,5] τ and is always equal to its present value H_0, then light emitted at the present cosmic time [4,5] τ_0 by sources beyond our cosmological event horizon [2,3] and hence beyond our L-region can never reach us. Likewise, light emitted at the present cosmic time [4,5] τ_0 by us can never reach them. Also, if the Hubble constant H(τ) does not vary with cosmic time [4,5] τ and is always equal to its present value H_0, then our cosmological event horizon [2,3] is always at fixed ruler distance R_0 = c/H_0 away and hence our L-region of the Universe is always of fixed size. [We denote the value of a given quantity Q today (at the present cosmic time τ_0) by Q_0 and its value at general cosmic time τ by Q(τ).] Light emitted at past cosmic times τ < τ_0 (but not too far in the past) by sources now beyond (but not too far beyond) our cosmological event horizon [R_0 = c/H_0 always if H(τ) = H_0 always] and hence beyond our L-region but still within our O-region [1-3] can reach us, because when this light was emitted these sources were still within our L-region. Likewise, light emitted in the past τ < τ_0 (but not too far in the past) by us can reach them. The boundary of our O-region of the Universe is our cosmological particle horizon [1-3]. The boundary of our O-region (our cosmological particle horizon) is further away than the boundary of our L-region (our cosmological event horizon) [1-3]. If H(τ) = H_0 always, then, writing R′ for the ruler distance to our particle horizon, not only is the boundary of our O-region currently at ruler distance R′_0 > R_0 = c/H_0 but R′(τ) gets further away with increasing cosmic time τ [4,5], while the boundary of our L-region R(τ) always remains fixed at R_0 = c/H_0. The fixed size of our (or any) L-region given constant H(τ) = H_0 simplifies our discussions. More importantly, all parts of our (or any) L-region are always in causal contact, while outer parts of our (or any) O-region beyond the limit of the corresponding L-region were but no longer are in causal contact. Hence we will primarily employ L-regions rather than O-regions.
Hubble flow exceeding c may seem to violate Special Relativity. But General Relativity, not Special Relativity, is applicable in cosmology [6]. Special Relativity is applicable only within local inertial frames, and any given observer is not, indeed cannot be, in the same local inertial frame as this observer's cosmological event horizon [1-6] (and even less so as this observer's cosmological particle horizon [1-6]). Thus Hubble flow exceeding c does not violate General Relativity [6]. It should also be noted that the Hubble flow is motion with space rather than through space: every object in the Hubble flow is at rest in the comoving frame [7]. An object's motion, if any, relative to the comoving frame [7] is its peculiar motion.² At the 27th Texas Symposium on Relativistic Astrophysics [8], values of the Hubble constant today H_0 from the upper 60s to the low 70s (km/s)/Mpc were given [8], so H_0 ≈ 70 (km/s)/Mpc splits the difference [8]. These values were essentially unchanged from those obtained shortly preceding this Symposium [9,10]. The Planck 2015 results [11] state a value of H_0 = 68 (km/s)/Mpc [11], but this Planck 2015 work [11] also cites other recent results that range from the low 60s (km/s)/Mpc to the low 70s (km/s)/Mpc. Thus the value H_0 = 68 (km/s)/Mpc [11] not only is the most reliable and most recent one as of this writing, but it also splits the difference of the range of other recent results cited in this Planck 2015 work [11]. Hence we take the Hubble constant today to be H_0 ≐ 68 (km/s)/Mpc ≈ 2.2 × 10^−18 (km/s)/km = 2.2 × 10^−18 s^−1 [11].³

² (Re: Entry [7], Ref. [2]) An observer in the comoving frame (ideally in intergalactic space as far removed as possible from local gravitational fields such as those of galaxies, stars, etc.) sees the 2.7 K cosmic background radiation as isotropic (apart from fluctuations of fractional magnitude F ≈ 10^−5, which can be "smoothed out" via, say, computer processing to yield a uniform background). But even Earth is a fairly good approximation to the comoving frame: Earth's peculiar motion ≈ 380 km/s ≪ c (see p. 352 of Ref. [2]) with respect to the cosmic background radiation is fairly slow, and local gravitational fields are fairly weak (v_escape ≪ c).
The Planck power in cosmology
The Planck system of units has been recognized as the most fundamental such system in physics ever since Dr. Max Planck first derived it in 1899 [12-15]. It is based on Planck's reduced constant ℏ ≡ h/2π (or Planck's original constant h), the speed of light in vacuum c, and the universal gravitational constant G, with Boltzmann's constant k usually also included. These four fundamental physical constants are seen by everything, corresponding to the Planck system of units encompassing universal domain. By contrast, for example, the fundamental electric charge is seen only by electrically-charged particles.⁴ The Planck system of units in general, and especially the Planck power in particular, suggest a simple and interesting cosmological model. Perhaps this model may at least to some degree represent the real Universe; even if it does not, it seems interesting conceptually.
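For concreteness, a quick numerical check (ours) of the Planck power introduced in the next paragraph, P_Planck = c^5/G, assuming standard SI values of the constants:

    c = 2.99792458e8   # speed of light in vacuum, m/s (exact by definition)
    G = 6.674e-11      # universal gravitational constant, m^3 kg^-1 s^-2

    P_planck = c**5 / G
    print(f"P_Planck     = {P_planck:.3e} W")            # ~3.63e52 W
    print(f"P_Planck/c^2 = {P_planck / c**2:.3e} kg/s")  # ~4.04e35 kg/s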
Multiply the Planck mass $m_{\rm Planck} = (\hbar c/G)^{1/2}$ by $c^2$ to obtain the Planck energy $E_{\rm Planck} = (\hbar c^5/G)^{1/2}$ [12-15]. Divide the Planck energy by the Planck time $t_{\rm Planck} = (\hbar G/c^5)^{1/2}$ to obtain the Planck power $P_{\rm Planck} = c^5/G \doteq 3.64 \times 10^{52}$ W $\Longleftrightarrow P_{\rm Planck}/c^2 = c^3/G \doteq 4.05 \times 10^{35}$ kg/s [12-15]. [The dot-equal sign ($\doteq$) means "very nearly equal to."] Note that, unlike the Planck length, mass, energy, time, and temperature $T_{\rm Planck} = E_{\rm Planck}/k = (\hbar c^5/G k^2)^{1/2}$, the Planck power $P_{\rm Planck} = c^5/G$ does not contain $\hbar$. Multiplying the Planck mass-input rate $P_{\rm Planck}/c^2$ by the elapsed cosmic time since the Big Bang $\tau_0 \approx 4.5 \times 10^{17}$ s yields

$M_0 = (P_{\rm Planck}/c^2)\,\tau_0 \approx 4.05 \times 10^{35}\ {\rm kg/s} \times 4.5 \times 10^{17}\ {\rm s} \approx 1.8 \times 10^{53}\ {\rm kg} \qquad (1)$

for the mass of our L-region (not considering the negative gravitational energy). But $M_0 \approx 1.8 \times 10^{53}$ kg is in order-of-magnitude agreement with an estimate of $M_0$ assuming that the mass-energy density of our L-region of the Universe [1,3] equals the critical density $\rho_{\rm crit}$ [16], as seems to be the case if not exactly then at least to within a very close approximation. The critical density $\rho_{\rm crit}$ corresponds to the borderline between ever-expanding and oscillating Universes given vanishing cosmological constant, i.e., Λ = 0, and to spacetime being flat, and hence space Euclidean, on the largest scales, i.e., to the spatial curvature index being 0 rather than +1 or −1, given any value of Λ [16-20].⁵ The critical density is

$\rho_{\rm crit} = \dfrac{3H_0^2}{8\pi G}. \qquad (2)$

Applying the most recent and best result for $H_0$, namely $H_0 \approx 68$ (km/s)/Mpc $\approx 2.2 \times 10^{-18}$ s$^{-1}$ [11], yields as an estimate of $M_0$

$M_0 \approx \rho_{\rm crit}\,\dfrac{4\pi R_0^3}{3} \approx 1 \times 10^{53}\ {\rm kg}. \qquad (3)$

In Eq. (3) we assume that the volume of our L-region is given by the Euclidean value $4\pi R_0^3/3$. But since astronomical observations indicate that spacetime is flat, and hence space is Euclidean, on the largest scales, i.e., that the spatial curvature index is 0 rather than +1 or −1, this assumption seems justified [11,16-20]. Is the order-of-magnitude agreement between Eqs. (1) and (3) merely a numerical coincidence? Or does it suggest that the Planck power plays a fundamental role in cosmology, entailing a link between the smallest (Planck-length and Planck-time) and largest (cosmological) scales?
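A minimal Python cross-check (ours) of Eqs. (1)-(3) as reconstructed above; the equation forms are our reconstruction from the surrounding text, and the constant values are standard:

    import math

    c = 2.99792458e8   # m/s
    G = 6.674e-11      # m^3 kg^-1 s^-2
    H0 = 2.2e-18       # s^-1, from H0 ~ 68 (km/s)/Mpc as in the text

    tau0 = 1.0 / H0    # ~4.5e17 s
    R0 = c / H0        # ~1.4e26 m

    M0_eq1 = (c**3 / G) * tau0                     # Eq. (1): Planck-power mass
    rho_crit = 3 * H0**2 / (8 * math.pi * G)       # Eq. (2)
    M0_eq3 = rho_crit * (4 * math.pi * R0**3 / 3)  # Eq. (3)

    print(f"M0 (Eq. 1) = {M0_eq1:.2e} kg")   # ~1.8e53 kg
    print(f"M0 (Eq. 3) = {M0_eq3:.2e} kg")   # ~0.9e53 kg
    print(f"ratio      = {M0_eq1 / M0_eq3:.2f}")  # ~2, the discrepancy noted below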
While there is order-of-magnitude agreement between Eqs. (1) and (3), there is a discrepancy between them by a factor of ≈ 2. That is, Planck-power input as per Eq. (1) seems to imply ρ ≈ 2ρ_crit. Since in this era of precision cosmology all quantities in Eqs. (1)-(3) are known far more accurately than to within a factor of 2, it seems that this factor of ≈ 2 cannot simply be dismissed. But we admit that we have no explanation for this factor of ≈ 2. Furthermore, we will see that Eqs. (5)-(7) seem to imply a discrepancy with Eq. (1) by a factor of ≈ 3/2 in the opposite direction, i.e., that Planck-power input as per Eq. (1) seems to imply ρ ≈ 2ρ_crit/3. Such discrepancies by numerical factors on the order of 2 may prove our Planck-power hypothesis to be wrong. At the very least they prove that even if it is right in general it is only an introductory hypothesis whose details still need to be understood. Then again, perhaps because there is consistency to within small numerical factors of O ∼ 2, our Planck-power hypothesis may be correct in general as an introductory hypothesis, even though, even if correct in general, its details still need to be understood. Do our considerations so far in this Sect. 3 suggest that, even though the Universe certainly began with the Big Bang, there has been since the Big Bang mass-energy input, at least on the average, at the Planck power, into our L-region of the Universe? We list several alternative proposals for such input (this list probably is not exhaustive): (a) steady-state-theory mass-energy input ex nihilo [21-23], (b) mass-energy input ex nihilo via other means [24,25], (c) mass-energy input at the expense of negative gravitational energy [26-32] rather than ex nihilo, or (d) mass-energy input at the expense of nongravitational negative energy, for example, at the expense of the negative-energy C field in some versions of the steady-state theory [33-35]. If at the expense of negative gravitational energy as per proposal (c), then forever the total (mass plus gravitational) energy of our L-region, and likewise of any L-region, of the Universe, and hence of the Universe as a whole, is conserved at the value zero [26-32]. (There are "certain 'positivity' theorems ... which tell us that the total energy of a system, including the 'negative gravitational potential energy contributions' ..., cannot be negative [32]." But positivity theorems do seem to allow the total energy of a system, including the negative gravitational energy, to be strictly zero. Also, perhaps positivity theorems need apply only for isolated sources in asymptotically-flat spacetime.) In this chapter we will mainly presume proposal (c) from the immediately preceding list, for the following reasons: (i) Unlike proposals (a) and (b), proposal (c) entails no violation of the First Law of Thermodynamics (conservation of mass-energy). (ii) Negative gravitational energy is known to exist, unlike the negative-energy C field of proposal (d), which was perhaps introduced at least partially ad hoc to render the steady-state theory consistent with the First Law of Thermodynamics (conservation of mass-energy). Moreover, unlike gravity, the C field not only has never been observed, but also entails difficulties of its own [34,35]. (iii) We will show that proposal (c) need not be inconsistent with the observed features of the Universe.
The Universe clearly shows evolutionary rather than steady-state [21-23, 33-35] behavior since the Big Bang. But it could stabilize to a steady state in the future. It could already now be thus stabilizing, or even have thus stabilized in the very recent past, with as yet no or at most very limited observational evidence that might be suggestive of such stabilization. Thus even if there is steady-state-type creation of mass-energy since the Big Bang at the rate of the Planck power (we presume, in light of the immediately preceding paragraph, most likely at the expense of the Universe's negative gravitational energy), perhaps this might be compatible with the observed evolutionary behavior of the Universe since the Big Bang. (This point and related ones will be discussed in more detail in Sect. 4.) Although General Relativity is required for an accurate consideration of the Universe's gravity, the following Newtonian approximation may be valid as an order-of-magnitude estimate [26-32]. Such an estimate is suggestive in favor of Planck-power input at the expense of negative gravitational energy [26-32], which does not require a violation of the First Law of Thermodynamics (conservation of mass-energy) [26-32], as opposed to Planck-power input ex nihilo [21-25], which would require such a violation, or via C-field input, the C field never having been observed and also entailing its own difficulties [34,35]. In accordance with the last paragraph of Sect. 2, we take the Hubble constant today to be H_0 ≐ 68 (km/s)/Mpc ≈ 2.2 × 10^−18 (km/s)/km = 2.2 × 10^−18 s^−1 [8-11]. Thus, neglecting any variation of H(τ) with τ, τ_0 = 1/H_0 ≈ 4.5 × 10^17 s consistently with the previously given value, and the ruler radius of our L-region of the Universe is R_0 = cτ_0 = c/H_0 ≈ 1.4 × 10^23 km = 1.4 × 10^26 m. The positive mass-energy of our L-region of the Universe within our cosmological event horizon is M_0 c^2 and the negative Newtonian gravitational energy of our L-region is ≈ −GM_0^2/R_0. Hence in the Newtonian approximation setting the total energy equal to zero yields [26-32]

$M_0 c^2 - \dfrac{G M_0^2}{R_0} \approx 0 \;\Longrightarrow\; M_0 \approx \dfrac{c^2 R_0}{G} \approx 1.9 \times 10^{53}\ {\rm kg}. \qquad (4)$

Equation (4) is fulfilled as closely as we can expect, especially given that our Newtonian approximation should be expected to provide only order-of-magnitude estimates, and also perhaps because (even after an initial fast inflationary stage) H(τ) may not be strictly constant.
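A quick numerical check (ours) of the reconstructed Eq. (4), again assuming standard constant values:

    c = 2.99792458e8
    G = 6.674e-11
    H0 = 2.2e-18
    R0 = c / H0

    M0_eq4 = c**2 * R0 / G
    print(f"M0 (Eq. 4) = {M0_eq4:.2e} kg")  # ~1.8e53 kg, matching Eq. (1)
    # Analytically M0_eq4 = c^3/(G*H0) = (c^3/G)*tau0, i.e. exactly Eq. (1).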
There is yet another order-of-magnitude result that is consistent with our Planck-power hypothesis. Applying Eq. (1), the rate of Planck-power mass input into our L-region is

$\dfrac{dM_{\rm in}}{d\tau} = \dfrac{P_{\rm Planck}}{c^2} = \dfrac{c^3}{G} \doteq 4.05 \times 10^{35}\ {\rm kg/s}. \qquad (5)$

Letting ρ be the average density of our L-region, and recalling that the Hubble flow crosses the bounding surface of our L-region at speed c, the rate of Hubble-flow mass export from our L-region is

$\dfrac{dM_{\rm out}}{d\tau} = \rho \, 4\pi R_0^2 \, c. \qquad (6)$

In Eq. (6) we assume that the surface area bounding our L-region is given by the Euclidean value $4\pi R_0^2$. But since astronomical observations indicate that spacetime is flat, and hence space is Euclidean, on the largest scales, i.e., that the spatial curvature index is 0 rather than +1 or −1, this assumption seems justified [11,16-20]. For steady state to obtain we must have

$\dfrac{c^3}{G} = \rho\,4\pi R_0^2 c \;\Longrightarrow\; \rho = \dfrac{c^2}{4\pi G R_0^2} = \dfrac{H_0^2}{4\pi G} \approx 5.8 \times 10^{-27}\ {\rm kg/m^3}. \qquad (7)$

The numerical value for ρ obtained in Eq. (7) is in order-of-magnitude agreement with ρ_crit as per Eq. (2), as well as in order-of-magnitude agreement with observations.

Recalling the second paragraph following that containing Eqs. (1)-(3), note that Eqs. (5)-(7) seem to imply a discrepancy with Eq. (1) by a factor of ≈ 3/2 in the opposite direction from the discrepancy with Eq. (1) by a factor of ≈ 2 implied by Eq. (3). Planck-power input as per Eq. (1) seems to imply ρ ≈ 2ρ_crit, while Eqs. (5)-(7) seem to imply ρ ≈ 2ρ_crit/3. Since in this era of precision cosmology all quantities in Eqs. (1)-(3) and (5)-(7) are known far more accurately than to within a factor of 2, such discrepancies by numerical factors of O ∼ 2 may prove our Planck-power hypothesis to be wrong. At the very least they prove that even if it is right in general it is only an introductory hypothesis whose details still need to be understood. Then again, perhaps because there is consistency to within small numerical factors of O ∼ 2, our Planck-power hypothesis may be correct in general as an introductory hypothesis, even though, even if correct in general, its details still need to be understood.
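A minimal Python check (ours) of the steady-state density implied by the reconstructed Eqs. (5)-(7); note that ρ = H_0^2/(4πG) equals exactly (2/3)ρ_crit analytically, matching the ≈ 3/2 discrepancy discussed above:

    import math

    G = 6.674e-11
    H0 = 2.2e-18

    rho_steady = H0**2 / (4 * math.pi * G)    # Eq. (7)
    rho_crit = 3 * H0**2 / (8 * math.pi * G)  # Eq. (2)
    print(f"rho (Eq. 7)    = {rho_steady:.2e} kg/m^3")      # ~5.8e-27
    print(f"rho / rho_crit = {rho_steady / rho_crit:.3f}")  # exactly 2/3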
While, even accepting discrepancies by a factor of O ∼ 2, the fulfillment of Eqs. (1)-(7) does not constitute proof of Planck-power input, it at least seems suggestive. Could Planck-power input, if it exists, be a classical process independent of quantum effects, if not absolutely then at least via opposing quantum effects canceling out, as ℏ cancels out in the division P_Planck = E_Planck/t_Planck? Note that perhaps similar canceling out obtains with respect to the Planck speed l_Planck/t_Planck = c: c is the fundamental speed in the classical (nonquantum) theories of Special and General Relativity.
(The difficulty in squaring inflation with the Second Law of Thermodynamics, and a possible resolution of this difficulty, will be discussed in Sect. 6.) Observational evidence that in early 2014 initially seemed convincing for inflation in general [41,42], albeit possibly ruling out a few specific types of inflation [41,42,47], has been questioned [43][44][45][46][47][48][49][50], but not disproved [43][44][45][46][47][48][49][50]. 6 Moreover, even if an inflationary model is correct, recent observational findings disfavor simple models of inflation, such as quadratic and natural inflation [47]. Even if such one-time initial mass-energy inputs [36-40] occurred, could sustained mass-energy input then continue indefinitely such that the Planck power is at least a floor below which the average rate of mass-energy input into our L-region of the Universe cannot fall? It at least appears not to have fallen below this floor [16]. By the cosmological principle [51], if this is true of our L-region of the Universe then it must be true of any L-region thereof.
Thus our Planck-power hypothesis at least appears to entail a link between the smallest (Planck mass and Planck time) and largest (cosmological) scales, rather than being merely a numerical coincidence.
Planck power and kinetic control versus heat death: Big-Bang-initiated evolution merging into steady state?
In the simplest ever-expanding cosmologies, the Universe begins with a Big Bang and expands forever, with flat geometry on the largest scales, and with the Hubble constant H (τ) not varying with cosmic time [4,5] τ and always equal to its present value H 0 . As the Universe expands the Hubble flow carries mass-energy past the cosmological event horizon of our L-region of the Universe. But this "loss" is replaced by new positive mass-energy continually created within our L-region of the Universe forever at the rate of the Planck power and at the expense of our L-region's negative gravitational energy. This compensates for "losses" streaming past the cosmological event horizon of our L-region of the Universe via the Hubble flow -and does so consistently with the First Law of Thermodynamics (conservation of mass-energy) [26][27][28][29][30][31][32]. Because by the cosmological principle [51] our L-region is nothing special, the same is true of any L-region [3] of the Universe. Thus forever the total (mass plus gravitational) energy of our L-region, and likewise of any L-region, of the Universe, and hence of the Universe as a whole, is conserved at the value zero [26][27][28][29][30][31][32].
There is needed a mechanism whereby a sufficiently large fraction f of the Planck-power mass-energy input within the cosmological event horizon of our L-region, and of any L-region, of the Universe is produced in the form of hydrogen [12]. This small value f ∼ 10^−5 is sufficient to sustain star formation forever not only at the periphery but even at the center [58] of our island Universe [1] and likewise of every other island Universe [1] in the Multiverse [52-58]. The remainder of the Planck-power input would be in forms other than hydrogen (perhaps traces of heavier elements, elementary particles of normal and/or dark matter, dark energy, etc.?).
But perhaps the simplest mode of Planck-power input is initially in the form of the simplest possible type of dark energy, corresponding to positive constant Λ, i.e., a positive cosmological constant. Constancy of Λ is required for constancy of Planck-power input initially in the form of Λ. Positivity of Λ seems to be required for positivity of Planck-power input being initially in the form of Λ, because negative Λ corresponds to contraction of space and hence to diminution of Λ-mass-energy. Thus the simplest possible type of dark energy, corresponding to positive constant Λ, is perhaps the type of dark energy that is most easily reconcilable with Planck-power input, in particular with constancy of Planck-power input. Moreover, constant Λ, a cosmological constant, is the only, unique, choice for Λ that can be put on the left-hand (geometry) side of Einstein's field equations without altering their symmetric and divergence-free form [65-67], "belonging to the field equations much as an additive constant belongs to an indefinite integral" [65-67].⁸ Nevertheless the current trend is to put Λ on the right-hand (mass-energy-stress) side of Einstein's field equations, which allows more freedom [66]. But if Λ is put on the right-hand side "the rationale for its uniqueness then disappears: it no longer needs to be a divergence-free 'geometric' tensor, built solely from the g_µν ... the geometric view of Λ ... is undoubtedly the simplest" [66]. Thus we might speculate about a link between constancy of Λ as a (positive) cosmological constant [65-67] and constancy of (positive) Planck-power input: Perhaps Planck-power input occurs initially as (positive) cosmological-constant Λ, with f ∼ 10^−5 thereof then hopefully, somehow, via an as-yet-unknown mechanism, being transformed into hydrogen. It is important to note that, unlike equilibrium blackbody radiation, (positive) cosmological-constant-Λ dark energy seems to be at less than, indeed at far less than, maximum entropy. Thus there seems to be more than enough entropic "room" for f ∼ 10^−5 of positive-cosmological-constant-Λ dark energy to decay into hydrogen, without requiring decay all the way to iron. Positive-cosmological-constant-Λ Planck-power input thus seems to offer the benefits but not the liabilities of the steady-state theory [21-23, 33-35] [violation of mass-energy conservation without the C field, which has never been observed and which also entails other difficulties [34,35]; recall the paragraph immediately following that containing Eq. (1)]. Positive cosmological-constant Λ also implies, or at least is consistent with, constant H(τ) = H_0 at all cosmic times τ, and hence a fixed size of our L-region, with its boundary (event horizon [2,3]) R(τ) always fixed at R_0 = c/H_0. Thus to sum up this paragraph, the simplest model overall seems to entail (a) positive-cosmological-constant Λ, (b) Planck-power input initially as positive-cosmological-constant Λ at the expense of negative gravitational energy, with (c) f ∼ 10^−5 of Planck-power input then hopefully, somehow, via an as-yet-unknown mechanism, being transformed into hydrogen. We note that the most reliable and most recent astronomical and astrophysical observations and measurements as of this writing are consistent with positive cosmological-constant-Λ dark energy [68,69], indeed possibly or even probably more consistent with positive cosmological-constant-Λ dark energy than with any other alternative [68,69].
But, of course, this issue is far from being definitely decided [68,69]. Even though our main point in this chapter is most naturally based on positive constant Λ, in Sects. 5-7 some other possibilities for Λ will be qualitatively considered.
We cannot help but notice that temperature fluctuations in the cosmic background radiation have a typical fractional magnitude of F ≈ 10 −5 [70,71]. The observed and measured value F ≈ 10 −5 [70,71] is obviously far more certain than the speculated value f ∼ 10 −5 ; hence the distinction between the ≈ symbol as opposed to the ∼ symbol. Although it is unlikely that there is a connection between F ≈ 10 −5 [70,71] and f ∼ 10 −5 , it doesn't seem to hurt if we at least mention this numerical concurrence -just in case there might be a connection.
Positivity of Λ is required for positivity of Planck-power input, and constancy of Λ is required for constancy of Planck-power input. Moreover, constant Λ - a cosmological constant - is the only, unique, choice for Λ that can be put on the left-hand (geometry) side of Einstein's field equations without altering their symmetric and divergence-free form [65-67], "belonging to the field equations much as an additive constant belongs to an indefinite integral" [65-67].^8 Nevertheless the current trend is to put Λ on the right-hand (mass-energy-stress) side of Einstein's field equations, which allows more freedom [66]. But if Λ is put on the right-hand side "the rationale for its uniqueness then disappears: it no longer needs to be a divergence-free 'geometric' tensor, built solely from the g_µν ... the geometric view of Λ ... is undoubtedly the simplest" [66]. Thus we might speculate about a link between constancy of Λ as a (positive) cosmological constant [65-67] and constancy of (positive) Planck-power input: Perhaps Planck-power input occurs initially as (positive) cosmological-constant Λ, with f ∼ 10^−5 thereof then hopefully, somehow, via an as-yet-unknown mechanism, being transformed into hydrogen. It is important to note that - unlike equilibrium blackbody radiation - (positive) cosmological-constant-Λ dark energy seems to be at less than, indeed at far less than, maximum entropy. Thus there seems to be more than enough entropic "room" for f ∼ 10^−5 of positive-cosmological-constant-Λ dark energy to decay into hydrogen, without requiring decay all the way to iron. Positive-cosmological-constant-Λ Planck-power input thus seems to offer the benefits but not the liabilities of the steady-state theory [21-23, 33-35] [violation of mass-energy conservation without the C field (which has never been observed and which also entails other difficulties [34,35]) - recall the second paragraph following that containing Eqs. [...]].

But the following question arises: Even if there is Planck-power input, why is not all of it in a thermodynamically-most-probable maximum-entropy form such as (iron + equilibrium blackbody radiation) and none of it as hydrogen - why is not f = 0 [52-57]? If this were the case then the heat death predicted by the Second Law of Thermodynamics [59-63] would not be thwarted even with Planck-power input. While we are not sure of an answer to this question, we can venture what prima facie at least seems to be a reasonable guess: (a) Planck-power input (if it exists) generates equal nonzero quantities of both positive mass-energy and negative gravitational energy starting from (zero positive energy + zero negative energy = zero total energy), and the entropy of (zero positive energy + zero negative energy = zero total energy) is perforce zero: There is only one way for there to be nothing (Ω = 1), and hence by Boltzmann's relation between entropy and probability S = k ln Ω = k ln 1 = 0. (b) Planck-power input is a steady-state but nonequilibrium process that does not allow enough time for complete thermalization of the input from the initial value of zero entropy of (zero positive energy + zero negative energy = zero total energy) to the maximum possible positive entropy of (nonzero positive energy + nonzero negative energy = zero total energy) in a form such as (iron + equilibrium blackbody radiation). That is, Planck-power input is kinetically rather than thermodynamically controlled [72-77].^9 Thus even though, thermodynamically, Planck-power input should be in a maximum-entropy form such as (iron + equilibrium blackbody radiation), kinetically the reaction

zero positive energy + zero negative energy = zero total energy
−→ nonzero positive energy + nonzero negative energy = zero total energy    (8)

occurs too quickly to allow thermodynamic equilibrium = maximum entropy to be attained. Yet even Planck-power input initially as positive-cosmological-constant Λ, with a fraction f ∼ 10^−5 of Planck-power input hopefully, somehow, via an as-yet-unknown mechanism, being transformed into hydrogen, entails some entropy increase. The entropy increase ∆S that it does entail is sufficient to render the probability of its reversal as per Boltzmann's relation between entropy and probability, expressed in the form Prob(∆S) = exp(−∆S/k), equal to zero for all practical purposes. Thus we are justified in placing only a forward arrow (no reverse arrow) at the beginning of the second line of Eq. (8). Thus Planck-power input entails enough entropy increase to stabilize it and prevent its reversal. But it occurs quickly enough to allow kinetic control [72-77] to prevent it from entailing maximal entropy increase.

^9 [...] [75]. Reference [73] does not render Ref. [72] obsolete, because Ref. [72] discusses aspects not discussed in Ref. [73], and vice versa. Likewise, Ref. [77] does not render Ref. [76] obsolete, because Ref. [76] discusses aspects not discussed in Ref. [77], and vice versa.
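To make the practical impossibility of reversing Eq. (8) concrete, consider an illustrative estimate (the value ∆S ∼ 1 J/K below is an assumption chosen purely for definiteness, not a value derived in this chapter). For any macroscopically appreciable entropy increase,

∆S/k ∼ (1 J/K)/(1.38 × 10^−23 J/K) ∼ 10^23,  so  Prob(∆S) = exp(−∆S/k) ∼ exp(−10^23),

a number so fantastically small that reversal is, for all practical purposes, absolutely forbidden.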
To recapitulate our considerations thus far in Sect. 4: Perhaps the simplest possible Planck-power input is initially as positive-cosmological-constant Λ. Positivity of Λ is required for positivity of Planck-power input, and constancy of Λ is required for constancy of Planck-power input. Constancy of Λ is requisite for Λ to be most simply encompassed within Einstein's field equations [65-67], besides correlating with constancy of Planck-power input. Positive-cosmological-constant Λ also implies, or at least is consistent with, constant H(τ) = H_0 at all cosmic times τ, and hence a fixed size of our L-region, with its boundary (event horizon [2,3]) R(τ) always fixed at R_0 = c/H_0. But we wish for a fraction f ∼ 10^−5 of Planck-power input hopefully, somehow, via an as-yet-unknown mechanism, being transformed into hydrogen. Hydrogen, so that stars can have fuel. But why hydrogen? Why not a thermodynamically dead form such as (iron + equilibrium blackbody radiation)? Because kinetically, it would be much more difficult for positive-cosmological-constant Λ to be transformed into a complex atom such as iron than into the simplest one - hydrogen. Thus while thermodynamic control would favor iron, if kinetic control wins then hydrogen is favored [72-77]. Note that kinetic control is vital not only in the initial creation of hydrogen, but also in then preserving hydrogen long enough for it to be of use. It is owing to kinetic control that the Sun and all other main-sequence stars fuse hydrogen only to helium, not to iron, and are restrained to doing so slowly enough to give them usefully-long lifetimes. Main-sequence fusion of hydrogen to iron is thermodynamically favored, but kinetically its rate of occurrence is for all practical purposes zero. Thus kinetic control wins, limiting main-sequence fusion to helium, and at a slow enough rate to give stars usefully-long lifetimes [72-77]. Indeed it is owing to kinetic control that not only hydrogen, but also all other elements except iron, do not instantaneously decay to iron. Kinetic control may also argue against positive-cosmological-constant Λ being completely transformed into equilibrium blackbody radiation (without iron). A single hydrogen atom can be created at rest with respect to the comoving frame [7]. By contrast, to conserve momentum, at least two photons must be created simultaneously, which may impose a bottleneck that diminishes the rate of such a process kinetically. Hence f ∼ 10^−5 of the Planck-power input in the positive-but-much-less-than-maximal-entropy form of hydrogen may at least prima facie seem plausible. Again it doesn't seem to hurt to at least mention the numerical concurrence between F ≈ 10^−5 [70,71] and f ∼ 10^−5, even if any connection is unlikely.
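As a quick numerical illustration of the fixed event-horizon radius just mentioned (taking the commonly quoted present-day value H_0 ≈ 70 km s^−1 Mpc^−1 purely for definiteness):

R_0 = c/H_0 ≈ (3.0 × 10^5 km/s)/(70 km s^−1 Mpc^−1) ≈ 4.3 × 10^3 Mpc ≈ 1.3 × 10^26 m,

that is, on the order of 1.4 × 10^10 light-years; a truly constant H(τ) = H_0 would fix this radius for all cosmic time.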
Note that a zero value for the initial entropy as per the paragraph containing Eq. (8) would also obtain if Planck-power input were ex nihilo [21-25], or at the expense of a negative-energy C field (despite its never having been observed and its other difficulties [34,35]) or other negative-energy field, rather than at the expense of negative gravitational potential energy: the entropy of (zero positive energy + zero negative energy = zero total energy) would still perforce be zero. Thus our considerations of this Sect. 4, including that of the dominance of kinetic over thermodynamic control [72-77], would still be applicable.
Our L-region and O-region clearly manifest evolutionary behavior, for example increasing metallicity [52-55] and a decreasing rate of star formation [52-55]. But our Planck-power hypothesis seems to suggest that this evolutionary behavior could gradually merge towards steady-state behavior. Early in the history of our L-region and O-region, star formation occurred at a much faster rate than now, and stars were on average much more massive and hence very much faster-burning [main-sequence hydrogen-burning rate ∼ (mass of star)^3]. Thus hydrogen was consumed faster than a conversion of f ∼ 10^−5 of Planck-power input could replace it: Stars were burning capital in addition to (Planck-power) income - indeed more capital than income. But with a decreasing rate of star formation and decreasing average stellar mass, perhaps a steady-state balance between hydrogen consumption and its replacement via f ∼ 10^−5 of Planck-power input could be approached, with stars living solely on (Planck-power) income. Merging of evolutionary behavior towards steady-state behavior could already be beginning, or could even have begun in the very recent past, with as yet no or at most very limited observational evidence that might be suggestive of it. If such merging exists then both metallicity and star formation rate could stabilize in the future. Perhaps they could even already now be stabilizing, or have even already begun stabilizing in the very recent past, with as yet no or at most very limited observational evidence that might be suggestive of such stabilization in particular, or of such merging in general. This stabilization, if it exists, would require only a small fraction f ∼ 10^−5 of Planck power as hydrogen to maintain the current status quo in our L-region and O-region. This would allow star formation to continue forever not merely at the peripheries of island Universes, but even in their central regions [58]. We note that there is observational evidence that might at least be suggestive of "unexplained" hydrogen [78], which perhaps might qualify as such very limited suggestive observational evidence of merging towards steady-state behavior [78].
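For a concrete sense of the scaling invoked above (using the chapter's main-sequence hydrogen-burning rate ∼ (mass of star)^3; the solar main-sequence lifetime of ∼ 10^10 yr is a standard round figure assumed here purely for illustration): since the fuel supply scales as the stellar mass M while the burning rate scales as M^3, the main-sequence lifetime scales as

t_MS ∼ M/M^3 = M^−2,

so a 10-solar-mass star lives roughly 10^−2 of the Sun's ∼ 10^10 yr, i.e., of order 10^8 yr. An early stellar population dominated by massive stars therefore consumes hydrogen enormously faster than today's population does.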
It should perhaps be re-emphasized that even Planck-power input as hydrogen entails some entropy increase and therefore is thermodynamically irreversible, consistently with the Second Law of Thermodynamics, while still thwarting the heat death. The heat death is thus thwarted via dilution of entropy as an island Universe [1] expands. Thus with only f ∼ 10^−5 of the Planck-power input as hydrogen, the heat death predicted by the Second Law of Thermodynamics [59-63] of our L-region of our island Universe [1], and likewise of any L-region of any island Universe [1], is thwarted forever. The heat death is thwarted forever not only at the periphery but even at the center [58] of our and every other island Universe [1]. The heat death is thwarted consistently with, not in violation of, the Second Law of Thermodynamics [59-63]. Hubble-flow export of entropy (along with mass-energy) out of our L-region of our island Universe [1], and likewise out of any L-region of any island Universe [1], as its expansion creates more volume forever, is compensated forever by creation of thermodynamically fresh but still positive-entropy mass-energy - most importantly, hopefully, the fraction f ∼ 10^−5 thereof as hydrogen - via Planck-power input.

Steady-state balance between Planck-power input and Hubble-flow expansion of space can allow both the entropy density and the nongravitational mass-energy density in our L-region of our island Universe [1], and likewise in any L-region of any island Universe [1], to remain constant, even as the total entropy and nongravitational mass-energy of the entire island Universe increase indefinitely. As mass-energy creation at the rate of the Planck power and at the expense of negative gravitational energy is matched by mass-energy dilution via an island Universe's expanding space, so is entropy production matched by entropy dilution. Thus the negative-energy gravitational field of an island Universe is an inexhaustible fuel (positive mass-energy and negative entropy = negentropy = less-than-maximum entropy) source. Gravity is a bank that provides an infinite line of credit and never requires repayment [79]. Planck-power input draws on this infinite line of credit [79], which never runs out - indeed which cannot ever run out. [Of course, if (positive) nongravitational mass-energy density remains constant, then so must (negative) gravitational energy density, if the balance of zero total energy is to be maintained. Thus Planck-power input, if it exists, is really equally of positive nongravitational mass-energy and negative gravitational energy simultaneously.] Additional questions bearing on the Second Law of Thermodynamics will be discussed in Sects. 5-7. In this chapter, whether concerning Planck-power input or otherwise, we limit ourselves to considerations of thwarting the heat death within the restrictions of the Second Law. Nonetheless we note that the universal validity of the Second Law of Thermodynamics has been seriously questioned [80-84], albeit with the understanding that even if not universally valid it at the very least has a very wide range of validity [80-84].
There are two difficulties that should at least be briefly mentioned and, even if only briefly and only incompletely, also addressed. (i) In order for negative gravitational energy to balance the positive mass-energy of a hydrogen atom (or of any other entity), a hydrogen atom (or other entity) newly created via Planck-power input would have to interact gravitationally infinitely fast or instantaneously [85,86] - and hence universally simultaneously [85,86] - with our entire L-region of the Universe within our cosmological event horizon [3]. But if a signal of mass-energy and/or information is not transmitted, no violation of relativity is required [85,86].
Perhaps this may be possible if, as suggested in the third paragraph of this Sect. 4, Planck-power input occurs initially as positive-cosmological-constant Λ [65-67], with f ∼ 10^−5 thereof then hopefully, somehow, via an as-yet-unknown mechanism, being transformed into hydrogen. Perhaps the gravitational interaction of positive-cosmological-constant Λ [65-67], and thence of hydrogen atoms (and/or other entities) newly created therefrom via Planck-power input, can be instantaneously "rubber-stamped" onto our entire L-region at once, rather than being transmitted as a "signal" from one place to another within our L-region. (ii) Even if an interaction, or any other process such as "rubber-stamping," can be infinitely fast or instantaneous - and hence universally simultaneous - it can be so in only one reference frame [86]. A superluminal phenomenon, even be it only the motion of a geometric point that possesses no mass-energy and carries no information (for example the intersection point of scissors blades) [86], can be infinitely fast and hence instantaneous - universally simultaneous - in only one reference frame [86] (as a subluminal phenomenon can be infinitely slow - at rest - in only one reference frame [86]).^10 But there is a natural choice for this frame: The comoving frame [7], in which the cosmic background radiation and Hubble flow are isotropic [7], even if not an absolute rest frame, is at least a preferred rest frame [87], indeed the preferred rest frame [87], of our L-region of the Universe. If any one reference frame can claim to be preferred, it is the comoving frame [7,87]. Since by the cosmological principle [51] there is nothing special about our L-region of the Universe, the same likewise obtains in any other L-region thereof. The existence of this universal preferred frame [7,87] implies the existence of a preferred, perhaps even absolute, cosmic time τ [4,5,87].^11 A clock in the comoving frame also measures the longest possible elapsed time ∆τ corresponding to a given decrease in the temperature of the cosmic background radiation [4,5,87]. A clock moving at velocity v relative to the comoving frame [7,87] measures times shorter by a ratio of (1 − v^2/c^2)^(1/2) [7,87]. Thus the existence of this universal preferred frame, and hence of cosmic time [4,5], weakens [87] the concept of relativity of simultaneity [85] as obtains within "the featureless vacuum of Special Relativity" [4,5,85-87]: Events, even if spatially separated, can be considered absolutely simultaneous if they occur when - with "when" having an absolute meaning - the cosmic background radiation as observed in the comoving frame has the same temperature, this temperature currently decreasing monotonically with increasing cosmic time τ since the Big Bang [4,5,87].^12 (Simultaneity of non-spatially-separated events is absolute even in Special Relativity [85].) Also, the contribution to the total nongravitational mass of our L-region of the Universe of a body of rest-mass [95] m is equal to m only if it is at rest in the comoving frame; if it moves at velocity v relative to the comoving frame [7,87] then its contribution is m(1 − v^2/c^2)^(−1/2) [7,87]. For a zero-rest-mass particle the contribution is m = E/c^2, where E is its energy as measured in the comoving frame (for example m = E/c^2 = hν/c^2 for a photon of frequency ν as measured in the comoving frame). Thus the total nongravitational mass-energy M_0 of our L-region as per Eq. (1) is that measured with respect to the comoving frame.

^12 (Re: Entry [87]) The phrase "the featureless vacuum of Special Relativity" is a quote from a very thoughtful and insightful letter from Dr. Wolfgang Rindler, most probably in the 1990s, in reply to a question that I raised concerning relativity of simultaneity.
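A short worked example of the two kinematic factors just quoted (the velocity value is chosen purely for illustration): for a body moving at v = 0.6c relative to the comoving frame,

(1 − v^2/c^2)^(1/2) = (1 − 0.36)^(1/2) = 0.8,

so its clock runs at 0.8 of the comoving-frame rate, while its contribution to the nongravitational mass of our L-region is m(1 − v^2/c^2)^(−1/2) = m/0.8 = 1.25m.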
The Planck power: One-time and two-time low-entropy boundary conditions, and minimal Boltzmann brains

It should be noted that these two difficulties (i) and (ii) discussed in the immediately preceding paragraph [85,86] [...] [98] with no higher entropy (or entropy density if they are infinite) than that of their precursor Universe could perhaps obtain for reasons similar to Planck-power input into our L-region being at positive but less-than-maximum entropy as per our considerations in this Sect. 4 - perhaps most importantly kinetic control winning over thermodynamic control [72-77]. False-vacuum high-energy-density-scalar-field regions - the inflaton field - of the Multiverse separating island Universes [1] inflate much faster than they decay to non-inflating true-vacuum regions. Hence while inflation had a beginning, once begun it is eternal [99]. Within island Universes, high-cosmological-"constant" regions play essentially the same role that inflationary regions play between island Universes: they double in size much faster than their half-life against decay, so each island Universe expands forever, albeit more slowly than the inflationary regions separating island Universes [1,100]. Yet the cosmological "constant" is not high everywhere in an island Universe [1]; in L-regions and O-regions such as ours it is sedate. As decay of the inflaton field gives birth to island Universes, within each island Universe decay of high-cosmological-constant-field regions gives birth to new sedate L-regions and O-regions such as ours. In these sedate L-regions and O-regions, the cosmological "constant" may eventually decay to negative values, resulting in a Big Crunch - and perhaps oscillatory behavior, even as entire island Universes expand forever and the spaces between them expand forever even faster. For simplicity, as noted in the first paragraph of this Sect. 5, we thus far in this chapter (except for brief parenthetical remarks in the second-to-last paragraph of Sect. [...])^13 [...].

^13 Shortly we will discuss Dr. Roger Penrose's central point [61,62] concerning entropy in the context of both ever-expanding and oscillatory behavior. [...], but nevertheless seemed to favor an ever-expanding Universe. An alternative model of an oscillating Universe is discussed in the work in Ref. [63] cited in Entry [92]. In the alternative model of an oscillating Universe investigated in this work, even a Big Rip is shown to be consistent with and indeed part of an oscillating Universe's life cycle. The work in Ref. [63] cited in Entry [93] considers related issues. Also, we should mention that an oscillatory Universe is closer to Einstein's conception of cosmology than a nonoscillatory one. A closed oscillating Universe with Λ = 0, similar to that considered by Dr. Albert Einstein in the early 1930s, is discussed in the material from Ref. [12] cited in Entry [94].
If Planck-power input is positive when our L-region expands, could it be negative if and when it contracts? Could this reduce, or at least help to reduce, the (nongravitational) mass-energy, and hence also the entropy, during contraction, possibly to zero, by the time of the Big Crunch? If so, could a singularity at the Big Crunch thereby be evaded, thus ensuring a new thermodynamically fresh Big Bang to begin a new cycle? Moreover, since the Planck power (whether or not divided by c^2) does not contain ħ, but only G and c, would or at least might this evading of a Big Crunch singularity be a classical process independent of quantum effects, if not absolutely then at least via opposing quantum effects canceling out, as ħ cancels out in the division P_Planck = E_Planck/t_Planck [12-15]? Note again that perhaps similar canceling out obtains with respect to the Planck speed l_Planck/t_Planck = c: c is the fundamental speed in the classical (nonquantum) theories of Special and General Relativity.
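The cancellation of ħ just invoked can be displayed explicitly from the standard definitions of the Planck units (a textbook fact, not specific to this chapter):

P_Planck = E_Planck/t_Planck = (ħc^5/G)^(1/2) / (ħG/c^5)^(1/2) = c^5/G ≈ 3.6 × 10^52 W,

and likewise l_Planck/t_Planck = (ħG/c^3)^(1/2) / (ħG/c^5)^(1/2) = c. In both ratios ħ drops out, leaving purely classical combinations of G and c.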
But negative Planck-power input requires entropy reduction. Hence it seems to require two-time low-entropy boundary conditions [101-105], at the Big Bang and at the Big Crunch - although two-time, or one-time, low-entropy boundary conditions can also obtain in a "traditional" oscillating Universe without any (positive or negative) Planck-power input. (As discussed on p. 192 of Ref. [5], in ever-expanding cosmological models Poincaré fluctuations on the scale of galactic - or smaller, indeed even minimal-Boltzmann-brain - dimensions, in spite of the dissipation due to expansion, would not be expected, because the energy of starlight and ultimately all energy would be irrevocably lost from each and every galaxy into infinitely-expanding space and (without compensating input via a Planck-power or other mechanism, which is not considered on p. 192 of Ref. [5]) never replaced. In nonoscillating, ever-expanding cosmologies, only one-time low-entropy boundary conditions can occur.) Two-time low-entropy boundary conditions require that not only the Big Bang but also the Big Crunch must be special [61,62,101-105]. But even the one-time low-entropy boundary conditions at the Big Bang that are required for our L-region and O-region to exist as they are currently observed are equally special [61,62]. We will not address the question of whether or not the decrease in entropy during the contracting phase of an oscillating universal cycle imposed by two-time low-entropy boundary conditions [101-105] should be construed as contravening the Second Law of Thermodynamics. It could perhaps be argued that, within the restrictions of the Second Law, given two-time low-entropy boundary conditions [101-105] there is no net decrease in entropy for an entire cycle, or that two-time low-entropy boundary conditions [101-105] impose such a tight constraint on an oscillating Universe's journey through phase space that there is no change in entropy from the initial and final low value during a cycle. In accordance with the third-to-last paragraph of Sect. 4, in ideas developed in this chapter per se (as opposed to brief descriptions of ideas developed in cited references) we limit ourselves to considerations of thwarting the heat death within the restrictions of the Second Law of Thermodynamics. Nonetheless we again note that the universal validity of the Second Law has been seriously questioned [80-84], albeit with the understanding that even if not universally valid it at the very least has a very wide range of validity [80-84].
The reduction of the (nongravitational) mass-energy of a contracting Universe to zero, or at least close to zero, at the Big-Crunch/Big-Bang = Big Bounce event might thus be a way, although not necessarily the only way [101-105], to ensure zero entropy - the entropy of nothing is perforce zero [recall the paragraph containing Eq. (8) in Sect. 4] - or at least low entropy at the Big Bounce. It should be noted that a zero- or at least low-entropy state at the Big Bounce is imposed in models with two-time low-entropy boundary conditions [101-105]. Thus the cosmic-time [4,5] interval from the Big Bang to the Big Crunch can be incomparably shorter than, and is totally unrelated to, the Poincaré recurrence time [108].^14

In either case, Dr. Roger Penrose's central point [61,62] concerning entropy survives unscathed. This point had been brought out previously [108-110], but Dr. Penrose's more modern analysis [61,62] takes into consideration inflation, which was not generally recognized prior to the late 1970s [108-110]. (See Sect. 6 concerning the connection with inflation.) This point begins with, but does not end with, recognizing that the L-region and O-region of our Universe are not merely special. They are much more special than they have to be - their negentropy is much greater than is required for conscious observers to exist. By far the minimum negentropy consistent with conscious observation would be that required for the minimal existence of a single minimally-conscious observer - one and only one minimal Boltzmann brain [111-118] with no body or sense organs, and with zero information, including zero sensory input even if fictitious [112] and zero memory even if fictitious [113], save only the minimal information that one exists and is conscious - and even this minimal information only for the most minimal fleeting split-second of conscious existence consistent with recognition that one exists and is conscious - in an otherwise maximum-entropy and therefore dead L-region and O-region of our Universe: no other observers, no Sun or other stars, no Earth or other planets, no Darwinian evolution, no nothing (at any rate no nothing worthwhile). Input of any sensory information even if fictitious [112], and/or any memory even if fictitious [113], is incompatible with the minimalness of a Boltzmann brain required by Boltzmann's exponential relation between negentropy σ ≡ S_max − S and its associated probability Prob(σ) = exp(−σ/k).
[Note: Negentropy σ ≡ S_max − S should not be confused with the entropy change ∆S associated with a given reaction or process introduced in the paragraph containing Eq. (8), even though Boltzmann's relation has the same exponential form for both.] Even fictitious sensory input [112] or fictitious memory [113], as in a dream or in a simulated Universe, requires larger σ than none at all and hence is exponentially forbidden. Thus Boltzmann's exponential relation Prob(σ) = exp(−σ/k) allows not any Boltzmann brain but only a minimal Boltzmann brain - and only one of them. Based solely on Boltzmann's exponential relation Prob(σ) = exp(−σ/k), a lone minimal Boltzmann brain is not merely by far but exponentially by far the most probable type of observer to be, and exponentially by far the most probable type of L-region and O-region of our Universe - or of any Universe in the Multiverse - to find oneself in: One should then expect not even fictitious sensory input [112], not even fictitious memory [113], but only the most fleeting split-second of conscious existence consistent with recognition that one exists and is conscious.
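The exponential penalty can be made explicit by comparing two configurations of negentropies σ_1 < σ_2 (a direct consequence of the relation just quoted; the σ_1, σ_2 notation is introduced here only for emphasis):

Prob(σ_2)/Prob(σ_1) = exp[−(σ_2 − σ_1)/k],

so any configuration requiring even a modest macroscopic increment of negentropy beyond the bare minimum - a fictitious sense impression, a fictitious memory - is suppressed by an exponential of the astronomically large ratio (σ_2 − σ_1)/k.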
But a basis solely on Boltzmann's relation Prob(σ) = exp(−σ/k) is incorrect, or at the very least incomplete. Boltzmann's relation Prob(σ) = exp(−σ/k) is valid only assuming thermodynamic equilibrium - that the ensemble of L-regions and O-regions corresponds to that at thermodynamic equilibrium. Probably the most powerful argument against this being the case is the vast disparity between our L-region and O-region as we actually observe them and what one would observe as per the immediately preceding paragraph based solely on Boltzmann's relation Prob(σ) = exp(−σ/k). This disparity, the minimal-Boltzmann-brain disparity, is by a factor of O ∼ 10^(10^123) [61,62]. While the improbabilities associated with the requisite fundamental and effective laws of physics and physical constants may be significant, they are utterly dwarfed by the minimal-Boltzmann-brain disparity that obtains even given these requisite fundamental and effective laws of physics and physical constants.^15 In contrast to minimal Boltzmann brains, we are sometimes dubbed "ordinary observers" [115-117] - but based solely on Boltzmann's relation Prob(σ) = exp(−σ/k), dubbing us even as extraordinary observers would be a vast understatement. Indeed the same reasoning can be extended to extraordinary observers. For, based solely on Boltzmann's relation Prob(σ) = exp(−σ/k), exponentially by far the most probable extraordinary observer (say, a human with a typical life span) is a minimal extraordinary observer, and only one of these per L-region or O-region. While the σ required for a lone minimal extraordinary observer greatly exceeds that required for a lone minimal Boltzmann brain, it is still utterly dwarfed by the actual σ of our L-region and O-region: The disparity of Prob(σ) = exp(−σ/k) between that corresponding to a lone minimal extraordinary observer and that corresponding to our observed L-region and O-region is still by a factor of O ∼ 10^(10^123) [61,62]. We are privileged to be not merely minimal extraordinary observers but super-extraordinary observers - more correctly hyper-extraordinary observers - with an entire Universe to explore and enjoy [61,62].

There are many arguments against Boltzmann-brain hypotheses [111-118]. Indeed, if there exist (a) imposed one-time low-entropy boundary conditions, (b) imposed two-time low-entropy boundary conditions [16,88-94,101-105] in an oscillating L-region and O-region, or (c) imposed low-entropy mass-energy input such as hydrogen [33-35] in a nonoscillating one [78], then such imposition would preclude thermodynamic equilibrium. Indeed, given (b) or (c), thermodynamic equilibrium would not only be precluded but be precluded forever. Given (b) or (c), there would be no need to assume a decaying or finite-lived Universe [117] to help explain consistency with our observations. But even given (a), the heat death σ = 0 need not be the most probable current state of the L-region or O-region of our Universe, and hence a minimal Boltzmann brain [111-118] need not be the most probable current observer therein, because at the current cosmic time decay to maximum entropy has not yet occurred. Since by the cosmological principle [51] our L-region and O-region are nothing special, this must likewise be true with respect to any L-region or O-region in our island Universe - and likewise with respect to those in any other island Universe in the Multiverse. Moreover, it has even been argued that low-entropy boundary conditions are not required to avoid minimal Boltzmann brains being exponentially by far the most probable type of observer, or even the most probable type of observer at all [116]. Also, it has been argued that special, i.e., low-entropy, conditions are not required at Big Bangs or Big Bounces [123,124]. [Clustering of matter at t = 0, which might typically be expected to increase entropy in the presence of gravity [125,126], does not do so because in this model [123,124] it is prevented owing to positive kinetic energy equaling negative gravitational energy in magnitude, so that the total energy (which in a Newtonian model excludes mass-energy) equals zero. But on pp. 3-4 of Ref. [124], friction, which generates entropy, is invoked during the time evolution of the system. Frictional damping, by degrading part of the macroscopic kinetic energy of any given pair of objects into microscopic kinetic energy (heat), facilitates their settling into a bound Keplerian-orbit state. But because friction thus generates entropy, this may correspond to a hidden, overlooked, pre-friction low-entropy assumption concerning the initial t = 0 state of this model [123,124] in either of its two directions of time [123,124]. But a Kepler pair can be formed without friction, for example via a three-body collision wherein a third body removes enough macroscopic kinetic energy from the other two (without degrading any into heat) that they can settle into a bound Keplerian-orbit state.] Perhaps we should also note that the fraction f ∼ 10^−5 of Planck-power input as hydrogen mentioned in Sect. 4 would maintain our L-region and O-region much farther from thermodynamic equilibrium than is required for the existence of one and only one minimal Boltzmann brain.
Thus if Planck-power input exists, then f ∼ 10^−5 rather than f = 0 cannot be explained by our L-region being lucky: Boltzmann's exponential relation Prob(σ) = exp(−σ/k) on the one hand, and σ being a monotonically increasing function of f on the other, rule out any values of σ and f larger than the absolute minima that allow the existence of one and only one minimal Boltzmann brain obtaining by dumb luck. Thus if Planck-power input exists, then perhaps there is an underlying principle or law of physics requiring f ∼ 10^−5, not only in our L-region but, in accordance with the cosmological principle [51], in every L-region of our, and also every other, island Universe [1] in the Multiverse [52-58].
Dr. Roger Penrose's concerns: Both sides of the inflation issue
We must still consider Dr. Roger Penrose's difficulty with inflation per se, since the evidence for inflation is not yet totally beyond doubt [36-50]. Dr. Penrose has shown that, as per Boltzmann's exponential relation between negentropy and probability Prob (σ) = exp(−σ/k), the probability Prob 1, per "attempt," of creation of a Universe as far from thermodynamic equilibrium as ours without inflation, while extremely small, is nevertheless enormously larger than the probability Prob 2 with inflation. That is, Prob 2 ≪ Prob 1 ≪ 1.
At the 27th Texas Symposium on Relativistic Astrophysics [8], I asked Dr. Penrose the following question (I have streamlined the wording for this chapter): No matter how much smaller Prob 2 is than Prob 1 (so long as Prob 2, however minuscule even compared to the already minuscule Prob 1, is finitely greater than zero), inflation has to initiate only once -after initiating once it will then overwhelm all noninflationary regions. Dr. Penrose provided a concise and insightful reply [128], and also suggested that I re-read the relevant sections of his book, "The Road to Reality" [15,61,62]. I did so.

Dr. Penrose's key argument seems to be centered on squaring inflation with the Second Law of Thermodynamics. Dr. Penrose's central point, already briefly discussed in Sect. 5, begins with but does not end with recognizing that our L-region and O-region are much more thermodynamically atypical -with much lower entropy -than is required for us to exist even as hyper-extraordinary observers, as opposed to only one of us as a minimal extraordinary observer, let alone only one of us as a minimal Boltzmann brain. Our L-region and O-region are thermodynamically extremely atypical not merely with respect to all possible L-regions and O-regions. They are thermodynamically extremely atypical even with respect to the extremely tiny subset of already thermodynamically extremely atypical L-regions and O-regions that allow us to exist as hyper-extraordinary observers, as opposed to only one of us as a minimal extraordinary observer, let alone only one of us as a minimal Boltzmann brain. But now the link to inflation per se: As thermodynamically untypical as our L-region and O-region are today, they become as per Boltzmann's relation Prob (σ) = exp(−σ/k) exponentially ever more thermodynamically untypical as one considers them backwards in time [61,62]. Thus the disparity today by a factor of ∼ 10^(10^123) between the minimal-Boltzmann-brain or even minimal-extraordinary-observer hypothesis and observation becomes exponentially ever more severe as one considers our L-region and O-region backwards in time [61,62]. Thus the connection with inflation: Since inflation smooths out temperature differences and other nonuniformities, the very existence of temperature differences and other nonuniformities prior to inflation implies lower entropy than without such nonuniformities and hence renders the thermodynamic problem of origins worse not better [61,62]. In fact exponentially worse, as per Boltzmann's exponential diminution Prob (σ) = exp(−σ/k) of probability with increasing negentropy σ [61,62]. As thermodynamically atypical and hence exponentially improbable as our Big Bang was, it must have been thermodynamically more atypical and hence exponentially more improbable if it was inflation-mediated than if it was not. This is the basic reason for Dr. Penrose's extremely strong inequality Prob 2 ≪ Prob 1.
(We should, however, cite the remark that prior to inflation there may have been little mass-energy to thermalize [129].) Nevertheless my question still persists: In infinite time, or even in a sufficiently long finite time, even the most improbable event (so long as its probability, however minuscule, is finitely greater than zero) not merely can occur but must occur. It has been noted "that whatever physics permitted one Big Bang to occur might well permit many repetitions [130]." But suppose that Universe creations can occur via both noninflationary and inflationary physics. Even if, because Prob 2 ≪ Prob 1, there first occurred an enormous but finite number N_1 of noninflationary Big Bangs yielding Universes as far from thermodynamic equilibrium as ours, so long as Prob 2, however minuscule even compared to the already minuscule Prob 1, is finitely greater than zero, after a sufficiently enormous but finite number N_1 of such noninflationary Universe creations inflation must initiate. And it need initiate only once to kick-start the inflationary Multiverse. Thereafter the inflationary Multiverse rapidly attains overwhelming dominance over the noninflationary one -with the number N_2 of inflation-mediated Big Bangs yielding Universes as far from thermodynamic equilibrium as ours henceforth overwhelming the number N_1 of noninflationary ones by an ever-increasing margin. To reiterate, no matter how much smaller Prob 2 is than Prob 1 (so long as Prob 2, however minuscule even compared to the already minuscule Prob 1, is finitely greater than zero), in infinite time, or even in a sufficiently long finite time, inflation must eventually initiate once, kick-starting the inflationary Multiverse, which henceforth becomes ever-increasingly overwhelmingly dominant over the noninflationary one. But even if inflation is eternal, it did have a beginning [99], and hence so did the inflationary Multiverse [99].
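The force of this point can be restated quantitatively. The following is a sketch, assuming that the "attempts" are independent with fixed per-attempt probability Prob 2 (an idealization the text does not state explicitly):

\[ \operatorname{Prob}(\text{no inflationary initiation in } N \text{ attempts}) = (1-\operatorname{Prob}_2)^N \le \exp(-N\operatorname{Prob}_2) \;\longrightarrow\; 0 \quad \text{as } N \to \infty. \]

Hence once the number of attempts satisfies N ≫ 1/Prob 2, at least one inflationary initiation becomes overwhelmingly probable, no matter how small Prob 2 > 0 is.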
While in this Sect. 6 the focus is on thermodynamic issues concerning inflation, we note that Dr. Penrose also considers nonthermodynamic issues, specifically the smoothness and flatness problems [131].
Kinetic control versus both heat death and Boltzmann brains?
A tentative solution to the thermodynamic problem of origins, namely dominance of kinetic over thermodynamic control [72][73][74][75][76][77], has already been proposed, as a reasonable guess, for the special cases of Planck-power input discussed in association with Eq. (8) in Sect. 4 and of Everett-Universe creation in the last paragraph of Sect. 4. We would now like to consider this issue somewhat more generally.
A generalized form of this prima facie perhaps reasonable guess might include: (a) Creation in general, by whatever method, both initial via Big Bang with or without inflation, etc. [26][27][28][29][30][31], via Everett [96][97][98], and sustained via Planck-power (or other [33][34][35]) input of equal nonzero quantities of both positive mass-energy and negative gravitational (or other negative [33][34][35]) energy starting from (zero positive energy + zero negative energy = zero total energy) entails an initial entropy of zero -the entropy of (zero positive energy + zero negative energy = zero total energy) is perforce zero: recall the paragraph containing Eq. (8) in Sect. 4. (b) Creation in general, by whatever method, both initial via Big Bang with or without inflation, etc. [26][27][28][29][30][31], via Everett [96][97][98], and sustained via Planck-power (or other [33][34][35]) input of equal nonzero quantities of both positive mass-energy and negative gravitational (or other negative [33][34][35]) energy starting from (zero positive energy + zero negative energy = zero total energy) is a nonequilibrium process. These processes do not allow enough time for complete thermalization of the input from the initial value of zero entropy of (zero positive energy + zero negative energy = zero total energy) to the maximum possible positive entropy of (nonzero positive energy + nonzero negative energy = zero total energy). Thus even though, thermodynamically, exponentially the most probable creation, initial or sustained, by any method, would yield a maximum-entropy Universe with exponentially the most probable observer a minimal Boltzmann brain, kinetically the reaction

(zero positive energy + zero negative energy = zero total energy) −→ (nonzero positive energy + nonzero negative energy = zero total energy)   (8, restated)

occurs too quickly to allow thermodynamic equilibrium, i.e., maximum entropy, to be attained. Thus creation, initial or sustained, by whatever method, yields (nonzero positive energy + nonzero negative energy = zero total energy) at positive but far less than maximum entropy, consistently with the Second Law of Thermodynamics but not with the heat death. Thus the basis of our proposed tentative solution to the thermodynamic problem of both initial and sustained-input origins: the reaction (rx) of Eq. (8) is kinetically rather than thermodynamically controlled [72][73][74][75][76][77]. This kinetic control does not defeat thermodynamics (specifically the Second Law of Thermodynamics) but it does defeat the heat death. Thus if the reaction of Eq. (8) is kinetically rather than thermodynamically controlled then the heat death is thwarted, but within the restrictions of the Second Law of Thermodynamics. This kinetic as opposed to thermodynamic control could similarly obtain at the initial creation, in accordance with Eq. (8), of an oscillating Universe with two-time low-entropy boundary conditions at the two temporal ends of each cycle.
Let ∆S_rx be the increase in entropy associated with the reaction (rx) of Eq. (8), with respect to our L-region. If 0 ≪ ∆S_rx ≪ S_max ∼ 10^123 k, then, on the one hand, the strong inequality 0 ≪ ∆S_rx ensures an equilibrium constant K_eq = exp(∆S_rx/k) sufficiently large that the reverse reaction is forbidden for all practical purposes, thus stabilizing creation [72][73][74][75][76][77]. Thus the strong inequality 0 ≪ ∆S_rx justifies the placement of only a forward arrow (no reverse arrow) at the beginning of the second line of Eq. (8) [72][73][74][75][76][77]. On the other hand, the strong inequality ∆S_rx ≪ S_max ∼ 10^123 k ensures against the doom and gloom that one would dread based solely on Boltzmann's relations Prob (∆S) = exp(−∆S/k) and Prob (σ) = exp(−σ/k). Note for example that even if ∆S_rx = 10^120 k, and hence for the reaction (rx) of Eq. (8) K_eq = exp(10^120), the entropy of our L-region is still only ∼ 10^−3 of that corresponding to thermodynamic equilibrium, and hence still σ ∼ 10^123 k − 10^120 k, which is for all practical purposes still σ ∼ 10^123 k. [References [73][74][75][76][77] express the equilibrium constant as K_eq = exp(−∆G_rx/kT), where ∆G_rx is the Gibbs free energy change per molecular reaction in the special case of a system maintained at constant temperature T and constant ambient pressure.
(To be precise, the ambient pressure must be maintained strictly constant during a reaction, but the temperature of the reactive system can vary in intermediate states so long as at the very least the initial and final states are at the same temperature, for this definition of ∆G_rx to be valid [132][133][134][135].^16) In this special case, |∆G_rx| is the maximum work obtainable per molecular reaction if ∆G_rx < 0 and the minimum work required to enable it if ∆G_rx > 0. But in this special case ∆G_rx = −T∆S_rx, where ∆S_rx is the total entropy change of the (system + surroundings) per molecular reaction. Hence K_eq = exp(−∆G_rx/kT) is the corresponding special case of K_eq = exp(∆S_rx/k). In this chapter ∆S and ∆S_rx are always taken to be total entropy changes of the entire Universe or at least of our L-region thereof.]

Footnote 16 (Re: Entry [132], Ref. [132]): One point: On p. 479 of Ref. [132], it is stated that in an adiabatic process all of the energy lost by a system can be converted to work, but that in a nonadiabatic process less than all of the energy lost by a system can be converted to work. But if the entropy of a system undergoing a nonadiabatic process increases, then more than all of the energy lost by this system can be converted to work, because energy extracted from the surroundings can then also contribute to the work output. In some such cases positive work output can be obtained at the expense of the surroundings even if the change in a system's energy is zero, indeed even if a system gains energy. Examples: (a) Isothermal expansion of an ideal gas is a thermodynamically spontaneous process, yielding work even though the energy change of the ideal gas is zero. (b) Evaporation of water into an unsaturated atmosphere (relative humidity less than 100%) is a thermodynamically spontaneous process, yielding work even though it costs heat, i.e., yielding work even though liquid water gains energy in becoming water vapor: see Refs. [133][134][135] concerning this point.
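As a compact check of the bracketed claim and of the numerical example above, using only relations already stated in the text:

\[ \Delta G_{rx} = -T\,\Delta S_{rx} \;\Longrightarrow\; K_{eq} = \exp\!\left(-\frac{\Delta G_{rx}}{kT}\right) = \exp\!\left(\frac{\Delta S_{rx}}{k}\right), \]

and with ∆S_rx = 10^120 k one has σ ∼ S_max − ∆S_rx = 10^123 k (1 − 10^−3), which is indeed still ∼ 10^123 k for all practical purposes.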
As a brief aside, we note that many chemical reactions are similarly kinetically rather than thermodynamically controlled [72][73][74][75][76][77], in like manner as Eq. (8). While only chemical reactions are discussed in Refs. [72][73][74][75][76][77], the same principle likewise applies with respect to all kinetically rather than thermodynamically controlled processes, for example kinetically rather than thermodynamically controlled physical and nuclear reactions. As we discussed in Sect. 4, if nuclear reactions were thermodynamically rather than kinetically controlled then there would be nothing but (iron + equilibrium blackbody radiation) -an iron-dead Universe. Thus kinetic rather than thermodynamic control [72][73][74][75][76][77] seems to be at least a reasonable tentative explanation of why we are privileged to be not merely minimal extraordinary observers but super-extraordinary observers -more correctly hyper-extraordinary observers -with an entire Universe to explore and enjoy [61,62]. By the cosmological principle [51] we may hope that this is true everywhere in the Multiverse.
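For readers unfamiliar with the chemists' usage, the distinction between the two kinds of control can be stated compactly. The following is the standard textbook formulation for two competing products P_1 and P_2 of the same reactants, not notation drawn from Refs. [72][73][74][75][76][77] themselves:

\[ \left(\frac{[P_1]}{[P_2]}\right)_{\text{thermodynamic control}} = \exp\!\left(-\frac{\Delta\Delta G}{RT}\right), \qquad \left(\frac{[P_1]}{[P_2]}\right)_{\text{kinetic control}} = \exp\!\left(-\frac{\Delta\Delta G^{\ddagger}}{RT}\right). \]

Under thermodynamic control the product ratio is set by the free-energy difference ∆∆G between the products themselves; under kinetic control it is set by the difference ∆∆G‡ between the activation barriers leading to them, so the more stable product need not be the one actually formed.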
A brief review concerning the Multiverse, and some alternative viewpoints
We should note that if conscious observers, also referred to as self-aware substructures (SASs) [143][144][145], are not merely self-aware but also have free will, then they have at least some degree of choice concerning creation of Level III Universes: They then have at least some freedom to choose whether or not to make a given observation or measurement, which observations and measurements to make, and when to make them. Even if the Everett interpretation [96-98] of quantum mechanics is incorrect [146] and Level III Universes exist only in potentiality until one and only one of them is actualized [146], say via wave-function collapse [147], then an SAS with free will still has this degree of choice.
Even if the probabilities of the possible outcomes of any given observation or measurement cannot be altered, the set of possible outcomes on offer to Nature depends on which observations and measurements are chosen by an SAS with free will, and when they are on offer depends on when an SAS with free will chooses to observe or measure. Thus irrespective of the character of Level III Universes, if free will exists then there is this qualitative difference between unchosen observations and measurements made by Nature herself, say via decoherence [148,149], and chosen ones made by an SAS with free will. Furthermore, a choice made by an SAS with free will seems to be an initial condition on the future history of the Universe, or on the future history of the Level III Multiverse of quantum branches given the Everett scenario [96][97][98]. The question then arises of compatibility with the Mathematical Universe Hypothesis (MUH), according to which initial conditions cannot exist [150,151]. But both the very notion of choice [152] and exhortations to "Let's make a difference!" [153] seem incompatible with denial of free will. Moreover, "decoherence" is perhaps too strong a term; "delocalization of coherence" seems more correct. Since quantum-mechanical information in general cannot be destroyed, quantum-mechanical coherence in particular is never really destroyed, merely delocalized. As with any delocalization process there is an accompanying increase in entropy. But within a system of finite volume this increase in entropy is limited to a finite maximum value, implying recoherence, or more correctly relocalization of coherence, after a Poincaré recurrence time [108,154,155]. Of course, typical Poincaré recurrence times [108,154,155] of all but very small systems are inconceivably long, but in a very small system at least partial recoherence, or more correctly relocalization of coherence, may occur in a reasonable time. We should mention that even before the term "decoherence" had been coined, some aspects of decoherence, or more correctly delocalization of coherence, had been partially anticipated [156,157]. For general reviews concerning the quantum-mechanical measurement problem see, for example, Refs. [149] and [156][157][158][159].
But perhaps not all mathematical structures are equal: some are more beautiful and hence more equal than others [160]. Dr. Alex Vilenkin writes: "Beautiful mathematics combines simplicity with depth [160]." (But also that "simplicity" and "depth" are almost as difficult to define as "beauty" [160].) But Dr. Alex Vilenkin also writes: "Mathematical beauty may be useful as a guide, but it is hard to imagine that it would suffice to select a unique theory out of the infinite number of possibilities [161]." These points are also considered by Dr. Roger Penrose [162]. Yet even so, mathematical beauty should have at least some selective power. A case in point: Newton's laws have both simplicity and depth, and hence are beautiful. But Einstein's laws have both greater simplicity and greater depth, and hence are more beautiful.
The laws of motion have the same form in all reference frames in General Relativity but not in Newton's theory (for example, Newton's theory requires extra terms for centrifugal and Coriolis forces in rotating reference frames); thus General Relativity has greater simplicity. Additionally, Newton's theory is a limiting case of Einstein's but not vice versa; thus General Relativity also has greater depth. Hence might a Universe wherein Newton's laws are the fundamental laws, not merely a limiting case of relativity and quantum mechanics, be denied physical existence in a Level IV Multiverse -because even though it is a beautiful mathematical structure, it is not the maximally-beautiful one that maximally entails both simplicity and depth? While (even neglecting quantum mechanics) we cannot be sure if General Relativity is the maximally-beautiful mathematical structure, we can be sure that Newtonian theory, while beautiful, is not maximally beautiful. Moreover, while the Multiverse is eternal, it nonetheless, at least below Level IV [136], did have a beginning [99]. The laws of quantum mechanics -our laws of quantum mechanics -governed the initial tunneling event that created not merely our Universe but the Multiverse, at least through Level II [99,136]. Thus these laws, on whatever tablets they are written, must have existed before, and must exist independently of, the Multiverse at least through Level II [99,136] -not merely of our island Universe [99]. Concerning Level III, it seems that Levels I+II, or at the very least Level I, must exist first, because Levels I+II, or at the very least Level I, seems prerequisite for the existence of entities capable of executing Dr. Hugh Everett's program [96][97][98]. But might the prerequisites for a beginning and for the pre-existence of our laws of quantum mechanics be general, operative even at Level IV [136]? But if so then might Level IV -but not Levels I, II, and III -be more restricted than has been suggested [136]? For then might our laws of quantum mechanics be part of the one maximally-beautiful mathematical structure that maximally entails both simplicity and depth -our fundamental (not merely effective) laws of physics [136] -after all? Then perhaps this one maximally beautiful mathematical structure, this maximal possible entailment of both simplicity and depth, is the only one realized via physically-existing Universes. But if this is the case then the question arises: Why does this one maximally beautiful mathematical structure permit life [163] (at the very least, carbon-based life as we know it on Earth)?
We must admit that in this chapter we have not even scratched the surface, as per this paragraph and the two immediately following ones. There are many alternative viewpoints concerning the Multiverse and related issues. We should at least mention a few of them that we have not mentioned until now. According to at least one of these viewpoints, inflation is eternal into the past as well as into the future, and hence has no beginning as well as no end [164][165][166]. But perhaps this is compatible with inflation having a beginning if regions of inflation in the forward and backward time directions are disjoint and incapable of any interaction with each other [167]. Then perhaps observers in both types of regions would consider their home region to be evolving forward, not backward, in time.
According to other viewpoints, inflation not only has a beginning but also has an end -eternal inflation is impossible [168,169]. According to one of these viewpoints, the end of inflation is imposed by the increasingly fractal nature of spacetime [168,169]. We also note that Dr. Roger Penrose considered another difficulty associated with possible fractal nature of spacetime: inflation does not solve the smoothness and flatness problems if the structure of spacetime is fractal, let alone worse than fractal [170]. According to another of these viewpoints, the end of inflation is imposed by the Big Snap, according to which expansion of space will eventually dilute the number of degrees of freedom per any unit volume, and specifically per Hubble volume, to less than one, although the Universe will probably be in trouble well before the number of degrees of freedom per Hubble volume is reduced to one [171,172]. But perhaps new degrees of freedom can be created to compensate [171,172]. Perhaps Planck-power input, if it exists, can, because it replenishes mass-energy, also replenish degrees of freedom -thereby precluding the Big Snap. In Dr. Max Tegmark's rubber-band analogy, this corresponds to new molecules of rubber being created (at the expense of negative gravitational energy of a "rubber-band Universe") as the rubber band stretches, thereby keeping the density of rubber constant [171,172]. But, with or without a Big Snap [171,172], if inflation does have an end for any reason whatsoever, then my question to Dr. Roger Penrose in Sect. 6 is answered negatively: then the vast majority of Big Bangs will be noninflationary.
There are also many proposed solutions, other than Planck-power input, to the entropy problem (why there is so very much more than one minimal Boltzmann brain in our L-region and O-region); some of these we have already discussed and/or cited in Sects. 5-8. But there are still other proposed solutions to the entropy problem. One other proposed solution that we have not yet cited entails quantum fluctuations ensuring that every baby Universe starts out with an unstable large cosmological constant, which corresponds to low total entropy because it is thermodynamically favorable for the consequent high-energy false vacuum to decay spontaneously [173,174]. Yet another proposed solution that we have not yet cited entails observer-assisted low entropy [172].
There are also many alternative viewpoints concerning fine-tuning and life in the Universe. It has been noted that physical parameters such as constants of nature, strengths of forces, masses of elementary particles, etc., all have real-number, or perhaps rational-number, values. The range of real numbers, or even of rational numbers, is infinite. (The countable infinity of rational numbers is smaller than the uncountable infinity of real numbers, but even a countable infinity is still infinite.) Hence, if the probability of occurrence of a given real-number, or even rational-number, value of a given parameter is uniform, or at least non-convergent, then there is only an infinitesimal probability of this value being within any finite range [175,176]. But if there are an infinite number of L-regions and O-regions, this infinity may be as large as, or even larger than, the infinity of possible parameter values. We should also note that while some scientists are favorable towards the idea of fine-tuning [177], others are skeptical to the point of not requiring a Multiverse to explain it away, but stating that it is an invalid concept even if our O-region constituted the entire Universe [178][179][180]. Even this skeptical viewpoint admits that only a very small range of parameter space is consistent with carbon-based life as we know it on Earth [180], but assumes that a much larger range of parameter space is consistent with life in general [180]. But life, at least chemically-based life, probably must be based on carbon, because no other element even comes close to matching carbon's ability to form highly complex, information-rich molecules. Even carbon's closest competitor, silicon, falls woefully short. Also, nucleosynthesis in stars forms carbon more easily than silicon [181], so carbon is more abundant in the Universe [181]. We conclude by citing a paper that, while favoring a purely materialistic viewpoint, discusses what would be required to seriously question it [182], and a book that explores many topics and viewpoints [183]. Reference [183] considers not only many, probably most, of the topics and viewpoints also considered in references that we have previously cited, but also many additional topics and viewpoints. | 2018-10-31T22:21:00.118Z | 2015-12-21T00:00:00.000 | {
"year": 2015,
"sha1": "82270237e3e4ab0a017e910c905e479e2f1fe32b",
"oa_license": "CCBY",
"oa_url": "https://www.intechopen.com/citation-pdf-url/49747",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "8a86fa44f8b7060f6aa95b38a48da2d3e277c4a8",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
235648520 | pes2o/s2orc | v3-fos-license | Inconsistent Use of Resistance Exercise Names in Research Articles: A Brief Note
Abstract Nuzzo, JL. Inconsistent use of resistance exercise names in research articles: a brief note. J Strength Cond Res 35(12): 3518–3520, 2021—Academic fields require standard nomenclature to communicate concepts effectively. Previous research has documented resistance training exercises are named inconsistently. This inconsistent use has been observed among fitness professionals and within resistance training textbooks. The purpose of the current note was to explore inconsistent use of resistance training exercise names in scientific articles. Keyword searches were performed in PubMed to identify articles that referred to 4 different resistance training exercises. The search was limited to titles and abstracts of articles published between 1960 and 2020. For exercise 1, “shoulder press,” “overhead press,” and “military press” were searched. For exercise 2, “arm curl,” “bicep curl,” and “biceps curl” were searched. For exercise 3, “hamstring curl,” “leg curl,” and “knee curl” were searched. For exercise 4, “calf raise” and “heel raise” were searched. For exercise 1, 114 articles included “shoulder press” in their title or abstract, 42 articles included “overhead press,” and 45 articles included “military press.” For exercise 2, 244 articles included “arm curl,” 37 articles included “bicep curl,” and 177 articles included “biceps curl.” For exercise 3, 24 articles included “hamstring curl,” 159 articles included “leg curl,” and 7 articles included “knee curl.” For exercise 4, 68 articles included “calf raise” and 154 articles included “heel raise.” The results are evidence of inconsistent use of resistance training exercise names in scientific articles. A possible solution to inconsistent use of exercise names in research articles, educational texts, and clinical practice is a system that includes a standard exercise naming pattern and guidelines for communicating exercise names.
Introduction
Academic fields require standard nomenclature to communicate concepts effectively. The field of strength and conditioning is no exception. Results from 2 studies have documented resistance training exercises are named inconsistently (6,7). In 2013, Jackson et al. (6) showed 205 fitness professionals photographs of 10 resistance training exercises and asked them to name the exercises. Fitness professionals responded with dissimilar exercise names and appeared to use different naming patterns when referring to the exercises. In 2017, Nuzzo (7) analyzed the names of 57 exercises in a resistance training technique manual. The analysis revealed inconsistent uses of words and word patterns for exercise names. Thus, to date, previous research has documented inconsistent use of exercise names in educational texts and among fitness professionals. The purpose of the current note was to explore the possibility of inconsistent use of resistance training exercise names in scientific articles. Results from such work might have implications for how information about resistance training is communicated with students and the general public and also how it is communicated between health professionals and researchers.
Experimental Approach to the Problem
To explore potential inconsistent use of resistance exercise names in research, keyword searches were performed in PubMed to identify articles that referred to 4 different resistance training exercises.
Procedures
The PubMed search was limited to the titles and abstracts of articles published between January 1, 1960, and December 31, 2020. The TIAB term in PubMed limits searches to the titles and abstracts of articles. The DP term in PubMed limits searches to the range of dates entered.
For exercise 1, the names "shoulder press," "overhead press," and "military press" were searched (e.g., "shoulder press" [TIAB] 1960/01/01:2020/12/31 [DP]). For exercise 2, the names "arm curl," "bicep curl," and "biceps curl" were searched. For exercise 3, the names "hamstring curl," "leg curl," and "knee curl" were searched. "Knee flexion" was not searched for exercise 3 because its use is common outside of the context of physical exercise. For exercise 4, "calf raise" and "heel raise" were searched. These 4 exercises were selected because they were part of the previous content analysis of educational texts (7) and were hypothesized to show inconsistencies. Example photographs of the movement patterns usually associated with exercises 1, 2, 3, and 4 can be found on pages 394 (or 395), 367, 393, and 369 (or 370) of the Essentials of Strength Training and Conditioning (4th edition) textbook, respectively (3). Use of the names of these exercises was also specifically examined in the Journal of Strength and Conditioning Research (JSCR), as JSCR is one of the most notable journals for disseminating information on resistance training. In PubMed, the JOUR term was used to limit searches to JSCR (e.g., j strength cond res [JOUR] "shoulder press" [TIAB] 1960/01/01:2020/12/31 [DP]).
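For readers who wish to reproduce or extend these counts, the following is a minimal sketch using NCBI's public E-utilities esearch endpoint; the endpoint and its db/term/retmode parameters are real, but this script is an illustration of the query pattern described above, not the procedure reported in this note:

    import json
    import urllib.parse
    import urllib.request

    ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

    # Exercise-name phrases searched in this note.
    EXERCISES = {
        "exercise 1": ["shoulder press", "overhead press", "military press"],
        "exercise 2": ["arm curl", "bicep curl", "biceps curl"],
        "exercise 3": ["hamstring curl", "leg curl", "knee curl"],
        "exercise 4": ["calf raise", "heel raise"],
    }

    def count_articles(phrase):
        # Build a query like: "shoulder press"[TIAB] AND 1960/01/01:2020/12/31[DP]
        term = '"{}"[TIAB] AND 1960/01/01:2020/12/31[DP]'.format(phrase)
        params = urllib.parse.urlencode({"db": "pubmed", "term": term, "retmode": "json"})
        with urllib.request.urlopen(ESEARCH + "?" + params) as resp:
            data = json.load(resp)
        # esearch returns the total hit count as a string.
        return int(data["esearchresult"]["count"])

    for exercise, names in EXERCISES.items():
        for name in names:
            print(exercise, name, count_articles(name))

Counts obtained this way would still require the manual title screening described under Statistical Analyses, because some retrieved articles use these phrases in contexts unrelated to physical exercise or muscle strength testing.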
Statistical Analyses
Titles of the identified articles were screened to remove any articles that included use of the searched phrase in a way that was unrelated to physical exercise or muscle strength testing. Then, the number of articles that included the searched phrase in relation to physical exercise or muscle strength testing was recorded (i.e., frequency count). No further statistical analyses were performed.
Results
Of the articles identified, 4 were deemed ineligible because they did not relate to physical exercise or muscle strength testing. Two of the excluded articles were associated with "military press," and 2 were associated with "shoulder press." Table 1 displays results for the number of times exercise names were used in titles or abstracts of all articles indexed in PubMed. For exercise 1, the most commonly used name was "shoulder press." For exercise 2, "arm curl" was the most common. For exercise 3, "leg curl" was the most common. For exercise 4, "heel raise" was the most common. Table 2 displays the number of times exercise names were used in titles or abstracts of articles indexed in PubMed and published in JSCR. For exercise 1, "shoulder press" was the most common. For exercise 2, "biceps curl" was the most common. For exercise 3, "leg curl" was the most common. For exercise 4, "calf raise" and "heel raise" were used nearly the same number of times.
Discussion
Previous research has documented resistance exercises are named inconsistently in educational texts (7) and among health and fitness professionals (6). Results from the current analysis illustrate resistance exercises are also named inconsistently in research articles. Thus, inconsistent use of exercise names is a pervasive issue. Results from the current analysis reveal inconsistent use of exercise names in research articles for 4 different exercises. The magnitude of the problem may be underestimated given the study methods. For example, only titles and abstracts of articles indexed in PubMed were searched. Inconsistent use of exercise names will likely be magnified when entire texts of articles indexed in PubMed and other databases are analyzed. Also, only 4 exercises were assessed, and not all potential names were searched for the 4 exercises. For example, although the name "knee flexion" is sometimes used in research to describe exercise 3 (8,9), the name was not searched because it is commonly used in research that is unrelated to physical exercise or muscle strength testing. Thus, inconsistent use of exercise names is likely to be magnified when more exercises and more name variations are searched.
Some differences existed between the use of the exercise names in JSCR compared with all journals. For exercise 2, articles published in JSCR were more likely to include "biceps curl" than "arm curl." For exercise 4, articles published in JSCR were equally likely to include "calf raise" and "heel raise," whereas in all journals, "heel raise" was more common. These findings suggest researchers in different professional groups might use different exercise names. For example, "arm curl" was used in 22 different articles published in the following 4 aging and geriatrics journals: Aging Clinical and Experimental Research, Archives of Gerontology and Geriatrics, Clinical Interventions in Aging, Geriatrics and Gerontology International. "Biceps curl" appeared in the titles or abstracts of only 2 articles published in these 4 journals. This observation can be explained, in part, by the use of the "arm curl test" of muscle strength and endurance in physical fitness batteries for older adults.
The current analysis does not reveal which exercise names are correct. To determine this, further research and discussion on exercise names are necessary. Survey research can be used to understand exercise names that are used most commonly among the general population and fitness professionals. Such results might then inform professional consensus on how exercises should be named and what names should be used in the future. Such a consensus might involve input from physical culture historians, personal trainers, strength and conditioning coaches, physical therapists, and exercise science researchers. Physical culture historians might describe changes in the use of exercise names over time. For example, they could describe why and when the "deep knee bend" (1,2,5) was renamed the "squat" and why and when the "prone press" (4) was renamed the "bench press." Personal trainers, strength and conditioning coaches, and physical therapists might describe what exercise names are most effective at communicating exercise information with clients, athletes, and patients. Exercise science researchers might contribute abstract knowledge associated with rules for naming exercises based on anatomical and biomechanical principles. For example, Jackson et al. (6) and Nuzzo (7) have quantified and discussed the general types of words used in exercise names. Further discussion about such information might help to develop a standard word pattern for naming exercises. Such information might also help journal editors decide if they want authors to adopt specific exercise names when publishing research articles in their journals.
Practical Applications
A solution to inconsistent use of exercise names in research and practice is the development of a taxonomy or system of naming exercises. Until such a system is developed, exercise information can be communicated in a way that balances scientific accuracy with communication effectiveness. This balance might depend on the context of the communication. For example, the name used to communicate an exercise might depend on the knowledge and previous experience of the individual being communicated with (e.g., general population, athlete, and other coaches or researchers) or the outlet for communication (e.g., a blog, television interview, student textbook, and research article). Also, until such a taxonomy is developed, researchers can consider adding photographs of exercises in appendices or supplemental materials to ensure readers are certain which exercises were completed by study subjects. When this is not possible, a textbook with relevant photographs can be cited and information on body posture (e.g., seated and standing) and type of equipment used (e.g., dumbbell, barbell, and machine) can be included in the article's text. Similarly, for coaches and fitness professionals who prescribe exercise remotely, photographs and videos of exercises can be included along with exercise names to ensure the prescribed version of the exercise is performed. Also, as individuals refer to exercises by different names, developers of fitness applications for mobile phones might consider indexing a given exercise under multiple names to ensure its discovery within the application. | 2021-06-27T06:16:30.980Z | 2021-06-22T00:00:00.000 | {
"year": 2021,
"sha1": "83f66a5084b94b6c7b6f4116589ea22a985ac0c6",
"oa_license": "CCBYNCND",
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8608004",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "a6b30071746636eaa9285985d43735921752e93d",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Medicine"
]
} |